[Linkpost] A Case for AI Consciousness

post by cdkg, Simon Goldstein (simon-goldstein) · 2024-07-06T14:52:21.704Z · LW · GW · 2 comments

This is a link post for https://philpapers.org/archive/GOLACF-2.pdf


Just wanted to share a new paper on AI consciousness, co-authored with Simon Goldstein, that members of this community might be interested in. Here's the abstract:

It is generally assumed that existing artificial systems are not phenomenally conscious, and that the construction of phenomenally conscious artificial systems would require significant technological progress if it is possible at all. We challenge this assumption by arguing that if Global Workspace Theory (GWT) — a leading scientific theory of phenomenal consciousness — is correct, then instances of one widely implemented AI architecture, the artificial language agent, might easily be made phenomenally conscious if they are not already. Along the way, we articulate an explicit methodology for thinking about how to apply scientific theories of consciousness to artificial systems and employ this methodology to arrive at a set of necessary and sufficient conditions for phenomenal consciousness according to GWT.

2 comments


comment by Gunnar_Zarncke · 2024-07-07T22:50:54.962Z · LW(p) · GW(p)

This is a mix of nitpicks and more in-depth comments. I just finished reading the paper and these are my notes. I liked that it was quite technical, suggesting specific changes to existing systems, and precise enough to make testable predictions.

 

Some discussion of the type of consciousness at issue in the paper: Dehaene's Consciousness and the Brain, reviewed on ACX

In the introduction, the verb "to token" is used twice without it being made clear what it means:

a being is conscious in the access sense to the extent that it tokens access-conscious states

The authors analyze multiple candidate criteria for conscious Global Workspace systems and, based on plausibility checks, synthesize the following conditions (a rough code sketch of this architecture follows the list):

A system is phenomenally conscious just in case: 

(1) It contains a set of parallel processing modules. 
(2) These modules generate representations that compete for entry through an information bottleneck into a workspace module, where the outcome of this competition is influenced both by the activity of the parallel processing modules (bottom-up attention) and by the state of the workspace module (top-down attention). 
(3) The workspace maintains and manipulates these representations, including in ways that improve synchronic and diachronic coherence. 
(4) The workspace broadcasts the resulting representations back to sufficiently many of the system’s modules.
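
To make the shape of these conditions concrete, here is a minimal Python sketch of a global-workspace loop. It is my own toy illustration, not code from the paper: the module names, the random salience scores, and the competition rule (bottom-up salience plus a top-down bias toward sources already in the workspace) are stand-in assumptions for whatever attention mechanism a real implementation would use.

```python
import random
from dataclasses import dataclass


@dataclass
class Representation:
    source: str      # which module produced it
    content: str     # toy stand-in for an actual representation
    salience: float  # bottom-up signal strength


class Module:
    """A parallel processing module (condition 1) that proposes representations."""

    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts received back from the workspace

    def propose(self):
        # Bottom-up attention: each module offers a candidate with some salience.
        return Representation(self.name, f"percept from {self.name}",
                              salience=random.random())

    def receive(self, broadcast):
        self.received.append(broadcast)


class Workspace:
    """Information bottleneck with limited capacity (condition 2)."""

    def __init__(self, capacity=1):
        self.capacity = capacity
        self.contents = []

    def top_down_bias(self, rep):
        # Top-down attention: favour sources already represented in the workspace.
        return 0.5 if any(c.source == rep.source for c in self.contents) else 0.0

    def compete(self, candidates):
        # The outcome is influenced by bottom-up salience and by workspace state.
        ranked = sorted(candidates,
                        key=lambda r: r.salience + self.top_down_bias(r),
                        reverse=True)
        winners = ranked[: self.capacity]
        # Condition 3: maintain and manipulate contents (here: keep a short history).
        self.contents = (winners + self.contents)[: self.capacity * 3]
        return winners


def step(modules, workspace):
    candidates = [m.propose() for m in modules]   # condition 1
    winners = workspace.compete(candidates)       # conditions 2 and 3
    for rep in winners:                           # condition 4: global broadcast
        for m in modules:
            m.receive(rep)


if __name__ == "__main__":
    modules = [Module(n) for n in ("vision", "language", "planning")]
    ws = Workspace(capacity=1)
    for _ in range(5):
        step(modules, ws)
    print(ws.contents)
```

Condition (3), maintaining and manipulating workspace contents so as to improve synchronic and diachronic coherence, is only gestured at here by keeping a short history; a real language agent would do substantive work at that step.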

In the introduction, the authors write:

We take it to be uncontroversial that any artificial system meeting these conditions [for phenomenal consciousness] would be access conscious

Access consciousness roughly means that perceptions are available for report and reasoning. We can apply this statement in the contrapositive: a system that isn't access conscious can't meet all of the above conditions.

To do so, we can apply the kind of thought experiment used in the paper: young babies do not yet represent their desires, and while they show some awareness, many people would agree that they are not (yet) conscious and are likely not aware of their own existence as independent of their caretakers. Likewise, late-stage dementia patients lose the ability to recognize themselves, which in this model would result from losing the ability to represent a self-concept in the global workspace. This indicates that something is missing.

Indeed, in section 7, the authors discuss the consequence that their four criteria could be fulfilled by very simple systems:

As we understand them, both objections [the small model objection and another one] are motivated by the idea that there may be some further necessary condition X on consciousness that is not described by GWT. The proponent of the small model objection takes X to be what is lacked by small models which prevents them from being conscious

The authors get quite close to an additional criterion in their discussion:

it has been suggested to us that X might be the capacity to represent, or the capacity to think, or the capacity for agency [...]

[...] Peter Godfrey-Smith’s [...] emphasizes the emergence of self-models in animals. In one picture, the essence of consciousness is having a point of view

But they refrain from offering one:

while we have argued that no choice of X plausibly precludes consciousness in language agents, several of the choices do help with the small model objection.

As in the thought experiments above, there are readily available examples where overly simple or impaired neuronal networks fail to appear conscious, and to me this suggests the following criterion:

(5) To be phenomenally conscious, the system needs to have sufficient structure or learning capability to represent (specific or general) observations or perceptions as concepts, and to determine that a concept applies to the system itself.

In terms of the Smallville system discussed in the paper, this criterion may already be fulfilled by the modeling strength of GPT-3.5 and would likely be fulfilled by later LLM versions. And, as required, it is not fulfilled by simple neural networks that cannot form a self-representation. This doesn't rule out systems with far fewer neurons than humans, e.g., ones that avoid the complexities of sense processing and interact purely textually with simulated worlds (a toy sketch of such a self-ascription check follows below).
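
To illustrate what criterion (5) asks for, here is a toy sketch of a self-ascription check layered on top of a workspace. Everything here (the SelfModel class, the dictionary format of workspace contents, the concept names) is a hypothetical framing of my own, not anything proposed in the paper; the point is only that the criterion requires binding a concept to a representation of the system itself rather than only to external objects.

```python
# Toy illustration of criterion (5): can the system apply a concept to itself?
# Hypothetical names and data format throughout; not from the paper.

class SelfModel:
    """Minimal self-representation: a name plus the concepts ascribed to it."""

    def __init__(self, name):
        self.name = name
        self.ascribed_concepts = set()


def applies_concept_to_self(workspace_contents, self_model, concept):
    """Return True if some workspace representation binds `concept` to the system itself."""
    for rep in workspace_contents:
        if rep.get("concept") == concept and rep.get("about") == self_model.name:
            self_model.ascribed_concepts.add(concept)
            return True
    return False


# Example: a language agent whose workspace contains a self-referential representation.
agent_self = SelfModel("agent")
workspace = [
    {"concept": "hungry", "about": "villager_3"},   # about an external object
    {"concept": "curious", "about": "agent"},        # self-ascription
]
print(applies_concept_to_self(workspace, agent_self, "curious"))  # True
```

On this framing, a network that never forms a representation with itself as the subject would fail the check no matter how many concepts it can apply to external objects.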

comment by A1987dM (army1987) · 2024-07-08T14:00:53.137Z · LW(p) · GW(p)

Am I the only one who, upon reading the title, wondered "do they mean arguments that conscious AIs would be better than unconscious AIs, or do they mean arguments that existing AIs are conscious?"