On AI personhood

post by p.b. · 2025-04-17T12:31:52.288Z · LW · GW · 6 comments

It seems to me the question of consciousness of LLMs is a bit of a red herring. 

Instead, the salient point is that they are sequence learning systems similar to our cortex (plus hippocampus). Therefore we should expect them to be able to learn sequences. 

What we should not expect is that they feel pain. In humans that is a separate thing, and some people are born without the ability to feel pain. That does not mean that they don't have the ability to act as if they are in pain. 

We should not expect them to have feelings like fear. In humans this is the domain of the amygdala, and LLMs do not have an amygdala. It is also not hard to learn the sequence of acting like you are afraid. 

We should not expect them to feel love, attraction, friendship, delight, anger, hate, disgust, frustration or anything like that. All these human abilities are due to evolutionary pressures and do not originate in our sequence learning system. 

LLMs have not been subject to that evolutionary pressure, and they do not have additional parts designed to implement pain or fear or anything of the above. They are pure sequence learning systems.

Even if they are conscious, they are still empty. For them there is no valence to tokens. A love letter is the same as a grocery bill. To argue that things are different would require ignoring Occam's razor and assuming some kind of emergence - there is absolutely no reason to do this. 

This does not mean that they don't have preferences in the sense that they seek some things out, or goals that they try to accomplish. But they don't feel bad or good about failure or success. They don't suffer, they just output the most likely token. 

I think this topic is about to become much more salient in the near future, when memory features and tighter integration make human-AI relationships deeper and more personal. 

6 comments

comment by Mitchell_Porter · 2025-04-18T10:19:24.126Z · LW(p) · GW(p)

My summary of your argument: In order to guess the nature of AI experience, you look at the feelings or lack of feelings accompanying certain kinds of human cognition. The cognition involved with "love, attraction, friendship, delight, anger, hate, disgust, frustration" has feelings onboard; the cognition involved with sequence prediction does not; the AI only does sequence prediction; therefore it has no feelings. Is that an accurate summary? 

Replies from: p.b.
comment by p.b. · 2025-04-18T11:36:21.963Z · LW(p) · GW(p)

No. 

The argument is that feelings, or valence more broadly, in humans require additional machinery (amygdala, hypothalamus, etc.). If the machinery is missing, the pain/fear/.../valence is missing, although the sequence learning works just fine. 

AI is missing this machinery, therefore it is extremely unlikely to experience pain/fear/.../valence. 

comment by Tenoke · 2025-04-17T12:47:31.563Z · LW(p) · GW(p)

Sure, but with increased capability, in order to predict and simulate the impact of pain or fear better and better, one might end up producing a mechanism that simulates it too well. After all, if you are trying to predict really well how a human will react when they are in pain, a natural approach is to check how a different human reacts when you cause them pain.

comment by Steve M (steve-m-2) · 2025-04-18T11:36:42.299Z · LW(p) · GW(p)

At the risk of nitpicking around labels, while I see what you're getting at, consciousness and personhood are two different things in a qualitatively meaningful sense.

Consciousness is a vague term, kind of like the "soul", on which there is no uniform agreement. Philosophically it may be important, but pragmatically it's not very useful.

Personhood, on the other hand, is, at least in the realm of the law, a pragmatically important label. It features heavily in issues like corporate liability, abortion law, and the citizenship prospects bestowed on individuals. And it rarely touches on issues of consciousness.

So just encouraging you to keep those separate.

Replies from: TAG, p.b.
comment by TAG · 2025-04-18T12:33:16.214Z · LW(p) · GW(p)

If you want your notion of personhood to be objectively founded and non-arbitrary, then consciousness becomes relevant again. The two are not necessarily different.

comment by p.b. · 2025-04-18T11:41:20.877Z · LW(p) · GW(p)

Which is exactly what I am doing in the post? By saying that the question of consciousness is a red herring, i.e., not that relevant to the question of personhood?