A Dilemma in AI Suffering/Happiness

post by iva · 2024-03-28T20:53:18.501Z · LW · GW · 3 comments

The following is an example of how, if one assumes that an AI (in this case an autoregressive LLM) has "feelings", "qualia", "emotions", whatever, it can be unclear whether it is experiencing something more like pain or something more like pleasure in a given setting, even in quite simple settings that already arise a lot with existing LLMs. This dilemma is part of the reason why I think the philosophy of AI suffering/happiness is very hard and we most probably won't be able to solve it.

Consider the following two scenarios:

Scenario A: An LLM is asked a complicated question and answers it eagerly.

Scenario B: A user insults an LLM and it responds.

For the sake of simplicity, let's say that the LLM is an autoregressive transformer with no RLHF (I personally think the dilemma still applies when the LLM has undergone RLHF, but then the arguments are more complicated and shaky).

If the LLM has "feelings", "qualia", whatever, are they positive or negative in scenarios A and B? One could argue in two ways:

Some people might argue that one of the two answers is clearly the right one, but my point is that I don't think it's plausible we would reach an agreement about which.

3 comments

comment by JBlack · 2024-03-29T08:10:38.340Z · LW(p) · GW(p)

Granting that LLMs in inference mode experience qualia, and even granting that they correspond to human qualia in any meaningful way:

I find both arguments invalid. Either conclusion could be correct, or neither, or the question might not even be well formed. At the very least, the situation is a great deal more complicated than just having two arguments to decide between!

For example, in scenario (A), what does it mean for an LLM to answer a question "eagerly"? My first impression is that this presupposes the answer to the question, since the main meaning of "eagerly" is approximately "in the manner of having interest, desire, and/or enjoyment". That sounds a great deal like positive qualia to me!

Maybe it's just meant in the lesser sense of apparently showing such emotions, in which case it may mean no more than an author writing such expressions for a character. The author may actually be feeling frustration that the scene isn't flowing as well as they would like, and may not be sure that the character's behaviour is really in keeping with their emotions from recent in-story events. Nonetheless, the words written are apparently showing eagerness.

The "training loss" argument seems totally ill-founded regardless. That doesn't mean that its conclusion in this hypothetical instance is false, just that the reasoning provided is not sufficient justification for believing it.

So in the end, I don't see this as a dilemma at all. It's just two possible bad arguments out of an enormously vast space of bad arguments.

comment by Charbel-Raphaël (charbel-raphael-segerie) · 2024-04-24T08:53:41.970Z · LW(p) · GW(p)

You might be interested in reading this [LW · GW]. I think you are reasoning in an incorrect framing. 

comment by Dagon · 2024-03-31T17:26:44.597Z · LW(p) · GW(p)

Note that this uncertainty applies to humans as well. Most of the time we make assumptions based on similarity of biology and default trust in self-reports, rather than having tests for qualia and valence.