Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall
post by SebastianG (JohnBuridan) · 2019-02-23T17:48:46.837Z · LW · GW · 19 comments
This is an exploratory article on the nature of emotions and how that relates to AI and qualia. I am not a professional AI or ML researcher, and I approach the issue as a philosopher. I am here to learn. Rebuttals and clarifications are strongly encouraged.
Prior reading (or at least skimming).
https://plato.stanford.edu/entries/emotion/#ThreTradStudEmotEmotFeelEvalMoti
In the reading, “Traditions in the Study of Emotions” and “Concluding Remarks” are the most essential. There we can see the fault line emerge between the traditions, and with it what would have to be true for an AI to have emotional experience. The emotions of humans and the so-called emotions of AI have more differences than similarities.
--
The significance of this problem is that emotions, whatever they are, are essential to Homo sapiens. I wanted to start the inquiry from something typically associated with humans, namely emotions, to see if we could shed any more light on AI Safety research by approaching it from a different angle.
Human emotions have six qualities to them:
"At first blush, we can distinguish in the complex event that is fear an evaluative component (e.g., appraising the bear as dangerous), a physiological component (e.g., increased heart rate and blood pressure), a phenomenological component (e.g., an unpleasant feeling), an expressive component (e.g., upper eyelids raised, jaw dropped open, lips stretched horizontally), a behavioral component (e.g., a tendency to flee), and a mental component (e.g., focusing attention)." – SEP “Emotion” 2018
It is not clear which of these is prior to which and exactly how correlated they are. An AI would only actually "need" the evaluative component in order to act well. Perhaps you could include the behavioral component for an AI as well if you consider its use of heuristics; however, a heuristic is a tool for evaluating information, while a "tendency to flee" resulting from fear does not require any amount of reflection, only instinct. You might object that "it all takes place in the brain" and therefore is evaluative. But the brain is not a single processing system. The "primal" and instinctual parts of the brain (the amygdala) scream out to flee, but fear is not evidence in the same way that deliberation provides evidence. What we call rationality concerns becoming better at overriding the emotions in favor of heuristics, abstraction, and explicit reasoning.
An AI would need evaluative judgment, but it would not need a phenomenological component in order to motivate behavior, nor would it need behavioral tendencies which precede and sometimes jumpstart rational processing. It's the phenomenological component where qualia/consciousness would come in. It seems against the spirit of Occam's Razor to say that because a machine can successfully imitate a feeling it has the feeling (assuming the feeling is a distinct event from the imitation of it). (Notice I use the word feeling, which indicates a subjective qualitative experience.) Of course, how could we know? The obvious fact is that we don't have access to the qualitative experience of others. Induction from both my own experience and the study of biology/evolution tells me that humans and many animals have qualitative experience. I could go into more detail here if needed.
Using the same inductive process that allows me to consider fellow humans and my dog conscious, I may induce that an AI would not be conscious. I know that an AI passes numbers through non-linear functions to perform calculations. As the calculations become more complex (usually thanks to additional computing power and nodes) and the ordering of the algorithms (the framework) becomes more sophisticated, ever more inputs can be evaluated and meaningful (to us) outputs produced. At no point are physiological, phenomenological, or expressive components needed in order to motivate the process and move it along. If additional components are not needed, why posit they will develop?
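To make that concrete, here is a minimal sketch in Python of what purely evaluative processing looks like: numbers go in, pass through non-linear functions, and numbers come out, with nothing phenomenological required to move the process along. (This is a toy two-layer network with made-up weights, offered only as an illustration, not a description of any real system.)

```python
import numpy as np

# A toy two-layer network: purely evaluative processing.
# Numbers go in, pass through non-linear functions, and numbers come out.
# Nothing in this pipeline needs a physiological, phenomenological, or
# expressive component to motivate the next step.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # illustrative, untrained weights
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def evaluate(x):
    """Map an input vector to an output vector: evaluation and nothing more."""
    hidden = np.tanh(W1 @ x + b1)   # non-linear transformation of the input
    return W2 @ hidden + b2         # the network's "evaluation"

print(evaluate(np.array([0.5, -1.0, 2.0])))
```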
If there are no emotions as motivations or feelings for an AI, an AGI should still be fully capable of doing anything and fooling anyone AND having horrific alignment problems, BUT it won’t have feelings.
However, if for some reason emotions are primarily evaluative, then we might expect emotion as motivation AND emotion as feeling to emerge as a later consequence in AI. An interesting consequence of this view is that it will be hardly possible to align an AGI. Here's why: imagine human brains as primarily evaluative machines. They are not that different from higher apes'. In fact, the biggest difference is that we can use complex language to coordinate and pass on discoveries to the next generation. However, our individual evaluative potential is extremely limited. Some people can solve a differential equation in their head without paper; they are rare. In any case, our motivations and consciousness are built on extremely weak evaluative power compared to even present artificial systems. The complexity of the motivations and consciousness that would emerge from such a future AGI would be as far beyond our comprehension as our minds are beyond a paramecium's.
Summary of my thoughts:
Premises
a. Emotions are primarily either evaluations, feelings, or motivations.
b. Evaluations are processing power and are input/output related. Motivations are instinctual. Feelings are qualitative, born out of consciousness, and probably began much later in evolutionary history, although some philosophers think even some very simple creatures have rudimentary subjective experience.
c. Carbon-based life forms which evolve into humans start with physiological and behavioral motivations, then much later develop expressive, mental, and evaluative components. Somewhere in there the phenomenological component develops.
d. Computers and current AI evaluate without feeling or motivation.
The question: can an AI have feelings?
1. If emotions are not primarily based upon evaluations and evaluations do not cause consciousness,
2. Then evaluations of any complexity can exist without feelings,
3. And there is no AI consciousness.
OR
1. If emotions are based upon evaluations and evaluations of some as yet unknown type are the cause of consciousness,
2. Then evaluations of some complexity will cause feelings and motivations,
3. And given enough variations, there will be at least one AI consciousness.
LASTLY
1. If emotions are based upon evaluations, but evaluations which produce motivations and feelings require a brain with strange hierarchies caused by the differences in reaction among the parts of the brain,
2. Then those strange hierarchies are imitable, but this needs to be puzzled out more...
FUTURE INVESTIGATION
For future investigation I want to chart out the differences among types of brains and the differences between types of ANNs. One thing about ML which I find to be brutal is that we are constantly describing the new thing in terms of the old thing. It's very difficult to tell where our descriptors are misleading us.
19 comments
Comments sorted by top scores.
comment by avturchin · 2019-02-23T18:43:58.551Z · LW(p) · GW(p)
I think this post is confusing qualia and emotions. We could have many different qualia about colors, sounds, maybe even meanings. There is nothing emotion-specific in qualia.
When we experience an emotion we - maybe - have a separate quale for that emotion, but this emotion-quale is only a representation of some process for our consciousness, not the process itself. Often, people demonstrate some emotions without knowing that they have the emotion - this is the case of so-called suppressed emotions.
Emotions themselves work as modes of behaviour which we signal to other members of the tribe: that is why we turn red when we are angry. Most processing of emotions happens unconsciously: both generating them and reading others' emotions happen automatically.
AI in most cases doesn't need emotions, but if needed, it could be perfect at simulating a smile of hate - nothing difficult in that.
The question of AI's qualia is the difficult one.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-02-25T19:03:31.868Z · LW(p) · GW(p)
AI in most cases doesn't need emotions, but if needed, it could be perfect at simulating a smile of hate - nothing difficult in that.
I agree with you that this post seems to be suffering from some kind of confusion because it doesn't clearly distinguish emotions from the qualia of emotions (or, for anyone allergic to talk of "qualia", maybe we could just say "experience of emotions" or "thought about emotions").
I do, however, suspect any sufficiently complex mind may benefit from having something like emotions, assuming it runs in finite hardware, because one of the functions of emotions seems to be to get the mind to operate in a different "mode" than what it otherwise does.
Consider the case of anger in humans as an example. Let's ignore the qualia of anger for now and just focus on the anger itself. What does it do? I'd summarize the effect as putting the brain in "angry mode" so that whatever comes up we respond in ways that are better optimized for protecting what we value through aggression. Anger, then, conceptualized as emotion, is a slightly altered way of being that puts the brain in a state that is better suited to this purpose of "protect with aggression" than what it normally does.
This is necessary because the brain is only so big, and can only be so many ways at once; thus it seems necessary to have a shortcut for putting the brain in a state that favors a particular set of behaviors to make sure those happen.
Thus if we build an AI and it is operating near the limits of its capabilities, then it would benefit from something emotion-like (although we can debate all day whether or not to call it emotion), i.e. a system for putting itself in altered states of cognition that better serve some tasks than others, thus trading off temporarily better performance in one domain/context for worse performance in others. I'm happy to call such a thing emotion, although whether the qualia of noticing such "emotion" from the inside would resemble the human experience of emotion is hard to know at this time.
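One rough way to picture this mode-switching idea (a hypothetical sketch; the mode names and budget numbers are illustrative, not anything specified in the comment) is an agent with a fixed cognitive budget that a cheap trigger reallocates wholesale:

```python
# Hypothetical sketch of "emotion as a mode" on finite hardware: a fixed
# cognitive budget gets reallocated by a cheap trigger, improving one
# capability at the expense of others. Mode names and numbers are illustrative.

BUDGET = 1.0  # total resources; the system cannot be good at everything at once

MODES = {
    "calm":  {"threat_response": 0.2, "planning": 0.5, "maintenance": 0.3},
    "angry": {"threat_response": 0.7, "planning": 0.2, "maintenance": 0.1},
}

class Agent:
    def __init__(self):
        self.mode = "calm"

    def trigger(self, event):
        # A fast, coarse check flips the whole system into a different mode
        # instead of re-deriving the right allocation from scratch each time.
        self.mode = "angry" if event == "value_threatened" else "calm"

    def allocation(self):
        return {task: BUDGET * w for task, w in MODES[self.mode].items()}

agent = Agent()
agent.trigger("value_threatened")
print(agent.allocation())  # more resources for threat response, fewer elsewhere
```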
Replies from: avturchin
↑ comment by avturchin · 2019-02-25T19:29:48.699Z · LW(p) · GW(p)
Agreed. I also thought about the case of "military posture" in nation states, which is both preparing for war and signaling to others that the state is ready to defend itself. However, it seems that emotions are more complex than that (I just downloaded a book called "History of Emotions" but haven't read it yet).
One guess I have is that emotions are connected with the older "reptilian" brain, which can process information very quickly and effectively but is limited to local circumstances like the immediate surroundings (which is needed for the fight-or-flight reaction). However, another part of emotions is their social framing, as we are trained to interpret raw brain signals as high-level emotions like love.
comment by Gurkenglas · 2019-02-23T18:29:25.292Z · LW(p) · GW(p)
1. My observations result from evaluations.
2. I have no reason to believe that I am any more conscious than I can observe.
2 => 3. Changing the parts of me that I cannot observe does not change whether I am conscious.
1, 3 => 4. Anything that has the same evaluations as me is as conscious as me.
Replies from: Charlie Steiner, TAG
↑ comment by Charlie Steiner · 2019-02-23T19:27:53.956Z · LW(p) · GW(p)
I'm reminded of Dennett's passage on the color red. Paraphrased:
To judge that you have seen red is to have a complicated visual discrimination process send a simple message to the rest of your brain. There is no movie theater inside the brain that receives this message and then obediently projects a red image on some inner movie screen, just so that your inner homunculus can see it and judge it all over again. Your brain only needs to make the judgment once!
Similarly, if you think you're conscious, or feel emotion, it's not like your brain notices and then passes this message "further inside," to the "real you" (the homunculus). Your brain only has to go into the "angry state" once - it doesn't have to then send an angry message to the homunculus so that you can really feel anger.
↑ comment by TAG · 2019-02-24T11:49:59.114Z · LW(p) · GW(p)
Do you mean outward, behavioural-style observation, or introspection?
Replies from: Gurkenglas
↑ comment by Gurkenglas · 2019-02-24T19:53:25.288Z · LW(p) · GW(p)
All of them.
Replies from: TAG
↑ comment by TAG · 2019-02-26T10:09:06.705Z · LW(p) · GW(p)
I still don't see how the conclusion follows.
Replies from: Gurkenglas
↑ comment by Gurkenglas · 2019-02-26T12:41:24.373Z · LW(p) · GW(p)
Let me decompose 1, 3 => 4 further, then.
3 => 5. Anything that makes the same observations as me is as conscious as me.
1 => 6. Anything that has the same evaluations as me makes the same observations as me.
5, 6 => 4.
What step is problematic?
Replies from: TAG
↑ comment by TAG · 2019-02-26T12:55:17.041Z · LW(p) · GW(p)
None of them is clear, and the overall structure isn't clear.
Can you be less conscious than you observe?
Is "changing the parts of me" supposed to be some sort of replacement-by-silicon scenario?
Is an "evaluation" supposed to be a correct evaluation?
Does "has the same evaluations as me" mean "makes the same judgments about itself as I would about myself"?
3 ⇒ 5. Anything that makes the same observations as me is as conscious as me.
As per my first question, if an observation is some kind of infallible introspection of your own consciousness, that is true. OTOH, if it is some kind of external behaviour, not necessarily. Even if you reject p-zombies, a silicon replacement is not a p-zombie and could lack consciousness.
Replies from: Gurkenglas
↑ comment by Gurkenglas · 2019-02-26T13:10:30.913Z · LW(p) · GW(p)
No, I would say I can't be less conscious than I observe.
Sure, replacement by silicon could preserve my evaluations, and therefore my observations.
Evaluations can be wrong in, say, the sense that they produce observations that fail to match reality.
Having the same evaluations implies making the same judgements.
Replies from: TAG
↑ comment by TAG · 2019-02-26T13:14:12.918Z · LW(p) · GW(p)
No, I would say I can’t be less conscious than I observe.
You didn't say that. As a premise, it begs the whole question. Or is it supposed to be a conclusion?
Sure, replacement by silicon could preserve my evaluations, and therefore my observations
A functional duplicate of yourself would give the same responses to questions, and so have the same beliefs, loosely speaking... but it is quite possible for some of those responses to have been rendered false. For instance, your duplicate would initially believe itself to be a real boy made of flesh and bone.
Replies from: Gurkenglas, Gurkenglas
↑ comment by Gurkenglas · 2019-02-26T13:49:08.412Z · LW(p) · GW(p)
Your duplicate would initially believe itself to be a real boy made of flesh and bone.
Hmm. Yes, I suppose preserving evaluations is only guaranteed to preserve the correctness of those observations which are introspection. Do we have evidence for any components of consciousness that we cannot introspect upon?
Replies from: TAG
↑ comment by TAG · 2019-02-27T10:05:25.730Z · LW(p) · GW(p)
The issue is whether introspection would be retained.
Replies from: Gurkenglas
↑ comment by Gurkenglas · 2019-02-27T13:04:57.540Z · LW(p) · GW(p)
To introspect is to observe your evaluations. Since introspection is observation, it is preserved; since it is about your evaluations, its correctness is preserved.
↑ comment by Gurkenglas · 2019-02-26T13:22:51.405Z · LW(p) · GW(p)
2 was supposed to point out how there shouldn't be any components to consciousness that we can miss by preserving all the observations, since if there are any such epiphenomenal components of consciousness, there by definition isn't any evidence of them.
That the reasoning also goes the other way is irrelevant.
Replies from: TAG
↑ comment by TAG · 2019-02-26T13:51:48.046Z · LW(p) · GW(p)
That depends on what you mean by observations and evidence. If we preserved the subjective, introspective access to consciousness, then consciousness would be preserved... logically. But we can not do that practically. Practically, we can preserve externally observable functions and behaviour, but we can't be sure that doing that preserves consciousness.
comment by paul (paul-1) · 2019-02-23T18:36:57.948Z · LW(p) · GW(p)
I look at this from a functional point of view. If I were designing an AGI, what role would emotions play in its design? In other words, my concern is to design in emotions, not wait for them to emerge from my AGI. This implies that my AGI needs emotions in order to function more competently. I am NOT designing in emotions in order to better simulate a human, though that might be a design goal for some AGI projects.
So what are emotions and why would an AGI need them? In humans and other animals, emotions are a global mechanism for changing the creature's behavior for some high-priority task. Fear, for example, readies a human for a fight-or-flight response, sacrificing some things (energy usage) for others (speed of response, focused attention). Such things may be needed in an AGI I'm designing. A battlefield AGI or robot, for example, might need an analogous fear emotion to respond to a perceived threat (or when instructed to do so by a controlling human).
Obviously, the change brought about by "fear" in my AGI will be different from the 6 qualities you describe here. For example, my battlefield robot would temporarily suspend any ongoing maintenance activities. This is analogous to a change in its attention. It might rev up its engines in preparation for a fight or flight response. Depending on the nature of the threat, it might change the configuration of its sensors. For example, it may turn on a high-resolution radar that is normally off to save energy. Generally, as in humans, emotion is a widespread reallocation of the AGI's resources for a particular perceived purpose.
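A minimal sketch of that design (hypothetical; the subsystem names and values are illustrative, not from any actual project), in which a single "fear" trigger reconfigures several subsystems at once:

```python
# Hypothetical sketch of a designed-in "fear" response as global resource
# reallocation, following the battlefield-robot example above.
# Subsystem names and values are illustrative, not from any real system.

class BattlefieldRobot:
    def __init__(self):
        self.maintenance_active = True
        self.high_res_radar_on = False  # normally off to save energy
        self.engine_power = 0.3         # idling

    def on_threat_detected(self):
        """'Fear': one trigger reconfigures many subsystems at once."""
        self.maintenance_active = False  # suspend ongoing maintenance
        self.high_res_radar_on = True    # change sensor configuration
        self.engine_power = 1.0          # ready a fight-or-flight response

    def on_all_clear(self):
        """Return to the default allocation once the threat has passed."""
        self.maintenance_active = True
        self.high_res_radar_on = False
        self.engine_power = 0.3

robot = BattlefieldRobot()
robot.on_threat_detected()
print(vars(robot))  # the whole configuration has shifted toward the threat
```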
Replies from: JohnBuridan
↑ comment by SebastianG (JohnBuridan) · 2019-04-07T03:12:51.489Z · LW(p) · GW(p)
If you assume that emotions are a type of evaluation that causes fast task switching, then it makes sense to say your battlefield AI has a fear emotion. But if emotion is NOT a type of computational task, then it is ONLY by analogy that your battlefield AI has "fear."
This matters because if emotions like fear are not identifiable with a specific subjective experience, then the brain state of fear is not equivalent to the feeling of fear, which seems bizarre to say (cf. Kripke, "Naming and Necessity," p. 126).