Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment

post by Marc Carauleanu (Marc-Everin Carauleanu) · 2023-04-03T19:59:00.057Z · LW · GW · 6 comments

This is a link post for https://clipchamp.com/watch/6c0kTETRqBc

This is a talk I gave at the recent AI Safety Europe Retreat (AISER) about my research on drawing insights from the cognitive science of empathy and applying them to RL agents and LLMs.

Talk link: https://clipchamp.com/watch/6c0kTETRqBc

Slides link: https://bit.ly/3ZFmjN8

Talk description: I begin with a short review of the cognitive science of empathy as a Perception-Action Mechanism (PAM), which relies on self-other overlap at the neuronal level. I then present the theory of change of this research direction, arguing that inducing self-other overlap as empathy is model-agnostic and that it has the potential to avert AI x-risk and to be sub-agent stable in the limit. Next, I present experimental evidence of the emergence of PAM in RL agents and a way of inducing PAM in them. I end the talk by discussing how this paradigm could be extended to LLMs.

Acknowledgements: I am thankful to Dr. Bogdan Ionut-Cirstea for inspiring me to look into this neglected research direction, to the Long-Term Future Fund for funding the initial deep dive into the literature, and to the Center for AI Safety for funding half of this research as part of their Student Researcher programme. Last but not least, I want to thank Dr. Matthias Rolf for supervising me and providing good structure and guidance. The review of the cognitive science of empathy is adapted from a talk given by Christian Keysers from the Netherlands Institute for Neuroscience.

6 comments


comment by Steven Byrnes (steve2152) · 2023-04-04T13:41:38.206Z · LW(p) · GW(p)

Nice talk! A few comments:

  • I think the premotor cortex data shouldn’t be taken at face value (see my Quick notes on “mirror neurons” [LW · GW])
  • I wonder what you make of the issue I bring up here [LW(p) · GW(p)], i.e. that “self-other overlap” is a “mistake” from the perspective of the agent’s loss function(s) and therefore can & will go away upon sufficient training.
  • I’m very in favor of studying this topic, but I think your theory-of-change is too simplistic. Specifically, if you can get “self-other overlap” to work reliably, then I understand the story you’re telling yourself is “The AI suffers when it sees a human suffering, and the AI feels happy when it sees a happy human, therefore the AI will increase human happiness and decrease suffering. Ergo, good future.” That’s a nice story, but I can also tell equally-plausible apocalypse stories like “The AI suffers when it sees a human suffering, therefore it painlessly ends the lives of all humans” or “…therefore the AI learns to manipulate its top-down attention in such a way as to avoid activating that reaction” (cf. “compassion fatigue” & this theory of autism [LW · GW]) or “The AI learns to manipulate its top-down attention in such a way as to “empathize” with imaginary friends who are very happy, or cartoon characters etc., and finds this so much more satisfying than interacting with humans that it painlessly ends humanity to pursue that” (cf. teddy bears, movies, etc.). My own justification for why we should study this topic is here [LW · GW], and is less direct than yours.
Replies from: Marc-Everin Carauleanu
comment by Marc Carauleanu (Marc-Everin Carauleanu) · 2023-04-04T18:32:38.213Z · LW(p) · GW(p)

Thanks for watching the talk and for the insightful comments! A couple of thoughts:

  • I agree that mirror neurons are problematic both theoretically and empirically, so I avoided framing that data in terms of mirror neurons. I interpret the premotor cortex data and most other self-other overlap data under definition (A) described in your post [LW · GW].
  • Regarding the second point, I think the issue you brought up correctly identifies an incentive to update away from "self-other overlap", but this doesn't seem to fully play out in humans, and I expect it not to fully play out in AI agents either, due to stronger competing incentives that favour self-other overlap. One possible explanation is the attitude to risk incorporated into the subjective reward value. In this paper, in the section "Risky rewards, subjective value, and formal economic utility", it is mentioned that monkeys show nonlinear utility functions compatible with risk-seeking at small juice amounts and risk avoidance at larger amounts. It is possible that, if the reward predictor predicts "I will be eating chocolate", then because the amount of reward expected is fairly low, humans are risk-seeking in that regime and our subjective reward value is high, making us want to take that bet; and because the reward prediction error might be fairly small, it could potentially be overridden by our subjective reward value as influenced by our attitude to risk. This might explain how empathic responses from self-other overlap are kept in low-stakes scenarios, while there are stronger incentives to update away from self-other overlap in higher-stakes scenarios. Humans seem to have learned to modulate their empathy, which might prevent some reward-prediction error. It is also possible that, due to the evolutionary benefits of empathy, we have evolved to be more risk-seeking when it comes to empathic concern, increasing the subjective reward value of the empathic response from self-other overlap in higher-stakes scenarios, but I am uncertain. I am curious what you think about this hypothesis (I sketch the kind of nonlinear utility I have in mind at the end of this comment).
  • I agree that the theory of change I presented is simplistic, and I should have stated the key uncertainties of this proposal more explicitly, although I did mention throughout the talk that I do not think inducing self-other overlap is enough and that we still have to ensure that the set of incentives that shape the agent's behaviour favours good outcomes. What I was trying to communicate in the theory of change section is that self-other overlap sets incentives that favour AI not killing us (self-preservation as an instrumentally convergent goal given which self-other overlap provides an incentive for other-preservation) and sub-agent stability of self-other overlap (due to the AI expecting its self/other-preservation preferences to be frustrated if the agents that it creates, including improved versions of itself, don't have self/other-preservation preferences) but I failed to put enough emphasis on the fact that this will only happen if we find ways to ensure that these incentives dominate and are not overridden by competing incentives and mechanisms. I think that the competing incentives problem is tractable, which is one of the main reasons I believe this research direction is promising.
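
To make the risk-attitude hypothesis in the second bullet slightly more concrete, here is a rough numerical sketch. The S-shaped utility function and all the numbers below are placeholders I chose for illustration, not values from the paper; the only feature that matters is convexity at small reward amounts (risk-seeking) and concavity at large ones (risk-averse).

```python
import math

# Placeholder S-shaped subjective utility: convex (risk-seeking) below the
# inflection point, concave (risk-averse) above it. The functional form and
# the inflection value are illustrative assumptions, not taken from the paper.
def subjective_utility(reward, inflection=5.0):
    return 10.0 / (1.0 + math.exp(-(reward - inflection)))

def gamble_vs_sure_thing(low, high):
    """Compare a 50/50 gamble between `low` and `high` against a sure reward
    with the same expected value."""
    gamble = 0.5 * subjective_utility(low) + 0.5 * subjective_utility(high)
    sure = subjective_utility((low + high) / 2)
    return "gamble" if gamble > sure else "sure thing"

print(gamble_vs_sure_thing(0, 2))    # small rewards  -> "gamble"     (risk-seeking)
print(gamble_vs_sure_thing(8, 10))   # large rewards  -> "sure thing" (risk-averse)
```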
Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2023-04-04T19:35:08.550Z · LW(p) · GW(p)

I’m confused by your second bullet point. Let’s say you really like chocolate chip cookies but you’re strictly neutral on peanut butter chocolate chip cookies. And they look and smell the same until you bite in (maybe you have a bad sense of smell).

Now you see a plate on a buffet with a little sign next to it that says “Peanut butter chocolate chip cookies”. You ask your trusted friend whether the sign is correct, and they say “Yeah, I just ate one, it was very peanut buttery, yum.” Next to that plate is a plate of brownies, which you like a little bit, but much less than you like chocolate chip cookies.

Your model seems to be: “I won’t be completely sure that the sign isn’t wrong and my friend isn’t lying. So being risk-seeking, I’m going to eat the cookie, just in case.”

I don’t think that model is realistic though. Obviously you’re going to believe the sign & believe your trusted friend. The odds that the cookies aren’t peanut butter are essentially zero. You know this very well. So you’ll go for the brownie instead.

Now we switch to the empathy case. Again, I think it’s perfectly obvious to anyone that a plate labeled & described as “peanut butter chocolate chip cookies” is in fact full of peanut butter chocolate chip cookies, and people will act accordingly. Well, it’s even more obvious by far that “my friend eating a chocolate chip cookie” is not in fact “me eating a chocolate chip cookie”! So, if I don’t feel an urge to eat the cookie that I know damn well has peanut butter in it, I likewise won’t feel an urge to take actions that I know damn well will lead to someone else eating a yummy cookie instead of me.

So anyway, you seem to be assuming that the human brain has no special mechanisms to prevent the unlearning of self-other overlap. I would propose instead that the human brain does have such special mechanisms, and that we better go figure out what those mechanisms are. :)

self-other overlap sets incentives that favour AI not killing us (self-preservation as an instrumentally convergent goal given which self-other overlap provides an incentive for other-preservation) and sub-agent stability of self-other overlap (due to the AI expecting its self/other-preservation preferences to be frustrated if the agents that it creates, including improved versions of itself, don't have self/other-preservation preferences) but I failed to put enough emphasis on the fact that this will only happen if we find ways to ensure that these incentives dominate and are not overridden by competing incentives and mechanisms

I’m a bit confused by this. My “apocalypse stories” from the grandparent comment did not assume any competing incentives and mechanisms, right? They were all bad actions that I claim also flowed naturally from self-other-overlap-derived incentives.

Replies from: Marc-Everin Carauleanu
comment by Marc Carauleanu (Marc-Everin Carauleanu) · 2023-04-04T21:08:39.431Z · LW(p) · GW(p)

I am slightly confused by your hypothetical. The hypothesis is rather that, when the predicted reward from seeing my friend eating a cookie due to self-other overlap is lower than the obtained reward of me not eating a cookie, the self-other overlap might not be updated against, because the increase in subjective reward from taking the risk to get the obtained reward is higher than the prediction error in this low-stakes scenario. I am fairly uncertain that this is what actually happens, but I put it forward as a potential hypothesis.

"So anyway, you seem to be assuming that the human brain has no special mechanisms to prevent the unlearning of self-other overlap. I would propose instead that the human brain does have such special mechanisms, and that we better go figure out what those mechanisms are. :)"

My intuition is that the brain does have special mechanisms to prevent the unlearning of self-other overlap, so I agree with you that we should be looking into the literature to understand them; one would expect mechanisms like that to evolve given the incentives to unlearn self-other overlap and the evolutionary benefits of self-other overlap. One such mechanism could be the brain being more risk-tolerant when it comes to empathic responses and not updating against self-other overlap when the predicted reward is lower than the obtained reward, but I don't have a model of how exactly this would be implemented.

"I’m a bit confused by this. My “apocalypse stories” from the grandparent comment did not assume any competing incentives and mechanisms, right? They were all bad actions that I claim also flowed naturally from self-other-overlap-derived incentives."

What I meant by "competing incentives" is any incentives that compete with the good incentives described by me (other-preservation and sub-agent stability), which could include bad incentives that might also flow naturally from self-other overlap. 

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2023-04-05T14:14:25.637Z · LW(p) · GW(p)

when the predicted reward from seeing my friend eating a cookie due to self-other overlap is lower than the obtained reward of me not eating a cookie, the self-other overlap might not be updated against, because the increase in subjective reward from taking the risk to get the obtained reward is higher than the prediction error in this low-stakes scenario. I am fairly uncertain that this is what actually happens, but I put it forward as a potential hypothesis.

I think you’re confused, or else I don’t follow. Can we walk through it? Assume traditional ML-style actor-critic RL with TD learning, if that’s OK. There’s a value function V and a reward function R. Let’s assume we start with:

  • V(I’m about to eat a cookie) = 10
  • V(Joe is about to eat a cookie) = 2 [it’s nonzero “by default” because of self-other overlap]
  • R(I’m eating a cookie) = 10
  • R(Joe is eating a cookie) = 0

So when I see that Joe is about to eat a cookie, this pleases me (V>0). But then Joe eats the cookie, and the reward is zero, so TD learning kicks in and reduces V(Joe is about to eat a cookie) for next time. Repeat a few more times and V(Joe is about to eat a cookie) approaches zero, right? So eventually, when I see that Joe is about to eat a cookie, I don’t care.
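
For concreteness, here's a toy tabular version of that update loop (the learning rate and the number of repetitions are arbitrary, and I'm ignoring discounting and bootstrapping since the cookie-eating state is effectively terminal):

```python
# Toy tabular TD(0) illustration of the update described above. The learning
# rate and number of repetitions are arbitrary choices for illustration.

V = {
    "I'm about to eat a cookie": 10.0,
    "Joe is about to eat a cookie": 2.0,   # nonzero "by default" via self-other overlap
}
R = {
    "I'm eating a cookie": 10.0,
    "Joe is eating a cookie": 0.0,
}

alpha = 0.3  # learning rate

# Repeatedly watch Joe eat a cookie. Each time, the TD error pulls
# V("Joe is about to eat a cookie") toward the obtained reward, which is zero.
for episode in range(10):
    td_error = R["Joe is eating a cookie"] - V["Joe is about to eat a cookie"]
    V["Joe is about to eat a cookie"] += alpha * td_error
    print(f"after episode {episode + 1}: V = {V['Joe is about to eat a cookie']:.3f}")

# V("Joe is about to eat a cookie") decays toward 0: eventually, seeing that
# Joe is about to eat a cookie no longer registers as valuable.
```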

How does your story differ from that? Can you walk through the mechanism in terms of V and R?

What I meant by "competing incentives" is…

This is just terminology, but let’s say the water company tries to get people to reduce their water usage by giving them a gift card when their water usage is lower in Month N+1 than in Month N. Two of the possible behaviors that might result are (A) the intended behavior where people use less water each month, (B) an unintended behavior where people waste tons and tons of water in odd months to ensure that it definitely will go down in even months.

I would describe this as ONE INCENTIVE that incentivizes both of these two behaviors (and many other possible behaviors as well). Whereas you would describe it as “an incentive to do (A) and a competing incentive to do (B)”, apparently. (Right?) I’m not an economist, but when I google-searched for “competing incentives” just now, none of the results were using the term in the way that you’re using it here. Typically people used the phrase “competing incentives” to talk about two different incentive programs provided by two different organizations working at cross-purposes, or something like that.

comment by Charlie Steiner · 2023-04-05T02:49:50.271Z · LW(p) · GW(p)

Interesting talk, though I skimmed a fair bit of it once I felt like the experiments weren't telling me what I wanted. I think the crucial thing to try to do here, to show me that you're seeing something interesting, is interventions. Can you increase or decrease some measure of empathy without changing orthogonal metrics very much?

The second thing to do is to not use a toy NN model on a gridworld. The early results are suggestive, but I'm really wary of generalizing suggestive results from 3x3 gridworlds to the much-more-than-3x3 real world. So I'm looking forward to future scaled-up work.