Using GPT-3 for preventing conflict during messaging — a pitch for an app
post by Eli_ · 2022-03-17T11:02:00.213Z · 17 comments
I'm going to pitch an idea for an app, which I would love to see happen but don't have time to build myself. My hope is that other people will steal and improve this idea, because I believe it has the potential to help a lot of people, including myself.
The problem
I love non-violent communication, but in practice I tend to forget it when it is most needed. For those of you who aren't familiar with non-violent communication (NVC), it is a method for translating what you want to say into a form that is actually helpful to the situation. You can translate your own words, but non-violent communication also extends to listening in a non-violent way. Both help de-escalate a situation and improve overall communication.
An example might be your daughter saying: "I hate you! You suck as a father".
And hearing it in a non-violent way would be: "I feel frustrated when you cancel our plans. I need quality time with you. Can we please spend some time together tomorrow?"
The underlying idea is that we leave out the judgements and replace them with observations (you cancelled our plans), feelings (I feel frustrated), needs (I need some quality time with my father[1]), and a clear request (can we spend some time together tomorrow?).
Easy-peasy in theory, but very hard in practice.
The app
How great would it be if, whenever you receive a message on your phone that might cause conflict, your phone would offer a translation into non-violent communication?
So you might receive the following message: "I don't want you to come home! You cheated on me you stupid asshole!"
And it would be translated to: "I feel betrayed that you would cheat on me. I need some time to process this. I would appreciate it if you would not come home until I have had a chance to sort through my feelings."
Or when you want to send something that would cause conflict: "Michael, stop arriving late to work! Do you want me to fire you?"
To something more palatable: "Michael, when you arrive late to work, I feel frustrated because I need punctuality from my employees. Can you please try to arrive on time from now on?"
Now, we can't expect the app to translate everything perfectly. But a reminder, with a rough translation, could do a world of good. My own relationships, for instance, have suffered from texts I sent that non-violent communication could probably have prevented.
How to implement the app
Now, this might seem like an incredibly difficult project to implement, but in fact it is not as hard as it looks. All the examples I have given so far have been generated by GPT-3 with the following prompt:
"[input phrase here]"
Can be rephrased using non-violent communication into an observation (without any judgement), feeling, need, and a request:
"
You can definitely try it out yourself on beta.openai.com; I have been using the text-davinci engine.
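For the curious, here is a minimal sketch of what the translation call could look like in Python, using the OpenAI completions API as it existed at the time of writing. Only the prompt comes from this post; the engine version, temperature, token limit, and stop sequence are my own guesses and would need tuning.

```python
import openai  # pip install openai (pre-1.0 interface, as used in 2022)

openai.api_key = "YOUR_API_KEY"  # placeholder

def nvc_translate(message: str) -> str:
    # The prompt below is exactly the one from the post; everything else
    # (engine version, temperature, token limit, stop sequence) is assumed.
    prompt = (
        f'"{message}"\n'
        "Can be rephrased using non-violent communication into an "
        "observation (without any judgement), feeling, need, and a request:\n"
        '"'
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # the post only says "text-davinci"
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop='"',  # stop at the closing quote that the prompt opens
    )
    return response["choices"][0]["text"].strip()

print(nvc_translate("Michael, stop arriving late to work! Do you want me to fire you?"))
```

The stop sequence mirrors the prompt: it opens a quotation and lets GPT-3 close it, so the completion ends cleanly at the rewritten message.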
Some more examples
All the examples have been generated with one or two prompts:
Input: "I'm trying to respect your opinion, but you're simply wrong. I can't help that you believe that covid vaccines don't work. They do! Read the science bitch!"
Output: "I've noticed that you don't believe that covid vaccines are effective. I'm wondering if you're feeling afraid or uncertain about them? I need some reassurance that you're open to considering the science around the vaccines. Would you be willing to read some of the research with me?"
Input: "If you're not a vegan then you basically let all kinds of animals suffer for your own pleasure, how doesn't that make you a monster?"
Output: "When I see you eating meat, I feel sad because it reminds me of all the animals that suffer for our pleasure. I need to feel like I'm doing everything I can to reduce that suffering, and I would appreciate it if you would join me in being vegan."
Input: "You promised to redesign the website as we discussed, but you didn't. Now, I'm going to quit because I think you are a lazy piece of shit."
Output: "I observed that you didn't redesign the website as we discussed, and I feel frustrated because I need clarity and cooperation in order to do my job well. Can we please talk about what happened and how we can move forward?"
Input: "I hate you for invading my country, please go fuck yourself."
Output: "I see that you have invaded my country. I feel angry and betrayed. I need safety and security for my people. Please leave my country."
Input: "Refugees are stealing our jobs, they should go back to their own country."
Output: "I've noticed that there seems to be a lot of competition for jobs lately. I'm feeling worried that I might not be able to find a job. I need to feel confident that I can provide for myself and my family. Can we talk about ways to improve the job market?"
Final words
Is GPT-3 doing it perfectly? No, definitely not. Most examples I have given so far aren't perfect non-violent communication, but I left them in because I wanted everything to be translated by GPT-3. For instance, "I feel betrayed that you would cheat on me." would not be considered a feeling in the NVC framework. You don't feel betrayed; you think you are betrayed by the other person, and this makes you feel sad, vulnerable and angry.
But the potential is clearly there. It could help us open up the conversation towards other people’s feelings and needs. Even when the guess is wrong, it will remind you that there is another person on the other side with a complex inner life that you probably haven't taken into account. At the same time, it will also make you look at your own feelings and needs—what is alive in me at the moment?
Now, you might not like the NVC framework, but that is a whole different discussion. GPT-3 should also be adaptable to other frameworks that mitigate conflict.
[1] It is debatable whether this is actually a need according to the NVC framework, but the example is there because it was generated by GPT-3.
17 comments
comment by RamblinDash · 2022-03-17T17:39:48.125Z
I actually think this idea is not good, and would oppose its widespread adoption. For level-setting, I also want to say that I upvoted the post as I think it's an idea worth thinking about.
I have noticed in the last few years that many services have been rolling out more and more features like this already -- not so much "anger translators" but other sorts of suggested replies, etc. My problem with these is that they feel very intrusive: computers intruding, sometimes, into our most intimate lives. For example, GMail's suggested replies and autocompletes show up on all messages if not disabled. I would be fine with this for pro forma interactions, such as with customer service reps or whatever, but they also show up when I'm conversing with family and close friends.
The problem is that it feels impossible to just completely ignore the suggestions - either I want to accept one, or else I feel myself intentionally changing my intended reply to avoid one. Either way, the system is changing how I would communicate with my loved ones in a way that I really dislike. It reminds me of one of the most memorable lines from The Matrix (from memory, not verbatim): "1999, the peak of your civilization. Of course, I say your civilization, because as soon as we [AIs] started thinking for you, it really became our civilization." To the extent that AI can automate more and more work, that seems great to me, but automating interactions with our friends and family sounds like dystopia.
reply by Eli_ · 2022-03-18T10:54:00.000Z
I agree that if your device automatically translated everything you send/receive without you having any control over it, it would be incredibly dystopian, but that is not what I'm suggesting.
This would be a service that people actively choose and pay for; GPT-3 is not free. If you don't like it, don't use it.
I agree that once you're using the service it would be very difficult to ignore the suggestions, but again, that is the whole point. The goal of the app is to intervene in a high-conflict situation and point you in the direction of the other person's feelings and needs, or remind you of your own.
Now, it might be true that at this point GPT-3 is not very good at reminding you of the other person's feelings and needs -- I can give you that. And it might have been a mistake on my part to only use examples generated by GPT-3.
But whenever I speak to someone who is angry or upset, I wish that I could hear their feelings and needs. I believe that people, just like me, find it difficult to express or hear those when shit hits the fan.
reply by RamblinDash · 2022-03-18T13:01:24.237Z
Yes, but if you apply this concept, you still won't be hearing their feelings and needs. You will hear some function F(feelings, original message) = output. Likely one of the following:
- (A) GPT on sender's side, guesses 90% right: You might hear that 90% and lose the 10% that GPT did not guess
- (B) GPT on sender's side, guesses 80% right: You might hear mostly the 20% because the sender, consciously or unconsciously, alters their original message after seeing GPT's mostly-right-but-still-wrong guess to emphasize the distance between their true feelings and the guess
- (C) GPT on sender's side, guesses 80% right: Sender might tweak the output so that it's close to GPT's output but not identical -- maybe this 100% captures their feelings and needs, or maybe 10% is still lost trying to wedge their amorphous vibe into GPT's format
- (D) GPT on recipient's side: You see the original message and now GPT is totally guessing at what the other person might be feeling - if it's wrong, you will have no human feedback in the moment to correct it, so its output is at best useless and at worst will lead you to form overconfidently wrong beliefs about the sender's emotional state.
- (E) GPT nowhere, but app is popular in your community: You see an NVCish message from someone and now have to guess at whether their message is an original message, or the output of F(feelings, original message)
Note also that a recipient definitely has no way to know if they are in world (A), world (B), or world (C), and depending on your relationship might also be in world (E).
reply by Eli_ · 2022-03-21T07:35:49.499Z
Whenever I receive a message containing high-conflict content and I have enough peace of mind, I tend to translate it in my head into feelings and needs. So in this case there is also a function F(feelings, original message) = output involved, but it is running on biological hardware.
When I do this myself I'm not 100% right. In fact, I'm probably not even 50% right, and this really doesn't matter. The translation doesn't have to be perfect. The moment I realize that this person might be sad because a certain need is not being fulfilled, I can start asking questions.
It takes some effort to learn what is alive in the other person and it often doesn't matter if your guess is wrong. People deeply appreciate when you're curious about their inner lives.
On the sender's side it can do exactly the same thing. GPT-3 guesses what is alive in me, opening up the opportunity to find out myself. You're worried about 10% being lost in translation. In my personal experience, most people (including myself) are completely unaware of their feelings and needs when they're upset. They get so absorbed by their situation that it causes a lot of unnecessary suffering.
reply by RamblinDash · 2022-03-21T16:58:47.125Z
How much of this value could be equally accomplished by a pop-up that, rather than trying to "translate" the message, reminds the user that the other person has feelings and needs that they may be having trouble expressing, and that the user should consider that, without any attempt by the AI to state its best guess of what those feelings and needs are? Because I think that alternative would address substantially all of my objections while possibly preserving nearly all of the value.
reply by Eli_ · 2022-03-22T13:05:20.160Z
I think that is a fair point, I honestly don't know.
Intuitively, the translation seems more likely to help me become less reactive. I can think of two reasons:
- It would unload some of the cognitive effort when it is most difficult, making it easier to switch the focus to feelings and needs.
- It would make the message different every time, attuned to the situation. I would worry that the pop-up would be automatically clicked away after a while, because it is always the same.
But having said that, it is a fair point and I would definitely be open to any solution that would achieve the same result.
comment by Maxwell Peterson (maxwell-peterson) · 2022-03-17T21:40:02.429Z
I think this is a clever and interesting application of GPT. I’ve downvoted the post. Blunting feeling by filtering it through a robot is… nasty. I’m most opposed to the idea that this should be implemented on the receiver’s side. A machine to shield cheating partners from having to read a “fuck you” is a negative; I don’t want it in the world.
An implementation that only allows optional translations on the sender’s side is better, but still not something I like; RamblinDash’s comment covers the issues there well.
reply by Eli_ · 2022-03-21T07:46:43.716Z
Can you explain why you think this is blunting feelings? My intention would be for it to do the exact opposite.
If we look at the example you give, shielding a partner from having to read a "fuck you": in that case you would read the message containing "fuck you" first, and GPT-3 might suggest that this person is feeling sad, vulnerable and angry. From my perspective this is exactly the opposite of "blunting feelings". I would be pointed towards the inner life of the other person.
reply by Maxwell Peterson (maxwell-peterson) · 2022-03-22T14:44:18.814Z
Hmm. Maybe blunting isn’t the right word. I can’t really describe the feeling I want to point to, but it’s sort of like “tech elites translating the language of the masses into something the elites find more palatable” - if that makes sense.
comment by Razied · 2022-03-17T15:11:53.625Z
The second-order effect of widespread adoption, of course, is that humans learn to distinguish GPT translations from actual communication, and learn to infer "ah, I see he was really mad, he had to run his actual message through GPT". Actual angry messages would then convey even more anger, since the person had the option of passing them through GPT, and chose not to do it.
As an aside, the NVC versions are still weirdly self-centered; they're pure statements of personal preference with no attempt to understand the other person's point of view. And in a weird way, "I feel angry and betrayed" packs more punch than "fuck off!": the latter is emotional and said in the moment, and you know the person who said it will probably regret it, but the former feels like it was carefully examined.
Overall I think it's an interesting idea; much worse-sounding apps have been very successful.
comment by Kaj_Sotala · 2022-03-22T13:00:10.567Z
I agree with the other commenters that having GPT-3 actively meddle with actual communications would feel a little off. However, I'm excited by the potential of using this kind of a prompt to help people practice NVC by feeding it sentences and having it rewrite them - even if it doesn't get everything perfectly right, this could be a cool "get practice using NVC" app. (Then it could also be used for rewording your actual messages in an NVC way, but if it was a separate app that you yourself had to copy-paste things to/from, it wouldn't have the intrusiveness aspect people were concerned about - it'd only do it if you made the active choice of using it that way.)
reply by Eli_ · 2022-03-22T14:09:35.774Z
I wanted to make this comment for a while now, but I was worried that it wasn't worth posting because it assumes some familiarity with multi-agent systems (and it might sound completely nuts to many people). But, since your writing on this topic has influenced me more than anyone else, I'll give it a go. Curious what you think.
I agree with the other commenters that having GPT-3 actively meddle with actual communications would feel a little off.
Although intuitively it might feel off -- for me it does as well -- in practice GPT-3 would be just another agent interacting with all the other agents through the global neural workspace (mediated by vision).
Everyone is very worried about transferring their thinking and talking to a piece of software, but to me this seems to come from a false sense of agency. It is true that I have no agency over GPT-3, but the same goes for my thoughts. All my thoughts simply appear in my mind, and I have absolutely no say in choosing them. From my perspective there is no difference, other than automatically identifying with thoughts as "me".
There might be a big difference in performance, though. I can systematically test and tune GPT-3. If I don't like how it performs, then I shouldn't let it influence my colony of agents. But if I do like it, then it is the first time that I have added another agent to my mind that has the qualities I have chosen.
It is very easy to read about non-violent communication (or other topics like biases), and think to yourself, that is how I want to write and act, but in practice it is hard to change. By introducing GPT-3 as another agent, that is tuned for this exact purpose, I might be able to make this change orders of magnitude easier.
reply by Kaj_Sotala · 2022-03-22T17:06:13.437Z
I find it hard to disagree with this when you frame it this way. :-)
Yeah I don't think there's anything intrinsically wrong with treating GPT-3 as yet another subagent, just one that you interface with somewhat differently. (David Chalmers has an argument about how, if you write your thoughts down to a notepad, the notepad is essentially a part of you as an external source of memory that's in principle comparable to your internal memory buffers.)
I think the sense of "something is off" comes more from things like people feeling that someone external is reading their communications and them not feeling able to trust whoever made the app, as well as other more specific considerations that were brought up in the other comments.
comment by ambigram · 2022-03-18T17:05:43.117Z
This sounds like an interesting idea; it would be nice to have a tool that reminds us to be our better selves. However, it seems like it'd be quite problematic to let GPT mediate our communications. Besides the issues discussed in the other comments, I also think letting GPT handle the translation encourages a more superficial form of NVC. I think the main reason NVC works is not that the framework itself lets people communicate using non-violent language. Rather, it works because using the framework encourages people to be more emotionally aware and empathetic.
As a sender, NVC helps me because:
- It makes me more aware of the inferences I am making (which may be false) based on my observations (which may not be complete). It reminds me that what I perceive is only my interpretation, which may not be true!
- It helps me pay more attention to what I'm feeling and what I'm lacking, so I can focus more on problem solving (i.e. finding ways to meet my needs), rather than emotional arguments.
- Note that sometimes, cooperative problem solving is not our goal, in which case we may not want to use NVC.
As a listener, NVC helps me because:
- It reminds me that people usually have a valid reason why they are upset, which helps me be more empathetic. For example, I used to think people were angry at me because I didn't do what they said, but later realised it was because they genuinely found my actions upsetting (e.g. they may find messiness distracting, whereas I don't notice it).
- Note that this is not always true, such as when someone is saying something to manipulate you.
My concerns with using GPT to handle the NVC translation are as follows:
- I suspect an NVC translator would encourage people to "outsource" this thinking to GPT. This would lead to people following the framework without genuinely going through the process of thinking through their (or their partner's) emotions and needs. This means people don't really get to practise the NVC skills, and so don't truly benefit from NVC.
- Knowing that others may be using a translator may also make the conversation feel less genuine because it becomes easy to fake it (e.g. maybe they are actually really mad at me and can't be bothered to make the effort to go through the NVC thought process, and are just using the translator because they believe it will get a better response).
- When it's presented as a translation, it gives the impression that the translation is the real answer, rather than just one of many possible answers.
(Also, I think some of the examples reflect a different problem that won't be solved by NVC translation. For example, I think a better approach for the covid vaccine example would be asking the other person why they believe what they believe, rather than asking them to read the research with you.)
The idea of a tool to help open up the conversation is interesting though, so here are two possible alternatives I can think of (which may not be feasible):
- Instead of translating your words, the app detects messages with strong negative emotions and prompts you to reconsider when you send them, similar to how Outlook reminds you if you mention "attachment" in your email but don't attach any files (a rough sketch of this follows the list). This should be something that is enabled by the user, so it's like a reminder from our calmer selves to our more impulsive selves to think things through before sending.
- Instead of providing a single translation, the app suggests possible feelings/needs based on other people's experiences, while making it clear that yours may be different. For example, "Sometimes people say X because they are feeling Y1 and want Z1, or because they are feeling Y2 and want Z2. Do you feel like this applies to you? Or maybe you're feeling something else?"
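As a very rough sketch of the first alternative above: the sentiment model (NLTK's VADER) and the threshold here are arbitrary choices of mine, picked only to make the idea concrete.

```python
# Rough sketch: flag strongly negative outgoing messages and ask the sender
# to reconsider, without translating anything. VADER and the -0.5 threshold
# are illustrative assumptions, not part of the original suggestion.
from nltk.sentiment import SentimentIntensityAnalyzer  # pip install nltk; then nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

def confirm_before_sending(message: str) -> bool:
    """Return True if the message should be sent."""
    score = analyzer.polarity_scores(message)["compound"]  # -1 (negative) to +1 (positive)
    if score < -0.5:
        answer = input("This message reads as strongly negative. Send anyway? [y/N] ")
        return answer.strip().lower() == "y"
    return True

if confirm_before_sending("You promised to redesign the website, you lazy piece of shit!"):
    print("...message sent")
```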
comment by Viliam · 2022-03-18T19:27:48.315Z
Here is a business plan:
Make a free messaging application that will make your messages sound friendlier and smarter. (Also, grammatically correct. Essentially, advertise it as "Grammarly for text messages".) Keep it free until you get the majority of users. Then allow them to pay a little money to sound even friendlier and smarter. Then gradually make the non-paid version sound normal, or even slightly hostile and stupid.
Actually, if you are Google, make this a standard Gmail functionality. You already have your users sufficiently locked in. ;)
comment by David Ring (david-ring) · 2024-05-19T00:31:10.777Z
I love this and would love to collaborate. I've been thinking of something more like a training system, social connection, and inner exploration app. I do think one issue with blindly translating into NVC is that we don't develop the ability to process our own feelings that come up.
comment by milo · 2022-03-18T10:33:47.378Z
This is definitely a fantastic idea. However, the usage will be more attuned to institutional settings than to informal communication between friends.
If I am talking to a friend or a colleague, this would really make the communication ineffective and might lose camaraderie.