How should one feel morally about using chatbots?

post by Adam Zerner (adamzerner) · 2023-05-11T01:01:39.211Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    9 Thomas Kwa
    7 Alexei
    7 Brendan Long

Personally, I feel some confusion.

On the one hand, one's personal usage of a chatbot probably wouldn't be more than a drop in the bucket in terms of producing bad consequences, such as increasing the demand for chatbots and marginally pushing AI capabilities forward.

But on the other hand, I at least feel a strong sense of disdain for them. To me it feels like the beginning of the end: a marked step toward what we've all been anticipating for so many years.

This obviously isn't a perfect analogy, but imagine if someone killed your family, got away with it, and now operates the best restaurant in town. Would you eat at their restaurant? Probably not. I think most people would feel a strong sense of disdain and, even though eating there wouldn't lead to any bad consequences, still wouldn't dream of doing so. Similarly, AI advancements are probably going to lead to doom [LW · GW], and so even though my usage wouldn't really move the needle at all, I still feel a lot of aversion.

I don't want to conflate the question of managing one's emotions with the question of how one "should" feel. Here, I am asking about the latter.

Answers

answer by Thomas Kwa · 2023-05-11T01:31:10.874Z · LW(p) · GW(p)

Using chatbots and feeling ok about it seems like a no-brainer. It's technology that provides me a multiple-percentage-point productivity boost, it's used by over a billion people, and a boycott of chatbots is well outside the optimal or feasible space of actions for helping the world.

I think the restaurant analogy fails because ChatGPT was not developed out of malice, just recklessness. For the open-source models, there's not even an element of greed.

comment by Adam Zerner (adamzerner) · 2023-05-11T01:46:08.043Z · LW(p) · GW(p)

"I think the restaurant analogy fails because ChatGPT was not developed out of malice, just recklessness."

My impression is that if the restaurant owner killed the family out of recklessness instead of malice, most people would still feel a very strong sense of disdain and choose to avoid the restaurant.

answer by Alexei · 2023-05-11T03:51:40.241Z · LW(p) · GW(p)

I’m not using ChatGPT or any of its ilk, and I plan to keep it that way for the foreseeable future, basically for the rough reasons described by OP.

I see people make the argument that an additional subscriber doesn’t make a big difference on the margin. But as far as individual consumer choices go, that’s all the leverage you have!

I think most people would agree that the eventual outcome of this technology is highly volatile, with some very negative possibilities in the mix. I think basic moral logic compels us not to engage with something like that. Doing otherwise is like destroying the commons, with no easy way to make reparations.

Justifying it with “it increases my productivity” seems laughable and ironic when you consider the long-term consequences.

The way I’m approaching this internally, though, is kind of like how most vegans approach their choice, I think. It’s becoming a life choice, a moral one, and I think ultimately the right one. But I do not want to be militant about it. And while everyone around me uses ChatGPT, I continue to love them, and will do so until the end.

answer by Brendan Long · 2023-05-11T01:50:16.531Z · LW(p) · GW(p)

I already emailed you about this but it might be useful to share here too for feedback.

I think it would be bad to use a technology if:

  1. Using the technology is itself an existential risk (i.e., ChatGPT might kill all humans if I ask the wrong question)
  2. Using it will increase the resources going into making it powerful enough for situation (1) to apply

I don't think (1) is currently the case with ChatGPT, and I think people like Eliezer agree.

I think using ChatGPT, even the paid version, won't increase the resources going into AI capabilities, because the big AI companies aren't funding-constrained. If OpenAI needed another billion dollars, they'd just sell a billion dollars' worth of stock. My $20 per month probably increases the price people would pay for that stock, but that just reduces the dilution existing shareholders face and has no real effect on their ability to get funding if they need it.

I might feel differently if my usage increased other people's usage of ChatGPT (although hype is so high that it would be very difficult to meaningfully increase it), if I were part of a big enough coalition that our boycott was meaningful and noticeable, or if I were using it at a scale where the money paid to OpenAI was significant (I would consider free-riding with open-source models to avoid funding capabilities).
