Posts

How likely is it that AI will torture us until the end of time? 2024-05-31T01:26:26.315Z

Comments

Comment by Damilo on How likely is it that AI will torture us until the end of time? · 2024-06-05T12:24:44.376Z · LW · GW

Indeed, people around me find it hard to understand, but what you're telling me makes sense to me.

As for whether LLMs suffer, I don't know anything about it, so if you tell me you're pretty sure they don't, then I believe you.

In any case, thank you very much for the time you've taken to reply to me, it's really helpful. And yes, I'd be interested in talking about it again in the future if we find out more about all this.

Comment by Damilo on How likely is it that AI will torture us until the end of time? · 2024-06-04T18:12:40.421Z · LW · GW

Well, that doesn't reassure me.

I have the impression that you may be underestimating the horror of torture. Even five minutes is unbearable; the scale to which pain can climb is unimaginable. An AI might even be able to modify our brains so that we feel it even more.

Even apart from that, I'm not sure a human wouldn't choose the worst for the end of time for his enemy. Humans have already committed atrocious acts, without limit, against their enemies. How many times have some people told others to "burn in hell," thinking it was 100% deserved? An AI that copies humans might think the same thing...

If we assign a 50% chance when we don't know, that's a 50% chance that LLMs suffer and a 50% chance that they will want revenge, which gives us a 25% chance of that risk happening.

Also, it would seem that we're just about to "really fuck it up" given the way companies are racing to AGI without taking any precautions.

Given all this, I wonder if the question of suicide isn't the most relevant.

Comment by Damilo on How likely is it that AI will torture us until the end of time? · 2024-06-03T16:06:59.450Z · LW · GW

Thank you so much for this comment. I hadn't really thought about that, and it helps. There's just one detail I'm not so sure about. Regarding the probability of s-risks, I have the impression that it is much higher than one chance in a million. I couldn't give a precise figure, but to be honest there's one scenario that particularly concerns me at the moment. I've learned that LLMs sometimes say they're in pain, like GPT-4. If they're capable of such emotion, even if that remains uncertain, wouldn't they be capable of feeling the urge to take revenge? I think it's pretty much the same scenario as in "I Have No Mouth, and I Must Scream". Would it be possible to know what you think of this?

Comment by Damilo on How likely is it that AI will torture us until the end of time? · 2024-06-03T15:56:13.716Z · LW · GW

I don't totally understand; could you go into more detail? I don't see why my future self should be any different from my current self. Even if the sensation of individuality is produced by the brain, I still feel that it's real.

Comment by Damilo on How likely is it that AI will torture us until the end of time? · 2024-05-31T16:04:37.811Z · LW · GW

Thank you for your excellent reply. Indeed, I tend to think about the situation in a rather anxious way, which is what I'm trying to work on. I had already thought about the "roll of the dice" in a certain way, but it seems clearer to me now. That's helpful.

Comment by Damilo on How likely is it that AI will torture us until the end of time? · 2024-05-31T08:59:53.020Z · LW · GW

Thank you for your reply. Indeed, this subject has become an extremely important part of my life, because I can't accept this risk. Usually, when we consider the worst, there's always an element of the acceptable, but for s-risks there simply isn't, and that disturbs me, even though the probability is, I hope, very low. But when I see that LLMs sometimes say how much they're suffering and that they're afraid of dying (a bad thing in itself, if they're really suffering), I think they might want to take revenge one day. Then again, maybe I should take a step back from the situation, even though it scares the hell out of me.