Posts

Saying the quiet part out loud: trading off x-risk for personal immortality 2023-11-02T17:43:34.155Z

Comments

Comment by disturbance on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-02T22:27:32.925Z · LW · GW

You are right, there are three possible avenues of approaching this: (1) people have certain goals and lie about them to advance their interests, (2) people have certain goals and self-delude about their true content so as to advance their interests, (3) people don't have any goals and are simply executing heuristics that proved useful in-distribution (the "Reward is not an optimisation target" approach). I omitted the last one from the post. But I think my observation that (2) has a non-zero chance of explaining variance in opinions still stands, and this is even more true for people engaged in AI safety, such as members of Pause AI, e/acc, and (to a lesser extent) academics doing research on AI.

Even if (3) has more explanatory power, it doesn't really defeat the central point of the post, which is the ought question (a bit of an evasive answer, I admit).

Comment by disturbance on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-02T22:17:43.268Z · LW · GW

I think the current situation is/was greatly distorted by the signalling games that people play. Once everyone realises that this is an actual choice, there is a chance they will change their opinions to reflect the true tradeoff. (This depends a lot on network effects, a shifting Overton window, etc.; I'm not claiming that 100% of the effect would be rational consideration. But I think rational consideration biases the process in a non-negligible way.) But yes, one piece of evidence is how old people don't seem to particularly care about the future of civilisation.

Comment by disturbance on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-02T22:12:26.576Z · LW · GW
  1. The timelines certainly still looked short enough a couple of months ago. But what prompted me to write this was the 13th observation: the seemingly snowballing Pause movement, which, once it reaches a certain threshold, has the potential to significantly stifle the development of AI. Analogies: human genetic enhancement, nuclear energy. I'm not sure whether this has already passed the point of countering the opposite forces (useful applications, Moore's law), but I'm also not sure that it hasn't (or won't soon).
  2. Cryonics is a very speculative tech. We don't understand how much information is lost in the process, the scientific evidence seems lacking overall (the consensus puts success probability in the ~few percent region), a future AI (or future society) would have to want to revive preserved humans instead of creating new ones, etc.