Comments

Comment by zkTRUTH (nicwickman) on AGI Ruin: A List of Lethalities · 2022-06-09T01:11:01.505Z · LW · GW

If we have total conviction that the end of the world is nigh, isn't it rational to consider even awful, unpalatable options for extending the timeline before we "achieve" AGI?

It's not strictly necessary that a pivotal act be powered by AI itself.

I'm avoiding explicit details for obvious reasons and trusting that the point is understood. But surely it's within the realm of possibility to persecute, terrorize, or sabotage the progression of AI research, and plausibly for long enough to solve alignment first.

I'm curious what the "dignity" calculation is here. Presumably almost any pivotal act performed with an aligned AGI is forgivable, because it would be targeted and would mark the dawn of a utopia. But what if horrible things are done only to buy more time toward a still-uncertain future?