Comments
I actually agree entirely. I just don't think that we need to explore those x-risks by exposing ourselves to them. I think we've already advanced AI enough to start understanding and thinking about those x-risks, and an indefinite (perhaps not permanent) pause in development will enable us to get our bearings.
Say what you need to say now to get away from the potential lion. Then back at the campfire, talk it through.
It’s arguable from a negative utilitarian maladaptive point of view, sure. I find the argument wholly unconvincing.
How we get to our deaths matters, whether we have the ability to live our lives in a way we find fulfilling matters, and the continuation of our species matters. All are threatened by AGI.
And that’s worth a lot. I am a living human being, evolved to desire the life and flourishing of living human beings. Ensuring a future for humanity is far more important than whether any number of individuals alive today die. I am far more concerned with extending the timeline of humanity than maximizing any short term parameters.
This is a case where the precautionary principle grants a great deal of rhetorical license. If you think there might be a lion in the bush, do you have a long and nuanced conversation about it, or do you just tell your tribe, “There’s a lion in that bush. Back away.”?
My thoughts run along similar lines. Unless we can guarantee the capabilities of AI will be drastically and permanently curtailed, not just in quantity but also in kind (no ability to interact with the internet or the physical world, no ability to develop intent), then the inevitability of something going wrong implies that we must all be Butlerian Jihadists if we care for biological life to continue.
Far more than a “lengthy ban” — it justifies an indefinite ban until such time as the probability is understood and shown to approach zero.