Name for Standard AI Caveat?
post by yrimon (yehuda-rimon) · 2025-02-26T07:07:16.523Z · LW · GW · 2 comments
This is a question post.
All the time, I find myself in discussions that ignore the future disruptive effects of AI.
The national debt is a real problem. Social security will collapse. The environment is deteriorating. You haven't saved enough for your pension. What is my two-year-old going to do when she is twenty? Could Israel make peace with the Palestinians next generation? And so on.
So I need a pleasant way to caveat: "assuming the Titanic doesn't hit the iceberg we're steering towards, and that it doesn't reach the strange utopia hiding behind the iceberg, here is what I think we should do after the cruise."
How do you phrase that in your day-to-day?
Answers
answer by Algon
I usually say "assuming no AGI", but that's to people who think AGI is probably coming soon.
↑ comment by yrimon (yehuda-rimon) · 2025-02-26T11:53:42.740Z · LW(p) · GW(p)
Have you had similar conversations with people who think it's a ways off, or who haven't thought about it very deeply?
2 comments
Comments sorted by top scores.
comment by AnthonyC · 2025-02-26T07:30:44.174Z · LW(p) · GW(p)
I'm curious about the viewpoint of the other party in these conversations. If they're not aware of, interested in, or likely to be thinking about the disruptive effects of AI, then I would usually just omit mentioning it. You know you're conditioning on that caveat, and their thinking does so without them realizing it.
If the other party is more AI-aware, and they know you are as well, you can maybe just keep it simple with something like, "assuming enough normality for this to matter."
↑ comment by yrimon (yehuda-rimon) · 2025-02-26T07:39:50.658Z · LW(p) · GW(p)
Generally it's the former, or someone who is faintly AI-aware but not interested in delving into the consequences. However, I'd like to represent my true opinions, which involve significant AI-driven disruption; hence the need for a caveat.