NickH's Shortform
post by NickH · 2024-04-02T17:12:29.169Z · 5 comments
Comments sorted by top scores.
comment by NickH · 2024-04-02T17:12:29.278Z
Have I missed something, or is everyone ignoring an obvious problem with a superhuman AI that has a potentially limitless lifespan? It seems to me that such an AI, whatever its terminal goals, must adopt the instrumental goal of seeking out and destroying any alien AI. In simple terms, the greatest threat to it tiling the universe with tiny smiling human faces is an alien AI set on tiling the universe with tiny smiling alien faces, and in a race for dominance every second counts.
The usual arguments about exponential discounting of future value do not seem appropriate for an immortal intelligence.
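As a minimal sketch of the discounting point (assuming the standard exponential-discounting formalism; the discount factor $\gamma$ and reward stream $r_t$ are illustrative notation, not from the original comment):

$$V = \sum_{t=0}^{\infty} \gamma^{t} \, r_t$$

With $\gamma < 1$ and bounded rewards the sum converges ($V = r/(1-\gamma)$ for a constant reward $r$), so the far future contributes almost nothing and delaying expansion by a second costs essentially nothing. For an effectively immortal agent with $\gamma \to 1$, the sum diverges, arbitrarily distant payoffs dominate, and a one-second head start in a race to claim the light cone keeps paying off forever.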
↑ comment by habryka (habryka4) · 2024-04-03T05:02:44.946Z
This seems like a relatively standard argument, but I also struggle a bit to understand why this is a problem. If the AI is aligned, it will indeed try to spread through the universe as quickly as possible, eliminating all competition; but if it shares our values, that would be good, not bad (and if we value aliens, which I think I do, then we would presumably still somehow trade with them afterwards from a position of security and stability).
↑ comment by Neil (neil-warren) · 2024-04-02T18:04:16.478Z
I'm not clear on what you're calling the "problem of superhuman AI" here.
↑ comment by NickH · 2024-04-03T04:50:53.602Z
I've heard much about the problem of a misaligned superhuman AI killing us all, but the long view seems to imply that even a "well aligned" AI will prioritise inhuman instrumental goals.
↑ comment by Seth Herd · 2024-04-03T16:56:51.990Z
I'm not quite understanding yet. Are you saying that an immortal AGI will prioritize preparing to fight an alien AGI, to the point that it won't get anything else done? Or what?
Immortal expanding AGI is a part of classic alignment thinking, and we do assume it would either go to war or negotiate with an alien AGI if it encounters one, depending on the overlap in their alignment/goals.