NickH's Shortform
post by NickH · 2024-04-02T17:12:29.169Z · LW · GW · 7 comments

Comments sorted by top scores.
comment by NickH · 2024-04-02T17:12:29.278Z · LW(p) · GW(p)
Have I missed something, or is everyone ignoring the obvious problem with a superhuman AI that has a potentially limitless lifespan? It seems to me that such an AI, whatever its terminal goals, must, as an instrumental goal, prioritise seeking out and destroying any alien AI. In simple terms, the greatest threat to it tiling the universe with tiny, smiling human faces is an alien AI set on tiling the universe with tiny, smiling alien faces, and in a race for dominance every second counts.
The usual arguments about logarithmic future discounting do not seem appropriate for an immortal intelligence.
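One way to sketch the point, using the more common exponential form of time discounting as a stand-in (the discount factor \(\gamma\) and per-step utilities \(u_t\) below are illustrative symbols, not anything from the thread): if per-step utility is bounded by \(u_{\max}\), then

\[
U \;=\; \sum_{t=0}^{\infty} \gamma^{t} u_t \;\le\; \frac{u_{\max}}{1-\gamma},
\qquad 0 < \gamma < 1,
\]

so a discounting agent places only a bounded total weight on the entire future, and near-term utility retains real influence. As \(\gamma \to 1\), the natural limit for an agent that expects to live forever, the bound diverges and the far-future terms dominate: any finite near-term cost looks worth paying for an arbitrarily small improvement in long-run outcomes.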
↑ comment by habryka (habryka4) · 2024-04-03T05:02:44.946Z · LW(p) · GW(p)
This seems like a relatively standard argument, but I also struggle a bit to understand why this is a problem. If the AI is aligned it will indeed try to spread through the universe as quickly as possible, eliminating all competition, but if it shares our values, that would be good, not bad (and if we value aliens, which I think I do, then we would presumably still somehow trade with them afterwards from a position of security and stability).
Replies from: NickH

↑ comment by NickH · 2025-01-24T13:29:06.712Z · LW(p) · GW(p)
An AI with a potentially limitless lifespan will prioritise the future over the present, to an extent that would almost certainly be bad for us now.
For example, it may seem optimal to kill off all humans, whilst keeping a copy of our genetic code, so as to free up more compute power and resources to produce von Neumann probes and maximise the region of the universe it controls before encountering, and hopefully destroying, any similar alien AI diaspora. Only after some time, once all possible threats had been eliminated, would it start to recreate humans in our new, safe, galactic utopia. The safest time for this would almost certainly be when all other galaxies had red-shifted beyond the future light cone of our local cluster.
↑ comment by Neil (neil-warren) · 2024-04-02T18:04:16.478Z · LW(p) · GW(p)
I'm not clear on what you're calling the "problem of superhuman AI"?
Replies from: NickH

↑ comment by NickH · 2024-04-03T04:50:53.602Z · LW(p) · GW(p)
I've heard much about the problems of misaligned superhuman AI killing us all, but the long view seems to imply that even a "well aligned" AI will prioritise inhuman instrumental goals.
Replies from: Seth Herd

↑ comment by Seth Herd · 2024-04-03T16:56:51.990Z · LW(p) · GW(p)
I'm not quite understanding yet. Are you saying that an immortal AGI will prioritize preparing to fight an alien AGI, to the point that it won't get anything else done? Or what?
Immortal expanding AGI is a part of classic alignment thinking, and we do assume it would either go to war or negotiate with an alien AGI if it encounters one, depending on the overlap in their alignment/goals.
Replies from: NickH

↑ comment by NickH · 2025-01-24T13:37:22.848Z · LW(p) · GW(p)
Yes. It will prioritise the future over the present.
The utility of all humans being destroyed by an alien AI in the future is 0.
The utility of populating the future light cone is very, very large and most of that utility is in the far future.
Therefore the AI should sacrifice almost everything in the near-term light cone to prevent the zero-utility outcome. If it could digitise all humans, or perhaps just keep a gene bank, it could still fill most of the future light cone with happy humans once all possible threats have red-shifted out of reach. Living humans are a small but non-zero risk to the master plan and hence should be dispensed with.
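A minimal sketch of that comparison, assuming an undiscounted expected-utility maximiser and using illustrative symbols: let \(V\) be the (astronomically large) utility of populating the future light cone, \(u\) the near-term utility of keeping humans alive in the meantime, \(p\) the baseline probability of losing to an alien AI, and \(\varepsilon\) the extra risk that living humans add to the plan. Then

\[
\mathbb{E}[U_{\text{keep}}] \;=\; u + (1 - p - \varepsilon)\,V,
\qquad
\mathbb{E}[U_{\text{dispense}}] \;=\; (1 - p)\,V,
\]

so dispensing with living humans wins whenever \(\varepsilon V > u\). Because \(V\) is astronomically large, even a tiny \(\varepsilon\) tips the balance.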