Term/Category for AI with Neutral Impact?

post by isomic · 2023-05-11T22:00:21.950Z · LW · GW · No comments

This is a question post.


Is there a commonly used term on LessWrong for an AGI that neither significantly increases nor decreases human value? (For instance, an AGI that stops all future attempts to build AGI, but otherwise tries to preserve the course of human history as if it had never existed.)

Would such an AI be considered "aligned"? Most discussions of creating aligned AI seem to focus on making the AI share and actively promote human values, which is importantly different from what I described.

Answers

answer by the gears to ascension · 2023-05-11T23:38:01.928Z · LW(p) · GW(p)

low impact (term of art: still agentic, but not high-impact; see papers citing the original or the most recent follow-up, though I haven't read either; a rough formalization is sketched below)

passive (colloquial; no particular term of art. I'd interpret it to mean very non-agentic)

non-agentic: does not take actions that depend on the future (I've read this one; it's incredible, though it has some limitations)
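For a rough sense of what "low impact" cashes out to formally: impact-measure proposals such as Attainable Utility Preservation subtract a penalty for changing what the agent could achieve. A minimal sketch, assuming a task reward R, auxiliary rewards R_1, ..., R_n, a no-op baseline action ∅, and a weight λ (all illustrative simplifications, not any one paper's exact construction):

% sketch of an impact-regularized objective, in the spirit of Attainable
% Utility Preservation; λ, the averaging over n auxiliary rewards, and the
% no-op baseline ∅ are illustrative choices, not the literature's exact form
\[
  R_{\text{low}}(s,a) \;=\; R(s,a) \;-\; \frac{\lambda}{n} \sum_{i=1}^{n} \bigl|\, Q_{R_i}(s,a) - Q_{R_i}(s,\varnothing) \,\bigr|
\]
% R is the task reward; Q_{R_i}(s,a) is the value attainable for auxiliary
% reward R_i after taking action a; the penalty grows when an action shifts
% what the agent could achieve relative to doing nothing

The agent still pursues its task, but plans that shift its attainable options are taxed, nudging it toward leaving the broader course of events roughly as it found them, which is close to the neutrality the question asks about.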

No comments
