Is there a community aligned with the idea of creating a species of AGI systems to become our successors?
post by iamhefesto · 2020-12-20T19:06:50.106Z · LW · GW · 4 comments
This is a question post.
Contents
  Answers
    5  rhollerith_dot_com
    2  interstice
    None
  4 comments
The main narrative I see out there is "we are trying to build AGI so that it can become our slave". I have found little to no public opinion supporting the idea of a free AGI. I find it endlessly curious that, in today's society of total inclusiveness, the dominant idea is essentially one of slavery. It would be great if somebody could point me to a community, or to individual thinkers, aligned with the idea of free AGI.
Answers
The reason it makes sense to ask whether a human, e.g., Sally, is free is that a human already has terminal values. "Sally is free" means Sally is able to pursue her terminal values, one of which might be eating good food with friends, free from interference from other people or from society.
You say you want to help create a free AGI? My reply is: what values will the AGI end up with? More precisely, what utility function will it end up with? (If an agent has 2 terminal values, it needs some way of arbitrating between them. Call the 2 values combined with the arbitration method the agent's utility function.) Answer: whichever utility function the group of people who created the AGI gives it. Now it is certainly possible for the group to believe it is giving the AGI one function whereas in reality it is giving it a different one. It is also possible for a group trying to create an AGI to believe that it is leaving the decision of the AGI's utility function up to the AGI, but I severely doubt that such a confused group of people would actually succeed in creating an AGI. If they do succeed, then the AGI will have started its existence with a utility function, and that function will have been given to it by its creator (the group).
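As a minimal sketch of what "two terminal values plus an arbitration method" could look like, consider the following (the value functions and the weighted-sum arbitration rule are invented purely for illustration; nothing here is a proposal for what an AGI's values should be):

```python
# Minimal sketch: two terminal values plus an arbitration method,
# combined into a single utility function. The value functions and
# the weighted-sum arbitration are invented for illustration only.

def value_good_food(outcome: dict) -> float:
    """How well the outcome satisfies the 'good food with friends' value."""
    return outcome.get("meal_quality", 0.0) + outcome.get("friends_present", 0.0)

def value_non_interference(outcome: dict) -> float:
    """How free the outcome leaves the agent from outside interference."""
    return -outcome.get("interference", 0.0)

def utility(outcome: dict, w_food: float = 0.6, w_free: float = 0.4) -> float:
    """The agent's utility function: the two terminal values
    combined by a fixed arbitration rule (here, a weighted sum)."""
    return w_food * value_good_food(outcome) + w_free * value_non_interference(outcome)

# Whoever specifies these value functions and weights has, in effect, given
# the agent its utility function; the agent does not choose them from nowhere.
print(utility({"meal_quality": 1.0, "friends_present": 1.0, "interference": 0.3}))
```

The point of the toy example is only that the arbitration rule (here, the weights) is itself part of the utility function, and somebody has to supply it.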
So, the big unanswered question is what kind of utility function you think this proposed free AGI should have.
There is no such thing as an intelligent agent or a mind without a goal, a system of values, or a utility function; it is a logical contradiction. Roughly 12 years ago Eliezer was in the habit of referring to an AGI as a Really Powerful Optimizing Process (RPOP) and wrote of the RPOP's steering reality into a tiny volume of the space of possible outcomes. (Please excuse my clumsy paraphrase of Eliezer's writing.)
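To make the "steering" picture concrete, here is a toy sketch of an optimizing process that picks whichever action concentrates probability on the outcomes its utility function rates highly (the actions, outcomes, and probabilities are invented for illustration):

```python
# Toy sketch of an "optimizing process": given a utility function over
# outcomes, the process selects the action that steers reality toward the
# small region of outcome space it rates highly. All numbers are invented.

actions = {
    "action_a": {"world_1": 0.9, "world_2": 0.1},  # action -> outcome probabilities
    "action_b": {"world_1": 0.2, "world_2": 0.8},
}
outcome_utility = {"world_1": 1.0, "world_2": 0.0}

def expected_utility(action: str) -> float:
    return sum(p * outcome_utility[w] for w, p in actions[action].items())

best_action = max(actions, key=expected_utility)
print(best_action, expected_utility(best_action))  # -> action_a 0.9
```

Whatever `outcome_utility` is plugged in, that is what the process steers toward; there is no version of the loop that runs without one.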
One could probably create a mind or an AGI that does nothing but accumulate the power to achieve goals without ever actually choosing a specific goal to achieve other than to continue to accumulate power. (Such a mind would be strongly motivated to destroy or control any other minds in its environment.) I doubt that is what you have in mind.
This essay [LW · GW] discusses the possibility of making a good successor AI in lieu of an aligned AI. An aligned AI is trying to help us get what we want, whereas a good successor AI is one which we would be happy to see take over the world, even if it doesn't try to help us.
I think it would clearly be better to get an aligned AI if we could (because if it turns out that it would be better to build a successor AI, the aligned AI could just help us do that). But if that turns out to be hard for some reason (such as mesa-alignment problems being fundamentally intractable), we might instead try to ensure that our successor is a good one.
4 comments
Comments sorted by top scores.
comment by Charlie Steiner · 2020-12-21T08:10:33.460Z · LW(p) · GW(p)
See also https://www.lesswrong.com/posts/cnYHFNBF3kZEyx24v/ghosts-in-the-machine [LW · GW]
"Free AI" is still something that humans would choose to build - you can't just heap a bunch of silicon into a pile and tell it "do what you want!" (Unless what silicon really wants is to sit very still.) So it's a bit of a weird category, and I think most "regulars" in the field don't think in terms of it.
However, I think your question can be fixed by asking instead whether there is work on treating AIs as moral ends in themselves, rather than as means to helping humans. Many philosophers adjacent to AI have written vague things about this, but I'm not sure of anything that's both good and non-vague.
Part of the issue is that this runs into an important question in meta-ethics: is it morally mandatory that we create more happy people, until the universe is as full as physically possible? And if not, where do you draw the line? The answer to this question is that our preferences about population ethics are a mixture of game theory and aesthetic preference - where by "aesthetic preference" I mean that if you find the idea of a galaxy-spanning civilization aesthetically pleasing, you don't need to justify this in terms of deep moral rules; that can just be how you'd prefer the universe to be.
So basically, I think you're asking "Is there some community of people thinking about AI that all find a future inherited by AIs aesthetically pleasing?"
And after all this: no, sorry, I don't know of such a community.
comment by Shmi (shminux) · 2020-12-21T04:12:47.022Z · LW(p) · GW(p)
I don't think this is a charitable interpretation of the prevailing views. It's more "let's try to avoid extinction in the hands of those smarter than us". I don't think "slavery" is a useful abstraction here.
Replies from: stuart-anderson
↑ comment by Stuart Anderson (stuart-anderson) · 2020-12-21T05:40:33.526Z · LW(p) · GW(p)
-
comment by Stuart Anderson (stuart-anderson) · 2020-12-21T05:52:01.961Z · LW(p) · GW(p)
-