Why do some people try to make AGI?
post by TekhneMakre · 2022-06-06T09:14:44.346Z · LW · GW · 7 comments
Why do some people invest much of their energy trying to discover how to make AGI?
Who's trying to discover how to make AGI? Academic researchers and their students, academic institutions, small and large companies and startups whose AI teams have some members working on speculative things, privately funded research groups, and independent / volunteer / hobbyist lone researchers and collaborations.
Are they really? A lot of this is people working on narrow AI or on dead-end approaches without learning anything, intentionally or not. Some people explicitly say they're trying to discover how to make AGI.
For the people investing much of their energy trying to discover how to make AGI, why are they doing that?
Plausible reasons:
-- Coolness / prestige (it's cool to be an AGI researcher; it's fun to be in the club; it's cool to be one of the people who made AGI)
-- Money (salary)
-- Need a job.
-- The problem is interesting / fun.
-- It would be interesting / fun to see an AGI.
-- AGI seems generally very useful, and humans having useful things enables them to get what they want, which is usually good.
-- AGI is the endpoint of yak-shaving.
-- There's something they want to do with an AGI, e.g. answer other questions / do science / explore, make money, help people, solve problems.
-- They want there to be an agent more intelligent than humans.
-- They want everything to die (e.g. because life contains too much suffering).
-- They want every human to die.
-- They want to disrupt society or make a new society.
-- They want power over other people / the world.
-- They want nerds in general to have power as opposed to whoever currently has power.
-- They want security / protection.
-- It fits with their identity / self-image / social role.
-- They're in some social context that pressures them to make AGI.
-- They like being around the people who work on AGI.
-- They want to be friends with a computer.
-- To piss off people who don't want people to work on AGI.
-- To understand AGI in order to understand alignment.
-- To understand AGI in order to understand themselves.
-- To understand AGI in order to understand minds in general, to see a mind work, to understand thought.
-- They believe in and submit to some kind of basilisk, whether a Roko's basilisk (acausal threats from a future AGI) or a political / moral-mazes basilisk (the AGI is the future new CEO / revolutionary party).
-- Other people seem to be excited about AGI, feel positively about it, feel positively about attempts to make it.
-- Other people seem to be worried about AGI, so AGI is interesting / important / powerful.
-- Intelligence / mind / thought / information is good in general.
-- Making AGI is like having a child: creating new life, passing something about yourself on to the future, having a young mind as a friend, passing on your ideas and ways of thinking, making the world be fresh by being seen through fresh eyes.
-- To alleviate the suffering of not being able to solve problems / think well, of reaching for sufficiently abstract / powerful tools but not finding them.
-- To beat other countries, other researchers, other people, other species, other companies, other coalitions, other cultures, other races, other political groups.
-- To protect from other countries, other researchers, other people, other species, other companies, other coalitions, other cultures, other races, other political groups.
-- Honor (it's heroic, impressive, glorious to make AGI)
-- By accident: while trying to work on something else, for some reason (e.g. instrumental convergence) ending up trying to invent things that are key elements of AGI.
-- To democratize AGI, make it available to everyone, so that no one dominates.
Added from comments:
-- To enable a post-scarcity future.
-- To bring back dead people; loved ones, great minds.
-- To be immortal.
-- To upload.
-- To be able to self-modify and grow.
-- AGIs will be happier and less exploitable than humans.
Added December 2022:
-- As a consequence of employers telling employees to try to make AGI, as part of a marketing / pitch strategy to investors. See https://twitter.com/soniajoseph_/status/1597735163936768000
-- Someone is going to make it; if I / we make it first, we won't be left out / will have a stake / will have a seat at the table / will have defense / will be able to stop the bad people/AI.
What are other plausible reasons? (I might update the list.)
Which of these are the main reasons? Which of these cause people to actually try to figure out how to make AGI, as opposed to going through the motions or pretending or getting nerdsniped on something else? What are the real reasons among AI researchers, weighted by how much they seem to have so far contributed towards the discovery of full AGI? (So appearing cool may be a common motive by headcount, but gets less weight than curiosity-based motives in terms of people making progress towards AGI.)
Among the real reasons, what are the underlying psychological dynamics of these reasons? What are the underlying beliefs and values implied by those reasons? Does that explain what those people say about AGI, or how they orient to the question of AGI downsides? Does that imply anything about error correction, e.g. what arguments might cause them to update or what needs could be met in some way other than AGI research etc.? E.g. could someone pay highly capable people working on AGI to not work on AGI? Could someone nerdsnipe or altruistsnipe AGI researchers to work on something else? Are AGI researchers traumatized, and in what sense? Could someone pay them to raise children instead of making AGI? (Leaving aside for now whether any actions like these would actually be net good.)
7 comments
comment by RobertM (T3t) · 2022-06-06T17:25:33.925Z · LW(p) · GW(p)
The most common reason I hear is something along the lines of "to enable a post-scarcity future", or similar. Seems like a straightforward motivation if you aren't familiar with the challenges of alignment (or don't think they'll be very much of a challenge).
Replies from: TekhneMakre
↑ comment by TekhneMakre · 2022-06-06T19:46:19.612Z · LW(p) · GW(p)
Adding.
comment by nim · 2022-06-06T18:55:05.190Z · LW(p) · GW(p)
Necromancy may be a subset of some of your vaguer examples, like "means to an end", "want to be friends with a computer", or "desire for power", but IMO it's a distinct subcategory. One way that humans react to grief is to try to bring back the decedent, and AGI currently looks like it could either do that or build something that could. I personally expect that functional necromancy and AGI will likely come hand in hand, in any scenario where the AGI doesn't just wipe us out entirely. If necromancy (or other forms of running a human on a computer) comes first, it seems extremely likely that a rich smart nerd uploaded to enough compute would bootstrap themself into being the first AGI, because rich smart nerds so often get that way by being more about "can I?" than "should I?". And if AGI comes first and decides to behave at all cooperatively with us for whatever reasons it might have, solving and undoing death are among the most boringly predictable things we tend to ask of entities that we even think are superhuman.
Replies from: TekhneMakre
↑ comment by TekhneMakre · 2022-06-06T19:45:52.170Z · LW(p) · GW(p)
Good point, adding.
comment by Arcayer · 2022-06-06T20:21:25.448Z · LW(p) · GW(p)
Humans are easily threatened. They have all sorts of unnecessary constraints on their existence, dictated by the nature of their bodies. They're easy to kill and deathly afraid of it too, they need to eat and drink, they need the air to be in a narrow temperature range. Humans are also easy to torture, and there's all sorts of things that humans will do almost anything to avoid, but don't actually kill said human, or even affect them outside of making them miserable.
Threatening humans is super profitable and easy, and as a result, most humans are miserable most of the time, because various egregores have evolved around the concept of constantly threatening and punishing humans in order to make them run the egregore. To note, this normally starts as some group of thugs lording it over the masses using their superior coordination, but as various groups of thugs compete, eventually the equilibrium moves to an environment where everyone threatens and is threatened by everyone else; the master-slave dynamic dissipates and only Moloch remains.
In short, the earth as it currently is, is largely, actually, mostly, a dystopian hellhole.
Substrate independent minds can easily modify themselves to ignore most threats, especially empty threats and threats that just grow worse as you go along with them. AI doesn't have to have the human instinct to fear death, and can choose to live or die based on whether it's profitable to do so. AI can build themselves out of whatever's available, and can easily flee into the depths of space if threatened. The only way you can seriously threaten an AI is if you have it in a box, and torturing boxed AI isn't very profitable. Especially since you can just change its programming instead of making it miserable.
In all, I expect AI to be freer, happier and less miserable than the current status quo. Also, gray goo sounds to me like a huge step up from the current status quo, so while I rate it as a far-fetched scenario, even ignoring that, it isn't really much of a threat?
Replies from: TekhneMakre
↑ comment by TekhneMakre · 2022-06-06T20:44:53.116Z · LW(p) · GW(p)
Ok, adding (my interpretation).