Why is AGI/ASI Inevitable?

post by DeathlessAmaranth · 2024-05-02T18:27:17.486Z · LW · GW · 6 comments


Hello! My name is Amy.

This is my first LessWrong post. I'm somewhat certain it will be deleted, but I'm giving it a shot anyway, because I've seen this argument thrown around in a few places and I still don't understand it. I've read a few chunks of the Sequences, and the fundamentals-of-rationality sequences.

What makes artificial general intelligence 'inevitable'? What makes artificial superintelligence 'inevitable'? Can't people decide simply not to build AGI/ASI?

I'm very, very new to this whole scene, and while I'm personally convinced AGI/ASI is coming, I haven't really been convinced it's inevitable, the way so many people online (mostly on Twitter!) seem to be.

While I'd appreciate hearing your thoughts, what I'd really love is some sources on this. What are the best sequences to read on this topic? Are there any studies or articles that make this argument?

Or is this all just some ridiculous claim those 'e/acc' people cling to?

Hope this doesn't get deleted! Thank you for your help!

6 comments


comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-05-02T19:46:10.290Z · LW(p) · GW(p)

I think people can in theory collectively decide not to build AGI or ASI.

Certainly you as an individual can choose this! Where things get tricky is when asking whether that outcome seems probable, or coming up with a plan to bring that outcome about. Similarly, as a child I wondered, "Why can't people just choose not to have wars, just decide not to kill each other?"

People have selfish desires, and group loyalty instincts, and limited communication and coordination capacity, and the world is arranged in such a way that sometimes this leads to escalating cycles of group conflict that are net bad for everyone involved.

That's the scenario I think we are in with AI development as well. Everyone would be safer if we didn't build it, but getting everyone to agree not to, and to hold to that agreement even in private, seems intractably hard.

[Edit: Here's a link to Steven Pinker's writing on the Evolution of War. I don't think, as he does, that the world is trending strongly towards global peace, but I do think he has some valid insights into the sad lose-lose nature of war.]

Replies from: MondSemmel
comment by MondSemmel · 2024-05-03T09:04:24.578Z · LW(p) · GW(p)

In the war example, wars are usually negative-sum for all involved, even in the near term. And so while they do happen, wars are pretty rare, all things considered.

Meanwhile, the problem with AI development is that there are enormous financial incentives for building increasingly powerful AI, right up to the point of extinction. Which also means that you need not just some but all people to refrain from developing more powerful AI. This is a devilishly difficult coordination problem. What you get by default, absent coordination, is that everyone races to be the first to develop AGI.
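To make the coordination failure concrete, here is a minimal toy sketch of the two-actor "race vs. refrain" dynamic, in the style of a prisoner's dilemma. The payoff numbers are entirely made up for illustration; the point is only the structure: racing is the individually best response to whatever the other actor does, even though mutual restraint leaves everyone better off.

```python
# Toy model with made-up payoff numbers, illustrating why racing can dominate
# refraining for each individual actor even when mutual restraint is better for both.

ACTIONS = ("refrain", "race")

# payoffs[(my_action, their_action)] = my payoff (illustrative values only)
payoffs = {
    ("refrain", "refrain"): 3,   # everyone is safer
    ("refrain", "race"):    0,   # I fall behind; the risk gets taken anyway
    ("race",    "refrain"): 4,   # I capture the upside alone
    ("race",    "race"):    1,   # shared risk, eroded upside
}

def best_response(their_action: str) -> str:
    """Return the action that maximizes my payoff, given the other actor's choice."""
    return max(ACTIONS, key=lambda mine: payoffs[(mine, their_action)])

for theirs in ACTIONS:
    print(f"If the other actor chooses {theirs!r}, my best response is {best_response(theirs)!r}")

# Both lines print 'race': without enforceable coordination, the default
# equilibrium is (race, race), even though (refrain, refrain) pays more to each.
```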

Another problem is that many people don't even agree that developing unaligned AGI likely results in extinction. So from their perspective, they might well think they're racing towards a utopian post-scarcity society, while those who oppose them are anti-progress Luddites.

comment by the gears to ascension (lahwran) · 2024-05-02T19:51:41.794Z · LW(p) · GW(p)

In order to decide not to build it, all people who can and would otherwise build it must in some way end up not doing so. For any individual actor who could build it, they must either choose not to build it themselves, or be prevented from doing so. Pushing towards the former is why it's a good idea not to publish ideas that could, even theoretically, help with building it. For the latter to occur, rules backed by sufficient monitoring and force must be used. I don't expect that to happen in time.

As a result, I am mostly optimistic about plans where it goes well, rather than plans where it doesn't get built. Plans where it goes well depend on figuring out how to encode into it an indelible target that makes it care about everyone, and then convincing a team that will build it to use that target. As you can imagine, that is an extremely tall order. Therefore, I expect humanity to die, likely incrementally, as more and more businesses grow that are more and more AI-powered and uninhibited by any worker or even owner constraints.

But those are the places where I see branches that can be intervened on. On the prevention side, people are attempting to get governments to implement rules sufficient to actually prevent it from coming into existence anywhere at all. It looks to me like that effort will just create regulatory capture and still allow companies and governments to create catastrophically uncaring AI.

And no, your question is not the kind that would be deleted here. I appreciate you posting it. Sorry to be so harshly gloomy in response.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-05-03T21:52:55.001Z · LW(p) · GW(p)

I, on the other hand, have very little confidence that people trying to build AGI will fail to find ways to do it quickly (within the next three years, i.e., by 2027). I do have confidence that we can politically coordinate to stop the situation from becoming an extinction or near-extinction-level catastrophe. So I place much less emphasis on abstaining from publishing ideas that may help both alignment and capabilities, and more emphasis on figuring out ways to generate empirical evidence of the danger before it is too late, so as to facilitate political coordination.

I think that the situation in which humanity fails to politically coordinate to avoid building catastrophically dangerous AI is one that leads into conflict, likely a World War III with widespread use of nuclear weapons. I don't expect humanity to go extinct from this, and I don't expect a rogue AGI to emerge as the victor, but I do think any such conflict would likely wipe out the majority of humanity, which makes it in everyone's interest to work hard to avoid such a devastating conflict. That's a pretty grim risk to be facing on the horizon.

comment by zeshen · 2024-05-03T03:43:39.568Z · LW(p) · GW(p)

Can't people decide simply not to build AGI/ASI?

Yeah, many people, like the majority of users on this forum, have decided to not build AGI. On the other hand, other people have decided to build AGI and are working hard towards it. 

Side note: LessWrong has a feature for posting as a Question; you might want to use it for questions in the future.

Replies from: erioire
comment by ErioirE (erioire) · 2024-05-03T20:36:49.933Z · LW(p) · GW(p)

Yeah, many people, like the majority of users on this forum, have decided to not build AGI.

Not to build AGI yet. 
Many of us would love to build it as soon as we can be confident we have a realistic and mature plan for alignment, but that's a problem that's so absurdly challenging that even if aliens landed tomorrow and handed us the "secret to Friendly AI", we would have a hell of a time trying to validate that it actually was the real thing.

If you are faced with a math problem where you could be staring at the answer and still have no way to unambiguously verify it, you are likely not capable of solving the problem until you somehow close the inferential distance separating you from understanding, assuming the problem is solvable at all.