What are the main arguments against AGI?
post by Edy Nastase (edy-nastase) · 2024-12-24T15:49:03.196Z · LW · GW · 4 comments
This is a question post.
Recently, I have been trying to reason about why I believe what I believe (regarding AGI). However, it appears to me that there is not enough discussion of the arguments against AGI (more specifically, AGI skepticism), even though such a discussion might be of benefit.
Would this be because the arguments are too weak, or because AI safety is (understandably) biased towards imminent AGI?
This question might also be a reaction to the recent advancements (such as o3) and the alarmingly short timelines (less than 3 years). I want to understand the other side's points as well.
Based on what I found on the internet, the main arguments are roughly the following (not exact, given that most of the sources are informal, such as Wikipedia):
- Outdated arguments, usually statements from scientists made in the 2010s, before LLMs
- Ethics-adjacent arguments, stating that dangers from AGI are just distractions from the real dangers of AI (racism, bias, etc.)
- Frontier-lab propaganda: claims that AGI is happening soon, made simply to keep stakeholders happy and investments coming
- Cognitive science arguments, stating that it is intractable to create a human-level mind using a computer
What do people think? What are some good resources or researchers that offer a good counterpoint to the imminent-AGI position?
Edit: Modified slightly to focus only on the AGI argument, rather than including safety implications as well.
Answers
I think the history of things being predicted Real Soon Now is one of the main counterarguments to short timelines. It just seemed Obvious that we were getting flying cars, or fusion power, or self-driving cars, or video-phones, for years, before in some cases we eventually did get those things, and in other cases maybe we'll never get those things because technology just followed a different path than we expected.
Like, maybe the "we'll just merge with the machines" people will turn out to actually be right. I don't believe it. But it could happen, and there are plenty of similar things that "could happen" that eventually add up to a nontrivial chunk of probability.
4 comments
comment by Carl Feynman (carl-feynman) · 2024-12-26T17:07:49.313Z · LW(p) · GW(p)
One argument against is that I think it’s coming soon, and I have a 40-year history of frothing technological enthusiasm, often predicting things will arrive decades before they actually do. 😀
comment by Seth Herd · 2024-12-26T15:19:32.108Z · LW(p) · GW(p)
I wonder if you mean arguments against AGI x-risk being a concern right now? You've included some timeline and safety arguments.
You might want to edit the post to clarify what arguments you're talking about.
comment by Edy Nastase (edy-nastase) · 2024-12-26T23:15:03.143Z · LW(p) · GW(p)
I intended my focus to be purely on AGI. I guess the addition of AGI x-risks was my way of hinting that they would come together with it (that is why I mentioned LeCun).
I feel like there is a strong belief that AGI is just around the corner (and it might very well be), but I wanted to know what the opposition to that claim is. I know there is a lot of solid evidence that we are moving towards more intelligent systems, but understanding the gaps in this prediction might provide useful information (for updating timelines, changing research focus, etc.).
Personally, I might be in the "in-between" position, where I am not sure what to believe (in terms of timelines). I am safety-inclined, and I applaud the efforts of people in the field, but there might be a blind spot in believing that AGI is coming soon (when the reality might be very different). What if that is not the case? What then? What are the safety research implications? More importantly, what are the implications for the field of AI? Companies and researchers might very well ride the hype wave to keep getting funded, gain recognition, etc.
Perhaps an analogy would help. Think about cancer. Everyone knows it exists, and that is not something that is going to be argued about (hopefully). I cannot come in and ask what the arguments for the existence of cancer are, because it is already proven to exist. In the context of AGI, by contrast, I feel like there is a lot of speculation, and a lot of people will claim that they knew the exact day AGI arrived. It feels like a distraction to me. Even the posts about "getting your things in order" feel this way: it seems wrong to just give up on everything without even considering the arguments against the truth you believe in.
comment by Seth Herd · 2024-12-27T02:32:21.071Z · LW(p) · GW(p)
I see - you're mostly asking about timelines: you're not asking whether AGI is ever possible, but whether it will happen soon. You should look at the recent post
https://www.lesswrong.com/posts/oC4wv4nTrs2yrP5hz/what-are-the-strongest-arguments-for-very-short-timelines [LW · GW]
I think timelines is the keyword to search for.
I did think about writing a post similar to the one I linked, asking what the strongest arguments for longer timelines are. I expect those to be at least as weak.
I think the correct summary is that nobody knows. I think the wise move would be to prepare for short timelines since we don't know.