What are the differences between AGI, transformative AI, and superintelligence?

post by Vishakha (vishakha-agrawal), Algon · 2025-01-23T10:03:31.886Z · LW · GW · 3 comments

This is a link post for https://aisafety.info/questions/5864/What-are-the-differences-between-AGI,-transformative-AI,-and-superintelligence


This is an article in the featured articles series from AISafety.info, which writes introductory AI safety content. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

These terms all represent attempts to define AI capability milestones — roughly, "the point at which artificial intelligence becomes truly intelligent" — but they have different meanings:

  * Artificial general intelligence (AGI)[1] refers to AI that can perform a wide range of cognitive tasks at least as well as a human, in contrast to narrow AI systems that handle only specific tasks.
  * Transformative AI is defined by its impact rather than by its capabilities: AI that precipitates a societal transition at least as significant as the industrial revolution.
  * Superintelligence refers to AI that greatly exceeds human cognitive performance in virtually all domains.

  1. ^

    The term AGI suffers from ambiguity, to the point where some people avoid using it. Still, it remains the most common term for the cluster of concepts discussed on this page.

3 comments


comment by rvnnt · 2025-01-23T14:24:56.244Z · LW(p) · GW(p)

I think this approach [LW · GW] to thinking about AI capabilities is quite pertinent. Could be worth including "Nx AI R&D labor AIs" in the list?

Replies from: Algon
comment by Algon · 2025-01-23T16:06:26.115Z · LW(p) · GW(p)

Thanks for the recommendation! I liked ryan's sketches of what capabilities an Nx AI R&D labor AI might possess. Makes things a bit more concrete. (Though I definitely don't like the name.) I'm not sure if we want to include this definition, as it is pretty niche. And I'm not convinced of its utility: when I tried drafting a paragraph describing it, I struggled to articulate why readers should care about it.
 

Here's the draft paragraph. 
"Nx AI R&D labor AIs: The level of AI capabilities that is necessary for increasing the effective amount of labor working on AI research by a factor of N. This is not the same thing as the capabilities required to increase AI progress by a factor of N, as labor is just one input to AI progress. The virtues of this definition include: ease of operationalization, [...]"
 

Replies from: rvnnt
comment by rvnnt · 2025-01-24T14:27:59.445Z · LW(p) · GW(p)

I think the main value of that operationalization is that it enables more concrete thinking/forecasting about how AI might progress. It models some of the relevant causal structure of reality at a reasonable level of abstraction: not too nitty-gritty[1], not too abstract[2].


  1. which would lead to "losing the forest for the trees", make the abstraction too effortful to use in practice, and/or risk making it irrelevant as soon as something changes in the world of AI ↩︎

  2. e.g. a higher-level abstraction like "AI that speeds up AI development by a factor of N" might at first glance seem more useful. But as you and ryan noted, speed-of-AI-development depends on many factors, so that operationalization would be mixing together many distinct things, hiding relevant causal structures of reality, and making it difficult/confusing to think about AI development. ↩︎