When do experts think human-level AI will be created?

post by Vishakha (vishakha-agrawal) · 2024-12-30T06:20:33.158Z · LW · GW · 0 comments

This is a link post for https://aisafety.info/questions/5633/When-do-experts-think-human-level-AI-will-be-created


This is an article in the featured articles series from AISafety.info, which writes introductory AI safety content. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

On the whole, experts think human-level AI is likely to arrive in your lifetime.

It’s hard to precisely predict the amount of time until human-level AI.[1] Approaches include aggregate predictions, individual predictions, and detailed modeling.

Aggregate predictions:

Individual predictions:

Models:

These forecasts are speculative,[5] depend on various assumptions, predict different things (e.g., transformative versus human-level AI), and are subject to selection bias both in the choice of surveys and the choice of participants in each survey.[6] However, they broadly agree that human-level AI is plausible within the lifetimes of most people alive today. What’s more, these forecasts generally seem to have been getting shorter over time.[7]

Further reading

  1. ^

    We concentrate here on human-level AI and similar capability levels, such as transformative AI, which may differ from AGI. For more info on these terms, see this explainer.

  2. ^

    Metaculus is a platform that aggregates the predictions of many individuals, and has a decent track record at making predictions related to AI.

  3. ^

    The author estimates the number of operations done by biological evolution in the development of human intelligence and argues this should be considered an upper bound on the amount of compute necessary to develop human-level AI.

  4. ^

    Based on the final graph titled “Cumulative probability distribution over TAI”.

  5. ^

    Scott Alexander points out that researchers who appear prescient one year sometimes predict barely better than chance the next.

  6. ^

    One can expect people with short timelines to be overrepresented in those who study AI safety, as shorter timelines increase the perceived urgency of working on the problem.

  7. ^

    There have been many cases where an AI capability has gone from nonexistent to solved with little warning. This is a problem: sudden capability jumps make timelines hard to forecast and are scary.
