Metaculus launches contest for essays with quantitative predictions about AI
post by Tamay Besiroglu (tamay-besiroglu), Metaculus · 2022-02-08T16:07:25.409Z
This is a link post for https://www.metaculus.com/project/ai-fortified-essay-contest/
Metaculus has launched the AI Progress Essay Contest, which aims to attract systematic thinkers to engage with the wealth of AI forecasts that Metaculus has built up over the years. Of particular interest is rigorous, nuanced thinking about the likelihood, timing, and impacts of transformative AI that engages with quantitative probabilistic forecasts. The contest runs until April 16th, 2022, and has a $6,500 prize pool.
Many of the highlighted questions are likely to be of interest to the LW community:
- Will the scaling up of compute be sufficient to reach human-level performance in a wide range of important domains, such as language modelling and computer vision? If so, when can we expect training runs to be compute-intensive enough to hit important milestones? If not, why not? Justify your views with, among other things, reference to quantitative predictions. (A toy sketch of this kind of extrapolation appears after this list.)
- How much progress in AI algorithms and architectures has there been in the past decade, and how much should we expect in the next decade? Support your views with references to quantitative predictions, such as those involving performance on some common ML benchmarks.
- Can we map the disagreements between notable figures in the AI risk community, as presented in the Late 2021 MIRI Conversations, onto disagreements about quantifiable predictions? Which views imply predictions that we think are likely to fare better, and why?
- When will AI-enabled automation contribute more than 5%/25%/50%/95% to US GDP? Explain why this may or may not happen this century.
- Is deep learning sufficient for achieving Transformative AI? (This is defined, roughly, as AI that precipitates a transition comparable to the agricultural or industrial revolution.) If not, how many new substantial insights might be needed?
- How might the trajectory of the field of AI look if Moore’s Law were seriously stunted in the next decade? How might the rate of progress be affected, and where would the field look for new sources of improvements?
- What fraction of the field of AI will be dedicated to working on AI safety, broadly defined? Will AI safety research have substantial influence on how AI systems are designed, tested, and/or deployed? Provide evidence supporting your perspective, as well as references to quantitative predictions.
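As a purely illustrative aside (not part of the contest prompts), here is a minimal sketch of the kind of back-of-the-envelope extrapolation the first question invites: if the compute used in the largest training runs keeps doubling at some fixed rate, when does it cross a given milestone threshold? All numbers below (`current_flop`, `doubling_months`, `milestone_flop`) are hypothetical assumptions chosen for illustration, not Metaculus forecasts.

```python
# Toy extrapolation (illustrative only): how long until frontier training
# compute crosses a hypothetical milestone threshold, assuming it keeps
# doubling every `doubling_months` months? All numbers are assumptions.
import math

current_flop = 3e23        # assumed compute of a recent large training run (FLOP)
doubling_months = 6        # assumed doubling time of frontier training compute
milestone_flop = 1e27      # hypothetical compute threshold for some milestone

doublings_needed = math.log2(milestone_flop / current_flop)
years_until_milestone = doublings_needed * doubling_months / 12
print(f"~{doublings_needed:.1f} doublings, ~{years_until_milestone:.1f} years")
```

A serious essay would, of course, replace these point estimates with distributions and tie them to specific Metaculus questions, but the structure of the argument is the same.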
Further details may be found on the contest landing page.