Forecasting AI (Overview)
post by jsteinhardt · 2023-11-16
This is a landing page for various posts I’ve written, and plan to write, about forecasting future developments in AI. I draw on the field of human judgmental forecasting, sometimes colloquially referred to as superforecasting. A hallmark of forecasting is that answers are probability distributions rather than single outcomes, so you should expect ranges rather than definitive answers (but ranges can still be informative!). If you are interested in learning more about this field, I teach a class on it with open-access notes, slides, and assignments.
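To make "answers are probability distributions" concrete, here is a minimal Python sketch of reporting a forecast as a distribution and summarizing it as a range. The numbers are made up for illustration and do not come from any of the posts below.

```python
import numpy as np

# A forecast is a probability distribution, not a point estimate.
# Represented here as Monte Carlo samples; all numbers are invented
# for illustration and do not come from any of the posts below.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.55, scale=0.10, size=100_000)  # hypothetical forecast of benchmark accuracy
samples = np.clip(samples, 0.0, 1.0)  # accuracy lives in [0, 1]

median = np.median(samples)
lo, hi = np.quantile(samples, [0.1, 0.9])  # 80% central credible interval

print(f"point summary (median): {median:.2f}")
print(f"80% interval: [{lo:.2f}, {hi:.2f}]")  # the "range" a forecaster reports
```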
For AI forecasting in particular, I first got into this area by forecasting progress on several benchmarks:
- In Updates and Lessons from AI Forecasting, I describe a forecasting competition that I helped commission, which asked competitive forecasters to predict progress on four different benchmarks. This is a good place to start to understand what I mean by forecasting.
- In AI Forecasting: One Year In, I look at the first year of results from the competition, and find that forecasters generally underpredicted progress in AI, especially on the MATH and MMLU benchmarks.
- Motivated by this, in Forecasting ML Benchmarks in 2023 I provide my own forecasts of what state-of-the-art performance on MATH and MMLU will be in June 2023.
- In AI Forecasting: Two Years In, I look at the second year of results from the competition. I find that the original forecasters continued to underpredict progress, but that a different platform (Metaculus) did better, and that my own forecasts were on par with Metaculus. (The sketch after this list shows one simple way to quantify underprediction.)
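As a minimal sketch of what "underpredicting progress" means quantitatively: evaluate each forecast distribution's CDF at the realized outcome; outcomes that repeatedly land in the upper tail indicate systematic underprediction. The forecasts and outcomes below are invented for illustration, not the competition's actual data.

```python
from scipy.stats import norm

# Hypothetical forecasts, each a normal distribution over a benchmark's
# state-of-the-art accuracy one year out. These numbers are invented
# for illustration; they are not the competition's real data.
forecasts = {
    "benchmark_A": (0.30, 0.08),  # (forecast mean, forecast std)
    "benchmark_B": (0.45, 0.05),
}
realized = {"benchmark_A": 0.50, "benchmark_B": 0.52}

for name, (mu, sigma) in forecasts.items():
    # CDF of the forecast distribution evaluated at the realized outcome:
    # values near 1.0 mean the outcome landed in the forecast's upper
    # tail, i.e. the forecaster underpredicted progress.
    pit = norm.cdf(realized[name], loc=mu, scale=sigma)
    print(f"{name}: outcome falls at forecast percentile {100 * pit:.0f}")
```

This percentile check is a standard calibration diagnostic (the probability integral transform): over a well-calibrated track record, these percentiles should look roughly uniform rather than piling up near 100.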
After these exercises in forecasting ML benchmarks, I turned to a more ambitious task: predicting the properties of AI models in 2030 across many different axes (capabilities, cost, speed, etc.). My overall predictions are given in What Will GPT-2030 Look Like?, which provides a concrete (but very uncertain) picture of what ML will look like at the end of this decade.
Finally, I am now applying forecasting to quantify and understand risks from AI:
- In GPT-2030 and Catastrophic Drives: Four Vignettes, I use my GPT-2030 predictions as a starting point to understand the capabilities and corresponding risks of future ML models. I then speculate on four scenarios through which AI could lead to catastrophic outcomes.
- In Base Rates for Catastrophe, I take a different approach, using data on historical catastrophes and extinction events to form a reference class for AI catastrophes. Most expert forecasters consider reference class forecasting a strong baseline and the starting point for their own forecasts, and I think it's also a good place to start for AI risk. (A minimal sketch of the reference-class idea follows this list.)
- In Forecasting Catastrophic Risks from AI, I put everything together to give an all-things-considered estimate of my probability of an AI-induced catastrophe by 2050.
- Finally, in Other Estimates of Catastrophic Risk, I collect similar forecasts made by other individuals and organizations, and explain which ones I weight more or less heavily based on track record, effort, and expertise. (The second sketch after this list illustrates one standard way to pool weighted forecasts.)
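Here is the promised sketch of the reference-class idea: one simple way to turn historical counts into a smoothed base rate is Laplace's rule of succession. This is a sketch of the general technique with placeholder counts, not the actual data or method from Base Rates for Catastrophe.

```python
# Reference class forecasting via Laplace's rule of succession.
# The counts below are placeholders, not the data from the post.
def laplace_base_rate(catastrophes: int, periods: int) -> float:
    """Smoothed per-period probability: (k + 1) / (N + 2)."""
    return (catastrophes + 1) / (periods + 2)

# e.g. 0 catastrophes observed across 50 comparable periods
p_period = laplace_base_rate(catastrophes=0, periods=50)
print(f"base rate per period: {p_period:.3f}")  # ≈ 0.019

# Probability of at least one event over a multi-period horizon,
# assuming independence across periods.
horizon = 3
p_horizon = 1 - (1 - p_period) ** horizon
print(f"P(at least one event over {horizon} periods): {p_horizon:.3f}")  # ≈ 0.057
```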
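And on weighting others' forecasts: a common aggregation method in the forecasting literature is a weighted geometric mean of odds, where sources judged more reliable get larger weights. This is a generic sketch with invented numbers, not the specific weighting scheme used in Other Estimates of Catastrophic Risk.

```python
import numpy as np

def pool_forecasts(probs, weights):
    """Weighted geometric mean of odds: aggregate probability forecasts,
    with larger weights for sources judged more reliable."""
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize weights
    log_odds = np.log(probs / (1 - probs))     # move to log-odds space
    pooled = np.dot(weights, log_odds)         # weighted average there
    return 1 / (1 + np.exp(-pooled))           # back to a probability

# Invented example: three forecasts of the same event, with the middle
# source weighted twice as heavily (weights are illustrative).
print(pool_forecasts([0.02, 0.10, 0.30], weights=[1.0, 2.0, 1.0]))  # ≈ 0.09
```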
The first of these risk-focused posts has been written, and I plan to release a new one about once per week.