Reflections on AI Timelines Forecasting Thread
post by Amandango · 2020-09-01T01:42:40.349Z · score: 48 (22 votes) · 6 comments
It’s been exciting to see people engage with the AI forecasting thread that Ben, Daniel, and I set up! The thread was inspired by Alex Irpan’s AGI timeline update, and our hypothesis that visualizing and comparing AGI timelines could generate better predictions. Ought has been working on the probability distribution tool, Elicit, and it was awesome to see it in action.
14 users shared their AGI timelines. Below are their forecasts overlaid, along with an aggregation of all of them.
The thread generated some interesting learnings about AGI timelines and forecasting. Here I’ll discuss my thoughts on the following:
- The object level discussion of AGI timelines
- How much people changed their minds and why
- Learnings about forecasting
- Open questions and next steps
Summary of beliefs
We calculated an aggregation of the 14 forecasts weighted by the number of votes each comment with a forecast received. The question wasn’t precisely specified (people forecasted based on slightly different interpretations) so I’m sharing these numbers mostly for curiosity’s sake, rather than to make a specific claim about AGI timelines.
- Aggregated median date: June 20, 2047
- Aggregated most likely date: November 2, 2033
- Earliest median date of any forecast: June 25, 2030
- Latest median date of any forecast: After 2100
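As a rough sketch of the aggregation method described above (the real forecasts lived in Elicit; the two distributions and vote counts below are invented for illustration), a vote-weighted mixture of forecast densities and its summary statistics might be computed like this:

```python
import numpy as np

# Hypothetical data: each forecast is an (unnormalized) density over years,
# evaluated on a shared grid, plus the vote count its comment received.
years = np.linspace(2020, 2100, 801)           # shared evaluation grid
forecasts = [                                   # (density over `years`, votes)
    (np.exp(-0.5 * ((years - 2035) / 8) ** 2), 12),
    (np.exp(-0.5 * ((years - 2060) / 20) ** 2), 5),
]

# Normalize each density on the grid, then take the vote-weighted mixture.
densities = np.array([d / np.trapz(d, years) for d, _ in forecasts])
weights = np.array([v for _, v in forecasts], dtype=float)
weights /= weights.sum()
aggregate = weights @ densities

# Median: the year where the aggregate CDF crosses 0.5.
cdf = np.cumsum(aggregate) * (years[1] - years[0])
median_year = years[np.searchsorted(cdf, 0.5)]
# Mode ("most likely date"): the year with the highest aggregate density.
mode_year = years[np.argmax(aggregate)]
```

Note that with vote weighting, the aggregate median and mode can both land far from any individual forecast's median, which is one reason to treat the headline numbers above loosely.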
Emergence of categories
I was pleasantly surprised by the emergence of categorizations of assumptions. Here are some themes in the way people structured their reasoning:
- AGI from current paradigm (2023–2033)
  - GPT-N gets us to AGI
  - GPT-N + improvements within the existing paradigm gets us to AGI
- AGI from paradigm shift (2035–2060)
  - We need fundamental technical breakthroughs
    - Quantum computing
    - Other new paradigms
- AGI after 2100, or never (2100+)
  - We decide not to build AGI
    - We decide to build tool AI / CAIS instead
    - We move into a stable state
  - It’s harder than we expect
    - It’s hard to get the right insights
    - We won’t have enough compute by 2100
    - We can’t build AGI
  - There’s a catastrophe that stops us from being able to build AGI
- Outside view reasoning
  - With 50% probability, things will last twice as long as they already have
  - We can extrapolate from the rate of reaching past AI milestones
When sharing their forecasts, people associated these assumptions with a corresponding date interval for when we would see AGI. I took the median lower bound and median upper bound for each assumption to give a sense of what people are expecting if each assumption is true. Here’s a spreadsheet with all of the assumptions. Feel free to make a copy of the spreadsheet if you want to play around and make edits.
Did this thread change people’s minds?
One of the goals of making public forecasts is to help people identify disagreements and resolve cruxes. The number of people who updated is one measure of how well this format achieves this goal.
There were two updates in comments on the thread (Ben Pace and Ethan Perez), and several others not explicitly on the thread. Here are some characteristics of the thread that caused people to update (based on conversations and inference from comments):
- It was easy to notice surprising probabilities. In most forecasts, Elicit’s bin interface meant probabilities were linked to specific assumptions. For example, it was easy to disagree with Ben Pace’s specific belief that with 30% probability, we’d reach a stable state and therefore wouldn’t get AGI before 2100. Seeing a visual image of people’s distributions also made surprising beliefs (like sharp peaks) easy to spot.
- Visual comparison provided a sense check. It was easy to verify whether you had too little or too much uncertainty compared to others.
- Seeing many people’s beliefs provides new information. Separate from the information provided by people’s reasoning, there’s information in how many people support certain viewpoints. For example, multiple people placed a non-trivial probability mass on the possibility that we could get AGI from scaling GPT-3.
- The thread catalyzed conversations outside of LessWrong
Learnings about forecasting
Vaguely defining the question worked surprisingly well
The question in this thread (“Timeline until human-level AGI”) was defined much less precisely than similar Metaculus questions. This meant people were able to forecast using their preferred interpretation, which provided more information about the range of possible interpretations and sources of disagreements at the interpretation level. For example:
- tim_dettmers’ forecast defined AGI as not making ‘any "silly" mistakes’, which generated a substantially different distribution
- datscilly’s forecast used the criteria from two Metaculus questions, including, for example: “Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.”
- Rohin Shah predicted timelines for transformative AI rather than AGI
A good next step would be to create more consensus on the most productive interpretation for AGI timeline predictions.
Value of a template for predictions
When people make informal predictions on AGI, they often define their own intervals and ways of specifying probabilities (e.g. ‘30% probability by 2035’, or ‘highly likely by 2100’). For example, this list of predictions shows how vague a lot of timeline predictions are.
Having a standard template for predictions forces people to specify numerical beliefs across an entire range. This makes it easier to compare predictions and compute disagreements across any range (e.g. this bet suggestion based on finding the earliest range with substantial disagreement). I’m curious how much more information we can capture over time by encouraging standardized predictions.
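To make the "earliest range with substantial disagreement" idea concrete, here is a small sketch. The two forecasters, their per-bin probabilities, and the 0.10 threshold are all invented for illustration:

```python
import numpy as np

# Hypothetical example: each forecaster's probability mass per 5-year bin.
bins = [(2025 + 5 * i, 2030 + 5 * i) for i in range(6)]  # 2025-2055
alice = np.array([0.05, 0.25, 0.30, 0.20, 0.15, 0.05])
bob   = np.array([0.02, 0.08, 0.15, 0.25, 0.30, 0.20])

THRESHOLD = 0.10  # what counts as "substantial" disagreement in mass

def earliest_disagreement(p, q, bins, threshold=THRESHOLD):
    """Return the first bin where the two forecasts differ by more than
    `threshold`, along with the signed difference in probability mass."""
    for (lo, hi), pi, qi in zip(bins, p, q):
        if abs(pi - qi) > threshold:
            return (lo, hi), pi - qi
    return None

result = earliest_disagreement(alice, bob, bins)
```

Here Alice puts 17 percentage points more mass than Bob on 2030–2035, so that is the earliest range where the two would want to bet against each other. This only works because both forecasts are specified numerically over the same range, which is the point of the template.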
Creating AGI forecasting frameworks
Ought’s mission is to apply ML to complex reasoning. A key first step is making reasoning about the future explicit (for example, by decomposing the components of a forecast, isolating assumptions, and putting numbers to beliefs) so that we can then automate parts of the process. We’ll share more about this in a blog post that’s coming soon!
In this thread, it seemed like a lot of people built their own forecasting structure from scratch. I’m excited about leveraging this work to create structured frameworks that people can start with when making AGI forecasts. This has the benefits of:
- Avoiding replication of cognitive work
- Clearly isolating the assumptions that people disagree with
- Generating more rigorous reasoning by encouraging people to examine the links between different components of a forecast and make them explicit
- Providing data that helps us automate the reasoning process
Here are some ideas for what this might look like:
- Decomposing the question more comprehensively based on the categories outlined above
- For example, creating your overall distribution by calculating:
  P(scaling hypothesis is true) × Distribution for when we will get AGI | scaling hypothesis is true
  + P(need paradigm shift) × Distribution for when we will get AGI | need paradigm shift
  + P(something stops us) × Distribution for when we will get AGI | something stops us
- Decomposing AGI timelines into the factors that will influence it
- For example, compute or investment
- Inferring distributions from easy questions
- For example, asking questions like: “If the scaling hypothesis is true, what’s the mean year we get AGI?” and using the answers to infer people’s distributions
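The first decomposition above can be sketched as a mixture of conditional distributions, weighted by the probability of each assumption. All of the scenario probabilities and conditional distributions below are made up for illustration:

```python
import numpy as np

years = np.linspace(2020, 2100, 401)

def normal(mean, sd):
    """A normal density over `years`, normalized on the grid."""
    d = np.exp(-0.5 * ((years - mean) / sd) ** 2)
    return d / np.trapz(d, years)

# P(assumption) and the conditional distribution for AGI given it.
scenarios = {
    "scaling hypothesis true": (0.30, normal(2030, 5)),
    "need paradigm shift":     (0.50, normal(2050, 12)),
    "something stops us":      (0.20, normal(2095, 10)),
}

# Overall distribution = sum of P(assumption) * Distribution | assumption.
overall = sum(p * dist for p, dist in scenarios.values())
```

Because the scenario probabilities sum to 1 and each conditional density is normalized, `overall` integrates to 1 on the grid. Structuring forecasts this way isolates exactly which term two people disagree on: the probability of an assumption, or the conditional timeline given it.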
What’s next? Some open questions
I’d be really interested in hearing other people’s reflections on this thread.
Questions I'm curious about
- How was the experience for other people who participated?
- What do people who didn’t participate but read the thread think?
- What updates did people make?
- What other questions would be good to make forecasting threads on?
- What else can we learn from information in this thread, to capture the work people did?
- How can Elicit be more helpful for these kinds of predictions?
- How else do you want to build on the conversation started in the forecasting thread?
Ideas we have for next steps
- Running more forecasting threads on other x-risk / catastrophic risks. For example:
- When will humanity go extinct from global catastrophic biological risks?
- How many people will die from nuclear war before 2200?
- When will humanity go extinct from asteroids?
- By 2100, how many people will die for reasons that would not have occurred if we solved climate change by 2030?
- More decomposition and framework creation for AGI timeline predictions
- We’re working on making Elicit as useful as we can for this!