A Guide to Forecasting AI Science Capabilities

post by Eleni Angelou (ea-1) · 2023-04-29T23:24:46.579Z · LW · GW · 1 comment

Contents

  Key points and readings 
    for forecasting in general: 
    for forecasting AI in particular: 
  Specific changes to consider: 
  How does change happen?
  Concrete AI stories 
  Understanding STEM AGI scenarios 
  Forecasting AGI and science models overlap 
  Continuity vs discontinuity in AI progress 
  Meta-scientific considerations

The following contains resources that I (Eleni) curated to help the AI Science team of AI Safety Camp 2023 prepare for the second half of the project, i.e., forecasting science capabilities.  Suggestions for improvement of this guide are welcome. 

 

Key points and readings 

for forecasting in general: 

for forecasting AI in particular: 


If you’d like to read some posts to inform your thinking about explanation, I highly recommend:

Specific changes to consider: 

  1. Compute → how will chip production change? What are the costs of running GPT-x? (A back-of-envelope sketch follows this list.)
  2. Data → are there reasons to expect that we will run out of good-quality data before we have powerful models? 
  3. Algorithms → are there reasons to expect that current algorithms, methods, and architectures will stop working?
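
To anchor the compute question, here is a minimal back-of-envelope sketch. It uses the common approximations of ~6 FLOPs per parameter per training token and ~2 FLOPs per parameter per generated token; the model size, token count, GPU throughput, utilization, and price are all illustrative assumptions, not figures about any real system.

```python
# Back-of-envelope compute-cost sketch. The 6*N*D (training) and 2*N (inference
# per token) FLOP approximations are standard rules of thumb; the throughput,
# utilization, model size, and token count below are illustrative assumptions.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate inference compute: ~2 FLOPs per parameter per generated token."""
    return 2 * n_params

def gpu_hours(flops: float, peak_flops_per_s: float = 3e14, utilization: float = 0.4) -> float:
    """Convert FLOPs to GPU-hours, given an assumed peak throughput and utilization."""
    return flops / (peak_flops_per_s * utilization) / 3600

if __name__ == "__main__":
    N, D = 70e9, 1.4e12            # hypothetical 70B-parameter model, 1.4T training tokens
    train = training_flops(N, D)   # ~5.9e23 FLOPs under the rule of thumb
    print(f"Training:  {train:.2e} FLOPs ≈ {gpu_hours(train):,.0f} GPU-hours")
    print(f"Inference: {inference_flops_per_token(N):.2e} FLOPs per generated token")
```

At an assumed price of a couple of dollars per GPU-hour, the training estimate above lands in the low millions of dollars; the point is only that the compute question couples directly to choices about model and dataset size.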

 

How does change happen?

Scaling Laws [LW · GW]: more high-quality data gets us better results than more parameters do. 
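
A minimal sketch of that claim, using the parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022), L(N, D) = E + A/N^α + B/D^β. The fitted constants below are quoted from memory, and the two model/data configurations are only rough stand-ins for Gopher and Chinchilla, so treat the numbers as illustrative.

```python
# Chinchilla-style parametric loss: L(N, D) = E + A / N**alpha + B / D**beta.
# Constants are the fitted values reported in Hoffmann et al. (2022), quoted
# from memory here, so treat them as approximate.

E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters trained on n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Two configurations at roughly comparable training compute (~6 * N * D):
gopher_like     = loss(280e9, 300e9)    # bigger model, less data
chinchilla_like = loss(70e9, 1.4e12)    # 4x smaller model, ~4.7x more data

print(f"280B params / 300B tokens : predicted loss {gopher_like:.3f}")
print(f" 70B params / 1.4T tokens : predicted loss {chinchilla_like:.3f}")
# Under this fit the smaller, data-heavier model comes out ahead, which is the
# sense in which "more data beats more parameters" at a fixed compute budget.
```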

 

Concrete AI stories  [? · GW]

Understanding STEM AGI scenarios 

  1. The model operates according to a 1-1 correspondence with the external world through information it retrieves from the internet. Each token is a representation of an object/concept in the human/natural world. 
  2. The model is a simulation [LW · GW] of the external world. This simulation is like the world of a board game: it features properties of the external world in the form of inferences from the human text the model has been trained on, including agentic and non-agentic entities. For example, you're playing a WW2 game as the Allies while your friend plays as Germany; the rules of the game are not the rules of the war that actually took place, but rather what the model inferred about those rules from its training dataset. (A toy sketch of the contrast follows this list.)
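
To make the contrast concrete, here is a deliberately toy sketch; every class, fact, and rule in it is invented for illustration and does not correspond to any real system. Scenario 1 is rendered as retrieval against records that tokens are assumed to point at; scenario 2 as a simulator that rolls a game-like state forward under inferred rules.

```python
# Toy illustration of the two scenarios above. All data and rules are invented.
from dataclasses import dataclass

# Scenario 1: tokens stand in 1-1 correspondence with external-world records,
# so answering a query is a retrieval problem.
WORLD_FACTS = {"water_boiling_point_c": 100, "allies_ww2": ["UK", "USA", "USSR"]}

def correspondence_model(query: str):
    """Answer by looking up the record the token is assumed to point at."""
    return WORLD_FACTS.get(query, "unknown")

# Scenario 2: the model is a simulator; it carries a state and transition rules
# inferred from text, which need not match how the real events actually unfolded.
@dataclass
class BoardGameState:
    turn: int
    territory: dict  # faction -> number of regions held

def inferred_rules(state: BoardGameState) -> BoardGameState:
    """One simulated turn under rules the 'model' inferred, not historical rules."""
    territory = dict(state.territory)
    territory["Allies"] += 1                              # an inferred tendency, not a fact about WW2
    territory["Germany"] = max(0, territory["Germany"] - 1)
    return BoardGameState(state.turn + 1, territory)

state = BoardGameState(turn=0, territory={"Allies": 5, "Germany": 5})
for _ in range(3):
    state = inferred_rules(state)
print(correspondence_model("allies_ww2"))  # retrieval: points at the world
print(state)                               # simulation: rolls the game forward
```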

Forecasting AGI and science models overlap 

1) AIs will be able to publish correct science before they can load dishwashers.

2) The world ends before more than 10% of cars on the street are autonomous. 

 

Continuity vs discontinuity in AI progress 

 

Meta-scientific considerations

1 comment

Comments sorted by top scores.

comment by AnthonyC · 2023-04-30T14:45:23.161Z · LW(p) · GW(p)

Would a neutral end make sense? What would that even mean? E.g., there is a superintelligence, but it leaves us alone.

The closest thing I can think of off the top of my head is The Demiurge's Older Brother.

  • It seems strange to want to have effective solutions for problems of the physical world without any input from the physical world. 

To human intuition, yes, but fundamentally, no, not if the needed input is already implicit in the training data, or if the AI can do enough pure math to find better ways of simulating the physical world and extrapolating from the data it already has. You don't need to be able to predict General Relativity from three frames of a falling apple to think that maybe, if you had simultaneous access to the results of every paper and experiment ever published, and the specs of every product, instrument, and device ever made, you could reliably come up with some importantly useful new things. 

 

As a minor example, we already live in a world (and have for a decade or more) where materials informatics and computational chemistry tools, with no AGI at all, can cut in half the number of experiments a company needs to develop a new material. Or where generative design software, with no AGI, run by someone who is not an expert in the relevant field of engineering, can greatly improve the performance of mechanical parts, which can then be made immediately with 3D printers. That's a very small subset of STEM, for sure, but it exists today.