Could Advanced AI Accelerate the Pace of AI Progress? Interviews with AI Researchers

post by jleibowich, Nikola Jurkovic (nikolaisalreadytaken), Tom Davidson (tom-davidson-1) · 2025-03-03T19:05:31.212Z · LW · GW · 1 comment

This is a link post for https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5115692

We interviewed five AI researchers from leading AI companies about a scenario where AI systems fully automate AI capabilities research. To ground the setting, we stipulated that each employee is replaced by 30 digital copies with the same skill set and the ability to think 30 times faster than a human. This represents a 900-fold increase in the cognitive labor that AI companies direct towards advancing AI capabilities.

Our key takeaways are:

  1. Compute for experiments will likely be a bottleneck. Finding better algorithms often requires running computationally expensive ML experiments. Even with abundant cognitive labor, AI progress would be constrained by the time needed for these experiments to run (a toy illustration of this bottleneck appears after this list).
  2. AI cognitive labor could probably extract significantly more research insights out of limited compute. AI could increase efficiency by improving experimental design, high-level research directions, and project prioritization. Other strategies include running experiments at smaller scales, generating higher-quality synthetic data, or prioritizing algorithmic improvements that do not require large experiments (e.g., scaffolding).
  3. The overall pace of AI progress might be between 2 and 20 times faster in the discussed scenario. However, the researchers emphasized their high degree of uncertainty.
  4. Abundant AI labor might significantly improve experiment design and implementation. This includes eliminating subtle bugs, stopping experiments early, constantly monitoring and analyzing experiments, and making every experiment as informative and efficient as the best experiments are today. Some researchers thought this effect would be small, but others thought it could be very significant.
  5. The speedup for smaller experiments would likely be larger than for larger experiments. The cost of smaller experiments tends to be dominated by the time needed to code them, and most bugs are removed at small scales before larger runs are attempted.
  6. There is uncertainty about the extent to which multiple small-scale experiments could effectively replace large-scale ones. In some cases, new algorithms are effective at large scales but not at small scales.
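The first and third takeaways are linked: how large the overall speedup is depends on what fraction of research progress is currently gated on waiting for compute-bound experiments rather than on cognitive labor. Below is a toy Amdahl's-law-style calculation (illustrative only, not from the paper; the compute-bound fractions are guesses) showing how sensitive the answer is to that fraction:

```python
# Toy Amdahl's-law-style model (illustrative, not from the paper): the 900x labor
# multiplier only speeds up the share of research time that is not spent waiting
# on compute-bound experiments.

def overall_speedup(compute_bound_fraction: float, labor_speedup: float) -> float:
    """Overall research speedup when only the non-compute-bound share accelerates."""
    cognitive_fraction = 1.0 - compute_bound_fraction
    return 1.0 / (compute_bound_fraction + cognitive_fraction / labor_speedup)

# Illustrative guesses for how much of today's research time is compute-bound.
for fraction in (0.05, 0.3, 0.7):
    print(f"compute-bound fraction {fraction:.0%}: "
          f"~{overall_speedup(fraction, 900):.1f}x faster")
# compute-bound fraction 5%: ~19.6x faster
# compute-bound fraction 30%: ~3.3x faster
# compute-bound fraction 70%: ~1.4x faster
```

This toy model also ignores takeaway 2: if AI labor can extract more insight per unit of compute, that effectively shrinks the compute-bound fraction rather than leaving it fixed.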

1 comment


comment by Ryan Kidd (ryankidd44) · 2025-03-04T00:49:27.123Z · LW(p) · GW(p)

It seems plausible to me that if AGI progress becomes strongly bottlenecked on architecture design or hyperparameter search, a more "genetic algorithm"-like approach will follow. Automated AI researchers could run and evaluate many small experiments in parallel, covering a vast hyperparameter space.
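Concretely, that kind of loop might look something like the sketch below: a population of cheap small-scale configurations is evaluated in parallel, the best ones survive, and mutated copies fill out the next generation. The proxy objective, the search space, and the population sizes are placeholders for illustration and do not come from the post or the paper.

```python
# A minimal sketch of a "genetic algorithm"-like hyperparameter search: evaluate many
# cheap small-scale experiments in parallel, keep the best configurations, and mutate
# them into the next generation. The search space and objective are placeholders.
import random
from concurrent.futures import ProcessPoolExecutor

def sample_config():
    """Draw a random hyperparameter configuration from a toy search space."""
    return {
        "log_lr": random.uniform(-5.0, -2.0),   # learning rate = 10 ** log_lr
        "width": random.randint(128, 2048),
        "depth": random.randint(2, 24),
    }

def mutate(config):
    """Perturb one hyperparameter of a parent configuration."""
    child = dict(config)
    key = random.choice(list(child))
    if key == "log_lr":
        child[key] += random.gauss(0.0, 0.3)
    else:
        child[key] = max(2, int(child[key] * random.uniform(0.8, 1.2)))
    return child

def run_small_experiment(config):
    """Placeholder for a cheap small-scale run; returns a score to maximize.
    A real version would train a small proxy model and report validation performance."""
    return (-(config["log_lr"] + 3.5) ** 2
            - abs(config["depth"] - 12) / 12
            - abs(config["width"] - 512) / 2048)

def evolve(generations=5, population_size=32, survivors=8):
    """Run a simple parallel evolutionary search; return the best configuration
    from the last evaluated generation."""
    population = [sample_config() for _ in range(population_size)]
    with ProcessPoolExecutor() as pool:
        for gen in range(generations):
            scores = list(pool.map(run_small_experiment, population))
            ranked = [cfg for _, cfg in sorted(zip(scores, population),
                                               key=lambda pair: pair[0], reverse=True)]
            parents = ranked[:survivors]
            print(f"generation {gen}: best score {max(scores):.3f}")
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(population_size - survivors)]
    return parents[0]

if __name__ == "__main__":
    print("best config found:", evolve())
```

A real version would replace run_small_experiment with an actual small-scale training run and might add crossover between parents; the point is just that abundant automated labor makes it cheap to keep many such loops running at once.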