A Year of AI Increasing AI Progress
post by TW123 (ThomasWoodside) · 2022-12-30T02:09:39.458Z
In July, I made a post about AI being used to increase AI progress, along with this spreadsheet, which I've been updating throughout the year. Since then, I have run across more examples, and others have submitted examples (some of which were published before my original post).
2022 included a number of instances of AI increasing AI progress. Here is the list; in each entry, I credit the person who originally submitted the paper.
- A paper from Google Research used a robust supervised learning technique to design hardware accelerators [March 17th, submitted by Zach Stein-Perlman]
- A paper from Google Research and Stanford fine-tuned a model on its own chain-of-thought outputs to improve performance on reasoning tasks [March 28th, submitted by Nathaniel Li]
- A paper from OpenAI used LLMs to help humans find flaws in other LLMs, making it easier for the humans to improve those models [June 12th, submitted by Dan Hendrycks]
- A paper from Google used machine learning to optimize compilers. This is less obviously accelerating AI, but an earlier version of the compiler is used in PyTorch, so it may end up doing so. [July 6th, submitted by Oliver Zhang]
- NVIDIA used deep reinforcement learning to generate nearly 13,000 circuits in their newest GPUs. [July 8th, submitted by me]
- Google found that ML code completion improved the productivity of their engineers. Some of them are presumably working in AI. [July 27th, submitted by Aidan O'Gara]
- A paper from Microsoft Research and MIT used language models to generate programming puzzle tasks for other language models. When fine-tuned on these tasks, the models were much better at solving the puzzles. [July 29th, submitted by Esben Kran]
- A paper from Google and UIUC fine-tuned a language model on its own outputs, after filtering them with a majority-vote procedure (see the first sketch after this list). [September 30th, submitted by me]
- A paper from DeepMind used reinforcement learning to discover more efficient matrix multiplication algorithms. [October 5th, submitted by me]
- A paper from Anthropic used language models, rather than humans, to provide the feedback used to improve language models. [December 16th, submitted by me]
- A paper from a number of universities used language models to generate examples of instruction following, which were then filtered and used to fine-tune language models to follow instructions better (see the second sketch below) [December 20th, submitted by Nathaniel Li]
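To make the majority-vote approach concrete, here is a minimal sketch of the general recipe: sample several reasoning paths per question, treat the majority answer as a pseudo-label, and fine-tune on the paths that reach it. The `model.sample` interface, the function name, and the constant `k` are all illustrative assumptions, not the paper's actual implementation:

```python
from collections import Counter

def self_improve_dataset(model, questions, k=32):
    """Build a fine-tuning set from a model's own chain-of-thought samples.

    `model.sample(question) -> (chain_of_thought, final_answer)` is a
    hypothetical interface; any real API will differ.
    """
    dataset = []
    for q in questions:
        samples = [model.sample(q) for _ in range(k)]
        # Majority vote over final answers (self-consistency): the most
        # common answer across the k samples becomes the pseudo-label.
        majority, _ = Counter(ans for _, ans in samples).most_common(1)[0]
        # Keep only the reasoning paths that reach the majority answer.
        dataset += [
            {"prompt": q, "completion": f"{cot}\n{ans}"}
            for cot, ans in samples
            if ans == majority
        ]
    return dataset
```

The model is then fine-tuned on the resulting dataset, with no human labels involved anywhere in the loop.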
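And here is a similarly hedged sketch of the instruction-generation idea: prompt a model with a few existing instructions, ask it to write a new one, and filter before fine-tuning. Again, `model.generate`, the prompt wording, and the exact-match filter are stand-in assumptions; the real pipelines use similarity-based filters and quality heuristics:

```python
import random

def bootstrap_instructions(model, seed_instructions, rounds=4, per_round=100):
    """Grow a pool of task instructions from a small seed set.

    `model.generate(prompt) -> str` is a hypothetical interface.
    """
    pool = list(seed_instructions)
    for _ in range(rounds):
        for _ in range(per_round):
            # Show the model a few existing tasks and ask for a new one.
            examples = "\n".join(random.sample(pool, k=min(3, len(pool))))
            prompt = (
                "Here are some task instructions:\n"
                f"{examples}\n"
                "Write one new, different task instruction:"
            )
            candidate = model.generate(prompt).strip()
            # Crude exact-match deduplication; real pipelines filter much
            # more aggressively (overlap metrics, length and format checks).
            if candidate and candidate not in pool:
                pool.append(candidate)
    return pool
```

Each surviving instruction is then paired with a model-written response, and the (instruction, response) pairs become supervised fine-tuning data.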
I'm writing this fairly quickly, so I'm not going to add extensive commentary beyond what I said in my last post, but I'll point out two things here:
- It is pretty common these days for people to use language model outputs to improve language models. This trend appears likely to continue.
- A lot of these papers are from Google. Not DeepMind, Google. Google may not have declared they are aiming for AGI, but they sure do seem to be writing a lot of papers that involve AI increasing AI progress. It seems important not to ignore them.
Did I miss any? You can submit more here.
3 comments
comment by Jeff Rose · 2022-12-30T04:29:23.304Z
I was aware of a couple of these, but most are new to me. Obviously, published papers (even if this is comprehensive) represent only a fraction of what is happening and, likely, are somewhat behind the curve.
And it's still fairly surprising how much of this there is.
comment by plex (ete) · 2023-01-01T21:05:22.028Z
Also, from MIT CSAIL and Meta: "Gradient Descent: The Ultimate Optimizer"
> Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as its step size. Recent work has shown how the step size can itself be optimized alongside the model parameters by manually deriving expressions for "hypergradients" ahead of time.
> We show how to automatically compute hypergradients with a simple and elegant modification to backpropagation. This allows us to easily apply the method to other optimizers and hyperparameters (e.g. momentum coefficients). We can even recursively apply the method to its own hyper-hyperparameters, and so on ad infinitum. As these towers of optimizers grow taller, they become less sensitive to the initial choice of hyperparameters. We present experiments validating this for MLPs, CNNs, and RNNs.
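For readers who want the idea in code: below is a minimal sketch of the manually derived hypergradient baseline the abstract alludes to, not the paper's automatic backpropagation method. For plain SGD the update is w' = w - αg, so dL(w')/dα = -g'·g, and the step size can be nudged by its own gradient. The toy problem and all constants are illustrative assumptions:

```python
# Toy problem: fit y = 3x with a scalar weight w under MSE loss.
data = [(k / 10.0, 3.0 * k / 10.0) for k in range(-10, 11)]

def grad_loss(w):
    # dL/dw for L(w) = mean((w*x - y)^2)
    return sum(2.0 * (w * x - y) * x for x, y in data) / len(data)

w, alpha, kappa = 0.0, 0.01, 0.001   # weight, step size, hyper-step size
prev_g = None
for _ in range(200):
    g = grad_loss(w)
    if prev_g is not None:
        # Hypergradient of the previous update: dL/dalpha = -g * prev_g,
        # so descending it means alpha += kappa * g * prev_g.
        alpha += kappa * g * prev_g
    w -= alpha * g
    prev_g = g

print(f"w = {w:.3f} (target 3.0), learned alpha = {alpha:.4f}")
```

While successive gradients point the same way, the step size grows; once the iterates start overshooting, the dot product turns negative and the step size shrinks again, which is what makes the method fairly insensitive to the initial choice.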
comment by Kay Kozaronek (kay-kozaronek) · 2023-01-12T06:27:14.171Z · LW(p) · GW(p)
Thanks for putting this together, Thomas. Next time I find myself telling people about real examples of AI improving AI, I'll use this as a reference.