A Year of AI Increasing AI Progress

post by TW123 (ThomasWoodside) · 2022-12-30T02:09:39.458Z · LW · GW · 3 comments

In July, I made a post [LW · GW] about AI being used to increase AI progress, along with this spreadsheet that I've been updating throughout the year. Since then, I have run across more examples, and had others submit examples (some of which were published before the date I made my original post).

2022 has included a number of instances of AI increasing AI progress. Here is the list. In each entry I also credit the person who originally submitted the paper to my list.

I'm writing this fairly quickly, so I'm not going to add extensive commentary beyond what I said in my last post, but I'll point out two things here:

Did I miss any? You can submit more here.

3 comments

Comments sorted by top scores.

comment by Jeff Rose · 2022-12-30T04:29:23.304Z · LW(p) · GW(p)

I was aware of a couple of these, but most are new to me. Obviously, published papers (even if this is comprehensive) represent only a fraction of what is happening and, likely, are somewhat behind the curve.

And it's still fairly surprising how much of this there is.

comment by plex (ete) · 2023-01-01T21:05:22.028Z · LW(p) · GW(p)

Also, from MIT CSAIL and Meta: "Gradient Descent: The Ultimate Optimizer"

Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as its step size. Recent work has shown how the step size can itself be optimized alongside the model parameters by manually deriving expressions for "hypergradients" ahead of time.
We show how to automatically compute hypergradients with a simple and elegant modification to backpropagation. This allows us to easily apply the method to other optimizers and hyperparameters (e.g. momentum coefficients). We can even recursively apply the method to its own hyper-hyperparameters, and so on ad infinitum. As these towers of optimizers grow taller, they become less sensitive to the initial choice of hyperparameters. We present experiments validating this for MLPs, CNNs, and RNNs.
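For concreteness, here is a minimal toy sketch (mine, not the paper's code) of the manually derived hypergradient approach the abstract alludes to: plain SGD on a quadratic objective in which the step size is itself updated by gradient descent, using the identity that, because theta_t = theta_{t-1} - alpha * g_{t-1}, the gradient of the loss with respect to alpha is -g_t · g_{t-1}. The objective, the initial step size, and the "hyper-hyperparameter" beta are all illustrative choices; the paper's actual contribution is computing such hypergradients automatically via a modification to backpropagation rather than deriving them by hand.

```python
import numpy as np

def grad(theta):
    """Gradient of the toy objective f(theta) = 0.5 * ||theta||^2."""
    return theta

theta = np.array([5.0, -3.0])
alpha = 0.01   # initial step size (the hyperparameter being learned)
beta = 1e-4    # step size for the step size (a "hyper-hyperparameter")
g_prev = np.zeros_like(theta)

for step in range(200):
    g = grad(theta)
    # Hypergradient update: dL/d(alpha) = -g_t . g_{t-1}, so descending
    # it means alpha <- alpha + beta * (g_t . g_{t-1}).
    alpha += beta * np.dot(g, g_prev)
    theta = theta - alpha * g
    g_prev = g

print(f"final loss = {0.5 * float(np.dot(theta, theta)):.6f}, "
      f"learned alpha = {alpha:.4f}")
```

In this toy run the step size grows while gradients from successive steps point the same way and stabilizes as the iterates converge; the paper's recursive version would apply the same trick to beta, and so on up the tower.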

comment by Kay Kozaronek (kay-kozaronek) · 2023-01-12T06:27:14.171Z · LW(p) · GW(p)

Thanks for putting this together, Thomas. Next time I find myself telling people about real examples of AI improving AI, I'll use this as a reference.