"AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence", Clune 2019

post by gwern · 2019-09-10T21:33:08.837Z · LW · GW · 5 comments

This is a link post for https://arxiv.org/abs/1905.10985



comment by Matthew Barnett (matthew-barnett) · 2019-09-10T21:54:23.707Z · LW(p) · GW(p)

It seems to me that if AGI is eventually created via this paradigm, a hardware overhang is almost guaranteed. That's because the paradigm requires meta-learning and the initialization of rich environments to train populations of agents, which is extremely computationally expensive compared to running the trained agents themselves. Therefore, if AGI is obtained this way, we could rather easily repurpose the hardware used for training to run a large number of these agents, which could correspondingly perform vast amounts of economic labor relative to the human population.
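The arithmetic behind this argument can be made concrete with a back-of-the-envelope sketch. All numbers below are invented for illustration; neither the paper nor the comment specifies them:

```python
# Hypothetical, illustrative figures only -- not from the paper or this thread.
meta_training_flops = 1e24            # total compute spent on the meta-learning run
agent_inference_flops_per_sec = 1e12  # sustained cost of running one trained agent

seconds_per_year = 365 * 24 * 3600

# If the training cluster delivered meta_training_flops over one year,
# its sustained throughput is:
cluster_flops_per_sec = meta_training_flops / seconds_per_year

# Number of agents that same hardware could run concurrently once training ends:
concurrent_agents = cluster_flops_per_sec / agent_inference_flops_per_sec
print(round(concurrent_agents))  # ~31710 agents under these assumptions
```

The point is only the ratio: because meta-learning over populations costs vastly more than inference, the moment training finishes, the same hardware can host tens of thousands of agent instances.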

comment by Gurkenglas · 2019-09-10T22:58:53.733Z · LW(p) · GW(p)

Whatever is created by this paradigm may turn out just weak enough that repurposing the training hardware will only scale it up to competitive profitability.

comment by Matthew Barnett (matthew-barnett) · 2019-09-10T23:03:10.516Z · LW(p) · GW(p)

I agree that this argument doesn't automatically imply that there will be a discontinuous jump in profitability of a single system, so it makes sense that the agents created might be just barely more competitive than other agents before it. However, it does imply that by the time we get the hardware necessary to do this, we will have a lot of economic power sitting in our machines by virtue of having a ton of computing power to run the agents.

comment by qemqemqem · 2019-09-10T21:48:25.822Z · LW(p) · GW(p)

Imagine you had an oracle which could assess the situation an agent is in and produce a description for an ML architecture that would correctly "solve" that situation.

I think for some strong versions of this oracle, we could create the ML component from the architecture description with modern methods. I think this combination could effectively act as AGI over a wide range of situations, again with just modern methods. It would likely be insufficient for linguistic tasks.
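The oracle-plus-builder combination described above can be sketched as two functions composed together. Everything here is invented for illustration; no real API or architecture-search system is being referenced:

```python
# Illustrative sketch of the hypothetical oracle: it maps a situation to an
# architecture description, and a separate builder instantiates that description
# with "modern methods". All names and specs are made up.

def oracle(situation: str) -> dict:
    """Stand-in for the oracle: assess a situation, emit an architecture spec."""
    if "image" in situation:
        return {"type": "cnn", "layers": 8}
    if "sequence" in situation:
        return {"type": "transformer", "layers": 6}
    return {"type": "mlp", "layers": 3}

def build_model(spec: dict) -> str:
    """Stand-in for constructing and training the ML component from the spec."""
    return f"{spec['type']}-{spec['layers']}-layers"

def solve(situation: str) -> str:
    # The combination: the oracle picks the architecture, we instantiate it.
    return build_model(oracle(situation))

print(solve("classify an image"))  # cnn-8-layers
```

The claim in the comment is that even this simple composition, if the oracle were strong enough, would act like general intelligence across many situations, with the builder doing only what current methods already can.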

I think that's what this article is getting at. The author is a researcher at Uber. Does anyone know of other articles written on this line of thinking?

comment by gwern · 2019-09-10T23:10:16.391Z · LW(p) · GW(p)

I'm not quite sure what you mean. If you want other manifestos for a more evolutionary or meta-learning approach, DM has https://arxiv.org/abs/1903.00742, which lays out a bigger proposal around PBT and other things they've been exploring, if not as all-in on evolution as Uber AI has been for years now.