post by [deleted] · GW

This is a link post for


Comments sorted by top scores.

comment by [deleted] · 2023-02-08T08:25:52.266Z · LW(p) · GW(p)

On recursive searches for an AGI architecture: once a suitable test bench exists ("a model doing well on this bench is an AGI"), someone could automate a search over the space of possible architectures.

Most published AI papers reuse techniques from a finite set, and modern ones often combine several techniques to get SOTA results. So if you built a composable library of software modules that could apply any technique from all of those papers, adjusting for shape/data types/quantization/etc., combinations of modules from that library would cover every known technique, plus a very large number of combinations not yet tried.
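To make that concrete, here is a minimal sketch of what such a composable library might look like. TechniqueModule, REGISTRY, and compose are hypothetical names invented for illustration, not an existing library:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TechniqueModule:
    """One technique distilled from a published paper (hypothetical)."""
    name: str        # e.g. "rotary-embeddings", "mixture-of-experts"
    paper: str       # citation for the technique's source
    apply: Callable  # wraps a model, handling shape/dtype/quantization glue

REGISTRY: List[TechniqueModule] = []

def register(module: TechniqueModule) -> None:
    """Add one technique from the literature to the shared library."""
    REGISTRY.append(module)

def compose(modules: List[TechniqueModule], base_model):
    """Stack an arbitrary combination of techniques onto a base model.
    Each apply() adapts shapes/data types between stages, so any
    combination drawn from the registry is structurally legal."""
    model = base_model
    for m in modules:
        model = m.apply(model)
    return model
```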

A recursive search, given a large compute budget, that uses the current best-scoring models on the AGI test bench to select new search coordinates from the possibility space could cover more possible AI techniques than all human effort since the beginning of the field, within a year or two.
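A hedged sketch of that loop, framed as simple evolutionary search over the registry above. bench_score, the population scheme, and mutate are all assumptions, and the step where the best-scoring models themselves propose the next coordinates is stubbed out as random mutation:

```python
import random

def mutate(combo, registry):
    """Swap one technique in a combination for one not already used."""
    new = list(combo)
    unused = [m for m in registry if m not in new]
    new[random.randrange(len(new))] = random.choice(unused)
    return new

def recursive_search(registry, bench_score, budget, population=32):
    """Score every candidate combination on the test bench, keep the
    best quarter, and spawn variants of them. In the comment's scenario
    the elite models would pick the new search coordinates themselves;
    here that step is stubbed as random mutation."""
    candidates = [random.sample(registry, k=4) for _ in range(population)]
    for _ in range(budget):
        ranked = sorted(candidates, key=bench_score, reverse=True)
        elite = ranked[: population // 4]
        candidates = elite + [
            mutate(random.choice(elite), registry)
            for _ in range(population - len(elite))
        ]
    return max(candidates, key=bench_score)
```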

That's what matters.  If that search gets performed in 2030, AGI in 2032.  If it's done in 2098, AGI in 2100.  

Like any exponential process, nearly all of the progress happens right at the end.
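A quick arithmetic check of that intuition, assuming per-period progress that doubles each period: the last of 20 periods alone delivers about half of the cumulative total.

```python
# Per-period progress increments that double each period.
increments = [2**k for k in range(20)]
total = sum(increments)        # 2**20 - 1
print(increments[-1] / total)  # ~0.5: the final period is half the total
```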

comment by hold_my_fish · 2023-02-08T06:19:50.869Z · LW(p) · GW(p)

I'm increasingly bothered by the feedback problem for AI timeline forecasting: namely, there isn't any feedback that doesn't require waiting decades. If the methodology is bunk, we won't know for decades, so it seems bad to base any important decisions on the conclusions; but if we're not using the conclusions to make important decisions, what's the point? (Aside from fun value, which is fine, but doesn't make it Our World in Data (OWiD) material.)

This concern would be partially addressed if AI timeline forecasts were being made using methodologies (and preferably by people) that have had success at shorter-range forecasts. But none of the forecast sources here do that.