My current framework for thinking about AGI timelines

post by zhukeepa · 2020-03-30T01:23:57.195Z


At the beginning of 2017, someone I deeply trusted said they thought AGI would come in 10 years, with 50% probability.

I didn't take their opinion at face value, especially since so many experts seemed confident that AGI was decades away. But the possibility of imminent apocalypse seemed plausible enough and important enough that I decided to prioritize investigating AGI timelines over trying to strike gold. I left the VC-backed startup I'd cofounded, and went around talking to every smart and sensible person I could find who seemed to have opinions about when humanity would develop AGI.

My biggest takeaways after 3 years might be disappointing -- I don't think the considerations currently available to us point to any decisive conclusion one way or another, and I don't think anybody really knows when AGI is coming. At the very least, the fields of knowledge that I think bear on AGI forecasting (including deep learning, predictive coding, and comparative neuroanatomy) are disparate, and I don't know of any careful and measured thinkers with all the relevant expertise.

That being said, I did manage to identify a handful of background variables that consistently play significant roles in informing people's intuitive estimates of when we'll get to AGI. In other words, people would often tell me that their estimates of AGI timelines would significantly change if their views on one of these background variables changed.

I've put together a framework for understanding AGI timelines based on these background variables. Of all the frameworks for AGI timelines I've encountered, it most comprehensively enumerates the crucial considerations, and it best explains how smart and sensible people might arrive at vastly different views on AGI timelines.

Over the course of the next few weeks, I'll publish a series of posts about these background variables and some considerations that shed light on their values. I'll conclude by describing my framework for how they come together to explain various overall viewpoints on AGI timelines, depending on one's prior assumptions about the values of these variables.

By trade, I'm a math competition junkie, an entrepreneur, and a hippie. I am not an expert on any of the topics I'll be writing about -- my analyses will not be comprehensive, and they might contain mistakes. I'm sharing them with you anyway in the hopes that you might contribute your own expertise, correct for my epistemic shortcomings, and perhaps find them interesting.

I'd like to thank Paul Christiano, Jessica Taylor, Carl Shulman, Anna Salamon, Katja Grace, Tegan McCaslin, Eric Drexler, Vlad Firiou, Janos Kramar, Victoria Krakovna, Jan Leike, Richard Ngo, Rohin Shah, Jacob Steinhardt, David Dalrymple, Catherine Olsson, Jelena Luketina, Alex Ray, Jack Gallagher, Ben Hoffman, Tsvi BT, Sam Eisenstat, Matthew Graves, Ryan Carey, Gary Basin, Eliana Lorch, Anand Srinivasan, Michael Webb, Ashwin Sah, Yi Sun, Mark Sellke, Alex Gunning, Paul Kreiner, David Girardo, Danit Gal, Oliver Habryka, Sarah Constantin, Alex Flint, Stag Lynn, Andis Draguns, Tristan Hume, Holden Lee, David Dohan, and Daniel Kang for enlightening conversations about AGI timelines, and I'd like to apologize to anyone whose name I ought to have included, but forgot to include.

Table of contents

As I post over the coming weeks, I'll update this table of contents with links to the posts, and I might update some of the titles and descriptions.

How special are human brains among animal brains?

Humans can perform intellectual feats that appear qualitatively different from those of other animals, but are our brains really doing anything so different?

How uniform is the neocortex?

To what extent is the part of our brain responsible for higher-order functions like sensory perception, cognition, and language[1] uniformly composed of general-purpose data-processing modules?

How much are our innate cognitive capacities just shortcuts for learning?

To what extent are our innate cognitive capacities (for example, a pre-wired ability to learn language) crutches provided by evolution to help us learn more quickly what we otherwise would have been able to learn anyway?

Are mammalian brains all doing the same thing at different levels of scale?

Are the brains of smarter mammals, like humans, doing essentially the same things as the brains of less intelligent mammals, like mice, except at a larger scale?

How simple is the simplest brain that can be scaled?

If mammalian brains can be scaled, what's the simplest brain that could be? A turtle's? A spider's?

How close are we to simple biological brains?

Given how little we understand about how brains work, do we have any reason to think we can recapitulate the algorithmic function of even simple biological brains?

What's the smallest set of principles that can explain human cognition?

Is there a small set of principles that underlies the breadth of cognitive processes we've observed (e.g. language, perception, memory, attention, and reasoning)[2], similarly to how Newton’s laws of motion underlie a breadth of seemingly-disparate physical phenomena? Or is our cognition more like a big mess of irreducible complexity?

How well can humans compete against evolution in designing general intelligences?

Humans can design some things much better than evolution (like rockets), and evolution can design some things much better than humans (like immune systems). Where does general intelligence lie on this spectrum?

Tying it all together, part I

My framework for what these variables tell us about AGI timelines

Tying it all together, part II

My personal views on AGI timelines


  1. https://en.wikipedia.org/wiki/Neocortex
  2. https://en.wikipedia.org/wiki/Cognitive_science

5 comments


comment by teradimich · 2020-03-30T19:07:40.620Z

I have collected a huge number of quotes from various experts about AGI -- about AGI timelines, the possibility of a fast takeoff, and its impact on humanity. Perhaps this will be useful to you.

https://docs.google.com/spreadsheets/d/19edstyZBkWu26PoB5LpmZR3iVKCrFENcjruTj7zCe5k/edit?fbclid=IwAR1_Lnqjv1IIgRUmGIs1McvSLs8g34IhAIb9ykST2VbxOs8d7golsBD1NUM#gid=1448563947

comment by MichaelA · 2020-04-01T09:30:19.565Z

Interesting post - I look forward to reading the rest of this series! (Have you considered making it into a "sequence"?)

Summary of my comment: It seems like this post lists variables that should inform views on how hard developing an AGI will be, but omits variables that should inform views on how much effort will be put into that task at various points, and how conducive the environment will be to those efforts. And it seems to me that AGI timelines are a function of all three of those high-level factors.

(Although note that I'm far from being an expert on AI timelines myself. I'm also not sure if the effort and conduciveness factors can be cleanly separated.)

Detailed version: I was somewhat surprised to see that the "background variables" listed all seemed fairly focused on things like neuroscience/biology, without any seeming to focus on other economic, scientific, or cultural trends that might impact AI R&D or its effectiveness. By the latter, I mean things like (I spitballed these quickly just now, and some might overlap somewhat):

  • whether various Moore's-law-type trends will continue, or slow down, or speed up, and when
    • relatedly, whether there'll be major breakthroughs in technologies other than AI which feed into (or perhaps reduce the value of) AI R&D
  • whether investment (including e.g. government funding) in AI R&D will increase, decrease, or remain roughly constant
  • whether we'll see a proliferation of labs working on "fundamental" AI research, or a consolidation, or not much change
  • whether there'll be government regulation of AI research that slows it down, and by how much
  • whether AI will come to be strongly seen as a key military technology, and/or governments nationalise AI labs, and/or governments create their own major AI labs
  • whether there'll be another "AI winter"

I don't have any particular reason to believe that views on those specific things I've mentioned would do a better job at explaining disagreements about AGI timelines than the variables mentioned in this post would. Perhaps most experts already agree about the things I mentioned, or see them as not very significant. But I'd at least guess that there are things along those lines which either do or should inform views on AGI timelines.

I'd also guess that factors like those I've listed would seem increasingly important as we consider increasingly long timelines, and as we consider "slow" or "moderate" takeoff scenarios (like the scenarios in "What failure looks like"). E.g., I doubt there'd be huge changes in interest in, funding for, or regulation of AI over the next 10 years (though it's very hard to say), if AI doesn't become substantially more influential over that time. But over the next 50 years, or if we start seeing major impacts of AI before we reach something like AGI, it seems easy to imagine changes in those factors occurring.

comment by Steven Byrnes (steve2152) · 2020-03-30T11:09:12.113Z

Looking forward to it!!!

comment by Gyrodiot · 2020-03-30T15:33:05.835Z

Looking forward to it as well. From the table of contents, I gather that your framework will draw heavily from neuroscience and insights from biological intelligence?