A review of the Bio-Anchors report
post by jylin04 · 2022-10-03T10:27:58.259Z · LW · GW
This is a link post for https://docs.google.com/document/d/1_GqOrCo29qKly1z48-mR86IV7TUDfzaEXxD3lGFQ8Wk/edit
This is a linkpost for a review of Ajeya Cotra's Biological Anchors report (see also the update here) that I wrote in April 2022. It has since won a prize from the EA criticism and red-teaming contest, so I thought it might be good to share it here for further discussion.
Here's a summary from the judges of the red-teaming contest:
This is a summary and critical review of Ajeya Cotra’s biological anchors report on AI timelines. It provides an easy-to-understand overview of the main methodology of Cotra’s report. It then examines and challenges central assumptions of the modelling in Cotra’s report. First, the review looks at reasons why we might not expect 2022 architectures to scale to AGI. Second, it raises the point that we don’t know how to specify a space of algorithmic architectures that contains something that could scale to AGI and can be efficiently searched through (inability to specify this could undermine the ability to take the evolutionary anchors from the report as a bound on timelines).
Note that a link on page 8 to a summary/review of the book Principles of Deep Learning Theory has been moved here: More Recent Progress in the Theory of Neural Networks.
4 comments
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-10-08T11:29:11.734Z · LW(p) · GW(p)
Thanks for this! I think it is a well-written and important critique. I don't agree with it, though, and unfortunately I am not sure how to respond. Basically you are taking a possibility--that there is some special-sauce architecture in the brain that is outside the space of current algorithms & that we don't know how to find via evolution, because it's complex enough that if we just try to imitate evolution we'll probably mess up and draw our search space to exclude it, or make the search space too big and never find it even with 10^44 flops--and saying "this feels like 50% likely to me", and Ajeya is like "no no, it feels like 10% to me", and I'm like "I'm being generous by giving it even 5%; I don't see how you could look at the history of AI progress so far & what we know about the brain and still take this hypothesis seriously." But it's just coming down to different intuitions/priors. (Would you agree with this characterization?)
↑ comment by jylin04 · 2022-11-07T14:02:22.886Z · LW(p) · GW(p)
Thanks for the comment! I agree with this characterization. I think one of the main points I was trying to make in this piece was that as long as the prior for "amount of special sauce in the brain (or needed for TAI)" is a free parameter, the uncertainty in this parameter may dominate the timelines conversation (e.g. people who think that it is big may be basically unmoved by the Bio-Anchors calculation), so I'd love to see more work aimed at estimating it. (Then the rest of my post was an attempt to give some preliminary intuition pumps for why this parameter feels relatively big to me. But I think there are probably better arguments to be made (especially around whether TAI is in the space of current algorithms), and I'll try to write more about it in the future.)
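(To make that concrete, here is a minimal toy sketch of the mixture point; the two conditional forecasts and all of the numbers below are purely illustrative placeholders, not figures from the report.)

```python
# Toy mixture model (illustrative numbers only): P(TAI by year) as a mixture of
# a "bio-anchors-style scaling works" world and a "special sauce needed" world.

def p_tai_by_year(year, p_sauce, p_scaling_world, p_sauce_world):
    """Probability of TAI by `year`, mixing two conditional forecasts."""
    return (1 - p_sauce) * p_scaling_world(year) + p_sauce * p_sauce_world(year)

# Hypothetical stand-ins for the two conditional forecasts (not from the report).
scaling_world = lambda year: min(1.0, max(0.0, (year - 2030) / 30))   # e.g. ~50% by 2045
sauce_world   = lambda year: min(1.0, max(0.0, (year - 2080) / 100))  # much slower

for p_sauce in (0.1, 0.5, 0.9):
    print(f"p_sauce={p_sauce}: P(TAI by 2045) = "
          f"{p_tai_by_year(2045, p_sauce, scaling_world, sauce_world):.2f}")
# Prints 0.45, 0.25, 0.05 -- once p_sauce is large, the bio-anchors-style
# forecast barely moves the headline number; most of the action is in
# estimating p_sauce itself.
```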
(BTW, I'm really sorry for the slow reply! I still haven't figured out how to juggle replying to things on LW in a reasonable time frame with not getting too distracted by reading fun things whenever I log onto LW...)
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-11-07T16:50:05.987Z · LW(p) · GW(p)
Nice. I agree with your point about how uncertainty in this parameter may dominate the timelines conversation. If you do write more about why the prior on special sauce should be large (e.g. whether TAI is in the space of current algorithms), I'd be interested to read it! Though don't feel like you have to do this--maybe you have more important projects to do.
(No rush! This sort of conversation doesn't have a time limit, so it's not hurting me at all to wait even months before replying. I'm glad you like LW. :) )
comment by Jordan Taylor (Nadroj) · 2022-10-18T02:45:38.690Z · LW(p) · GW(p)
One small thing: When you first use the word "power", I thought you were talking about energy use rather than computational power. Although you clarify this in "A closer look at the NN anchor", I would get the wrong impression if I read only the hypotheses:
... TAI will run on an amount of power comparable to the human brain ...
... neural network which would use that much power ...
Maybe change "power" to "computational power" there? I expect biological systems to be much more strongly selected to minimize energy use than TAI systems would be, but the same is not true for computational power.