A bet for Samo Burja
post by Nathan Helm-Burger (nathan-helm-burger) · 2024-09-05T16:01:35.440Z · LW · GW · 2 comments
I'm listening to Samo Burja talk on the Cognitive Revolution podcast with Nathan Labenz. Samo said that he would bet that AGI is coming perhaps in the next 20-50 years, but not in the next 5.
I will take that bet. I can't afford to make an impressively large bet, because my counterfactual income is already tied up in a bet against the universe: I quit my well-paying industry job as a machine learning engineer / data scientist three years ago to focus on AI safety/alignment research. To make the bet interesting, I will therefore offer 10:1 odds. I bet $1000 USD against your $100 USD that AGI will be invented in the next 5 years. There are a lot of possible resolution criteria, but as a reasonable Schelling point I'll accept this Metaculus question: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
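For context on what those stakes imply, here is a quick breakeven calculation (assuming the winner simply collects the other side's stake):

\[
P(\text{AGI within 5 years}) > \frac{1000}{1000 + 100} = \frac{10}{11} \approx 0.91
\]

In other words, the offer is only positive expected value for the $1000 side if the probability is above roughly 91%, and positive expected value for the $100 side at anything below that.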
I'll describe my rationale here, in case I change your mind and make you not want the bet. ;-)
I agree with your premise that AGI will require fundamental scientific advances beyond currently deployed tech like transformer LLMs.
I agree that scientific progress is hard: usually slow and erratic, and fundamentally different from engineering or bringing a product to market.
I agree with your estimate that the current hype around chat LLMs, and the focus on bringing better versions to market, is slowing fundamental scientific progress by distracting top AI scientists from the pursuit of theoretical advances.
My cruxes are these:
1. I believe LLMs will scale close enough to AGI to become central parts of very useful tools. I believe that these tools will enable human AI scientists to make rapid theoretical progress. I expect that these AI research systems (I won't say researchers, since in this scenario they are still sub-AGI) will enable massively parallel testing of hypotheses derived as permutations of a handful of initial ideas supplied by the human scientists. I also foresee these AI research systems mining the existing scientific literature for hypotheses to test. I believe the result of this technology will be the rapid discovery of algorithms that can actually scale to true AGI.
2. I have been following advances in neuroscience relevant to brain-inspired AI for over 20 years now. I believe that the neuroscience community has made some key breakthroughs in the past five years which have yet to be effectively exported to machine learning and tested at scale. I also believe there's a backlog of older neuroscience findings that haven't been fully tested either. Thus, the existing neuroscience literature provides a rich source of under-explored, testable hypotheses. This backlog could be tackled rapidly by the AI research systems from point 1, or will eventually be digested by eager young scientists looking for an academic ML paper to kickstart their careers. The two cruxes are therefore independent but potentially highly synergistic.
I look forward to your response!
Regards,
Nathan Helm-Burger
2 comments
comment by ChristianKl · 2024-09-06T11:48:38.267Z · LW(p) · GW(p)
If you make a bet, you need to be very clear about the criteria by which the bet gets decided; currently it seems like this post doesn't lay out those criteria.
comment by Mateusz Bagiński (mateusz-baginski) · 2024-09-05T16:10:05.513Z · LW(p) · GW(p)
> Samo said that he would bet that AGI is coming perhaps in the next 20-50 years, but in the next 5.
I haven't listened to the pod yet but I guess you meant "but not in the next 5".