Is there a possibility that the upcoming scaling of data in language models will cause A.G.I.?
post by ArtMi (richard-ford) · 2022-04-08T06:56:50.146Z · LW · GW · No comments
This is a question post.
In recent days we have seen the DALL-E 2 and Google Pathways Language Model (PaLM) achievements, and in the coming months we will see the outcome of the recently discovered, more compute-optimal scaling laws: https://www.lesswrong.com/posts/midXmMb2Xg37F2Kgn/new-scaling-laws-for-large-language-model [LW · GW]
EY believes that A.G.I. could be achieved through:
1) The ability to train with less data.
2) The ability to improve itself.
I believe A.G.I. could be close if language models improve their ability to write code, particularly recursive code (a toy sketch of what condition 2 might look like follows below).
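To make that second condition a bit more concrete, here is a minimal toy sketch of a self-improvement loop. Everything in it is hypothetical: the "model" is just a list of weights, and `evaluate` and `propose_patch` stand in for capabilities that do not exist today; only the overall shape of the loop is the point.

```python
import random

# Toy stand-in for "ability to improve itself" (condition 2 above).
# A "model" is just a list of weights, and "self-improvement" is
# hill-climbing on a fixed objective. All names are hypothetical.

def evaluate(weights: list[float]) -> float:
    """Hypothetical benchmark: closer to all-ones scores higher."""
    return -sum((w - 1.0) ** 2 for w in weights)

def propose_patch(weights: list[float]) -> list[float]:
    """Stand-in for the model rewriting itself: perturb one weight."""
    candidate = list(weights)
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-0.1, 0.1)
    return candidate

def self_improve(weights: list[float], steps: int) -> list[float]:
    """Iteratively apply patches, keeping only those that score better."""
    best = evaluate(weights)
    for _ in range(steps):
        candidate = propose_patch(weights)
        score = evaluate(candidate)
        if score > best:
            weights, best = candidate, score
    return weights

print(self_improve([0.0, 0.0, 0.0], steps=1000))
```

The open question, to my mind, is whether improving coding ability in language models closes the gap between a toy loop like this and a model that can meaningfully patch itself.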
Is this a real possibility, or do you believe the path to A.G.I. is orthogonal to the current ML pathway? Or is it somewhere in between? And if so, where? (This is the question of what would cause A.G.I.)
Could you please share any resources you have on "using AI to align A.G.I."? The obvious "duh, that's not possible" objection I found is that by the time an AI could do successful AI safety and alignment research, it would probably already be too late, perhaps because the same capability could also be used to advance A.G.I. research.