LEX: Is there a way to measure general intelligence? I mean, I could ask that question in a million ways, but basically, will you know it when you see it, it being an AGI system?
YUD: Heh. If you boil a frog gradually enough, if you zoom in far enough, it's always hard to tell around the edges. GPT-4, people are saying right now, "Like, this looks to us like a spark of general intelligence. It is, like, able to do all these things it was not explicitly optimized for." Other people are being like, "No, it's too early. It's, like, fifty years off." And, you know, if they say that, they're kind of whack, because how could they possibly know that even if it were true? But, uh, you know, not to strawman, some people may say, like, "That's not general intelligence," and not furthermore append, "It's fifty years off."
Um, or they may be like, "It's only a very tiny amount." And, you know, the thing I would worry about is: if this is how things are scaling, then, jumping out ahead and trying not to be wrong in the same way that I've been wrong before, maybe GPT-5 is more unambiguously a general intelligence, and maybe that is getting to a point where it is, like, even harder to turn back. Not that it would be easy to turn back now, but, you know, maybe if you, like, start integrating GPT-5 into the economy, it is even harder to turn back past there.
LEX: Isn't it possible that, you know, with the frog metaphor, you can kiss the frog and turn it into a prince as you're boiling it? Could there be a phase shift in the frog, where it becomes unambiguous, as you're saying?
YUD: I was expecting more of that. I was... like, the fact that GPT-4 is kind of on the threshold and neither here nor there, that itself is not quite how I expected it to play out. I was expecting there to be more of a sense of, like, different discoveries. Like the discovery of transformers, where you would stack them up, and there would be, like, a final discovery, and then you would get something that was more clearly a general intelligence. So the way that you are taking what is probably basically the same architecture in GPT-3 and throwing twenty times as much compute at it, probably, and getting out GPT-4, and then it's, like, maybe just barely a general intelligence, or, like, a narrow general intelligence, or, you know, something we don't really have the words for. Um, yeah, that is not quite how I expected it to play out.
This somewhat confusing exchange is another indicator that the "general intelligence" component of AGI works much better as a stylized concept than it does when applied to the real world. But it is also a clear signal that EY thinks recent AI developments and deployments look more like slow takeoff than fast takeoff, at least relative to his expectations.