Are LLMs sufficient for AI takeoff?
post by rpglover64 (alex-rozenshteyn) · 2023-02-17T15:46:36.765Z
This is a question post.
I have an intuition, and I may be heterodox here, that LLMs on their own are not sufficient, no matter how powerful and knowledgeable they get. Put differently, the reasons that powerful LLMs are profoundly unsafe are primarily social: e.g. they will be hooked up to the internet to make iterative refinements to themselves; or they will be run continuously, allowing their simulacra to act; etc. Someone will build a system using an LLM as a component that kicks things off.
I'm not making an argument for safety here; after all, the main reason nukes are dangerous is that people might use them, which is also a social reason.
I'm asking because I have not seen this view explicitly discussed and I would like to get people's thoughts.
Answers
answer by Ben
I am no expert, but I agree with you. They are cool, and they could be a component in something. But they seem like they are only doing part of the "intelligence thing".
Eliezer seems to think they can do more: https://www.lesswrong.com/posts/qkAWySeomp3aoAedy/?commentId=KQxaMGHoXypdQpbtH
I don't know if anyone else has spoken about this, but since thinking about LLMs a little I am starting to feel like there is something analogous to a small LLM (SLM?) embedded somewhere as a component in humans. I think I see it when someone gets asked (in person) a question, and they start giving an answer immediately, then suddenly interrupt themselves to give the opposite answer. Usually the trend is that the first answer was the "social answer", something like "In this situation the thing my character does is to agree enthusiastically that your project, which you are clearly super excited about, is cool, and tell you I will work on it full steam." Then some other part of the self kicks in: "Wait, after 30 seconds of consideration I have realised that this idea can never work. Let me prove it to you." At least to me it even feels like that: there is some "conversation continuer" component. Obviously the build-up of an AI doesn't have to mirror that of a human intelligence, but if we want to build something "human level" then it stands to reason that it would end up with specialized components for the same sorts of things humans have specialized components for.
↑ comment by rpglover64 (alex-rozenshteyn) · 2023-02-17T21:59:32.222Z
But they seem like they are only doing part of the "intelligence thing".
I want to be careful here; there is some evidence to suggest that they are doing (or at least capable of doing) a huge portion of the "intelligence thing", including planning, induction, and search, and even more if you include minor external capabilities like storage.
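To make that concrete: much of what people point to comes from wrapping the model in a thin loop rather than from the model alone. Here is a minimal sketch, assuming a hypothetical `call_llm` text-completion function (not any real API), of how iteration plus a trivial external store turns a next-token predictor into something agent-shaped:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for any text-completion endpoint; hypothetical, not a real API."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    scratchpad: list[str] = []  # the "minor external capability": persistent storage
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            "Notes so far:\n" + "\n".join(scratchpad) +
            "\nNext step (or DONE):"
        )
        step = call_llm(prompt).strip()  # the LLM itself is just one component
        if step == "DONE":
            break
        scratchpad.append(step)  # the wrapper, not the model, carries state between calls
    return scratchpad
```

The point of the sketch is that the planning-and-memory scaffolding is almost trivial; the question is how much of the "intelligence thing" the model supplies inside that loop.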
I don't know if anyone else has spoken about this, but since thinking about LLMs a little I am starting to feel like there is something analogous to a small LLM (SLM?) embedded somewhere as a component in humans
I know that the phenomenon has been studied for reading and listening (I personally get a kick out of garden-path sentences); the relevant fields are "natural language processing" and "computational linguistics". I don't know of any work that specifically addresses it in the "speaking" setting.
if we want to build something "human level" then it stands to reason that it would end up with specialized components for the same sorts of things humans have specialized components for.
Soft disagree. We're actively building the specialized components because that's what we want, not because that's particularly useful for AGI.