Comments

Comment by Harish Tayyar Madabushi (harish-tayyar-madabushi) on Linkpost: Are Emergent Abilities in Large Language Models just In-Context Learning? · 2024-08-08T23:09:05.394Z · LW · GW

Just wanted to share that this work has now been peer-reviewed and accepted to ACL 2024.

The arXiv page has been updated with the published ACL version: https://arxiv.org/abs/2309.01809

Comment by Harish Tayyar Madabushi (harish-tayyar-madabushi) on Linkpost: Are Emergent Abilities in Large Language Models just In-Context Learning? · 2023-12-11T19:53:40.978Z · LW · GW

Thanks!

Yes, I completely agree with you that in-context learning (ICL) is the only new "ability" LLMs seem to be displaying. I also agree with you that they only start computing when we prompt them.

There seems to be the impression that, when prompted, LLMs might do something different from (or even orthogonal to) what the user requests (see, for example, Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure, reported here by the BBC). We'd probably agree that this was the result of careful prompt engineering (made possible by ICL) and not an active attempt by GPT to "deceive".

Just so we can explicitly rule this out, I'd not call ICL an "emergent ability" in the Wei et al. sense. ICL "expressiveness" seems to increase smoothly with scale, so it's predictable (and so does not imply other "unknowable" capabilities, such as deception or planning, emerging with scale)!
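
To make the "predictable" part concrete, here is a minimal sketch with entirely made-up numbers: if accuracy measured on smaller models extrapolates smoothly to a larger one, the ability scales predictably, rather than appearing as the sharp, unforeseeable jump that an "emergent ability" in the Wei et al. sense would imply.

```python
# A minimal sketch (hypothetical data) of the predictable-vs-emergent check.
import numpy as np

# Hypothetical (parameter count, accuracy) pairs for smaller models.
params = np.array([1e8, 3e8, 1e9, 3e9])
accuracy = np.array([0.42, 0.48, 0.55, 0.61])

# Fit a smooth trend in log-parameter space.
coeffs = np.polyfit(np.log10(params), accuracy, deg=1)

# Extrapolate to a larger model and compare against its observed score.
predicted = np.polyval(coeffs, np.log10(1e10))
observed = 0.68  # hypothetical large-model accuracy

# A small gap suggests smooth, predictable scaling rather than emergence.
print(f"predicted: {predicted:.2f}, observed: {observed:.2f}")
```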

It's going to be really exciting if we are able to obtain ICL at smaller scale! Thank you very much for that link. That's a very interesting paper!

Comment by Harish Tayyar Madabushi (harish-tayyar-madabushi) on Linkpost: Are Emergent Abilities in Large Language Models just In-Context Learning? · 2023-12-11T02:23:20.505Z · LW · GW

I am one of the authors - thank you for taking the time to read and summarise our paper!

About your question on instructions vs. inherent abilities:

Consider the scenario where we train a model on the task of Natural Language Inference (NLI), using a dataset like the Stanford Natural Language Inference (SNLI) Corpus. Suppose the model performs exceptionally well on this task. While we can now say that the model possesses the computational capability to excel at NLI, this doesn't necessarily indicate that the model has developed inherent emergent reasoning abilities beyond what it was explicitly trained for on the SNLI corpus. For example, it is unlikely that our NLI-trained model will perform well on tasks that require logical reasoning skills.
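
To make the scenario concrete, here's a minimal sketch of the kind of setup I mean (not our actual experimental code; the base model and hyperparameters are just placeholders):

```python
# Fine-tune a model on SNLI, then ask whether the learned ability
# transfers to a reasoning task it was never trained on.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # entailment / neutral / contradiction

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length")

# SNLI marks unlabelled examples with -1; drop them before training.
snli = load_dataset("snli").filter(lambda ex: ex["label"] != -1)
snli = snli.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="snli-model", num_train_epochs=1),
    train_dataset=snli["train"],
    eval_dataset=snli["validation"],
)
trainer.train()

# Strong in-task performance here only shows the model learned *this* task.
print(trainer.evaluate())

# To test for inherent reasoning, one would now evaluate on a task the model
# was never trained on (e.g. a logical-reasoning benchmark); a large drop
# suggests the NLI skill did not come with general reasoning abilities.
```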

My 15 min talk on the paper might also help answer this question: https://www.youtube.com/live/I_38YKWzHR8?si=hWoUr4ucFrT8sFUi&t=3111 

Comment by Harish Tayyar Madabushi (harish-tayyar-madabushi) on Linkpost: Are Emergent Abilities in Large Language Models just In-Context Learning? · 2023-12-11T02:15:15.949Z · LW · GW

Hi there, 

I am one of the authors - thank you for your interest in this paper. 

The focus of the paper is the discussion surrounding the "existential threat" posed by latent hazardous abilities. Essentially, our results show that there is no evidence to suggest that models are likely to have the ability to plan and reason independently of what they are explicitly required to do through their prompts.

Importantly, as mentioned in the paper, there remain other concerns regarding the use of LLMs: For example, the ease with which they can be used to generate fake news or spam emails.

You are right that our results show that "emergent abilities" are dependent on prompts; however, our results also imply that tasks which can be solved by models are not really "emergent", and this will remain the case for any new tasks we find they are able to solve.

Here's a summary of the paper: https://h-tayyarmadabushi.github.io/Emergent_Abilities_and_in-Context_Learning/