Do LLMs Implement NLP Algorithms for Better Next Token Predictions?
post by simeon_c (WayZ) · 2023-09-19T12:28:45.660Z · LW · GW · No comments
This is a question post.
Do you think that base LLMs implement forms of "meta" algorithms, like TF-IDF, to better predict the next token?
Intuitively, it would be a pretty smart way to proceed: a text tends to be coherent in style and vocabulary, so it would be very useful for an LLM to internally implement NLP algorithms of this kind to help it guess the next words.
I'm interested in this question because if any such "meta" algorithm is implemented internally, and its effectiveness depends on the training setup the LLM is in, that increases the chances that situational awareness would arise.
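For concreteness, here is a minimal sketch of what TF-IDF computes (the function name and example data below are my own illustration; the question is whether something functionally similar, not this literal code, could be learned internally):

```python
import math
from collections import Counter

def tf_idf(documents):
    """Compute TF-IDF scores for each term in each document.

    documents: list of token lists, e.g. [["the", "cat"], ["the", "dog"]]
    Returns one {term: score} dict per document.
    """
    n_docs = len(documents)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for doc in documents:
        df.update(set(doc))

    scores = []
    for doc in documents:
        tf = Counter(doc)  # raw term counts within this document
        scores.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return scores

# Terms shared by every document ("the") get zero weight,
# while distinctive terms ("cat", "dog") are up-weighted.
print(tf_idf([["the", "cat", "sat"], ["the", "dog", "ran"]]))
```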
Answers
Answer by Charlie Steiner
Yes, absolutely, but I don't expect algorithms to be implemented in separable chunks the way a human would do it. Comparing frequencies of various words just needs an early attention head with broad attention. But such an attention head will also be recruited to do other things, not just faithfully pass on the sum of its inputs, and so you'd never literally find TF-IDF.
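As a toy illustration of that mechanism (my own construction, not an observed circuit), a head that attends uniformly over all prior positions and reads one-hot token values passes each position the relative frequency of every vocabulary item seen so far:

```python
import numpy as np

# Toy vocabulary and a short "document" of token ids.
vocab = ["the", "cat", "sat", "on", "mat"]
tokens = [0, 1, 2, 3, 0, 4]  # "the cat sat on the mat"

# One-hot "value" vectors, one per position (seq_len x vocab_size).
values = np.eye(len(vocab))[tokens]

# A maximally "broad" attention pattern: each position attends uniformly
# to itself and all earlier positions (causal mask, then row-normalize).
seq_len = len(tokens)
attn = np.tril(np.ones((seq_len, seq_len)))
attn /= attn.sum(axis=1, keepdims=True)

# The head's output at position t is the relative frequency of each
# vocabulary item among tokens 0..t -- a running bag-of-words statistic.
freqs = attn @ values
print(freqs[-1])  # "the" has frequency 2/6 at the final position
```

In a real model this head would also carry other information and be mixed with everything downstream, which is the point of the answer above: the statistic can be computed without a cleanly separable TF-IDF module ever existing.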