How does tokenization influence prompting?
post by Boris Kashirin (boris-kashirin) · 2024-07-29T10:28:25.056Z
This is a question post.
I was thinking about how a prompt differs from training data in terms of tokenization. If I prompt with "solution:" as opposed to "solution: ", it seems like this can influence the result, since in the training data the last token carries some information about the next token. For example, if there is a token ": T" but my prompt ends in ": ", the model can infer that the next token can't be "T[something]", because otherwise the tokenizer would have merged it into ": T".
Is this a real effect, or do I just misunderstand how tokenization works?
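For concreteness, here's a minimal sketch of how the two prompts tokenize, using the GPT-2 tokenizer from Hugging Face transformers purely as an example (the exact splits depend on the tokenizer):

```python
# Minimal sketch: inspect how a trailing space changes the token sequence.
# GPT-2's byte-level BPE is used purely as an example; other tokenizers
# split differently, so treat the output as model-specific.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for prompt in ["solution:", "solution: "]:
    ids = tokenizer.encode(prompt)
    pieces = [tokenizer.decode([i]) for i in ids]
    print(repr(prompt), "->", ids, pieces)

# With GPT-2's tokenizer, common continuation tokens such as " The" start with
# a leading space, so a prompt that already ends in " " rules those tokens out.
```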
Answers
answer by Brendan Long
This is a real effect, and this article gives an example with URLs: https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38
":" and "://" are different tokens in this LLM, so prompting with a URL starting with "http:" gives bad results because it can't use the "://" token.
This can be improved with a technique called "token healing", which essentially steps backwards in the prompt and then allows any next token that starts with the same characters as the prompt's remaining text (i.e. in the "http:" example, it steps back to "http" and allows any continuation whose first token starts with ":").
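A rough sketch of that token-healing step, assuming a Hugging Face tokenizer (the helper name and the brute-force vocabulary scan are illustrative; a real implementation would mask logits during generation rather than decode the whole vocabulary):

```python
# Rough sketch of token healing: drop the prompt's final token, then only
# allow next tokens whose decoded text starts with the characters removed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative choice

def token_healing_candidates(prompt: str):
    ids = tokenizer.encode(prompt)
    # Back up over the final token and remember its text.
    healed_ids, removed = ids[:-1], tokenizer.decode(ids[-1:])
    # Any vocabulary entry whose text starts with the removed characters is a
    # legal "healed" first token (e.g. removing ":" re-admits "://").
    allowed = [
        tok_id
        for tok_id in range(len(tokenizer))
        if tokenizer.decode([tok_id]).startswith(removed)
    ]
    return healed_ids, allowed

healed_ids, allowed = token_healing_candidates("The link is http:")
print("prompt kept as:", repr(tokenizer.decode(healed_ids)))
print("some first-token candidates:", [tokenizer.decode([i]) for i in allowed[:5]])
```

Generation would then restrict the first sampled token to this candidate set (e.g. by masking all other logits) and proceed normally afterwards.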
Note that this only applies at the level of tokens, so in your example it's true that the next token can't be ": T", but standard tokenizers also have tokens for shorter pieces of their longer tokens (down to single characters), so it could be just "T". Whether this makes things better or worse depends on which usage was more common/better in the training data.
comment by gwern · 2024-07-30T01:49:55.410Z
The number of problems that non-character/byte tokenization causes, whether BPE or WordPiece, never fails to amaze me. What a kettle of worms is that attractive-looking hack to save context window & speed up learning - especially as the models become so smart they otherwise make few errors & it becomes harder to shrug away tokenization pathologies.
comment by Boris Kashirin (boris-kashirin) · 2024-07-30T10:39:27.935Z
It would be funny if the hurdle presented by tokenization turned out to be somehow responsible for LLMs being smarter than expected :) Sounds exactly like the kind of curveball reality likes to throw at us from time to time :)
comment by gwern · 2024-07-30T13:58:04.370Z
I definitely think that LLMs are 'smarter than expected' for many people due to tokenization, if only because they look at tokenization errors, which are so vivid and clear, and then ignore things like GPQA which are arcane and hard to read, and conclude LLMs are stupid. "It can't even count the letters in 'strawberry', obviously this is all bunk."