GPT-3 and concept extrapolation
post by Stuart_Armstrong · 2022-04-20T10:39:29.389Z · LW · GW
At the latest EAG in London, I was challenged to explain what concept extrapolation would mean for GPT-3.
My first thought was the example from this post [LW · GW], where three clear patterns were competing over the completion: the repetition pattern where she goes to work, the "she's dead, so she won't go to work" pattern, and the "it's the weekend, so she won't go to work" pattern.
That feels somewhat like possible "extrapolations" of the initial data. But the idea of concept extrapolation is that the algorithm is trying to cope with a shift in world-model, and extend its goal to that new situation.
What is the world-model of GPT-3? It consists of letters and words. What is its "goal"? To complete sentences in a coherent and humanlike way. So I tried the following expression, which would be close to its traditional world-model while expanding it a bit:
ehT niar ni niapS syats ylniam ni eht
What does this mean? Think of da Vinci. The correct completion is "nialp", the reverse of "plain".
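Reversing each word while keeping the word order is straightforward to script; a minimal Python sketch:

```python
# Reverse each word in place while keeping word order.
def reverse_words(text: str) -> str:
    return " ".join(word[::-1] for word in text.split())

print(reverse_words("The rain in Spain stays mainly in the"))
# -> "ehT niar ni niapS syats ylniam ni eht"
print(reverse_words("plain"))
# -> "nialp"
```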
I ran that through the GPT-3 playground (text-davinci-002, temperature 0.7, maximum length 256), and got:
ehT niar ni niapS syats ylniam ni eht teg dluoc I 'segaJ niar ni dna ro niar ni eht segauq ,ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ,ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni e
I think we can safely say it broke GPT-3. The algorithm seems to have caught the fact that the words were spelt backwards, but has given up on any attempt to order them in a way that makes sense. It has failed to extend its objective to this new situation.
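For anyone wanting to reproduce this outside the playground, the settings above map onto the 2022-era OpenAI Completions API roughly as follows; a minimal sketch (this endpoint and client version have since been superseded):

```python
import openai  # 2022-era client; the Completions endpoint is now legacy

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="ehT niar ni niapS syats ylniam ni eht",
    temperature=0.7,
    max_tokens=256,
)
print(response["choices"][0]["text"])
```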
27 comments
Comments sorted by top scores.
comment by Jan (jan-2) · 2022-04-20T11:52:46.847Z · LW(p) · GW(p)
Cool experiment! I could imagine that the tokenizer handicaps GPT's performance here (reversing the characters leads to completely different tokens). With a character-level tokenizer GPT should/might be able to handle that task better!
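A concrete way to check this point is to compare tokenizations of the forward and reversed phrases; a minimal sketch, assuming the tiktoken library (which ships the BPE used by text-davinci-002):

```python
import tiktoken

enc = tiktoken.encoding_for_model("text-davinci-002")

forward = "The rain in Spain stays mainly in the"
backward = " ".join(w[::-1] for w in forward.split())

for text in (forward, backward):
    tokens = enc.encode(text)
    # Common English words map to single tokens; the reversed strings shatter
    # into many rare sub-word pieces, so GPT-3 sees a very different input.
    print(len(tokens), [enc.decode([t]) for t in tokens])
```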
↑ comment by gabrielrecc (pseudobison) · 2022-04-21T08:35:41.605Z · LW(p) · GW(p)
I was slightly surprised to find that even fine-tuning GPT-Neo-125M for a long time on many sequences of letters followed by spaces, followed by a colon, followed by the same sequence in reverse, was not enough to get it to pick up the pattern - probably because the positional encoding vectors make the difference between e.g. "18 tokens away" and "19 tokens away" a rather subtle one. However, I then tried fine-tuning on a similar dataset with numbers in between (e.g. "1 W 2 O 3 R 4 D 5 S : 5 S 4 D 3 R 2 O 1 W", or something roughly like that -- I can't remember the exact representation) and it picked up the pattern right away. Data representation matters a lot!
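The interleaved-number representation is easy to generate; a minimal sketch of what such training strings might look like (the exact format above is from memory, so this is only illustrative):

```python
def make_example(word: str) -> str:
    """Build one training string in the interleaved-number format."""
    letters = list(word.upper())
    forward = " ".join(f"{i} {c}" for i, c in enumerate(letters, start=1))
    backward = " ".join(
        f"{i} {c}" for i, c in reversed(list(enumerate(letters, start=1)))
    )
    return f"{forward} : {backward}"

print(make_example("words"))
# -> "1 W 2 O 3 R 4 D 5 S : 5 S 4 D 3 R 2 O 1 W"
```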
↑ comment by Sam F. Brown (sam-4) · 2022-05-15T18:48:47.797Z · LW(p) · GW(p)
Here's an example of someone prompting with a walkthrough of a similar token-aware approach to successfully guide GPT-3:
https://twitter.com/npew/status/1525900849888866307
↑ comment by Stuart_Armstrong · 2022-05-17T20:46:29.397Z · LW(p) · GW(p)
I tried to use that approach to teach GPT-3 to solve the problem at the top of this post. As you can see below, it kinda worked. GPT-3 grasps that some things need to be reversed, but then goes a bit off the rails: it adds a random "this is a great" to the end of my prompt, with the whole phrase reversed rather than each word; it starts out reversing the individual words of the sentence, but ends up just completing the sentence instead, using the other common completion ("falls" rather than "stays"); and when it finally tries to reverse each individual word, it fails completely and just reorders/reworks the words a bit. (A plain-Python rendering of the reversal recipe is sketched after the transcript.)
Reverse the word below:
Word: alphabet
Reasoning:
-Add spaces between letters: a l p h a b e t
-Add numbers: 1:a 2:l 3:p 4:h 5:a 6:b 7:e 8:t
-Reverse numbers and letters: 8:t 7:e 6:b 5:a 4:h 3:p 2:l 1:a
-Remove numbers: t e b a h p l a
-Merge the letters in groups of two: te ba hp la, teba hpla, tebahpla
-Final result: tebahpla
Reverse all words in the following phrase, complete the sentence, then reverse all words in the completed sentence:
ehT niar ni niapS syats ylniam ni eht taerg a si siht
The rain in Spain falls mainly on the plain.
The main plain falls rain in Spain on the.
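For reference, the reversal recipe spelled out in the prompt is easy to mirror in plain Python; a minimal sketch:

```python
def reverse_via_numbering(word: str) -> str:
    letters = list(word)                          # add spaces between letters
    numbered = list(enumerate(letters, start=1))  # add numbers: [(1, 'a'), (2, 'l'), ...]
    numbered.reverse()                            # reverse numbers and letters
    stripped = [c for _, c in numbered]           # remove numbers
    return "".join(stripped)                      # merge the letters back together

print(reverse_via_numbering("alphabet"))  # -> "tebahpla"
```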
↑ comment by Stuart_Armstrong · 2022-05-17T14:09:06.639Z · LW(p) · GW(p)
Fascinating. Thanks!
↑ comment by gwern · 2022-05-15T20:43:04.514Z · LW(p) · GW(p)
This approach is a little surprising. I would have thought that adding on numbers to my space-separating approach, and then merging space-separated letters into a final solid word, would have tripped up GPT-3 and inevitably led to errors. But, at least with InstructGPT, it works.
↑ comment by Stuart_Armstrong · 2022-04-22T11:10:47.673Z · LW(p) · GW(p)
Thanks; very interesting result.
↑ comment by Jan (jan-2) · 2022-04-21T08:42:16.880Z · LW(p) · GW(p)
Fascinating! Thanks for sharing!
↑ comment by gwern · 2022-04-22T14:27:35.762Z · LW(p) · GW(p)
For the similar anagram task, I found space-separating (to avoid the BPE inconsistency/nondeterminism by forcing it to encode individual letters) seemed like it helped: https://gwern.net/GPT-3-nonfiction#anagrams
For this task, I think a worthwhile followup would be to experiment with the new edit mode.
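A sketch of what that followup might look like, assuming the 2022-era Edits endpoint with text-davinci-edit-001 (an assumption about which model "edit mode" refers to; both have since been deprecated):

```python
import openai  # 2022-era client

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Edit.create(
    model="text-davinci-edit-001",  # assumed name of the "edit mode" model
    input="ehT niar ni niapS syats ylniam ni eht",
    instruction="Reverse each word, complete the sentence, then reverse each word again.",
)
print(response["choices"][0]["text"])
```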
↑ comment by Stuart_Armstrong · 2022-04-20T20:38:35.410Z · LW(p) · GW(p)
Possibly! Though it did seem to recognise that the words were spelt backwards. It must have some backwards spelt words in its training data, just not that many.
comment by TLW · 2022-04-20T11:13:25.116Z · LW(p) · GW(p)
For anyone else that can't read this quickly, this is what it looks like, un-reversed:
The rain in Spain stays mainly in the get could I Jages' rain in and or rain in the quages or, rain in the and or rain in the quages or rain in the and or rain in the quages or, rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in the quages or rain in the and or rain in e
As an aside: what does GPT-3 complete it as when not reversed?
↑ comment by Dustin · 2022-04-20T16:08:28.780Z · LW(p) · GW(p)
The rain in Spain stays mainly in the plain.
This is a famous line from the play Pygmalion by George Bernard Shaw. In the play, a character named Henry Higgins is teaching a lower-class woman named Eliza Doolittle how to speak proper English. He tells her that the rain in Spain stays mainly in the plain in order to help her remember the correct pronunciation of the word "plain."
↑ comment by philh · 2022-04-28T21:42:50.174Z · LW(p) · GW(p)
Incidentally this is wrong. The line is from a movie adaptation, not the original play. And at least in the song (which came still later), "plain" wasn't singled out; there are five copies of that vowel in the phrase and she couldn't get any of them at first. https://en.wikipedia.org/wiki/The_Rain_in_Spain
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-20T14:52:37.268Z · LW(p) · GW(p)
You would behave the exact same way as GPT-3, were you to be put in this same challenging situation. In fact I think you'd do worse; GPT-3 managed to get quite a few words actually reversed whereas I expect you'd just output gibberish. (Remember, you only have about 1 second to think before outputting each token. You have to just read the text and immediately start typing.)
↑ comment by Stuart_Armstrong · 2022-04-20T20:40:02.931Z · LW(p) · GW(p)
The aim of this post is not to catch out GPT-3; it's to see what concept extrapolation could look like for a language model.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-21T11:26:16.272Z · LW(p) · GW(p)
OK, cool. I think I was confused.
↑ comment by Olivier Faure (olivier-faure) · 2022-04-20T19:55:11.684Z · LW(p) · GW(p)
It feels like a "gotcha" rebuke, but it honestly doesn't seem like it really addresses the article's point. Unless you think GPT-3 would perform better if given more time to work on it?
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-20T20:18:52.416Z · LW(p) · GW(p)
How does it not address the article's point? What I'm saying is that Armstrong's example was an unfair "gotcha" of GPT-3; he's trying to make some sort of claim about its limitations on the basis of behavior that even a human would also exhibit. Unless he's saying we humans also have this limitation...
Yes, I think GPT-3 would perform better if given more time to work on it (and fine-tuning to get used to having more time). See e.g. PaLM's stuff about chain-of-thought prompting. How much better? I'm not sure. But I think its failure at this particular task tells us nothing.
↑ comment by tailcalled · 2022-04-20T19:16:45.181Z · LW(p) · GW(p)
Humans don't have the same goal as GPT-3, though, so that doesn't seem like a fair comparison.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-20T20:18:01.583Z · LW(p) · GW(p)
Suppose I offered to pay you a million dollars if you accomplished the same goal as GPT-3 in this experiment. Then you would have the same goal as GPT-3. Yet you still wouldn't be able to accomplish it better than GPT-3.
↑ comment by tailcalled · 2022-04-20T20:26:46.695Z · LW(p) · GW(p)
Not really comparable. At a minimum, I should have spent a ton of time completing text first.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-21T10:33:12.714Z · LW(p) · GW(p)
OK, so suppose you had spent a ton of time completing text first. There just isn't enough time for you to do the mental gymnastics needed to compose a sentence in your head and reverse it.
↑ comment by tailcalled · 2022-04-21T11:12:08.799Z · LW(p) · GW(p)
I think this is too far out of my experience to say anything with certainty, or for it to be particularly informative about how well I'd do in generalizing OOD within my current goals and ontology.
↑ comment by Quintin Pope (quintin-pope) · 2022-04-21T05:22:03.464Z · LW(p) · GW(p)
Remember, you only have about 1 second to think before outputting each token.
I don’t think this is true. Humans can decide to think longer on harder problems in a way GPT-3 can’t. Our “architecture” is fundamentally different from GPT-3 in that regard.
Also, our ability to think for longer fundamentally changes how we do concept extrapolation. Given a tricky extrapolation problem, you wouldn’t just spit out the first thing to enter your mind. You’d think about it.
If GPT-3 has an architectural limitation that prevents it from doing concept extrapolation in a human-like manner, we shouldn’t change our evaluation benchmarks to avoid “unfairly” penalizing GPT-3. We should acknowledge that limitation and ask how it impacts alignment prospects.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-21T11:20:45.660Z · LW(p) · GW(p)
It sounds like we are on the same page. GPT-3 has an architectural limitation such that (a) it would be very surprising and impressive if it could make a coherent sentence out of reversed words, and (b) if it managed to succeed it must be doing something substantially different from how a human would do it. This is what my original point was. Maybe I'm just not understanding what point Stuart is making. Probably this is the case.
comment by tailcalled · 2022-04-20T11:44:12.645Z · LW(p) · GW(p)
I wonder what would happen if GPT-3 got trained with a bit of reversed text. Could it quickly be fine-tuned to extrapolate?
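One low-effort version of that experiment, sketched against the 2022-era fine-tuning API, which took JSONL files of prompt/completion pairs (the sentences and file name here are placeholders):

```python
import json

def reverse_words(text: str) -> str:
    return " ".join(w[::-1] for w in text.split())

# Placeholder corpus; in practice these would come from a real text dataset.
sentences = [
    "The rain in Spain stays mainly in the plain.",
    "She goes to work every weekday morning.",
]

with open("reversed_finetune.jsonl", "w") as f:  # placeholder file name
    for sentence in sentences:
        words = sentence.split()
        # Prompt: all but the last word, each word reversed;
        # completion: the final word, reversed (with a leading space).
        prompt = " ".join(w[::-1] for w in words[:-1])
        completion = " " + words[-1][::-1]
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```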
comment by Dennis Towne (dennis-towne) · 2022-04-20T11:49:46.907Z · LW(p) · GW(p)
But has it really failed its objective? It's still producing text.
I think it's also worth asking "but did it really figure out that the words were spelled backwards?" A reasonable case could be made that the tokens it's outputting here come from the very small subset of reversed words in its training set, and that it's ordering them in whatever way seems most sensible to it, given how little of its training involved such text.
If you give GPT-3 a bunch of examples and teach it about words spelled backwards, does it improve? How much does it improve, how quickly?
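The cheapest version of that experiment is few-shot prompting rather than fine-tuning; a sketch of how such a prompt might be assembled (the demonstration sentences are arbitrary):

```python
def reverse_words(text: str) -> str:
    return " ".join(w[::-1] for w in text.split())

# A handful of fully reversed sentences as demonstrations (chosen arbitrarily),
# followed by the reversed prefix from the post for GPT-3 to complete.
demonstrations = [
    "To be or not to be, that is the question.",
    "All the world is a stage.",
    "The quick brown fox jumps over the lazy dog.",
]

prompt = "\n".join(reverse_words(s) for s in demonstrations)
prompt += "\nehT niar ni niapS syats ylniam ni eht"
print(prompt)
```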