Posts

Does GPT-2 Understand Anything? 2020-01-02T17:09:14.641Z

Comments

Comment by Douglas Summers-Stay (douglas-summers-stay) on The case for becoming a black-box investigator of language models · 2022-05-12T23:13:03.526Z · LW · GW

Here's a fun paper I wrote along these lines. I took an old whitepaper by McCarthy from 1976, where he introduces the idea of natural language understanding and proposes a set of questions about a news article that such a system should be able to answer. I posed the questions to GPT-3, looked at what it got right and wrong, and guessed at why:
What Can a Generative Language Model Answer About a Passage? 

Comment by Douglas Summers-Stay (douglas-summers-stay) on History of counting to three? · 2021-09-16T12:09:54.330Z · LW · GW

And from Levana; Or, The Doctrine of Education by Jean Paul, 1848: "Another parental delay, that of punishment, is of use for children of the second five years (quinquennium). Parents and teachers would more frequently punish according to the line of exact justice if, after every fault in a child, they would only count four and twenty, or their buttons, or their fingers. They would thereby let the deceiving present round themselves, as well as round the children, escape, and the cold still empire of clearness would remain behind."

Comment by Douglas Summers-Stay (douglas-summers-stay) on History of counting to three? · 2021-09-16T12:01:50.182Z · LW · GW

Here is an example from The Friend magazine, January 1853: "'Do you hear me, sir!' asked the captain. 'I give you whilst I count ten to start. I do not wish to shoot you, Wilson, but if you do not move before I count ten I'll drive this ball through you--as I hope to reach port, I will.' Raising his pistol until it covered the boatswain's breast, the captain commenced counting in a clear and audible tone. Intense excitement was depicted on the faces of the men, and some anxiety was shown by the quick glances cast by the chief mate and steward, first at the captain and then at the crew. Wilson, with his eyes fixed on the captain's face and his arms loosely folded across his breast, stood perfectly quiet, as if he were an indifferent spectator. 'Eight! Nine!' said the captain. 'There is but one left, Wilson, with it I fire if you do not start.'"

Comment by Douglas Summers-Stay (douglas-summers-stay) on interpreting GPT: the logit lens · 2020-09-02T16:08:48.058Z · LW · GW

Could you try a prompt that tells it to end a sentence with a particular word, and see how that word casts its influence back over the sentence? I know that this works with GPT-3, but I didn't really understand how it could.

Comment by Douglas Summers-Stay (douglas-summers-stay) on Can you get AGI from a Transformer? · 2020-07-23T17:45:36.454Z · LW · GW

Regarding "thinking a problem over"-- I have seen examples where, on questions that GPT-3 can't answer correctly off the bat, it can answer correctly when the prompt encourages a kind of talking through the problem, so that its own generations bias its later generations in such a way that it comes to the right conclusion in the end. Doesn't this undercut your argument that the limited number of layers prevents certain kinds of problem solving that need more thought?
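To make the idea concrete, here is a sketch of the two prompt styles being contrasted. Both prompts are invented for illustration and are not from any actual experiment; the point is that in the second style, the model's own intermediate sentences condition its later tokens toward the right answer.

```python
# Direct prompt: the model must produce the answer in one shot.
direct_prompt = (
    "Q: I have 3 apples and buy 2 boxes of 4 apples each. "
    "How many apples do I have? A:"
)

# "Talking it through" prompt: intermediate reasoning steps appear in
# the context, so each later token is conditioned on them.
stepwise_prompt = (
    "Q: I have 3 apples and buy 2 boxes of 4 apples each. "
    "How many apples do I have?\n"
    "A: Let's think it through. I start with 3 apples. "
    "Two boxes of 4 apples is 2 * 4 = 8 apples. "
    "So I have 3 + 8 = 11 apples."
)

print(stepwise_prompt)
```

In effect, the generated text acts as external scratch space, sidestepping the fixed per-token depth of the network.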

Comment by Douglas Summers-Stay (douglas-summers-stay) on GPT-3: a disappointing paper · 2020-05-30T17:24:53.837Z · LW · GW

I'm sure there will be many papers to come about GPT-3. This one is already 70 pages long, and must have come not long after the training of the model was finished, so a lot of your questions probably haven't been figured out yet. I'd love to read some speculation on how, exactly, the few-shot learning works. Take the word scrambles, for instance. The unscrambled word will be represented by one or two tokens. The scrambled word will be represented by maybe five or six much less frequent tokens composed of a letter or two each. Neither set of tokens contains any information about what letters make up the word. Did it see enough word scrambles on the web to pick up the association between every set of tokens and all tokenizations of rearranged letters? That seems unlikely. So how is it doing it? Also, what is going on inside when it solves a math problem?
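A toy greedy longest-match tokenizer can illustrate the asymmetry described above: a common word is covered by one vocabulary entry, while its scramble falls apart into many short pieces. The vocabulary here is made up for illustration; real GPT models use byte-pair encoding over a roughly 50k-entry merged vocabulary, but the effect is the same.

```python
def tokenize(text, vocab):
    """Greedy longest-match segmentation over a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no vocabulary entry covers {text[i]!r}")
    return tokens

# Hypothetical vocabulary: one whole word, one common bigram, and letters.
vocab = {"understand", "nd"} | set("abcdefghijklmnopqrstuvwxyz")

print(tokenize("understand", vocab))  # one token for the whole word
print(tokenize("sdnatnedru", vocab))  # the scramble becomes ten letter tokens
```

Since neither token sequence exposes its spelling to the model, mapping one onto the other requires something beyond surface co-occurrence, which is what makes the unscrambling result puzzling.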

Comment by Douglas Summers-Stay (douglas-summers-stay) on Call for volunteers: assessing Kurzweil, 2019 · 2020-04-04T16:20:59.850Z · LW · GW

I'll do ten.

Comment by Douglas Summers-Stay (douglas-summers-stay) on Does GPT-2 Understand Anything? · 2020-01-04T13:43:05.157Z · LW · GW

Yeah, you're right. It seems like we both have a similar picture of what GPT-2 can and can't do, and are just using the word "understand" differently.

Comment by Douglas Summers-Stay (douglas-summers-stay) on Does GPT-2 Understand Anything? · 2020-01-04T13:29:59.626Z · LW · GW
So would you say that GPT-2 has Comprehension of "recycling" but not Comprehension of "in favor of" and "against", because it doesn't show even the basic understanding that the latter pair are opposites?

Something like that, yes. I would say that the concept "recycling" is correctly linked to "the environment" by an "improves" relation, and that it Comprehends "recycling" and "the environment" pretty well. But some texts say that the "improves" relation is positive, and some texts say it is negative ("doesn't really improve") and so GPT-2 holds both contradictory beliefs about the relation simultaneously. Unlike humans, it doesn't try to maintain consistency in what it expresses, and doesn't express uncertainty properly. So we see what looks like waffling between contradictory strongly held opinions in the same sentence or paragraph.

As for whether this vocabulary is appropriate for discussing such an inhuman contraption, or whether it is too misleading to use, especially when talking to non-experts, I don't really know. I'm trying to go beyond descriptions like "GPT-2 doesn't understand what it is saying" and "GPT-2 understands what it is saying" to a more nuanced picture of which capabilities and internal conceptual structures are actually present and absent.

Comment by Douglas Summers-Stay (douglas-summers-stay) on Does GPT-2 Understand Anything? · 2020-01-03T19:17:33.945Z · LW · GW

One way we might choose to draw these distinctions is using the technical vocabulary that teachers have developed. Reasoning about something is more than mere Comprehension: it would be called Application, Analysis or Synthesis, depending on how the reasoning is used.

GPT-2 actually can do a little bit of deductive reasoning, but it is not very good at it.

Comment by Douglas Summers-Stay (douglas-summers-stay) on Does GPT-2 Understand Anything? · 2020-01-03T16:08:28.913Z · LW · GW

fixed

Comment by Douglas Summers-Stay (douglas-summers-stay) on Does GPT-2 Understand Anything? · 2020-01-03T07:34:44.277Z · LW · GW

I don't think I am attacking a straw man: you don't believe GPT-2 can abstract reading into concepts, and I was trying to convince you that it can. I agree that current versions can't communicate ideas too complex to be expressed in a single paragraph. I think it can form original concepts, in the sense that 3-year-old children can form original concepts. They're not very insightful or complex concepts, and they are formed by remixing, but they are concepts.

Comment by Douglas Summers-Stay (douglas-summers-stay) on Does GPT-2 Understand Anything? · 2020-01-02T22:47:15.641Z · LW · GW

My thinking was that since everything it knows is something that was expressed in words, and qualia are thought not to be expressible fully in words, then qualia aren't part of what it knows. However, I know I'm on shaky ground whenever I talk about qualia. I agree that one can't be sure it doesn't have qualia, but it seems to me more like a method for tricking people into thinking it has qualia than something that actually has them.

Comment by Douglas Summers-Stay (douglas-summers-stay) on human psycholinguists: a critical appraisal · 2020-01-02T17:26:00.266Z · LW · GW

I liked how you put this. I've just posted my (approving) response to this on Less Wrong under the title "Does GPT-2 Understand Anything?"

Comment by Douglas Summers-Stay (douglas-summers-stay) on Conversational Presentation of Why Automation is Different This Time · 2018-01-19T16:05:40.911Z · LW · GW

There wasn't a large "manufacturing" sector for agriculture workers to move into; it became a large sector as the workers moved into it. Perhaps some current small sector of the economy will become a large sector as workers move into it? At least in the U.S., there's little evidence to support your claim that it will be faster and more widespread-- jobless rates are at historic lows. Unless you mean it hasn't yet begun.

All that said, though, it is certainly the case that if you have a robot that can do anything a person can do, you don't need to hire any more people, and there must be some kind of curve leading up to that as robots become more capable.