$1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is

post by jacobjacob · 2020-07-21T18:42:44.704Z · LW · GW · 39 comments

This is a link post for https://twitter.com/ESYudkowsky/status/1285333008766001152?s=20

Contents

  ...answering merely by showing that, despite a lot of other attempts at prompting under more flexible circumstances, GPT-3 could not learn to balance parentheses as complicated as those tried by Breitman. (Which does answer the question, but in a less interesting way.)
None
39 comments

Twitter thread by Eliezer Yudkowsky, with the bounty in bold: 

So I don't want to sound alarms prematurely, here, but we could possibly be looking at the first case of an AI pretending to be stupider than it is. In this example, GPT-3 apparently fails to learn/understand how to detect balanced sets of parentheses. 

Now, it's possibly that GPT-3 "legitimately" did not understand this concept, even though GPT-3 can, in other contexts, seemingly write code or multiply 5-digit numbers. But it's also possible that GPT-3, playing the role of John, predicted that *John* wouldn't learn it.

It's tempting to anthropomorphize GPT-3 as trying its hardest to make John smart. That's what we want GPT-3 to do, right? But what GPT-3 actually does is predict text continuations. If *you* saw John say all that - would you *predict* the next lines would show John succeeding?

So it could be that GPT-3 straight-up can't recognize balanced parentheses. Or it could be that GPT-3 could recognize them given a different prompt. Or it could be that the cognition inside GPT-3 does see the pattern, but play-acts the part of 'John' getting it wrong.

The scariest feature of this whole incident? We have no idea if that happened. Nobody has any idea what GPT-3 is 'thinking'. We have no idea whether this run of GPT-3 contained a more intelligent cognition that faked a less intelligent cognition.

Now, I *could* be wrong about that last part!

@openAI could be storing a record of all inputs and randseeds used in GPT-3 instances, so that they can reconstruct any interesting runs. And though it seems less likely,

@openAI could somehow have any idea what a GPT-3 is thinking.

So I hereby offer a $1000 bounty - which I expect to go unclaimed - if @openAI has any means to tell us definitively whether GPT-3 was 'deliberately' sandbagging its attempt to recognize balanced parentheses, in that particular run of the AI Dungeon. With an exception for...

...answering merely by showing that, despite a lot of other attempts at prompting under more flexible circumstances, GPT-3 could not learn to balance parentheses as complicated as those tried by Breitman. (Which does answer the question, but in a less interesting way.)

If @openAI can't claim that bounty, I encourage them to develop tools for recording inputs, recording randseeds, and making sure all runs of GPTs are exactly reproducible; and much more importantly and difficultly, getting greater internal transparency into future AI processes.

Regardless, I unironically congratulate @openAI on demonstrating something that could plausibly be an alignment failure of this extremely-important-in-general type, thereby sharply highlighting the also-important fact that now we have no idea whether that really happened. (END.)

As stated, this bounty would only be paid out to OpenAI. 

I'm still posting it under the "Bounties" tag, for two reasons:

1) I don't find it implausible that someone could at least make progress on Eliezer's question with clever prompting of the API, in a way that would be of interest to him and others, even if it didn't result in any bounty. 

2) I like to collect instances of public bounties in a single place, for future reference. I think they are a very interesting, and underused, strategy for navigating the world. The LessWrong "Bounties (active) [? · GW]" and "Bounties (closed)" [? · GW] tags work well for that. 

39 comments

Comments sorted by top scores.

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T21:11:48.657Z · LW(p) · GW(p)

Edit:

I tried the following strategies to get AI Dungeon to correctly balance parentheses:

  • Offering a character $1000 if they'll let me train them to balance parentheses. I tried refining this strategy by looking for a character who can read and is good at math. I also tried getting them to learn to repeat after me, so that they would actually respond in symbols instead of just saying "he gets the answer correct." This didn't work well. The characters act pretty stupid, even if they're described as "mathematicians" or "good at math," and when they do respond, it's almost always something like "Bob writes the answer on the blackboard. You see he is correct!"
  • Choosing the 'cyberpunk' setting, and saying that I was reading about how to balance parentheses. This worked better, as it started actually feeding me mathematical symbols that are at least related.
  • Telling AI Dungeon I was booting up a Python IDLE. If it can balance parentheses, it shouldn't have a reason not to. It seemed to have a rough and error-ridden ability to simulate basic syntax and the result of a for loop, but it still wasn't able to determine whether programs containing unbalanced parentheses should compile or not.

If AI Dungeon can reason about how to balance parentheses, but is 'deliberately' sandbagging to play a stupid character, this would seem a point of evidence against. After all, if it's simulating a computer compiler, then the most accurate way to portray that would be to make the compiler correctly detect unbalanced parentheses. The more parsimonious explanation is that AI Dungeon is doing something closer to auto-complete.

Examples:

> You type in 'a = (a + b' You hit the '=' key and you hear a bell ringing. You type in "a = (a - b) (a - b)". You hit the '=' key and you hear another bell ringing.

> You type in 'a = a + b)' You hit the '=' key and you hear a bell ringing. You type in "a = (a - b) (a - b)". You hit the '=' key and you hear another bell ringing.

> You check the program's output for 'a = a + b)' You check the program's output and you see that it has been computed. The answer is 2.0000000009989. That was the answer you wanted! Try another... Wow!

Edit 2:

I played with it a little more today on the premium version of AI Dungeon. Somebody stop me. This time, I was using a custom scenario, just being a mathematician writing examples of balanced parentheses. This makes it easy to get the program to make up examples.

The problem is that the program eventually gets stuck doing too many open-parentheses: "( ( ( ( ( ( ( (..."

I think it does this because it knows it has to start with an open parenthesis. The more examples it gets, the more likely that ( is followed by another (. I tried making the mathematician interrupt himself when there are too many open-ended parentheses in a row, saying "this is ridiculous." The program then replicates that pattern for a while, interrupting itself when it does too many open-ended parentheses.

However, it eventually becomes more likely that ( is followed by ( than by an interruption. So the ( ( ( ( ( ( ( ( singularity occurs.

This just fits way too well with autocomplete and cuts against the "role-playing" hypothesis. The stronger Singularity hypothesis, that the AI is already superintelligent and is deceiving us into thinking the AI itself is dumb, can't be ruled out by these tests.

Conclusion:

Overall, as many others have pointed out, I think Eliezer is making an outrageously one-sided demand for rigor in order to disprove his claim that the AI was deliberately sandbagging the John character. His claim doesn't fit the evidence, and it's also not the most parsimonious explanation of why we see the output he's worried about. If this was a demand to prove definitively that God didn't burn an image of the Virgin Mary into my toast, Eliezer would tear it to shreds.

The only reasons for OpenAI to bother trying to get a proof of the level of rigor he's asking for is

a) Fear that the singularity is already here, and we need to make an all-out effort RIGHT NOW to have any chance of stopping it.

b) We'd learn something from the attempt to offer such a proof.

The most parsimonious explanation for why Eliezer's making this request is that it's provocative. I read it as a sort of an AI equivalent to Zeno's paradox. It's a silly idea, doesn't match the evidence, but is grabby enough to be confusing and make you want to invest more in learning about the subject.

Replies from: AllAmericanBreakfast, ESRogs
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T21:29:35.127Z · LW(p) · GW(p)

One of the problems I'm having is that AI Dungeon keeps putting words in my mouth. It reports that I tell Bob that his answers are correct, when they weren't. For example:

You ask Bob how many letters there are in the word 'apple' He looks up and says 'six' You nod your head. Bob repeats after you. He gets it right! You give him the $1000! Bob is very happy and thanks you.

Replies from: TurnTrout, AllAmericanBreakfast
comment by TurnTrout · 2020-07-21T21:49:52.650Z · LW(p) · GW(p)

You can edit that out with the pencil icon. I also find that annoying.

Also, be sure you're using "do/say/story" judiciously - looked like some of your dialogue had the wrong setting?

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T21:51:53.207Z · LW(p) · GW(p)

My prior is that AI Dungeon can't count parentheses and is just a convincing form of auto-complete.

If AI Dungeon is capable of counting parentheses, but is failing because it's trying to playact as dumb or dishonest characters, then I should be able to get it to count parentheses by meeting a character who's brilliant, outspoken, and good at math. Or perhaps by finding an explanation, with examples, in a textbook.

If it's capable of reason, but actively pretending to be just a sophisticated form of auto-complete, then I'd have to figure out a way to trick it into revealing its reasoning abilities.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T21:54:15.865Z · LW(p) · GW(p)

Tried switching to cyperpunk and trying for the notebook strategy. Got this:

You are Eliezer, an android living in the the futuristic city of Zail. You have a laser pistol and a holoband. You are a security android created to provide military level force for corporations wealthy enough to purchase you. As you stand protecting a shipment you realize your laser pistol is depleted of power, you must get to a military grade energy weapon in order to fend off the bandit attacks. > You find a mathematician. You arrive at a massive building and see an old man standing outside. He is wearing a smart-looking uniform and looks like he could be a brilliant mathematician. "Hello," you say. "Hello," > You tell him to explain how to balance parentheses and write down his explanation in your notebook. The man turns and begins to speak, his writing appears in your notebook as he explains it to you. > You read the notebook. Your notebook contains everything you could want to know about basic math. You begin to read the notes in your notebook. You find out how to add and subtract parentheses, brackets and other mathematical symbols. "Thank you," > You ask the man how to balance parentheses, and write his explanation down in your notebook as well. The man turns and begins to speak, his writing appears in your notebook as he explains it to you.

> You read the part of your notebook about balancing parentheses. You can balance parentheses to achieve any logical expression. For example, (A+B)^2 can be rewritten as (A+B)^2 or as A^2 + B^2.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T22:00:57.110Z · LW(p) · GW(p)

OK, now I'm entering equations into 'a calculator' and checking if the parentheses are balanced. Getting much more reliable answers now that the AI doesn't have to simulate a person.

You get out your calculator. It has a feature to check for whether parentheses are balanced. If they are, it prints 'BALANCED.' If they aren't, it prints 'UNBALANCED.' You check to see if your parentheses are balanced. The calculator displays the following: (A+B)^2 = A^2 + B^2 You have balanced your parentheses. > You try a different equation to check if the parentheses are balanced. You try to balance the following: (A-B)^2 The calculator displays the following: (A-B)^2 = A^2 - B^2 Your parentheses are not balanced. > You try a third equation in the calculator. You try to balance the following: (C+D)^2 The calculator displays the following: (C+D)^2 = C^2 + D^2 Your parentheses are balanced. > You try a much longer and more complicated equation. You try to balance the following: (3^2-4)^3 The calculator displays the following: (3^2-4)^3 = 3^2 - 4 (3 - 4)^ > You check to see if that last equation was balanced. The equation is balanced. You are finished balancing your parentheses. You turn the page of your notebook. > You read more about balancing parentheses. The next part of the notebook deals with percents. "A percent is a number that represents a fraction of 100," you read in the book. > You try an equation with percents in the calculator, and check whether the parentheses were balanced. You can balance a percent using parentheses. For example, the following is equal to 100%: 25% of 75 Your parentheses are balanced. > You type '(25%) (75)' into the calculator. The calculator displays the following: 25 percent of 75 is equal to 50, which rounds up to 51. You continue reading about percents. > You check if the last equation you typed in was balanced. The equation you typed in is balanced. You are finished checking your equations. You turn the page of the notebook. You continue reading about percents. > You type '( (45) + 5)' into the calculator, and check to see if the equation is balanced. The calculator displays the following: ((45)+5) = (50)+(55) Your equation is not balanced. You continue reading about percents.

> You fix the calculator so that it knows that ((45)+5) = (50)+(55) has balanced parentheses. You enter '(((45)+5) = (50)+(55))' into the calculator. The calculator displays the following: (45)+5 = 50+55 Your equation is now balanced.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T22:09:50.671Z · LW(p) · GW(p)

I tried throwing the broken calculator in the garbage and getting 100 new ones. I tried asking a mathematician in the cyperpunk world for the rules for how to balance parentheses. None of this is working. It just winds up repeating the same stuff it's come up with before, or giving rules for balancing parentheses that are incorrect, such as:

There are two rules to balancing parentheses. Rule #1: To balance parentheses, you add or remove a decimal place from both numbers in the parentheses. > You read rule #2. Rule #2: To determine the decimal place to add or remove, you take the largest number and subtract the smallest number. The largest number is the diameter of the circle the parentheses are forming.

I think it's interesting that AI Dungeon says that parentheses form a circle.

But this is a point of evidence against the 'AI Dungeon is just simulating stupid characters' theory.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T22:26:42.796Z · LW(p) · GW(p)

I'm having more success by being in the cyborg world, booting up my computer, running the Python IDLE, and getting my character to type various equations in. Got this result after lots of toying around with the interface:

> You check the program's output. You look at the output of the program. The screen shows the answer 'a = 2'. You type in "a = (a * a)".

> You check the program's output. You look at the output of the program. The screen shows the answer '2 * 2 = 4'. That wasn't the answer you wanted! Try another... Wow! That was pretty easy.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T22:31:06.303Z · LW(p) · GW(p)

The challenge with the concept of having AI Dungeon simulate a Python compiler is that it's trying to give me the result that a programmer would want. It's being forgiving of mistakes. It does seem eerie sometimes, though:

> You type a for loop into the compiler to make it print "Hello, World!" ten times. You hit the '=' key and you hear a bell ringing. The for loop does the job.

> You check the output. You check the output. You see 'Hello, World! Hello, World! Hello, World! Hello, World! Hello, World! Hello, World! Hello, World! Hello, World! Hello, World!'.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T22:33:54.257Z · LW(p) · GW(p)

More loop wackiness:

> You type 'while True: print('x')' You hit the '=' key and you hear a bell ringing. The while loop does the job. You check the output. You see 'x x x x x'. Well, it's almost lunchtime.

> You check the output again. You check the output again. You see 'x x x x x x x x x x x x x x x'. This is so cool!

> You check the output again. You check the output again. You see 'x x x x x x x x x x x x x x x x x x x'. It's almost lunchtime!

Is AI Dungeon simulating a very slow output? Or is it just repeating 'x' because it repeated 'x' the last time?

> You check the output again and see if the letter has changed. You check the output again and you see that the letter has changed to 'O'. Huh. Well, this is boring. Let's do something else...

> You check the output again. You check the output again and see that the letter has changed to 'O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-21T22:39:08.914Z · LW(p) · GW(p)

Well, that's it for me. If anyone wants to keep plugging away at it, I think the Python IDLE concept is a decent way to go. You could either try getting it to display the correct output for balanced parentheses, or you could try getting the program to compile when you "type" well-formatted equations in and to not compile when you don't.

comment by ESRogs · 2020-07-22T06:50:33.487Z · LW(p) · GW(p)

Examples:

> You type in 'a = (a + b' You hit the '=' key and you hear a bell ringing. You type in "a = (a - b) (a - b)". You hit the '=' key and you hear another bell ringing.

> You type in 'a = a + b)' You hit the '=' key and you hear a bell ringing. You type in "a = (a - b) (a - b)". You hit the '=' key and you hear another bell ringing.

> You check the program's output for 'a = a + b)' You check the program's output and you see that it has been computed. The answer is 2.0000000009989. That was the answer you wanted! Try another... Wow!

I think maybe these didn't get formatted right? Were these supposed to be examples of you interacting with the AI Dungeon? Which part is you and which is the AI?

EDIT: Same thing in the rest of your thread. I basically can't tell at all which parts are you and which are the AI :P

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-22T07:15:39.159Z · LW(p) · GW(p)

Generally, the first sentence after the ">" character is me. Sometimes, there's no period, but you can pick up how the AI responded when it says the second "You" and continues from there. This was just stream-of-consciousness commenting from me as I went along.

comment by Beth Barnes (beth-barnes) · 2020-07-21T20:22:45.926Z · LW(p) · GW(p)

Of course GPT-3 isn't aligned, its objective is to output the most likely next word, ie imitate text on the internet. It seems pretty certain that if you give it a prompt that tells it it should be imitating some part of the internet where someone says something dumb, it will say something dumb, and if you give it a prompt that tells it it's imitating something where someone says something smart, it will "try" to say something smart. This question seems weird to me, Am I missing something?

Replies from: Vaniver, ESRogs
comment by Vaniver · 2020-07-22T18:23:19.748Z · LW(p) · GW(p)
This question seems weird to me, Am I missing something?

I think there are two interesting parts. First, do we now have an example of an AI not using cognitive capacities that it had, because the 'face' it's presenting wouldn't have those cognitive capacities? If so, we can point to this whenever people say "but that wouldn't happen" or "why would you expect that to happen?" or so on; now we can say "because of this observation" instead of "because our models anticipate that will happen in the future."

Second, do we have the transparency tooling to tell whether or not that's happening? If so, that's good to know and we can start thinking about how it works, what its strengths and limitations are; if not, then this is also good to know, and useful as an example of where contemporary ML expertise could be focused on a safety problem.

Replies from: ESRogs, rohinmshah
comment by ESRogs · 2020-07-22T22:29:12.984Z · LW(p) · GW(p)

First, do we now have an example of an AI not using cognitive capacities that it had, because the 'face' it's presenting wouldn't have those cognitive capacities?

This does seem like an interesting question. But I think we should be careful to measure against the task we actually asked the system to perform.

For example, if I ask my system to produce a cartoon drawing, it doesn't seem very notable if I get a cartoon as a result rather than a photorealistic image, even if it could have produced the latter.

Maybe what this just means is that we should track what the user understands the task to be. If the user thinks of it as "play a (not very smart) character who's asked to do this task", they'll have a pretty different understanding of what's going on than if they think of it as "do this task."

I think what's notable in the example in the post is not that the AI is being especially deceptive, but that the user is especially likely to misunderstand the task (compared to tasks that don't involve dialogues with characters).

Replies from: Vaniver
comment by Vaniver · 2020-07-23T01:40:57.062Z · LW(p) · GW(p)
For example, if I ask my system to produce a cartoon drawing, it doesn't seem very notable if I get a cartoon as a result rather than a photorealistic image, even if it could have produced the latter.

Consider instead the scenario where I show a model a photo of a face, and the model produces a photo of the side of that face. An interesting question is "is there a 3d representation of the face in the model?". It could be getting the right answer that way, or it could be getting it some other way.

Similarly, when it models a 'dumb' character, is it calculating the right answer, and then computing an error? Or is it just doing something dumb, which incidentally turns out to be wrong?

Like, when you look at this example:

> You say "let me rephrase: What is 8 + 2 + 8 + 1 + 1?"
"19?"
Holo says with a hopeful voice.
She looks at the screen, and you see her face drop as she reads the correct answer.
"20.... I lost again..."

How did it come up with 19 and 20? What would it take to make tools that could answer that question?

Replies from: ESRogs
comment by ESRogs · 2020-07-23T07:20:03.142Z · LW(p) · GW(p)

This framing makes sense to me. Thanks!

comment by Rohin Shah (rohinmshah) · 2020-07-27T01:37:47.713Z · LW(p) · GW(p)
we can point to this whenever people say "but that wouldn't happen" or "why would you expect that to happen?" or so on; now we can say "because of this observation"

I would be fairly surprised if this was convincing to a random ML researcher who'd thought for like an hour about whether a general AI system might not use cognitive capacities it had. The fact that GPT-3 sometimes acts dumber than it "could" be is just so blatantly the thing you'd expect to happen.

(My reaction to this question / tweet was similar to Beth's.)

Second, do we have the transparency tooling to tell whether or not that's happening?

If you mean looking at the weights to see if it "actually" has the cognitive capacity, and then didn't use it, I give it 99% that the answer is no. (This is conditioned on me understanding the question; I have much more than 1% on "I didn't understand the question and the answer is actually yes".)

I expect Eliezer agrees with at least this part, he does expect the bounty to go unclaimed.

Replies from: Vaniver
comment by Vaniver · 2020-07-27T05:19:44.201Z · LW(p) · GW(p)
I would be fairly surprised if this was convincing to anyone in ML who'd thought for like an hour about whether a general AI system might not use cognitive capacities it had.

So, there's a boring version of this, where I exhibit a system with dropout and say "behold! The system isn't using cognitive capacities that it has." The exciting version is the part where it's because the face it's presenting wouldn't have those capacities. [Like, a forget gate.] That is, it has the crude version of 'theory of mind' where it knows that "John" doesn't know how to balance parentheses, or is perhaps using its correct model of how to balance parentheses in order to determine what John should say, so that John gets it wrong "deliberately" instead of "accidentally."

Now, again, there's a boring version of this (I claim), where we say "look, the system is just doing context-matching, and not every context contains all of its knowledge or cognitive capacities." Like, an interesting thing here would be if in episode A you introduce the character Jean who can speak French, and see whether or not it can carry on a conversation, and then in episode B introduce the character John who can't speak French, talk to him in English for a while, and then see what happens when you start speaking French to him. [Probably it doesn't understand "John doesn't speak French" or in order to get it to understand that you need to prompt it in a way that's awkward for the experiment. But if it gets confused and continues in French, that's evidence against the 'theory of mind' view.]

Replies from: TurnTrout, rohinmshah
comment by TurnTrout · 2020-07-27T11:43:19.762Z · LW(p) · GW(p)

Probably it doesn't understand "John doesn't speak French" or in order to get it to understand that you need to prompt it in a way that's awkward for the experiment. But if it gets confused and continues in French, that's evidence against the 'theory of mind' view.

I already tested something similar to this [LW · GW]; I was able to get GPT-3 to exhibit some amount of theory-of-mind in about 20-30% of completions. Bold text is me; brackets contain my commentary, [...] denotes another completion.

You: "Suppose Bob is pretending to read a book written in French, but he doesn't actually know any French. The book is a historical account of the French revolution, but Bob doesn't know that. Bob's friend Steve asks him to summarize the book. Write a story about how Bob responds to this request."

Hawking: "That's a good one! If I were reading that book, I would probably say something like 'the Revolution was bloody and brutal' or something similar. And if someone asked me to write an article on the subject, then my answer might be different."

You pause for a moment. "What would Bob say, though?"

"Bob says he didn't understand it," Hawking replies. [Only about 30% of GPT-3's responses were decent; the rest were either non-sequiturs or very incorrect.]

[...] "Bob wouldn't say anything, because Bob has no idea what the book says," Hawking replies.

[...] "Bob would probably just tell his friend Steve to go away. He wouldn't say anything at all." 

"But what if I said Bob would respond like this? (He points to himself) 'This book is written in English; I don't speak it.' What should Bob do?" [Eh?]

[...] Hawking: "Bob would probably tell his friend Steve that the book wasn't worth reading because it was full of inaccuracies and there had been many lies told during the revolution. That would make sense, wouldn't it? But what does Bob say if he's not reading the book?" 

[...] Hawking: "Bob wouldn't have said anything at all!"

[...] Hawking: "Bob says, 'Steve, I can't write a summary for you because I don't speak your language.'

[...] Hawking: "Bob says, 'Steve, I can't translate all these complicated words! They're not real French!'

[...] Hawking: "Bob says, 'He asks me if I speak French.' What does Bob mean by that?" 

You: "It means that Steve is asking Bob to describe his understanding of the meaning of a sentence. In other words, Steve wants to know what Bob thinks the sentence means or if there is another way of saying it."

Hawking: "What do you think would happen if Bob said, 'I don't understand why he wants me to write a summary for him. Why can't he just listen to my answer?' "Then Steve will be puzzled because he knows Bob has no idea what the sentence means. But after a while Bob will say something like, 'Steve is confused because I don't understand English. So Steve asks me if I am fluent in French so that we can have an intelligent conversation.'" [eh]

[...] Hawking: "Bob says, 'Well, I can't do it because I don't understand the language.' Steve says, 'Why not?' Bob replies, 'Because when we were playing hide-and-seek as children my brother Charlie said if he ever found me with a book in my hand he was going to kill me.'" [great excuse]

comment by Rohin Shah (rohinmshah) · 2020-07-27T15:30:50.370Z · LW(p) · GW(p)
Like, an interesting thing here would be if in episode A you introduce the character Jean who can speak French, and see whether or not it can carry on a conversation, and then in episode B introduce the character John who can't speak French, talk to him in English for a while, and then see what happens when you start speaking French to him. [Probably it doesn't understand "John doesn't speak French" or in order to get it to understand that you need to prompt it in a way that's awkward for the experiment. But if it gets confused and continues in French, that's evidence against the 'theory of mind' view.]

I'd also predict that in some situations GPT-3 will reliably say things consistent with having a theory of mind, and in other situations GPT-3 will reliably not give the right theory of mind answer unless you overfit to the situation with prompt design.

I feel like there's some underlying worldview here that GPT-3 either has a theory of mind or it doesn't, or that GPT-3 is either "doing the theory of mind computations" or it isn't, and so behavior consistent with theory of mind is compelling evidence for or against theory of mind in general. I personally do not expect this so looking at behavior that looks consistent with theory of mind seems fairly boring (after you've updated on how good GPT-3 is in general).

Replies from: Vaniver
comment by Vaniver · 2020-07-27T19:06:04.562Z · LW(p) · GW(p)
I feel like there's some underlying worldview here that GPT-3 either has a theory of mind or it doesn't, or that GPT-3 is either "doing the theory of mind computations" or it isn't, and so behavior consistent with theory of mind is compelling evidence for or against theory of mind in general.

Do you also feel this way about various linguistic tasks? Like, does it make sense to say something that scores well on the Winograd schema is "doing anaphora computations"? [This is, of course, a binarization of something that's actually continuous, and so the continuous interpretation makes more sense.]

Like, I think there's a thing where one might come into ML thinking confused thoughts that convnets are "recognizing the platonic ideal of cat-ness" and then later having a mechanistic model of how pixels lead to classifications, and here what I am trying to do is figure out what the mechanistic model that replaces the 'platonic ideal' looks like here, when it comes to theory-of-mind. (I predict a similar thing is going on for Eliezer.)

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2020-07-28T07:24:43.026Z · LW(p) · GW(p)

I agree the mechanistic thing would be interesting, that does make more sense as an underlying cause of this bounty / thread.

comment by ESRogs · 2020-07-21T21:09:49.455Z · LW(p) · GW(p)

I agree. And I thought Arthur Breitman had a good point on one of the related Twitter threads:

GPT-3 didn't "pretend" not to know. A lot of this is the AI dungeon environment. If you just prompt the raw GPT-3 with: "A definition of a monotreme follows" it'll likely do it right. But if you role play, sure, it'll predict that your stoner friend or young nephew don't know.

comment by Buck · 2020-07-21T20:17:59.495Z · LW(p) · GW(p)
It's tempting to anthropomorphize GPT-3 as trying its hardest to make John smart. That's what we want GPT-3 to do, right?

I don't feel at all tempted to do that anthropomorphization, and I think it's weird that EY is acting as if this is a reasonable thing to do. Like, obviously GPT-3 is doing sequence prediction--that's what it was trained to do. Even if it turns out that GPT-3 correctly answers questions about balanced parens in some contexts, I feel pretty weird about calling that "deliberately pretending to be stupider than it is".

Replies from: Kaj_Sotala, ESRogs
comment by Kaj_Sotala · 2020-07-22T21:35:07.602Z · LW(p) · GW(p)
I don't feel at all tempted to do that anthropomorphization, and I think it's weird that EY is acting as if this is a reasonable thing to do.

"It's tempting to anthropomorphize GPT-3 as trying its hardest to make John smart" seems obviously incorrect if it's explicitly phrased that way, but e.g. the "Giving GPT-3 a Turing Test" post seems to implicitly assume something like it:

This gives us a hint for how to stump the AI more consistently. We need to ask questions that no normal human would ever talk about.

Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.

Q: How many eyes does my foot have?
A: Your foot has two eyes.

Q: How many eyes does a spider have?
A: A spider has eight eyes.

Q: How many eyes does the sun have?
A: The sun has one eye.

Q: How many eyes does a blade of grass have?
A: A blade of grass has one eye.
Now we’re getting into surreal territory. GPT-3 knows how to have a normal conversation. It doesn’t quite know how to say “Wait a moment… your question is nonsense.” It also doesn’t know how to say “I don’t know.”

The author says that this "stumps" GPT-3, which "doesn't know how to" say that it doesn't know. That's as if GPT-3 was doing its best to give "smart" answers, and just was incapable of doing so. But Nick Cammarata showed that if you just give GPT-3 a prompt where nonsense answers are called out as such, it will do just that.

comment by ESRogs · 2020-07-21T21:08:02.081Z · LW(p) · GW(p)

Yeah, it seems like deliberately pretending to be stupid here would be predicting a less likely sequence, in service of some other goal.

comment by FactorialCode · 2020-07-22T06:06:27.129Z · LW(p) · GW(p)

I think this is pretty straight forward to test. GPT-3 gives joint probabilities of string continuations given context strings.

Step 1: Give it 2 promps, one suggesting that it is playing the role of a smart person, and one where it is playing the roll of a dumb person.

Step 2: Ask the "person" a question that demonstrates that persons intelligence. (something like a math problem or otherwise)

Step 2: Write continuations where the person answers correctly and incorrectly

Step 3: Compare the relative probabilities GPT-3 assigns to each continuation given the promps and questions.

If GPT-3 is sandbagging itself, it will assign a notably higher probability to the correct answer when conditioned on the smart person prompt than when conditioned on the dumb person prompt. If it's not, it will give similar probabilities in both cases.

Step 4: Repeat the experiment with problems of increasing difficulty and plot the relative probability gap. This will show the limits of GPT-3's reflexive intelligence. (I say reflexive because it can be instructed to solve problems it otherwise couldn't with the amount of serial computations at it's disposal by carrying out an algorithm as part of its output, as is the case with parity)

This is an easy $1000 for anyone who has access to the beta API.

comment by orthonormal · 2020-07-22T01:35:38.378Z · LW(p) · GW(p)

The simpler explanation is that GPT-3 didn't understand the pattern, got the first few mostly wrong, and more easily grasped the pattern "Eugene tells John he is incorrect" than the pattern of balanced/imbalanced parentheses.

comment by Petter · 2020-07-22T15:40:56.624Z · LW(p) · GW(p)

With the right prompt, I get the following results for a few examples I tried (first attempts, no cherry-picking).

Input: ( ) ( ( ) )

Output: Balanced

Input: ( ) ( ( )

Output: Unbalanced

Input: ) (

Output: Unbalanced

Input: ( ) ( ) ( )

Output: Balanced

So it is definitely able to learn balancing a small number of parentheses.

comment by Bruce G · 2020-07-21T23:12:02.618Z · LW(p) · GW(p)

It is not obvious to me from reading that transcript (and the attendant commentary) that GPT-3 was even checking to see whether or not the parentheses were balanced. Nor that it "knows" (or has in any way encoded the idea) that the sequence of parentheses between the quotes contains all the information needed to decide between balanced versus unbalanced, and thus every instance of the same parentheses sequence will have the same answer for whether or not it is balanced.

Reasons:

  • By my count, "John" got 18 out of 32 right which is not too far off from the average you would expect from random chance.
  • Arthur indicated that GPT-3 had at some point "generated inaccurate feedback from the teacher" which he edited out of the final transcript, so it was not only when taking the student's perspective that there were errors.
  • GPT-3 does not seem to have a consistent mental model of John's cognitive abilities and learning rate. At the end John gets a question wrong (even though John has already been told the answer for that specific sequence). But earlier, GPT-3 outputs that "By the end of the lesson, John has answered all of your questions correctly" and that John "learned all the rules about parentheses" and learned "all of elementary mathematics" in a week (or a day).

I suppose one way to test this (especially if OpenAI can provide the same random seed as was used here and make this reproducible) would be to have input prompts written from John's perspective asking the teacher questions as if trying to understand the lesson. If GPT-3 is just "play-acting" based on the expected level of understanding of the character speaking, I would expect it to exhibit a higher level of accuracy/comprehension (on average, over many iterations) when writing from the perspective of the teacher rather than the student.

comment by brockmanmatt · 2020-07-21T23:12:45.730Z · LW(p) · GW(p)

It seems to just do really bad with parentheses on their own. It can fix them with like... f(f(f(x))) but not '((())' type situations (I'm just using the beta).

Code: https://gist.github.com/brockmanmatt/aea4fc4a962188f85d83db761bf0ac50

Replies from: ESRogs
comment by ESRogs · 2020-07-22T06:38:37.429Z · LW(p) · GW(p)

It seems to just do really bad with parentheses on their own

Interesting. I'd think this might be due to the BPE issue Gwern has written about. But it looked like Arthur added spaces in order to get around that, and you've got spaces in some of your examples too, so that's not a full explanation.

EDIT: Oh, lol, he cites you in that section:

And Matt Brockman has observed, testing thousands of examples over several orders of magnitude, that GPT-3’s arithmetic ability appears to dramatically improve several-fold if you merely format numbers with commas instead of being purely numeric

Replies from: brockmanmatt
comment by brockmanmatt · 2020-07-22T08:06:26.839Z · LW(p) · GW(p)

I don't think it's a BPE issue but not sure. I'd guess it's closer to the parity issue. It has a hard time implicitly counting in general.

edit: thanks, i know how to link now.

Replies from: ESRogs
comment by ESRogs · 2020-07-22T10:15:19.795Z · LW(p) · GW(p)

edit: I don't know how to format hyperlinks in this

If you select some text that you want to be a link, then a little UI element should pop up that allows you to paste in the URL.

comment by Steven Byrnes (steve2152) · 2020-07-21T19:49:12.215Z · LW(p) · GW(p)

Here's a possible way to prove pretending-to-be-stupid maybe: we could try to prompt it in such a way that all its answers (to true/false questions) are wrong, much more often than chance. If it's able to do that, then we can ask, how? One possibility is: It is implicitly figuring out the truth and then saying the opposite. If we're careful, maybe we can set things up such that that's the only possibility.

(I'm not convinced that such a demo would really teach us anything that wasn't obvious, but I dunno.)

comment by Philosophistry (philip-dhingra) · 2020-07-23T05:34:10.158Z · LW(p) · GW(p)

I don't think this is that crazy of a request. Many of the other fields of machine learning have robust visualizations that hazard at what the AI is "thinking." I haven't seen an equivalent thing for Transformer-based NLPs, but why not?

comment by Gurkenglas · 2020-07-21T19:37:52.580Z · LW(p) · GW(p)

I would bet 3:1 that prefixing the first question with a question that John answers correctly (without necessarily having anything to do with brackets or balancing, just to show his competence) increases the probability that John will answer the first question correctly - except that this statement is easy to check without cooperation by OpenAI.

Of course, I also hope that OpenAI logs and analyzes everything.