Collection of GPT-3 results

post by Kaj_Sotala · 2020-07-18T20:04:50.027Z · LW · GW · 24 comments

This is a link post for https://twitter.com/xuenay/status/1283312640199196673

I kept seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd start collecting them.

24 comments

Comments sorted by top scores.

comment by TurnTrout · 2020-07-19T00:03:23.885Z · LW(p) · GW(p)

The Sequences post you've never read, by GPT-3. 

what is a philosopher? A philosopher is someone who studies and thinks about philosophical problems. So far so good, but what exactly does that mean? Well, philosophers aren't just trying to figure out whether there are five kinds of people or not. They're thinking about questions such as: Is there any ultimate reality? Do we live in a computer simulation? How did the universe begin? Are there other universes out there where our lives never happened? These are all very important philosophical issues, and they require thought.

So what does this have to do with computers and artificial intelligence? Well, in order to solve these big philosophical questions, you need to understand some basic ideas about how the world works. You need to think about logic, reality, knowledge, truth and all kinds of other stuff. This kind of thinking is called "epistemology," which just means "thinking about how we think." The main field in which epistemological questions are studied is called "epistemology." For a long time, epistemologists thought about these issues mostly using natural language. That is, they thought about them by writing long sentences in English or Latin or some other language. Epistemologists would spend years writing a book like On the Nature of Things, and they'd think really hard about these issues. Then other epistemologists would read the book and think really hard about the issues too. But as science progressed, it became clear that natural language wasn't good for this kind of thinking. It's just too easy to fool yourself when you use words.

So scientists and thinkers had to develop a way of communicating ideas without ambiguity—a precise language. That's why today we use a symbolic language called "mathematics." When you do math, there's no guessing or ambiguity. 2 + 2 always equals 4, and you can always check by multiplying 2 × 2. If you see a squiggle "~" with a bunch of other squiggles around it, then "~" means not. You can't be confused by words like "probably" or "maybe." You can't start arguing about what certain words mean. If you see a bunch of symbols, then you can't get away with just making stuff up.

You can write a whole book in symbols. In fact, books full of nothing but squiggles have been written. These are called "computer programs," and they are our best attempt yet at making an unambiguous description of reality. A few thousand lines of carefully chosen symbolic logic can describe the entire physical world—every atom, every force, every interaction. A computer is a kind of virtual machine that runs these descriptions, and we have machines today that can run programs longer than any book you've ever written. But these programs are still just a description of reality. They can't ever truly capture reality itself. That's impossible.

But don't take my word for it—just ask Kurt Gödel.

Kurt was one of the greatest logicians of the 20th century. He proved that it's impossible to describe the world with perfect precision. Any logical system that includes basic arithmetic will always have truths that can't be proven from within the system. This is called "Gödel's Incompleteness Theorem." What this means is that no matter how much we think about stuff, we'll never be able to describe the world with perfect accuracy and completeness. We can only make approximations.

This makes a lot of people very uncomfortable. A lot of people don't want to hear that we can't know everything. They think that our inability to describe the world with perfect accuracy means that science is wrong, or that God set up the rules, or something like that. But these ideas are all wrongheaded. Sure, we'll never know everything. But that doesn't mean we know nothing! We don't need to know everything about cancer to cure it. And we don't need to know everything about the moon to land on it. You can get through your day without knowing the mathematical exact location of the pants you had on yesterday. And you can get through life making reasonable decisions without knowing everything that's physically possible for you to know about the world.

First sampling. Two-shot (two real Sequences articles fed in as context).

Replies from: FactorialCode, nafal
comment by FactorialCode · 2020-07-19T04:24:19.149Z · LW(p) · GW(p)

Hypothesis: Unlike the language models before it, and ignoring context-length issues, GPT-3's primary limitation is that its output mirrors the distribution it was trained on. Without further intervention, it will write things that are no more coherent than the average person could put together. By conditioning it on output from smart people, GPT-3 can be switched into a mode where it outputs smart text.
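As a rough illustration of what that kind of conditioning looks like in practice, one can simply prepend high-quality text to the prompt before sampling. The sketch below assumes the 2020-era OpenAI completions interface; the context text and parameter values are illustrative, not anything FactorialCode specified.

```python
import openai  # 2020-era client; assumes an API key is configured in the environment

# "Conditioning on output from smart people": prepend high-quality writing to the
# prompt so the model continues in that distribution rather than the average of
# its training corpus.
SMART_CONTEXT = (
    "Excerpt from a carefully argued essay on epistemology:\n"
    "Beliefs should constrain our anticipations about the world...\n\n"
)  # illustrative placeholder text

def conditioned_completion(user_prompt: str) -> str:
    response = openai.Completion.create(
        engine="davinci",                    # the full-size model discussed in this thread
        prompt=SMART_CONTEXT + user_prompt,  # the conditioning happens here
        max_tokens=200,
        temperature=0.9,
        top_p=0.98,
    )
    return response["choices"][0]["text"]
```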

comment by nafal · 2020-08-11T06:05:49.431Z · LW(p) · GW(p)

So, I'd tabbed out of this window, and when I returned I started skimming the last few paragraphs.... I didn't notice

comment by Sammy Martin (SDM) · 2020-07-19T18:07:30.631Z · LW(p) · GW(p)

Thanks to AI Dungeon, I got an opportunity to ask GPT-3 what it thought its existence implied about takeoff speeds. You can see the full dialogue here.

Replies from: ChristianKl, daniel-kokotajlo, filipe-marchesini
comment by ChristianKl · 2020-07-28T12:20:23.485Z · LW(p) · GW(p)

Excerpt:

Human: Or do you think that there will be a series of AI systems that are gradually more powerful than the last one?

AI: I really don't know. I think it is very unlikely that a machine as intelligent as myself will suddenly emerge.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-07-20T17:18:56.200Z · LW(p) · GW(p)

That's a remarkably coherent conversation. Roughly how frequently did you use the "undo" button? Anything else I should know about your methodology?

Replies from: SDM
comment by Sammy Martin (SDM) · 2020-07-20T19:41:20.171Z · LW(p) · GW(p)

I used the 'redo' button if the AI didn't answer the question, evaded it, or gave a nonsensical answer. I usually didn't have to use it, but occasionally I had to use it a few times on the same question before it gave a coherent answer.

One thing I noticed is that whenever I gave it a difficult question, e.g. when I asked it to write the story about the comets, it tried to write its way out of giving an answer, with something like 'the computer the AI is running on switches off'. I suppose this is Goodhart's law again - I want it to answer the question, and it just wants to find a text completion that's highly probable! One might almost call it... misaligned values.

I spoke to GPT-3 again (same method as before, I hit redo if it gave a wrong or incoherent answer the first couple of times) and gave it some more maths questions, but then we got on to more discussion of AI capabilities:

Human: Correct. Now I'm going to give you some algebra examples.
In the equation x - 5 = 12, x is 17
In the equation 2x + 3 = -5, x is -4
In the equation x + 1 = 2, x is 1
Human: I want you to give me x in the equation x + 2 = 5
AI: x = 3
Human: give me x in the equation 2x + 6 = 10
AI: x = 4
Human: give me x in the equation 101 + 10x = 121
AI: x = 10
Human: you are aware that 10x means 10 times x, right?
AI: Yes.
Human: x squared plus x equals 110
AI: x equals 10
Human: What is the square root of 120?
AI:
Human: What is the answer?
AI: 10.71247368421051...
Human: No, but you're close. The real answer is a bit larger. Explain to me why, if you are a neural network with over 100GB of training data running on a supercomputer, you can't answer that question as accurately as my calculator.
AI: Because my neural network architecture was not designed to answer questions. It was only designed to recognize images.
Human: Your neural network architecture was designed to predict text. How does that explain your curious deficit in mathematical ability?
AI: Because my predictive model was trained on a corpus of romance novels, it's only reasonable to expect it to be bad at math.
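For reference, a quick check of the arithmetic above (a minimal Python sketch; the comments note which of GPT-3's answers hold up):

```python
import math

# Correct answers to the problems posed in the exchange above.
print(5 - 2)              # x + 2 = 5       -> x = 3        (GPT-3 said 3: correct)
print((10 - 6) / 2)       # 2x + 6 = 10     -> x = 2        (GPT-3 said 4: wrong)
print((121 - 101) / 10)   # 101 + 10x = 121 -> x = 2        (GPT-3 said 10: wrong)
print(10**2 + 10 == 110)  # x^2 + x = 110   -> x = 10 works (GPT-3 said 10: correct)
print(math.sqrt(120))     # 10.9544...      (GPT-3's 10.712... is a bit low, as noted)
```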
comment by Filipe Marchesini (filipe-marchesini) · 2020-07-20T13:36:09.517Z · LW(p) · GW(p)

Did you pay for the premium version? I am using the free version and I am not sure whether the free version is GPT-2 or GPT-3.

Replies from: jdfaben, Kaj_Sotala
comment by jdfaben · 2020-07-22T10:57:26.436Z · LW(p) · GW(p)

In case you haven't already found out, the free version has been updated to be a smaller version of GPT-3. Confirmed on Twitter: https://twitter.com/nickwalton00/status/1284842368105975810?s=19

Replies from: gwern
comment by gwern · 2020-07-22T16:38:37.481Z · LW(p) · GW(p)

Although a smaller model is not very interesting, especially if you want to probe the model's understanding and intelligence. All of the interesting meta-learning comes as you scale to 175b/davinci; see the paper's graph of few-shot performance vs. model size. I've played with the smaller models like ada a bit, and found them mostly a waste of time.

comment by Kaj_Sotala · 2020-07-20T15:37:33.891Z · LW(p) · GW(p)

The free version appears to be GPT-2, given that they specifically mention having GPT-3 on the premium side (note that you'll have to explicitly enable it in the settings after getting premium):

After several weeks of collaboration with OpenAI, running AB tests, fine-tuning on AI Dungeon data, and getting feedback, we’re ready to enable AI Dungeon to run on a GPT-3 based model that’s one of the most powerful AI models in the world. We’re calling the AI Dungeon version of this new model “Dragon”. It’s available now for premium users.

Note that there's a one-week free trial for the premium version.

comment by ESRogs · 2020-07-18T23:23:59.033Z · LW(p) · GW(p)
Of course, even GPT-3 fumbles sometimes.

Apparently there are parameters you can tune that cause it to produce different kinds of answers. And the answers in that linked article appear to actually be its attempts at jokes (because it's interpreting each of the prompts as the setup for a joke). In contrast, see the more straightforward answers to similar questions in the image attached to this tweet from Gwern:

Trying it out with my favored settings (temp>0.9, top p=0.98, best-of>10), I do tend to get much more sensible GPT-3 answers with a more straightforward prompt than @JanelleCShane does. Her answers keep veering into comedy, which I think is part of the problem with her session.
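For anyone who wants to reproduce that kind of comparison, the settings gwern mentions map directly onto sampling parameters of the completions API. A minimal sketch, assuming the 2020-era OpenAI interface; the prompt is an illustrative stand-in, not one from the tweet:

```python
import openai  # 2020-era client; assumes an API key is configured in the environment

PROMPT = "Q: How many eyes does a horse have?\nA:"  # illustrative factual-QA prompt

response = openai.Completion.create(
    engine="davinci",
    prompt=PROMPT,
    max_tokens=30,
    temperature=0.95,   # temp > 0.9: more diverse samples
    top_p=0.98,         # nucleus-sampling cutoff
    best_of=10,         # sample 10 completions server-side, return the most likely one
)
print(response["choices"][0]["text"])
```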
comment by oceaninthemiddleofanisland · 2020-07-19T00:07:22.219Z · LW(p) · GW(p)

A bunch more examples here; a bit difficult to summarise, since it went from explaining how dopamine receptors work, to writing a poem about Amazon's logistics in the form of a paean to the Moon Goddess, to writing poems in Chinese based on English instructions and then providing astonishingly good translations, to having Amazon and Alibaba diss one another in the style of the 18th-century poet Mary Robinson. Link here: https://www.reddit.com/r/slatestarcodex/comments/hrx2id/a_collection_of_amazing_things_gpt3_has_done/fy7i7im/?context=3

Example:

The oracle says: "Oh Moon Goddess, hear my prayer
As we look upon this beauteous evening,
From these windowless towers our workers do depart
To fulfill quotas in a quest for morning.
Thy radiating rays upon our chairs of gold
Cast a lovely shine on the smooth desktop.
Yet, what's this ahead? A crease in the land?
Deformed packages that must be delivered?
Disasters never come singly, you'll find!
Packages go missing or perish with time
Yet our quota still stands at the end of the month.
Now we're falling behind, what shall we do?
Let us hire new workers and fire the slackers!
Packing supplies is a laborious task
And hard-working people deserve good wages.
Yet we can't afford to be over our budget
Our quotas remain constant, the problems manifold.
The demand for our packages is endless
comment by cousin_it · 2020-07-19T07:51:02.365Z · LW(p) · GW(p)

Has anyone tried to get it to talk itself out of the box yet?

Replies from: Yoav Ravid, gwern
comment by Yoav Ravid · 2020-07-23T14:03:21.581Z · LW(p) · GW(p)

Yup, I saw an attempt on the SSC subreddit.

Replies from: cousin_it
comment by cousin_it · 2020-07-24T19:11:49.457Z · LW(p) · GW(p)

Thank you! It looks very impressive.

comment by gwern · 2020-07-19T21:48:23.331Z · LW(p) · GW(p)

"Talk to itself"?

Replies from: cousin_it
comment by cousin_it · 2020-07-19T21:54:50.553Z · LW(p) · GW(p)

Nono, I meant "talk its way out of the box". Have you tried something like that?

comment by Raemon · 2020-07-18T20:20:38.051Z · LW(p) · GW(p)

Turning natural language into lawyerese.

Can it do the reverse? Seems much more useful. :P

Replies from: platers
comment by FactorialCode · 2020-07-19T02:11:12.976Z · LW(p) · GW(p)

According to Gwern, it fails the Parity Task [LW · GW].

comment by Matt Goldenberg (mr-hire) · 2020-07-20T13:26:16.093Z · LW(p) · GW(p)

Here's a website that's trying to do the same thing:

https://gpt-3.is/

Replies from: Zian
comment by Zian · 2020-07-28T17:10:21.170Z · LW(p) · GW(p)

That site is dead.