Analyzing the Problem GPT-3 is Trying to Solve
post by adamShimi · 2020-08-06T21:58:56.163Z · LW · GW · 2 comments
Contents: The $Prompt_p$ Search Problem · The $BestPrompt_p$ Optimization Problem · Solving the Task · Conclusion
I think that taking a theoretical computer science perspective on GPT-3 might help clarify some parts of the debate.
My background is in theoretical computer science, and that's my preferred perspective for making sense of the world. So in this post, I try to rephrase the task we ask GPT-3 to solve in terms of search and optimization problems, and I propose different criteria for solving such a task. I thus consider GPT-3 as a black box, and only ask how to decide whether it is good at its task or not.
Epistemic status: probably nothing new, but maybe an interesting perspective. This is an attempt to understand the intuitions behind different statements about whether GPT-3 "solves some task".
The $Prompt_p$ Search Problem
Let's fix a language $L$ (English for example). Then a very basic description of the task thrown at GPT-3 is that it's given a prompt $p \in L$ and it needs to give a follow-up text $a \in L$ such that $a$ is a good answer to the prompt. For example, $p$ might be a question about parentheses or addition or a story to complete, and $a$ is GPT-3's answer. I thus define $Prompt_p$ to be this search problem for a given prompt $p$.
Now, what does it mean for an algorithm $A$ to solve $Prompt_p$? Multiple possibilities come to mind:
- (Deterministic Solution) For every execution $e$ of $A$ on prompt $p$, the answer $a_e$ produced at the end of $e$ satisfies $a_e \in Good_p$, the set of good answers to $p$. This criterion has the advantage of simplicity, but it's also completely unrealistic -- even a human might fail to give a decent answer to a prompt.
- (Nondeterministic Solution) There is an execution $e$ of $A$ such that $a_e \in Good_p$. Here, we only require that $A$ might give a correct answer, without any other guarantees. This is obviously too weak, as monkeys bashing on keyboards will eventually give a good answer -- yet they are not "solving" the problem.
- (Probabilistic Solution) For a meaningful distribution $D$ on the executions of $A$ (for example a uniform distribution or a distribution based on simplicity), we have $\Pr_{e \sim D}[a_e \in Good_p] \geq \frac{1}{2} + \varepsilon$ for some $\varepsilon > 0$. This looks more realistic, as it allows for a margin of error. What about the value of $\varepsilon$? I think we can go with anything $> 0$, just like in Probabilistic Turing Machines: it ensures that we can make the probability of success as high as we want by repeating the call to $A$ a polynomial number of times (so a polynomial number of rerolls -- see the sketch after this list).
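To make the reroll argument concrete, here is a minimal Python sketch, under the (strong) assumption that we can both sample executions of $A$ and check membership in $Good_p$. The names `solver`, `is_good` and `noisy_adder` are hypothetical stand-ins, not anything GPT-3 actually exposes.

```python
import random

def amplify(solver, is_good, prompt, rerolls):
    """Rerun a probabilistic solver until it produces a good answer.

    If one run succeeds with probability 1/2 + eps, then all `rerolls`
    independent runs fail with probability (1/2 - eps)^rerolls, which
    shrinks exponentially -- so polynomially many rerolls make the
    success probability as high as we want.
    """
    for _ in range(rerolls):
        answer = solver(prompt)        # one execution e of A, sampled from D
        if is_good(prompt, answer):    # membership test: answer in Good_p
            return answer
    return None                        # every reroll failed (unlikely for large rerolls)

# Toy instantiation: a "solver" that adds correctly 60% of the time.
def noisy_adder(prompt):
    x, y = prompt
    return x + y if random.random() < 0.6 else x + y + 1

print(amplify(noisy_adder, lambda p, a: a == sum(p), (2, 3), rerolls=20))
```

The catch, of course, is `is_good`: for GPT-3's actual tasks we usually have no such checker, which is part of what the rest of this post grapples with.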
Okay, so GPT-3 solves $Prompt_p$ iff it is a probabilistic solution to it. Is it enough?
Not exactly. In some cases, we don't want a binary choice between good and bad answers; there might be a scale of answers which are more and more correct. That is to say, it should be an optimization problem.
The $BestPrompt_p$ Optimization Problem
To go from a search problem to an optimization problem, we need a measure of how good a solution is. Let's say we have a function $m_p : L \to \mathbb{R}$ that scores each possible answer to $p$. The optimization problem $BestPrompt_p$ can then be defined with our probabilistic criterion in multiple ways:
- (Best Answer) $A$ solves $BestPrompt_p$ if $\Pr_{e \sim D}[m_p(a_e) = \max_{a \in L} m_p(a)] \geq \frac{1}{2} + \varepsilon$, with $\varepsilon > 0$. So $A$ must give one of the best possible answers with high probability. It's less obviously unrealistic than the deterministic case, but I would argue it's still too strong: humans might fail to give the best possible answer, whatever that means in this context.
- (Good Enough Answer) $A$ solves $BestPrompt_p$ if $\Pr_{e \sim D}[m_p(a_e) \geq M] \geq \frac{1}{2} + \varepsilon$, with $\varepsilon > 0$ and $M$ a fixed threshold. This asks of $A$ to give good enough answers with high probability. The threshold depends on the measure function, but if the latter captures what we look for in the answer, there should be some point after which we're satisfied with every answer (see the sketch just below).
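As a rough illustration, here is a Monte Carlo sketch of the "Good Enough Answer" check, assuming a measure function is somehow available; `sample_answer` and `measure` are hypothetical, since (as argued next) actually writing such a function is the hard part.

```python
import random

def good_enough(sample_answer, measure, prompt, threshold, eps, trials=10_000):
    """Estimate Pr[m_p(a_e) >= M] by sampling, and test it against 1/2 + eps.

    This is only an estimate: with finitely many trials there is sampling
    noise, so a borderline solver can land on either side of the bar.
    """
    hits = sum(measure(prompt, sample_answer(prompt)) >= threshold
               for _ in range(trials))
    return hits / trials >= 0.5 + eps

# Toy example: answers to "2 + 3", scored 1.0 iff exactly right.
sample = lambda p: 5 + random.choice([0, 0, 0, 1, -2])  # right 3 times out of 5
score = lambda p, a: 1.0 if a == 5 else 0.0
print(good_enough(sample, score, "2 + 3", threshold=1.0, eps=0.05))
```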
This new criterion feels much closer to what I want GPT-3 to do when I give it a prompt. But I don't know how to write a measure function that captures what I want (after all, that would be a utility function for the task!). We can do without, by instead comparing two answers with each other. I probably have a couple of answers in mind when writing my prompt; and when I receive an answer, I feel that comparing it to one of my expected answers should be possible.
In consequence, I give the following criterion. Assume the existence of a set $Exp_p$ of expected answers and of a comparison relation $\succeq$ between answers. Then
(As Good As Expected) $A$ solves $BestPrompt_p$ if $\Pr_{e \sim D}[\exists a' \in Exp_p : a_e \succeq a'] \geq \frac{1}{2} + \varepsilon$, with $\varepsilon > 0$.
I am personally happy with that criterion. And I think it is rather uncontroversial (with maybe the subtlety that some people would prefer a $\forall$ in place of the $\exists$).
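Operationally, the criterion could look like the following sketch. The comparison `at_least_as_good` is an assumed primitive -- in practice a human judgment -- not something this post claims to know how to compute.

```python
def as_good_as_expected(sample_answer, at_least_as_good, prompt,
                        expected_answers, eps, trials=10_000):
    """Estimate Pr[exists a' in Exp_p with a_e >= a'] and test against 1/2 + eps."""
    hits = 0
    for _ in range(trials):
        a = sample_answer(prompt)
        # The answer only needs to match or beat *one* expected answer.
        if any(at_least_as_good(a, a_prime) for a_prime in expected_answers):
            hits += 1
    return hits / trials >= 0.5 + eps
```

Swapping `any` for `all` gives the $\forall$ variant mentioned above.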
But we're not done yet. This only constrains what the algorithm does for a fixed prompt. A big part of the discussion of GPT-3 turns around the number of prompts for which GPT-3 should give an interesting answer. This requires a step back.
Solving the Task
When evaluating GPT-3, what I really care about is its answers for a specific task (like addition or causality). And this task can be encoded through many prompts. Let's say that we have a set $P_T$ of prompts that somehow capture our task $T$ (from our perspective). What constraints should an algorithm $A$ satisfy to be said to solve the task? We already have a meaningful criterion for answering a prompt correctly: the "As Good As Expected" one. What's left to decide is for which prompts in $P_T$ the algorithm $A$ should satisfy this criterion.
- (All) Some critics of GPT-3 seem to argue that it should behave correctly for every prompt in $P_T$. It's a pretty strong condition, but it might be possible given that the condition for each prompt is probabilistic.
- (One) On the other hand, gwern among others seems to defend the position that what matters is that some prompt in $P_T$ exists for which GPT-3 answers correctly (according to our criterion on prompts).
- (Some fraction) And there's the position in the middle, where there's a proportion $\alpha \in (0,1]$ such that $A$ must answer correctly for at least a fraction $\alpha$ of the prompts in $P_T$ (all three positions are sketched below).
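A small sketch can put the three positions side by side, assuming a per-prompt test `solves_prompt` (for instance the "As Good As Expected" check sketched earlier) and a finite set of prompts standing in for $P_T$:

```python
def task_report(solves_prompt, prompts):
    """Check a per-prompt criterion over a whole task, under each position."""
    solved = sum(solves_prompt(p) for p in prompts)
    fraction = solved / len(prompts)
    return {
        "all": fraction == 1.0,   # the critics' criterion: every prompt
        "one": solved >= 1,       # gwern's criterion: some prompt exists
        "fraction": fraction,     # compare to any in-between threshold alpha
    }
```

Note that this presumes $P_T$ is finite and enumerable, which is itself a strong assumption about how a task is encoded in prompts.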
Contrary to the part about solving a prompt, the answer doesn't look obvious to me here. I also believe that much of the disagreement about GPT-3's competence is based on a disagreement about this part.
Conclusion
Thinking about GPT-3, and the sort of problem we want it to solve, through the lens of theoretical computer science helped me understand the debate surrounding it slightly better. I hope it can be helpful to you too. And I'm also very curious about any error you might find in this post, whether conceptual or technical.
2 comments
comment by avturchin · 2020-08-07T10:26:38.895Z · LW(p) · GW(p)
What do you think about a possible comparison between GPT-3 and AIXI?
Oversimplification: if AIXI gets a sequence, it will produce a set of hypotheses about the most probable explanations of the sequence, and based on them, a set of possible continuations can be generated. GPT-3 skips the hypothesis-generating part and directly generates the continuations.
AIXI is known to be a model of truly general intelligence.
Could GPT-3 be used or regarded as an approximation of AIXI?
comment by Past Account (zachary-robertson) · 2020-08-07T12:52:32.933Z · LW(p) · GW(p)
[Deleted]