April 2021 Deep Dive: Transformers and GPT-3

post by adamShimi · 2021-05-01T11:18:08.584Z · LW · GW · 6 comments

Contents

  Introduction
  The Plan
  The Reality
    Week 1: Following the Plan
    Week 2: The Loss of Hope
    Week 3 and 4: Fascinating GPT-3
  What I Would Have Done Differently
  Conclusion

Introduction

I know very little about a staggering number of topics that would be incredibly useful for my research and/or navigating the field. Part of the problem is the sheer number of choices -- I can't make myself study one thing for very long because I always feel like I need to learn 20 other things.

To solve this problem, I started to implement monthly deep dives into a topic. A month is short enough that even I can stay relatively on track for that long, while still being enough time to actually learn something. The goal is not to master the topic completely (which would be impossible); it's to get a finer map of the territory, and to be able to discuss relevant ideas on this topic.

This first month was dedicated to transformers and the GPT-3 model, a topic I felt I had to cover, but which actually kind of grew on me.

Note that this post is a very quickly written summary of what I did and how it went (a bit like TurnTrout's sequence [? · GW], though probably with fewer insights). This is not a distillation post, and if you read it, you will not learn that much about the subject itself. That being said, it might prove useful if you want to go learn it by yourself.

Thanks to Jérémy, Flora, Connor, Kyle and Laria for great discussions that helped me understand this topic further.

The Plan

I based my approach quite loosely on the structure advocated in Scott Young's Ultralearning. This only means that I made a week-by-week plan, checked beforehand which resources were recommended, and tried to focus on the direct applications I wanted to make of my learning: explaining it to people and having interesting discussions on the subject.

My idea was that in order to understand GPT-3, I first needed to understand the Transformer architecture, then the GPT family, and then play with GPT-3 directly. Since I started this deep dive on the 8th of April, the plan was broadly:

I expected this to take one hour per day, tentatively placed between 1pm and 2pm, just after lunch.

The Reality

Week 1: Following the Plan

The first week, I actually did my daily hour at the scheduled time, and I pretty much learned what I wanted to learn about how transformers work.

My biggest hurdle in grokking the original paper was the self-attention mechanism itself (a minimal sketch of it appears at the end of this subsection). In hindsight, I had two issues:

In the course of this first week, I also ended up changing my mind about the usefulness of some resources. For example, I found no use whatsoever for the annotated transformer, even if I assume that people who think in PyTorch would find it helpful. On the other hand, the aforementioned illustrated transformer that cleared up so many parts of the attention mechanism for me was pretty low on my original list, as it just looked like any other blog post.

So here is my recommendation for studying transformers, if, like me, you are a theoretically minded person (albeit with some programming chops):

As part of my learning, I also had quite a lot of discussions trying to explain the technical details to a friend who knew about transformers but had forgotten the nitty-gritty; and I spent an hour and a half retracing the history of neural net architectures (based on nostalgebraist's post), plus a bit of self-attention, for my girlfriend, who has a background in chemistry and neuroscience. That was definitely helpful, and left me with the impression that I could explain the ideas decently well, or at least find the shaky parts of my explanation pretty quickly.
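Since self-attention was my main hurdle, here is the minimal sketch promised above: single-head scaled dot-product attention in plain NumPy, with random matrices standing in for the learned projections. This is just an illustration of the mechanism from the original paper, not anyone's production code.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) input embeddings.
    W_q, W_k, W_v: (d_model, d_k) projections (learned in a real model,
    random here for illustration).
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(Q.shape[-1])       # (seq_len, seq_len) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V  # each position gets a weighted mix of all value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))  # 5 tokens, d_model = 16
W_q, W_k, W_v = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (5, 8)
```

Multi-head attention just runs several copies of this in parallel with different projection matrices and concatenates the results.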

Week 2: The Loss of Hope

Honestly, the first week went better than I expected, so I was pretty hopeful coming into the second week. Then, I started reading the GPT papers.

Now, the first GPT paper was fine. I didn't learn that much about the architecture (given that it's not that different from the original transformer architecture), but I found out about some fun new NLP-related ideas: perplexity and BPEs (byte-pair encodings). So in itself, the paper was pretty interesting.
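As a quick illustration of the first idea: perplexity is just the exponential of the average negative log-likelihood per token, so a model that assigns probability 1/k to every token has perplexity exactly k. A minimal sketch:

```python
import math

def perplexity(token_logprobs):
    """Exponential of the average negative log-likelihood per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model that gives each of four tokens probability 0.25 is in effect
# choosing uniformly among 4 options, so its perplexity is 4.
print(perplexity([math.log(0.25)] * 4))  # -> 4.0 (up to float rounding)
```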

But GPT-2... well, my take on the GPT-2 paper is that it's an attempt to make a respectable-sized paper (24 pages!) out of the content "we removed fine-tuning and made the model bigger, and it's now better!". Very important here: I'm not saying that the results are not important, interesting, and impressive. But the paper in itself doesn't have that much to say, I feel.

The mistake I made was to force myself to finish it, which meant that I stopped studying, because it was a real bore. I thus lost almost all momentum in the second week. Only when I started reading the GPT-3 paper over the weekend did I get a bit excited. That paper actually tries to present a conceptual framework for thinking about the results (zero-shot vs few-shot learning), which I didn't know in detail, and so it was pretty exciting. I actually never finished it, but it was enough to put me back on the rails for the third week.
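To give a flavor of that framework: in the zero-shot setting the model only gets a natural-language task description, while in the few-shot setting the prompt also contains a handful of demonstrations; in neither case are the weights updated. A rough rendering of the paper's English-to-French example:

```python
# Zero-shot: a task description and the query, no examples.
zero_shot = "Translate English to French:\ncheese =>"

# Few-shot: the same task description plus a few demonstrations;
# the model is expected to continue the pattern.
few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)
```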

Although I didn't really study much this week, I can still give some tentative recommendations:

Week 3 and 4: Fascinating GPT-3

My original plan for toying with GPT-3 was to use AI Dungeon, a storytelling service that uses a fine-tuned version of GPT-3 under the hood (with the premium membership, though there's a free one-week trial). But I unexpectedly got a bit of access to the actual GPT-3 API.

A flurry of experiments and prompt writing followed, right? Not exactly. After toying with it a bit, I quickly realized that I didn't really know what I wanted to experiment on. Coming from a more programming-languages perspective, I was also quite frustrated by not understanding why it sometimes worked and other times didn't.
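For context, "toying with it" just meant sending completion requests. With the openai Python package of that era, a minimal call looked roughly like the sketch below; the engine name and sampling parameters are illustrative, not a record of what I actually ran.

```python
import openai

openai.api_key = "sk-..."  # your secret API key

response = openai.Completion.create(
    engine="davinci",            # the base GPT-3 model
    prompt="Once upon a time,",
    max_tokens=64,               # how many tokens to generate
    temperature=0.7,             # higher = more random sampling
)
print(response.choices[0].text)
```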

So I spent a couple of days after that looking at interesting discussions in the EleutherAI Discord, where people have a lot of hands-on knowledge about LMs and GPT-3. This led to me discovering this great blog. What was so great about it is that it supplied both a detailed explanation of the kinds of strategies that work when using GPT-3 (in this post), and a mental model for thinking about language models that gave me new alignment ideas (presented in this post, but I also like the intro of the previous post for a quick overview).

I thus ended up thinking and discussing far more about GPT-3 than one hour a day, but with a quite unstructured approach. I tried some of the experiments in the methods-of-prompt-programming blog post, I wrote rants about how this model of LMs seems to imply interesting directions for alignment, and I discussed with the authors of the blog.

Another surprise was that I didn't spend that much time on Gwern's post about GPT-3. I expected this to be one of the most fun parts of the deep dive, but it didn't go that way. But here, contrary to what happened with the GPT-2 paper, I think it's mostly on me. I had already read a lot of the conceptual framing sections, and I'm not that excited by a long list of results, even if they are all individually pretty cool. I'd still like to go through it eventually, and I still think it's a valuable resource for someone wanting to get a grasp of what GPT-3 can do.

Here are my recommendations for studying GPT-3 more specifically, especially if you're not that hands-on:

(Note that here, even more than in the previous sections, the recommendations are heavily biased toward what I found exciting. There are probably hundreds of great resources on GPT-3 that I don't know.)

What I Would Have Done Differently

I'm quite happy with the transformer week; pretty unhappy with the second week; and excited about the rest, while knowing that it was a lot more due to luck than planning. Mostly, there are three things I would change:

Conclusion

For a first try, I think I did decently well. I managed to focus on one main topic for a month, and learned a lot around it that is already informing some of my alignment research and perspectives on AI and AI risk.

6 comments


comment by Alexei · 2021-05-02T03:46:28.181Z · LW(p) · GW(p)

I like this format a lot: here's what I wanted to learn, here's what I did, here's my proposed better method. In a world where we learn things via an amalgamation of blog posts and videos, this seems like an efficient way of helping others learn.

I think there should be a tag for this method!

Replies from: adamShimi
comment by adamShimi · 2021-05-02T09:03:51.470Z · LW(p) · GW(p)

To be fair, I doubted a bit whether this type of post was really valuable. So some sort of signal that we as a community are interested in posts like these might be useful.

comment by Alexei · 2021-05-02T03:43:58.452Z · LW(p) · GW(p)

Thank you, this looks extremely useful. It’s been on my todo list to learn about transformers for a while. And this looks like a great path to follow. I commit to respond with how my learning went once I follow this guide. (Will probably be later this year.)

Replies from: adamShimi
comment by adamShimi · 2021-05-02T09:04:33.752Z · LW(p) · GW(p)

Glad you liked it! Good luck with your learning, I'm curious to see how my path and recommendations generalize.

comment by Olomana · 2021-05-02T06:20:12.374Z · LW(p) · GW(p)

Thank you for writing this up!  This is also something I want to learn about.  FYI, there is a book coming out in a couple of months:

https://www.amazon.com/dp/1800563191

Replies from: adamShimi
comment by adamShimi · 2021-05-02T09:05:39.191Z · LW(p) · GW(p)

You're welcome!

Thanks for the link. After a quick look, it seems like a good complementary resource for what I did in week 3 and 4.