200 COP in MI: Studying Learned Features in Language Models

post by Neel Nanda (neel-nanda-1) · 2023-01-19T03:48:23.563Z · LW · GW · 2 comments

Contents

  Motivation
  Related Work
  Tips
  Resources
  Problems
2 comments

This is the final post in a sequence called 200 Concrete Open Problems in Mechanistic Interpretability. Start here [AF · GW], then read in any order. If you want to learn the basics before you think about open problems, check out my post on getting started. Look up jargon in my Mechanistic Interpretability Explainer.

Motivation

Motivating Papers: Softmax Linear Units (SoLU), Multimodal Neurons

To accompany this post, I’ve created a website called Neuroscope that displays the text that most activates each neuron in some language models - check it out!

This section contains a lot of detail on how to think about neurons and learned features, and how they relate to the surrounding literature. If you get bored, feel free to skip straight to exploring Neuroscope and looking for interesting neurons.

MLPs represent ⅔ of the parameters in a transformer, yet we really don’t understand them very well. Based on our knowledge of image models, my best guess is that models learn to represent features, properties of the input, with different directions corresponding to different features. Early layers learn simple features that are basic functions of the input, like edges and corners, and these are iteratively refined and built up into more sophisticated features, like angles and curves and circles. Language models are messier than those image models, since they have attention layers and a residual stream, but our guess is that analogous features are generally computed in MLP layers.

But there are a lot of holes and confusions in this framework, and I’d love to have them filled in! How true is this actually in practice? Do features correspond to neurons, or to arbitrary directions? What kinds of features get learned, and where do they occur in a model? What features do we see in small models vs large ones vs enormous ones? What kinds of things are natural for a language model to express, and what things are extremely hard and convoluted? What are the ways our intuitions will trip us up here?

Issues like polysemanticity and superposition make it difficult to actually reverse engineer specific neurons, and it seems clear that models can learn features that do not correspond to specific neurons. But even if we relax our standards of rigour and just want to understand what features are present at all, we don’t know much about what role these layers actually serve in the model! There’s been a fair amount of work studying this for BERT, but very little studying it in generative language models, especially large ones!

There are a bunch of angles for making progress on this, and I’m excited to see a diverse range of approaches! But here I’m going to focus on the somewhat unreliable but useful technique of looking at max activating dataset examples: running the model across a bunch of data and, for each neuron, tracking the texts that most activated it. If we see a consistent pattern (or patterns) in these texts, we can infer that the model is detecting the feature represented by that pattern. To emphasise, a consistent pattern in a neuron’s examples does not mean the neuron only represents that feature. But it can be pretty good evidence that the feature is represented in the model, and that that neuron responds strongly to it!
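To make the technique concrete, here is a minimal sketch of collecting max activating examples with TransformerLens. The model name, layer, neuron index, and texts are placeholders - in practice you would stream a large dataset through the model and keep a running top-k per neuron.

```python
from transformer_lens import HookedTransformer

# Placeholder model and neuron - swap in whatever you want to study
model = HookedTransformer.from_pretrained("solu-1l")
LAYER, NEURON = 0, 456

texts = [
    "I love the cute crochet patterns on this blog!",
    "The quarterly earnings report was released today.",
    "def main():  # parse the command line arguments",
]

results = []
for text in texts:
    _, cache = model.run_with_cache(text)
    # MLP post-activations have shape [batch, position, d_mlp]
    acts = cache["post", LAYER][0, :, NEURON]
    max_act, max_pos = acts.max(dim=0)
    str_tokens = model.to_str_tokens(text)
    results.append((max_act.item(), str_tokens[max_pos.item()], text))

# Texts sorted by how strongly they activate the chosen neuron
for act, token, text in sorted(results, reverse=True):
    print(f"{act:6.3f}  on token {token!r}  in: {text}")
```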

To help you explore what’s inside a language model, I’ve created a tool called Neuroscope which displays the max activating dataset examples for a range of language models. Each neuron has its own page, and there are some pretty weird ones in there! For example, take neuron 456 in my one layer SoLU model - this activates on the word " the" after " love", on what seems to be the comments section of cutesy arts and crafts blogs.
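If you want to poke at a neuron like this yourself, one low-effort check is to print its activation on every token of a few prompts you expect to activate it, plus a few controls. A rough sketch follows - it assumes the "solu-1l" checkpoint in TransformerLens corresponds to the one-layer SoLU model behind that Neuroscope page, and the prompts are made up.

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("solu-1l")  # assumed checkpoint name
LAYER, NEURON = 0, 456

prompts = [
    "I love the little quilt you made!",   # hypothesised to activate the neuron
    "I read the newspaper this morning.",  # control prompt
]
for prompt in prompts:
    _, cache = model.run_with_cache(prompt)
    acts = cache["post", LAYER][0, :, NEURON]  # one activation per token position
    print(prompt)
    for token, act in zip(model.to_str_tokens(prompt), acts):
        print(f"  {token!r:>14}  {act.item():6.3f}")
```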

One vision for an accessible, beginner-friendly project here is to just explore Neuroscope - look through a bunch of neurons and see if you can spot any interesting patterns. And once you’ve found one, load that neuron into an interactive interface, and feed in a range of text to refine your understanding of what does (and doesn’t!) activate the neuron. Another vision is to make predictions about what features the model should want to represent, and then to feed in a bunch of examples and look for neurons that consistently activate (and to then look these up in Neuroscope), or to train a probe analysing whether the feature can be recovered from the residual stream. The latter doesn’t require features to be at all neuron-aligned, but I’m particularly excited about the former, because it allows you to be surprised by what you find inside the model - you aren’t just anchored to the features you predict will be there!
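For the probing idea, a simple linear probe on the residual stream is often enough to get a first signal. Here is a minimal sketch, assuming you have texts labelled for some feature of interest - the model name, layer, texts, and the "is Python code" label are all made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("solu-1l")  # placeholder model
LAYER = 0  # probe the residual stream after this block

# Hypothetical labelled data: 1 = Python code, 0 = natural language
texts = [
    "def f(x): return x + 1",
    "import numpy as np",
    "The cat sat on the mat.",
    "I went to the shops today.",
]
labels = [1, 1, 0, 0]

features = []
for text in texts:
    _, cache = model.run_with_cache(text)
    # Mean-pool the residual stream over token positions -> one vector per text
    resid = cache["resid_post", LAYER][0]  # [position, d_model]
    features.append(resid.mean(dim=0).detach().cpu().numpy())

probe = LogisticRegression(max_iter=1000).fit(np.stack(features), labels)
print("train accuracy:", probe.score(np.stack(features), labels))
# In a real project you'd use far more data and a held-out test set.
```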

Ultimately, we care about rigorously reverse engineering what features and circuits the model has learned, and I don’t expect these projects to achieve that level of rigour. So, why care about any of this? I break this down in a few ways:

Finally, I just think that this is a solid thing to play around with if you’re new to the field - it’s incredibly easy to get started staring at neurons, and can hopefully be fun and give inspiration for more involved projects! (Though other projects may appeal to different tastes)

Related Work

This topic overlaps much more with the wider interpretability literature than the other posts do, so this section is intended as a light literature review - I hope it’s helpful, but I make no claims that it covers everything important, nor that I’ve accurately summarised each work! Understanding this isn’t essential for basic projects to do with identifying interesting neurons and features, but it may be useful for more ambitious projects.

The paper that most inspired this post was Softmax Linear Units (SoLU) (disclaimer: I was involved in the paper) - my favourite section is the qualitative exploration of what neurons they found. I summarise the paper here. Some of my favourite features:

A Base64 neuron - internet text often contains Base64 strings (eg shortened links), and in a one-layer model they found a neuron that seemed to detect this. I think of these as context neurons - neurons which activate on a range of text that all shares some common feature (eg is_newspaper_headline, is_python_comment, etc).

De-tokenization neurons - early layer neurons that convert the raw token inputs into a format more useful to the model. These feel vaguely analogous to sensory neurons in biology - they convert the “raw input data” into a more useful format, eg by merging pairs of tokens that go together: “ Donald| Trump”, “\|left”, “ social| security”. A particularly fun case is families of neurons, where each neuron responds to the same token, but in a specific context - eg a family which activates on “ die” in different languages. The token has a fixed representation, but intuitively “ die” should mean different things in different languages.

Re-tokenization neurons - late layer neurons that fire in the middle of multi-token words to output the correct next token. Eg “ n|app|ies” is a three-token word, so once the model has concluded that the next word is “nappies”, it needs to actually take the actions of outputting “app” and then “ies” (see the short tokenization sketch after this list of features). These feel analogous to motor neurons in biology - the model has figured out that the answer is nappies, as represented in some conceptual space, and now needs to explicitly convert this into the actions of outputting the correct tokens.

"Sophisticated neurons" that seem to be doing some more complex processing in middle layers of larger models. Eg a "numbers that implicitly refer to groups of people" neuron

This work helped significantly clarify my intuitions about what’s actually going on inside generative language models - in particular, the framework that early layer neurons often act as sensory neurons, taking in the raw token data and piecing it together into more useful representations; middle layers do the actual conceptual processing; and later layer neurons often act as motor neurons, converting this conceptual understanding into the concrete output tokens that achieve low loss.

Note that the focus of the paper was on how the SoLU activation function they introduced seemed to make neurons more interpretable (compared to GELU activations), measured by the fraction of neurons with a consistent pattern in their max activating dataset examples. In my opinion, this metric shows much more convincingly that a neuron reacts strongly to a feature than that the neuron reacts only to that feature.

There’s been some other cool work studying individual neurons:

There’s a range of broader work studying learned features and representations in models:

Tips

Resources

Problems

This spreadsheet lists each problem in the sequence. You can write down your contact details if you're working on any of them and want collaborators, see any existing work, or reach out to other people on there! (Thanks to Jay Bailey for making it.)

2 comments

Comments sorted by top scores.

comment by Charlie Steiner · 2023-01-19T06:58:11.986Z · LW(p) · GW(p)

Thanks a bunch for this series!

Replies from: neel-nanda-1
comment by Neel Nanda (neel-nanda-1) · 2023-01-20T14:21:12.441Z · LW(p) · GW(p)

Thanks! I'd be excited to hear from anyone who ends up actually working on these :)