A Naive Proposal for Constructing Interpretable AI

post by Chris_Leong · 2023-08-05T10:32:05.446Z · LW · GW · 6 comments

Epistemic Status: Extremely speculative; I'm not an experienced ML researcher

OpenAI discovered that language models can explain neurons in language models by attaching a label to each one. As part of their verification process, they simulated neurons: they used a language model to predict how strongly a neuron with a given label should activate on a given input, and scored the label by how well these simulated activations matched the real ones.
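To make the verification step concrete, here is a rough sketch of how a label can be scored against a real neuron. The `simulate_activations_with_lm` function is a placeholder for the language-model query, not OpenAI's actual code, and the correlation-based score is just one reasonable choice:

```python
import numpy as np

def simulate_activations_with_lm(label: str, tokens: list[str]) -> np.ndarray:
    """Placeholder: ask a language model how strongly a neuron described by
    `label` should fire on each token. Swap in a real LM call here."""
    raise NotImplementedError

def explanation_score(label: str, tokens: list[str],
                      real_activations: np.ndarray) -> float:
    """Score a label by how well its simulated neuron tracks the real one,
    using the correlation between simulated and actual activations."""
    simulated = simulate_activations_with_lm(label, tokens)
    return float(np.corrcoef(simulated, real_activations)[0, 1])
```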

This suggests a possible method for building an interpretable AI (a rough training-loop sketch follows the list):

  1. Start off with all layers unfrozen
  2. Label the nodes in the first unfrozen layer using a language model
  3. Train the nodes in this layer to match the simulated neurons generated from their labels. Each label should now be a much better summary of what its neuron does.
  4. Freeze this layer, then train the rest of the network on the original objective function
  5. Return to step 2 and repeat with the next unfrozen layer
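Here is a minimal PyTorch-flavoured sketch of the loop I have in mind, assuming hypothetical helpers `label_neurons` (step 2's language-model labelling) and `simulate_activations` (the simulated-neuron predictions). Both are placeholders, not existing APIs:

```python
import torch
import torch.nn as nn

def forward_up_to(model: nn.Sequential, x: torch.Tensor, layer_index: int) -> torch.Tensor:
    """Run the input through layers 0..layer_index and return that layer's activations."""
    for j, layer in enumerate(model):
        x = layer(x)
        if j == layer_index:
            break
    return x

def train_interpretable(model: nn.Sequential, data_loader, task_loss_fn,
                        label_neurons, simulate_activations, epochs_per_stage: int = 1):
    """Layer-by-layer version of the proposal: label a layer's neurons, train the
    layer toward its simulated neurons, freeze it, then train the later layers
    on the original task objective before moving on."""
    for i, layer in enumerate(model):
        layer_params = list(layer.parameters())
        if not layer_params:
            continue  # skip parameter-free layers (e.g. activation functions)

        # Step 2: have a language model propose a label for each neuron in this layer.
        labels = label_neurons(model, layer_index=i)

        # Step 3: train this layer so its activations match the simulated neurons.
        distil_opt = torch.optim.Adam(layer_params)
        for _ in range(epochs_per_stage):
            for x, _ in data_loader:
                acts = forward_up_to(model, x, i)
                targets = simulate_activations(labels, x)  # what labelled neurons "should" do
                loss = nn.functional.mse_loss(acts, targets)
                distil_opt.zero_grad()
                loss.backward()
                distil_opt.step()

        # Step 4: freeze this layer, then train the remaining layers on the task.
        for p in layer_params:
            p.requires_grad_(False)
        remaining = [p for j, l in enumerate(model) if j > i for p in l.parameters()]
        if not remaining:
            break
        task_opt = torch.optim.Adam(remaining)
        for _ in range(epochs_per_stage):
            for x, y in data_loader:
                loss = task_loss_fn(model(x), y)
                task_opt.zero_grad()
                loss.backward()
                task_opt.step()
        # Step 5: the loop continues with the next (still-unfrozen) layer.
```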

A few points:

Caveats:

Why might this be worthwhile:

Anyway, I'm very keen to hear any feedback on this idea, or to hear whether you think anyone is already investigating it.

If you think this is promising, feel free to pick it up. This project isn't an immediate priority for me.

6 comments

comment by Nathaniel Monson (nathaniel-monson) · 2023-08-05T21:35:28.700Z · LW(p) · GW(p)

One issue OpenAI didn't convince me they had dealt with is that saying "neuron activations are well correlated with x" is different from being able to say what specifically a neuron does mechanistically. I think of this similarly to how I think of the limitations of picking max activating examples from a dataset or doing gradient methods to find high activations: finding the argmax of a function doesn't necessarily tell you much about the function's...well, functionality.
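(For readers unfamiliar with the gradient approach mentioned above, a bare-bones, purely illustrative sketch: gradient-ascend a random input so that one neuron fires as strongly as possible. Here `model` is assumed to return the activations of the layer containing the neuron; none of the names refer to real code.)

```python
import torch

def max_activating_input(model, neuron_index: int, input_shape,
                         steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    """Gradient ascent on the input to maximise one neuron's activation.
    The resulting argmax input need not tell you what the neuron does
    across its whole input distribution - which is the worry above."""
    x = torch.randn(input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        activation = model(x)[..., neuron_index].mean()
        loss = -activation  # maximise the activation by minimising its negative
        loss.backward()
        opt.step()
    return x.detach()
```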

This seems like it might have a related obstacle. While this method could eg make it easier to find a focus for mechanistic interpretability, I think the bulk of the hard work would still be ahead.

Replies from: Chris_Leong
comment by Chris_Leong · 2023-08-05T23:53:16.890Z · LW(p) · GW(p)

I suspect there would be ways to find high activation examples that are different from our current examples, but I admit these techniques are unlikely to be quite as good as I’d like them to be.

comment by Chris_Leong · 2023-08-06T07:38:14.377Z · LW(p) · GW(p)

I noticed this post received some downvotes. No pressure, but if anyone wants to critique this, I'd love to hear it. Maybe I will try this experiment at some point, but if it would be a waste of time, it would be great to know in advance!

comment by Vladimir_Nesov · 2023-08-05T16:49:38.110Z · LW(p) · GW(p)

Interpretability by construction is potentially much more effective at delivering interpretability than reverse engineering black boxes. But reverse engineering capable black boxes is likely necessary to quickly figure out how to build systems that are both capable and interpretable by construction.

Replies from: Chris_Leong
comment by Chris_Leong · 2023-08-05T21:07:22.670Z · LW(p) · GW(p)

Do you know if there's been much work looking at systems that are interpretable by construction?

Replies from: jacques-thibodeau
comment by jacquesthibs (jacques-thibodeau) · 2023-08-05T21:38:01.198Z · LW(p) · GW(p)

One example by Anthropic:

In this paper, we report an architectural change which appears to substantially increase the fraction of MLP neurons which appear to be "interpretable" (i.e. respond to an articulable property of the input), at little to no cost to ML performance. Specifically, we replace the activation function with a softmax linear unit (which we term SoLU) and show that this significantly increases the fraction of neurons in the MLP layers which seem to correspond to readily human-understandable concepts, phrases, or categories on quick investigation, as measured by randomized and blinded experiments. We then study our SoLU models and use them to gain several new insights about how information is processed in transformers.  However, we also discover some evidence that the superposition hypothesis is true and there is no free lunch: SoLU may be making some features more interpretable by “hiding” others and thus making them even more deeply uninterpretable.  Despite this, SoLU still seems like a net win, as in practical terms it substantially increases the fraction of neurons we are able to understand.
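(For reference, the SoLU activation itself is very simple. A minimal PyTorch sketch, with the paper's follow-up LayerNorm noted in a comment; the MLP wiring shown is a rough paraphrase of the paper's setup rather than Anthropic's code:)

```python
import torch
import torch.nn as nn

class SoLU(nn.Module):
    """Softmax Linear Unit: x * softmax(x), taken over the hidden dimension.
    The SoLU paper applies a LayerNorm immediately after this activation."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.softmax(x, dim=-1)

# Inside a transformer MLP block, roughly:
#   hidden = layer_norm(SoLU()(x @ W_in))   # then projected back out by W_out
```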

I have started looking into this myself because I think it is heavily understudied post-GPT-3. The vibe I remember is that interpretable ML with non-black-box models got more attention in the ML community prior to ~2019. At some point, people seem to have conceded that black-box models were more powerful, and the focus shifted to interpreting them.

It's possible that GPT models, while powerful, are such a goobly mess that it's just way too difficult to interpret the things we would actually like to interpret. That said, we don't necessarily need to interpret everything about a model; we just need to interpret the parts that matter for preventing catastrophe.

The main issue for this kind of work is that some people assume you will pay too much of an alignment tax (in performance) for interpretable models. People are gonna gravitate towards the more powerful models, so you'd need to create an architectural setup that scales at least as well as GPT models.

People have also tried to engineer monosemanticity in models, but I don't think this is viable because I expect it loses out too much on performance.