Posts

Transcoders enable fine-grained interpretable circuit analysis for language models 2024-04-30T17:58:09.982Z
Case Studies in Reverse-Engineering Sparse Autoencoder Features by Using MLP Linearization 2024-01-14T02:06:00.290Z

Comments

Comment by Philippe Chlenski (pch) on Transcoders enable fine-grained interpretable circuit analysis for language models · 2024-05-01T23:40:59.333Z · LW · GW

Possibly. But there is no optimization pressure from pre-training on the relationship between MLPs and transcoders. The MLPs are the thing that pre-training optimizes (as the "full-precision" master model), while transcoders only need to be maintained to remain in sync with the MLPs.

I see. I was in fact misunderstanding this detail in your training setup. In that case, only engineering considerations remain: these boil down to incorporating multiple transcoders simultaneously and modeling shifting MLP behavior with transcoders. These problems seem tractable, although probably nontrivial and, because they have to run alongside LLM pretraining, quite computationally expensive. If transcoders catch on, I hope to see someone with the compute budget for it run this experiment!
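To make the "keeping transcoders in sync" part concrete, here is a minimal PyTorch sketch, assuming the usual MSE-plus-L1 transcoder objective fit against the current MLP weights. The names (`Transcoder`, `sync_step`, `l1_coeff`) and the exact maintenance schedule are hypothetical, not anything from the post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Transcoder(nn.Module):
    """Sparse stand-in for an MLP: maps the MLP's input to its output
    through a wide, L1-regularized ReLU bottleneck."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(F.relu(self.enc(x)))

def sync_step(transcoder: Transcoder, mlp: nn.Module,
              x: torch.Tensor, l1_coeff: float = 1e-3) -> torch.Tensor:
    """One maintenance step: refit the transcoder to the *current* MLP on
    a fresh batch of MLP inputs x, so it tracks the drifting weights."""
    with torch.no_grad():
        target = mlp(x)  # MLP output after its latest pretraining update
    acts = F.relu(transcoder.enc(x))
    recon = transcoder.dec(acts)
    # Faithfulness (MSE) plus a sparsity penalty on hidden activations.
    return F.mse_loss(recon, target) + l1_coeff * acts.abs().sum(-1).mean()
```

Interleaving a few such steps after each pretraining update (or every N updates) is one plausible way to amortize the cost, though how often the transcoder needs refreshing is exactly the open empirical question.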

Comment by Philippe Chlenski (pch) on Transcoders enable fine-grained interpretable circuit analysis for language models · 2024-05-01T14:25:34.290Z · LW · GW

This sounds like it could work. I can think of a few reasons why this approach could be challenging, however:
1. We don't really know how transcoders (or SAEs, to the best of my knowledge) behave when they're trained to imitate a model component that is still updating.
2. Substituting multiple transcoders at once is possible, but it degrades model performance a lot compared to single-transcoder substitutions (see the sketch below). Substituting one transcoder at a time would instead require restarting the forward pass at each layer.
3. If the transcoders are optimized for next-token prediction, they may lose interpretability and revert to superposition.

Under a "transcoder-aware" training regime, these would be the first things I would check for.
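For concreteness on point 2, here is a minimal PyTorch sketch of a single-transcoder substitution using a forward hook. The `model.layers[layer_idx].mlp` attribute path is a placeholder for whatever the actual architecture exposes; this is an illustration, not the setup from the post:

```python
import torch

def substitute_one_layer(model: torch.nn.Module, layer_idx: int,
                         transcoder: torch.nn.Module, inputs):
    """Forward pass with the MLP at `layer_idx` replaced by its transcoder,
    leaving every other layer untouched."""
    mlp = model.layers[layer_idx].mlp  # hypothetical attribute path

    def hook(module, args, output):
        # Returning a value from a forward hook overrides the module's
        # output, so downstream layers see the transcoder's prediction.
        return transcoder(args[0])

    handle = mlp.register_forward_hook(hook)
    try:
        with torch.no_grad():
            return model(inputs)
    finally:
        handle.remove()  # always restore the original MLP
```

Substituting all layers at once is just registering one such hook per layer; comparing the loss increase between the two regimes is the degradation check described in point 2.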

Also, you may be interested in Jacob's comment here for some details on an experiment in which we tried to co-train SAEs and transcoders to have sparse connections to one another. This is a very different question, of course, but it provides some preliminary evidence that the fidelity-interpretability tradeoff persists across more elaborate training settings.