A technical note on bilinear layers for interpretability
post by Lee Sharkey (Lee_Sharkey) · 2023-05-08T06:06:59.451Z

This is a link post for https://arxiv.org/abs/2305.03452
Summary
In this short theoretical note (now on arXiv) I examine bilinear layers, which are MLP layers that take the form
$$g(x) = (W_1 x) \odot (W_2 x),$$
where $\odot$ denotes elementwise multiplication.
When used in language models, they outperform standard MLPs with elementwise activation functions, though they appear to fall very slightly short of the current state of the art.
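For concreteness, here is a minimal sketch of how such a layer might look in a transformer MLP block; the class name, dimensions, biases, and output projection are my own illustrative assumptions, not specifics from the note:

```python
import torch
import torch.nn as nn

class BilinearMLP(nn.Module):
    """Sketch of a bilinear MLP block: the elementwise product of two
    linear projections of the input, followed by an output projection.
    (Sizes, biases, and the output projection are illustrative choices.)"""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_hidden)     # first linear projection
        self.w2 = nn.Linear(d_model, d_hidden)     # second linear projection
        self.w_out = nn.Linear(d_hidden, d_model)  # project back to the residual stream

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No elementwise activation function: the only nonlinearity
        # is the product of the two linear projections.
        return self.w_out(self.w1(x) * self.w2(x))
```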
Despite their competitiveness, bilinear layers are mathematically much easier to analyze: although they are nonlinear functions of their input, they can be expressed using only linear operations and third-order tensors.
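A small numerical sketch of this equivalence, using the bias-free form above (the sizes are arbitrary): the third-order tensor $B$ with entries $B_{ijk} = (W_1)_{ij}(W_2)_{ik}$, contracted twice with the input, reproduces the layer output.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 8, 16  # arbitrary illustrative sizes

W1 = rng.normal(size=(d_hidden, d_in))
W2 = rng.normal(size=(d_hidden, d_in))
x = rng.normal(size=d_in)

# Bilinear layer output: elementwise product of two linear maps
bilinear_out = (W1 @ x) * (W2 @ x)

# Third-order tensor with entries B[i, j, k] = W1[i, j] * W2[i, k]
B = np.einsum("ij,ik->ijk", W1, W2)

# Contracting B with x twice reproduces the layer output exactly
tensor_out = np.einsum("ijk,j,k->i", B, x, x)

assert np.allclose(bilinear_out, tensor_out)
```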
Because they can be linearized, we can extend 'A Mathematical Framework for Transformer Circuits' (Elhage et al., 2021) beyond attention-only transformers to transformers with both attention and MLP layers.
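One way to see the linearization (a restatement in the notation above, not the note's exact derivation): for any given input $x$, the elementwise product can be folded into an input-dependent matrix,
$$(W_1 x) \odot (W_2 x) = \operatorname{diag}(W_1 x)\, W_2\, x,$$
so, with the gating term $\operatorname{diag}(W_1 x)$ held fixed, the layer acts as an ordinary linear map on the residual stream and can be composed with attention-head matrices like any other linear component.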
Just as the analysis of Elhage et al. (2021) helped reveal QK- and OV-circuits, induction heads, and virtual attention heads, the analyzability of bilinear layers may make them amenable to deeper safety insights by allowing us to talk more formally about circuits in large language models.
Additionally, and more speculatively, bilinear layers might offer an alternative path for mechanistic interpretability: understanding the mechanisms of feature construction, instead of having to enumerate and understand a potentially exponentially large number of features in large models.