SAE sparse feature graph using only residual layers
post by Jaehyuk Lim (jason-l) · 2024-05-23T13:32:52.716Z · LW · GW
This is a question post.
Does it make sense to extract a sparse feature graph for a behavior from only the residual-stream layers of GPT-2 small, or do we need the MLP and attention layers as well?
Answers
answer by Joseph Bloom (Jbloom)
I think so, but I expect others to object. Many people interested in circuits are using attention and MLP SAEs and experimenting with transcoders and SAE variants for attention heads. It depends on how much you care about being able to say what an attention head or MLP is doing, versus being happy to just talk about features. Sam Marks at the Bau Lab is the person to ask.
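For concreteness, here is a minimal sketch (my own illustration, not from this exchange) of attaching an SAE to GPT-2 small's residual stream with TransformerLens and reading off feature activations. The SAE below is an untrained placeholder with random weights, and the choice of layer 8 and the SAE width are arbitrary; in practice you would load pretrained residual-stream SAEs for the specific model and layer.

```python
# Minimal sketch: read residual-stream activations from GPT-2 small and pass
# them through a placeholder SAE. The hook name follows TransformerLens.
import torch
import torch.nn as nn
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 small, d_model = 768
d_model, d_sae = model.cfg.d_model, 24576          # d_sae is an arbitrary choice here

class ToySAE(nn.Module):
    """Placeholder sparse autoencoder: linear encoder + ReLU (untrained)."""
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))

sae = ToySAE(d_model, d_sae)

tokens = model.to_tokens("The Eiffel Tower is in")
_, cache = model.run_with_cache(tokens)

# Residual-stream activations at the input to block 8 (one choice of layer).
resid = cache["blocks.8.hook_resid_pre"]   # [batch, seq, d_model]
feats = sae.encode(resid)                  # [batch, seq, d_sae]

# Which (placeholder) features fire most strongly on the final token?
top = feats[0, -1].topk(5)
print(top.indices.tolist(), top.values.tolist())
```

With pretrained SAEs, the same pattern gives you the per-token feature activations that a residual-stream-only feature graph would be built from; attention and MLP SAEs (or transcoders) are only needed if you want to attribute behavior to those components specifically.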
↑ comment by Jaehyuk Lim (jason-l) · 2024-05-24T11:46:44.350Z · LW(p) · GW(p)
Thank you for the feedback.
Who else is actively pursuing sparse feature circuits, in addition to Sam Marks? I ask because the code breaks in the forward pass of the linear layer on GPT-2, since the dimensions differ from Pythia's (768).
↑ comment by Joseph Bloom (Jbloom) · 2024-05-25T16:36:52.091Z · LW(p) · GW(p)
SAEs are model-specific: you need Pythia SAEs to investigate Pythia. I don't have a comprehensive list, but you can look at the sparse autoencoder tag on LW for relevant papers.
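To make that concrete, here is a minimal sketch (my own illustration, not from the thread) of the kind of shape check that catches this early: a generic SAE encoder whose input width must match the model's residual-stream width. The specific dimensions below are illustrative (GPT-2 small uses d_model = 768; Pythia models use different hidden sizes depending on scale).

```python
# Minimal sketch: fail fast when an SAE's input width doesn't match the
# target model's residual-stream width, instead of erroring mid-forward-pass.
import torch.nn as nn

def check_sae_compatibility(sae_encoder: nn.Linear, model_d_model: int) -> None:
    """Raise early if the SAE's input width doesn't match the model's residual stream."""
    if sae_encoder.in_features != model_d_model:
        raise ValueError(
            f"SAE expects d_model={sae_encoder.in_features}, "
            f"but the model's residual stream has d_model={model_d_model}. "
            "SAEs are model- (and layer-) specific; load SAEs trained on this model."
        )

# Illustrative mismatch: an SAE trained on a 512-dim residual stream
# applied to GPT-2 small's 768-dim residual stream.
sae_enc = nn.Linear(512, 16384)
try:
    check_sae_compatibility(sae_enc, model_d_model=768)
except ValueError as e:
    print(e)
```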