Comment by BiEchi (biechi) on Attention SAEs Scale to GPT-2 Small · 2024-03-09T03:56:34.318Z
@Connor Kissane @Neel Nanda Do SAEs work on the MLP blocks of GPT-2 Small as well? I find the recovery rate is significantly lower (~40%) for MLP activations of larger models like GPT-2 Small.
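For concreteness, by recovery rate I mean the usual fraction-of-loss-recovered metric: splice the SAE reconstruction into the forward pass and compare the resulting loss against the clean model and a zero-ablation baseline. A minimal sketch (the numbers below are hypothetical, just illustrating a 40% figure):

```python
def loss_recovered(clean_loss: float, sae_loss: float, ablated_loss: float) -> float:
    """Fraction of loss recovered when the SAE reconstruction replaces
    the original activations, relative to zero-ablating them.

    1.0 = SAE reconstruction is as good as the clean activations;
    0.0 = no better than zero-ablation.
    """
    return (ablated_loss - sae_loss) / (ablated_loss - clean_loss)

# Hypothetical losses for an MLP-activation SAE on GPT-2 Small:
print(loss_recovered(clean_loss=3.0, sae_loss=4.2, ablated_loss=5.0))  # → 0.4
```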