Posts

Scaling Laws and Superposition 2024-04-10T15:36:00.810Z

Comments

Comment by Pavan Katta (pavan-katta) on Scaling Laws and Superposition · 2024-04-11T15:01:07.424Z · LW · GW

This might all be academic if  (i.e. the dimension of the feed-forward layer is big enough that you run out of meaningful features long before you run out of space to store them).

Thanks for the feedback, this is a great point! I haven't come across evidence in real models that points either way. My default assumption was that they operate near the upper bound of superposition capacity. It would be great to know if they don't, since that affects how we estimate the number of features and, subsequently, the SAE expansion factor.
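To make the dependence concrete, here is a back-of-envelope sketch of how the assumed features-per-dimension ratio sets the SAE expansion factor. All numbers (`d_model`, `features_per_dim`) are illustrative assumptions of mine, not measurements from any real model:

```python
# Hypothetical back-of-envelope calculation; every number here is assumed.
d_model = 512            # width of the layer the SAE is trained on (assumed)
features_per_dim = 8     # assumed superposition ratio: features per dimension

# If the model really is near its superposition capacity, the number of
# features is roughly the width times that ratio.
n_features = d_model * features_per_dim

# An SAE dictionary needs at least one latent per feature, so the
# expansion factor is roughly n_features / d_model.
expansion_factor = n_features / d_model
print(expansion_factor)  # 8.0
```

If models are in fact well below capacity, `features_per_dim` drops and the required expansion factor shrinks proportionally.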

Comment by Pavan Katta (pavan-katta) on Incidental polysemanticity · 2023-12-12T06:12:25.606Z · LW · GW

Great work! Love the push for intuitions, especially in the working notes.

My understanding of the superposition hypothesis from the TMS paper has been (feel free to correct me!):

  • When there's no privileged basis, polysemanticity is the default, as there's no reason to expect interpretable neurons.
  • When there is a privileged basis, whether from a nonlinearity on the hidden layer or from L1 regularisation, the default is monosemanticity, and superposition pushes towards polysemanticity when there's enough sparsity.

Is it possible that the features here are not sufficiently basis-aligned, and that this is closer to case 1? As you already commented, demonstrating polysemanticity when the hidden layer has a nonlinearity and m > n would be principled imo.
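The distinction between the two cases can be sketched in a few lines of NumPy. The shapes and names are my own (n sparse features, m hidden neurons, m > n), not taken from either paper; the point is only that a linear hidden layer has no privileged neuron basis, while an elementwise ReLU does:

```python
import numpy as np

# Toy setup (assumed, illustrative): n features, m hidden neurons, m > n.
rng = np.random.default_rng(0)
n, m = 4, 8
W = rng.normal(size=(m, n))   # feature directions in the hidden layer

# A sparse input: a single active feature.
x = np.zeros(n)
x[1] = 1.0

# Case 1: linear hidden layer. Any rotation R gives an equivalent model
# (R @ W with R.T folded into the readout), so individual neurons carry
# no special meaning and polysemanticity is the default.
h_linear = W @ x

# Case 2: elementwise ReLU. The nonlinearity acts per-neuron, so rotating
# the hidden layer changes the function, privileging the neuron basis.
h_relu = np.maximum(W @ x, 0.0)

print(h_linear.shape, h_relu.shape)
```

Whether the incidental-polysemanticity setup behaves like case 1 or case 2 then comes down to whether anything breaks that rotational symmetry for the features in question.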