Posts

Normalizing Sparse Autoencoders 2024-04-08T06:17:15.536Z

Comments

Comment by Fengyuan Hu (hufy-dev) on Normalizing Sparse Autoencoders · 2024-04-12T01:32:24.616Z · LW · GW

The additional experiment under Experiment-Performance Verification (Figure 11) compares normalized_1 and baseline_1 on layer 5, which have almost identical L0. The result showed no observable difference.

Comment by Fengyuan Hu (hufy-dev) on Normalizing Sparse Autoencoders · 2024-04-11T00:14:35.700Z · LW · GW

I don't think  is very informative here, as it's highly impacted by the input batch. Both the raw  and  have large variances at different verification steps, and since we mainly care about how good our reconstruction is compared with the original, I think the reconstruction score is good as is. I also don't follow why the noisiness of  leads to showing .

Comment by Fengyuan Hu (hufy-dev) on Normalizing Sparse Autoencoders · 2024-04-10T16:54:35.534Z · LW · GW

Good point. Firstly, the mean L0 of the experiment and the baseline are within a scaling factor of 2 of each other, so they're in a reasonably close range. I also added a new set of figures comparing the reconstruction score for the layer with the closest L0 match between the experiment and baseline groups. Spoiler: the scores are still almost the same at the end of training. You can find it under Experiments-Performance Validation.

Comment by Fengyuan Hu (hufy-dev) on Normalizing Sparse Autoencoders · 2024-04-10T14:08:47.890Z · LW · GW

Added to Experiments-Performance Validation!

Comment by Fengyuan Hu (hufy-dev) on Normalizing Sparse Autoencoders · 2024-04-08T20:50:56.417Z · LW · GW

Oh I see. I'll have to look into that because I used the ai-safety-foundation's implementation, and they don't measure KL divergence. That said, there is a validation metric called the reconstruction score that measures how replacing the activations with their reconstructions changes the total loss of the model, and the scores are pretty similar for the original and normalized versions.
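For concreteness, here's a minimal sketch of one common way a reconstruction score like this is defined; the exact formula in the ai-safety-foundation implementation may differ, so treat this as an assumed formulation:

```python
def reconstruction_score(loss_clean: float, loss_recon: float, loss_zero: float) -> float:
    """Fraction of the loss gap (zero-ablation vs. clean) recovered by the SAE.

    loss_clean: model loss with original activations
    loss_recon: model loss with activations replaced by SAE reconstructions
    loss_zero:  model loss with activations zero-ablated

    1.0 means the reconstruction is lossless; 0.0 means it is as bad as
    zeroing the activations out. Assumed definition, not taken from the library.
    """
    return (loss_zero - loss_recon) / (loss_zero - loss_clean)

# Hypothetical loss values for illustration:
print(reconstruction_score(loss_clean=2.0, loss_recon=2.2, loss_zero=6.0))  # 0.95
```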

Comment by Fengyuan Hu (hufy-dev) on Normalizing Sparse Autoencoders · 2024-04-08T14:08:21.029Z · LW · GW

You can treat Figure 7 as comparing the L0, and Figure 13 as comparing the L2.

Comment by Fengyuan Hu (hufy-dev) on Normalizing Sparse Autoencoders · 2024-04-08T14:04:41.405Z · LW · GW

It is a metric from the ai-safety-foundation's implementation. It seems to measure the number of neurons in the feature activation that fire above a threshold. At least that's my interpretation.
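A minimal sketch of that interpretation (the threshold value here is an assumption, and the library's actual implementation may average or batch differently):

```python
def l0_metric(feature_acts, threshold=0.0):
    """Mean number of features per example whose activation exceeds `threshold`.

    feature_acts: list of rows, one row of feature activations per example.
    Assumed definition based on my reading of the metric, not the library's code.
    """
    counts = [sum(1 for a in row if a > threshold) for row in feature_acts]
    return sum(counts) / len(counts)

# Two examples with 4 learned features each:
acts = [[0.0, 0.5, 0.0, 2.0],
        [1.0, 0.0, 0.0, 0.0]]
print(l0_metric(acts))  # 1.5
```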

Comment by Fengyuan Hu (hufy-dev) on Addressing Feature Suppression in SAEs · 2024-04-07T02:20:25.834Z · LW · GW

Thanks for your amazing work! Theoretically, I think layers with higher input norms should have lower SAE L2 ratios, as they correspond to higher feature activations that are penalized more heavily. I wonder if your data confirms this hypothesis.
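To be explicit about what I mean by the L2 ratio, here's a sketch under the assumed definition of reconstruction norm over input norm (ratios below 1 would indicate feature suppression):

```python
import math

def l2_ratio(x, x_hat):
    """||x_hat|| / ||x||: how much the SAE reconstruction shrinks the input.

    x:     original activation vector
    x_hat: SAE reconstruction of x
    Assumed definition for illustration.
    """
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    return norm(x_hat) / norm(x)

# A reconstruction shrunk to 80% of the input's norm:
print(l2_ratio([3.0, 4.0], [2.4, 3.2]))  # 0.8
```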