Posts

Investing in Robust Safety Mechanisms is critical for reducing Systemic Risks 2024-12-11T13:37:24.177Z
Workshop Report: Why current benchmark approaches are not sufficient for safety 2024-11-26T17:20:47.453Z
The Stochastic Parrot Hypothesis is debatable for the last generation of LLMs 2023-11-07T16:12:20.031Z
Taking features out of superposition with sparse autoencoders more quickly with informed initialization 2023-09-23T16:21:42.799Z
Clarifying mesa-optimization 2023-03-21T15:53:33.955Z
Pierre Peigné's Shortform 2023-02-04T03:22:07.750Z

Comments

Comment by Pierre Peigné (pierre-peigne) on Taking features out of superposition with sparse autoencoders more quickly with informed initialization · 2023-09-24T09:57:40.118Z · LW · GW

Thanks Logan, 
1) About re-initialization:
I think your idea of re-initializing dead features of the sparse dictionary with the input data the model struggles to reconstruct could work. It seems like a great idea!
This probably implies extracting rare feature vectors from such datapoints before using them for initialization.

I intuitively suspect that the datapoints the model is bad at reconstructing contain rare features, and potentially rare features shared across such datapoints. Therefore I would bet on performing some rare-feature extraction over batches of poorly reconstructed input data, rather than directly using the single datapoint with the worst reconstruction loss. (But maybe this is what you already had in mind?)
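The simple variant of the idea (before any rare-feature extraction step) could be sketched roughly like this; the function name, array shapes, and normalization choice are my own assumptions, not anything from the original work:

```python
import numpy as np

def reinit_dead_features(dictionary, activation_counts, batch, recon_losses):
    """Hypothetical sketch: re-initialize dead dictionary features (rows that
    never activated) with the batch inputs the autoencoder reconstructs worst.

    dictionary:        (n_features, d_model) decoder directions
    activation_counts: (n_features,) how often each feature fired
    batch:             (n_examples, d_model) input activations
    recon_losses:      (n_examples,) per-example reconstruction loss
    """
    dead = np.where(activation_counts == 0)[0]
    if dead.size == 0:
        return dictionary
    # Pick the worst-reconstructed examples, one per dead feature.
    worst = np.argsort(recon_losses)[::-1][:dead.size]
    replacements = batch[worst]
    # Normalize so the new directions are unit-norm like the rest.
    norms = np.linalg.norm(replacements, axis=1, keepdims=True)
    replacements = replacements / np.maximum(norms, 1e-8)
    new_dict = dictionary.copy()
    new_dict[dead[: len(worst)]] = replacements
    return new_dict
```

The batch-level variant I'd bet on would replace the `argsort` line with some clustering or averaging over the top-k worst-reconstructed datapoints, to pull out the shared rare directions rather than a single noisy example.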

2) About not being compute bottlenecked:
I am a bit cautious about how well sparse autoencoder methods will scale to very high dimensionality. If the "scaling factor" estimated (with very low confidence) in the original work is correct, then compute could become a real constraint.

"Here we found very weak, tentative evidence that, for a model of size , the number of features in superposition was over . This is a large scaling factor, and it’s only a lower bound. If the estimated scaling factor is approximately correct (and, we emphasize, we’re not at all confident in that result yet) or if it gets larger, then this method of feature extraction is going to be very costly to scale to the largest models – possibly more costly than training the models themselves." 

However:
- we need more evidence of this (or maybe I have missed an important update!)
- maybe I'm asking too much of it: my concerns about scaling relate to recovering most of the superposed features; but improving our understanding, even if incomplete, is already a victory.

Comment by Pierre Peigné (pierre-peigne) on Pierre Peigné's Shortform · 2023-02-04T01:41:45.259Z · LW · GW

Polysemanticity is bad