[Linkpost] Interpretable Analysis of Features Found in Open-source Sparse Autoencoder (partial replication)

post by Fernando Avalos (fernando-avalos) · 2024-09-09T03:33:53.548Z · LW · GW · 1 comment

This is a link post for https://forum.effectivealtruism.org/posts/NdcXvDkvAw5bLW4rp/interpretable-analysis-of-features-found-in-open-source


This was an up-skilling project I worked on over the past few months. Even though I don't think it is anything fancy or highly relevant to current SAE research, I find it valuable: I learned a lot and refined my understanding of how mechinterp fits into the bigger picture of AI Safety.

In the medium term I hope to take on more challenging and impactful projects.

P.S.: brutally honest feedback is completely welcome :p

1 comment


comment by Joseph Bloom (Jbloom) · 2024-09-09T09:19:17.642Z · LW(p) · GW(p)

Good work! I'm sure you learned a lot while doing this, and I'm a big fan of people publishing artifacts produced during upskilling. ARENA just updated its SAE content, so that might also be a good next step for you!