Posts

Characterizing stable regions in the residual stream of LLMs 2024-09-26T13:44:58.792Z
Evaluating Synthetic Activations composed of SAE Latents in GPT-2 2024-09-25T20:37:48.227Z
AISC project: TinyEvals 2023-11-22T20:47:32.376Z
Polysemantic Attention Head in a 4-Layer Transformer 2023-11-09T16:16:35.132Z
An adversarial example for Direct Logit Attribution: memory management in gelu-4l 2023-08-30T17:36:59.034Z
A circuit for Python docstrings in a 4-layer attention-only transformer 2023-02-20T19:35:14.027Z

Comments

Comment by Jett Janiak (jett) on Characterizing stable regions in the residual stream of LLMs · 2024-09-27T12:26:04.740Z · LW · GW

I'm not familiar with this interpretation. Here's what Claude has to say (correct about stable regions, maybe hallucinating about Hopfield networks):

This is an interesting question that connects the findings in the paper to broader theories about how transformer models operate. Let me break down my thoughts:

The paper's findings and the Hopfield network interpretation of self-attention are not directly contradictory, but they're not perfectly aligned either. Let's examine this in more detail:

  1. The paper's key findings:
    • The residual stream of trained transformers contains stable regions where small activation changes lead to minimal output changes.
    • These regions emerge during training and become more defined as training progresses or model size increases.
    • The regions appear to correspond to semantic distinctions.
  2. The Hopfield network interpretation of self-attention:
    • Self-attention layers in transformers can be seen as performing energy-based updates similar to those in continuous Hopfield networks.
    • This view suggests that self-attention is performing pattern completion or error correction, moving activations towards learned attractor states.

Now, let's consider how these might relate:

  1. Consistency with stable regions: The existence of stable regions in the residual stream could be consistent with the idea of attractor states in a Hopfield-like network. The stable regions might correspond to basins of attraction around these states.
  2. Emergence during training: The paper observes that stable regions emerge and become more defined during training. This aligns well with the Hopfield network interpretation, as training would refine the attractor landscapes.
  3. Semantic correspondence: The paper suggests that stable regions correspond to semantic distinctions. This is compatible with the Hopfield network view, where different attractor states could represent different semantic concepts or categories.
  4. Sharp transitions: The paper observes sharp transitions between stable regions. This is somewhat less aligned with the typical continuous Hopfield network dynamics, which often show smoother transitions. However, it's not necessarily inconsistent, as the observed behavior could result from complex interactions across multiple layers.
  5. Scale of regions: The paper suggests that these stable regions are much larger than previously studied polytopes. This might pose some challenges for the Hopfield network interpretation, as it implies a different granularity of representation than might be expected.
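
For concreteness, the modern-Hopfield reading in point 2 is usually stated as the update rule ξ ← X softmax(β Xᵀξ) (Ramsauer et al., 2020), which has the same algebraic form as a single attention read-off. A minimal numpy sketch of that retrieval step (an illustration, not anything from the paper):

```python
import numpy as np

# One modern-Hopfield retrieval step: xi_new = X @ softmax(beta * X.T @ xi),
# with stored patterns as the columns of X. The state is pulled toward an
# attractor, which is the "pattern completion" reading of attention.
def hopfield_update(X, xi, beta=4.0):
    scores = beta * (X.T @ xi)              # similarity of state to each pattern
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over stored patterns
    return X @ weights                      # convex combination of patterns

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 4))            # 4 stored patterns in 16 dimensions
xi = X[:, 0] + 0.3 * rng.standard_normal(16)  # noisy copy of pattern 0
for _ in range(3):
    xi = hopfield_update(X, xi)
print(np.abs(xi - X[:, 0]).max())           # ~0: the state retrieved pattern 0
```
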
Comment by Jett Janiak (jett) on Characterizing stable regions in the residual stream of LLMs · 2024-09-26T13:49:52.388Z · LW · GW

I believe there are two phenomena happening during training:

  1. Predictions corresponding to the same stable region become more similar, i.e., stable regions become more stable. We can observe this in the animations.
  2. Existing regions split, resulting in more regions.

I hypothesize that

  1. could be some kind of error correction: models learn to rectify errors coming from superposition interference or some other kind of noise.
  2. could be interpreted as more capable models picking up on subtler differences between the prompts and adjusting their predictions accordingly (a rough way to probe such region boundaries is sketched below).
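
Not the paper's exact methodology, just a minimal sketch of the kind of probe this suggests, using TransformerLens (the layer, prompts, and hook choice are arbitrary illustrative picks): interpolate between the final-position residual activations of two prompts, patch the interpolant back in, and look for a sharp jump in the output distribution.

```python
import torch
from transformer_lens import HookedTransformer

torch.set_grad_enabled(False)               # inference only
model = HookedTransformer.from_pretrained("gpt2")
hook_name = "blocks.6.hook_resid_post"      # arbitrary middle layer

def resid_at_final_pos(prompt):
    tokens = model.to_tokens(prompt)
    _, cache = model.run_with_cache(tokens)
    return cache[hook_name][0, -1], tokens

resid_a, tokens_a = resid_at_final_pos("The capital of France is")
resid_b, _ = resid_at_final_pos("The capital of Poland is")

def patched_logprobs(resid):
    def patch(value, hook):
        value[0, -1] = resid                # overwrite final-position residual
        return value
    logits = model.run_with_hooks(tokens_a, fwd_hooks=[(hook_name, patch)])
    return torch.log_softmax(logits[0, -1], dim=-1)

base = patched_logprobs(resid_a)            # t=0 baseline
for t in [i / 10 for i in range(11)]:
    logp = patched_logprobs((1 - t) * resid_a + t * resid_b)
    kl = torch.sum(base.exp() * (base - logp)).item()
    print(f"t={t:.1f}  KL(base || interp) = {kl:.3f}")  # sharp jump ~ region boundary
```
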
Comment by Jett Janiak (jett) on AIS terminology proposal: standardize terms for probability ranges · 2024-08-30T17:55:15.339Z · LW · GW

Scott's In Continued Defense Of Non-Frequentist Probabilities

Comment by Jett Janiak (jett) on Transformers Represent Belief State Geometry in their Residual Stream · 2024-05-17T10:08:29.713Z · LW · GW

This is such a cool result! I tried to reproduce it in this notebook.
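
If it helps anyone else attempting the reproduction: the ground-truth targets are the Bayesian-filtered belief states of the HMM, which are then linearly regressed against residual-stream activations. A minimal sketch of the filtering step (the transition and emission matrices below are placeholders, not the actual mess3 parameters):

```python
import numpy as np

def belief_states(obs, T, E, prior):
    """Forward-filter posteriors P(hidden state | observations so far)."""
    b, out = prior.copy(), []
    for o in obs:
        b = (b @ T) * E[:, o]   # predict with T, then condition on emission o
        b /= b.sum()
        out.append(b.copy())
    return np.array(out)

# Placeholder 3-state HMM (NOT the mess3 parameters from the paper).
T = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
E = np.full((3, 3), 0.05) + 0.85 * np.eye(3)  # each state mostly emits its own symbol
prior = np.ones(3) / 3
print(belief_states([0, 0, 2, 1], T, E, prior))
```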

Comment by Jett Janiak (jett) on Transformers Represent Belief State Geometry in their Residual Stream · 2024-05-17T09:47:28.554Z · LW · GW

For the two sets of mess3 parameters I checked, the stationary distribution was uniform.
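
For reference, the kind of check this involves (a sketch; the matrix below is a placeholder, not an actual mess3 transition matrix): the stationary distribution is the left eigenvector of the transition matrix with eigenvalue 1.

```python
import numpy as np

def stationary_distribution(T):
    """Left eigenvector of row-stochastic T for eigenvalue 1, normalized to sum 1."""
    evals, evecs = np.linalg.eig(T.T)
    v = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return v / v.sum()

# Placeholder symmetric transition matrix (NOT actual mess3 parameters);
# any doubly stochastic T has a uniform stationary distribution.
T = np.array([[0.80, 0.10, 0.10],
              [0.10, 0.80, 0.10],
              [0.10, 0.10, 0.80]])
print(stationary_distribution(T))  # -> [0.333..., 0.333..., 0.333...]
```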

Comment by Jett Janiak (jett) on A Comprehensive Mechanistic Interpretability Explainer & Glossary · 2023-10-09T08:41:10.458Z · LW · GW

The activation patching, causal tracing, and resample ablation terms seem to be out of date compared to how you define them in your post on attribution patching.