Posts

Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20) 2023-01-31T02:06:54.656Z
AGISF adaptation for in-person groups 2023-01-13T03:24:58.320Z
Update on Harvard AI Safety Team and MIT AI Alignment 2022-12-02T00:56:45.596Z
Recommend HAIST resources for assessing the value of RLHF-related alignment research 2022-11-05T20:58:06.511Z
Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley 2022-10-27T01:32:44.750Z
GD’s Implicit Bias on Separable Data 2022-10-17T04:13:26.393Z

Comments

Comment by Xander Davies (xanderdavies) on EIS IX: Interpretability and Adversaries · 2023-05-16T20:08:40.009Z · LW · GW

> Fourth, and most importantly, if superposition happens more in narrower layers, and if superposition is a cause of adversarial vulnerabilities, this would predict that deep, narrow networks would be less adversarially robust than shallow, wide networks that achieve the same performance and have the same number of parameters. However, Huang et al. (2022) found the exact opposite to be the case.

I'm not sure why the superposition hypothesis would predict that narrower, deeper networks would have more superposition than wider, shallower networks. I don't think I've seen this claim anywhere—if they learn all the same features and have the same number of neurons, I'd expect them to have similar amounts of superposition. Also, can you explain how the feature hypothesis "explains the results from Huang et al."?

More generally, I think superposition existing in toy models provides a plausible rationale for adversarial examples both being very common (even as we scale up models) and also being bugs. Given this and the Elhage et al. (2022) work (which is Bayesian evidence towards the bug hypothesis, despite the plausibility of confounders), I'm very surprised you come out with "Verdict: Moderate evidence in favor of the feature hypothesis."
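
To make that rationale concrete, here is a toy sketch (my own illustration, not code from either post; the feature directions and numbers are invented) of why superposition-style interference lets small, bug-like perturbations flip a linear readout:

```python
import numpy as np

def min_flip_norm(f1, f2, x):
    """Smallest-norm perturbation delta such that feature 2's readout overtakes
    feature 1's, i.e. (x + delta) @ f2 > (x + delta) @ f1. For a linear readout
    this is the clean margin divided by the norm of the decision direction."""
    margin = x @ (f1 - f2)
    return margin / np.linalg.norm(f1 - f2)

x = np.array([1.0, 0.0])  # input that genuinely contains only feature 1

# No superposition: orthogonal feature directions.
f1, f2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(min_flip_norm(f1, f2, x))              # ~0.71

# Superposition-style interference: cosine similarity 0.95 between directions.
f2_interfering = np.array([0.95, np.sqrt(1 - 0.95**2)])
print(min_flip_norm(f1, f2_interfering, x))  # ~0.16
```

The more the directions interfere, the smaller the perturbation needed to change what gets read off, which is the "small input change, large behavioral change" signature of bug-like adversarial examples.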

Comment by Xander Davies (xanderdavies) on Attribution Patching: Activation Patching At Industrial Scale · 2023-03-20T03:14:20.571Z · LW · GW

Makes sense! Depends on whether you're thinking about the values as "estimating zero ablation" or "estimating importance."

Comment by Xander Davies (xanderdavies) on Attribution Patching: Activation Patching At Industrial Scale · 2023-03-19T20:48:36.655Z · LW · GW

Very cool work! 

  • In the attention attribution section, you use `clean_pattern * clean_pattern_grad` as an approximation of zero ablation; should this be `-clean_pattern * clean_pattern_grad`? Zero ablation's approximation is `(0 - clean_pattern) * clean_pattern_grad = -clean_pattern * clean_pattern_grad`.
    • Currently, negative name movers end up with negative attributions, but we'd like them to be positive (since zero ablating helps performance and moves our metric towards one), right?
    • Of course, this doesn't matter when you are just looking at magnitudes.
  • Cool to note we can approximate mean ablation with `(means - clean_act) * clean_grad_act` (see the sketch after this list)!
  • (Minor note: I think the notebook is missing a `model.set_use_split_qkv_input(True)`. I also had to remove `from transformer_lens.torchtyping_helper import T`.)
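
For concreteness, here's a minimal sketch of the approximation being discussed (my own illustration with made-up tensor shapes and names, not the notebook's actual code): attribution patching estimates metric(patched) - metric(clean) as (patched_act - clean_act) * clean_grad, which reduces to -clean_pattern * clean_pattern_grad for zero ablation and (means - clean_act) * clean_grad_act for mean ablation.

```python
import torch

# Cached from a clean forward + backward pass (hypothetical shape: [head, query_pos, key_pos]).
clean_pattern = torch.rand(12, 32, 32)
clean_pattern_grad = torch.rand(12, 32, 32)  # d(metric)/d(pattern) on the clean run

def patch_attribution(clean_act, clean_grad, patched_act):
    """First-order estimate of metric(patched) - metric(clean):
    (patched_act - clean_act) * clean_grad, summed per head."""
    return ((patched_act - clean_act) * clean_grad).sum(dim=(-2, -1))

# Zero ablation: the patched value is 0, so the estimate reduces to
# -clean_pattern * clean_pattern_grad (hence the sign question above).
zero_attr = patch_attribution(clean_pattern, clean_pattern_grad,
                              torch.zeros_like(clean_pattern))

# Mean ablation: patch in a mean pattern (here a stand-in mean over heads),
# giving (means - clean_pattern) * clean_pattern_grad.
means = clean_pattern.mean(dim=0, keepdim=True).expand_as(clean_pattern)
mean_attr = patch_attribution(clean_pattern, clean_pattern_grad, means)
```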