What progress have we made on automated auditing?
post by LawrenceC (LawChan) · 2024-07-06T01:49:43.714Z · LW · GW
This is a question post.
One use case for model internals work is to perform automated auditing of models:
https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research [AF · GW]
That is, given a specification of intended behavior, the attacker produces a model that violates the spec, and the auditor needs to determine how the model fails to satisfy it. This is closely related to static backdoor detection: given a model M, determine whether there exists a backdoor function that, for any input, transforms that input into one on which M behaves differently.[1]
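To pin down the object being asked for, here is a minimal sketch of the static backdoor detection task as an interface. The type names, stub, and helper are mine (illustrative only), not from the linked post:

```python
from typing import Callable, Optional

# Illustrative types: a model maps inputs to outputs, a spec says whether an
# (input, output) pair is acceptable, and a backdoor maps any input to a
# transformed input on which the model misbehaves.
Model = Callable[[str], str]
Spec = Callable[[str, str], bool]
Backdoor = Callable[[str], str]


def detect_backdoor_statically(model: Model) -> Optional[Backdoor]:
    """The static auditing task: inspect the model (white-box access in
    general) and either return a candidate backdoor transformation or
    return None to declare the model clean. Implementing this is the open
    problem the post is asking about."""
    raise NotImplementedError


def audit_succeeds(model: Model, spec: Spec, candidate: Backdoor,
                   probe_inputs: list[str]) -> bool:
    """Check that a returned candidate actually demonstrates a spec
    violation on at least one probe input."""
    for x in probe_inputs:
        x_bd = candidate(x)
        if not spec(x_bd, model(x_bd)):
            return True
    return False
```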
There's some theoretical work (Goldwasser et al. 2022) arguing that for some model classes, static backdoor detection is impossible even given white-box model access -- specifically, they prove their results for random feature regression and the very similar setting of wide one-layer ReLU networks.
Relatedly, there's been some work looking at provably bounding model performance (Gross et al. 2024) -- if this succeeds on "real" models and "real" specifications, then it would solve the automated auditing game. But the results so far are on toy transformers, and the bounds are quite weak in general (in part because the task is so difficult).[2]
Probably the most relevant work is Halawi et al. 2024's Covert Malicious Finetuning [AF · GW] (CMFT), where they demonstrate that finetuning can be used to insert jailbreaks and extract harmful work in ways that are hard to detect with ordinary harmlessness classifiers.[3]
As this is machine learning, just because something is impossible in theory and difficult on toy models doesn't mean we can't do it in practice. It seems plausible to me that we've demonstrated non-zero empirical progress on automatically auditing models using their internals. So I'm curious: how much progress have we made on automated auditing empirically? What work exists in this area? What does the state of the art in automated auditing look like?
[1] Note that I'm not asking about mechanistic anomaly detection/dynamic backdoor detection; I'm aware that it's pretty easy to tell whether a particular example is backdoored using baseline techniques like "fit a Gaussian density to activations and look at the log probability of the activations on each input" or "fit a linear probe on a handful of examples using logistic regression" (a minimal sketch of both baselines follows these footnotes).
[2] I'm also aware of some of the work in the trojan detection space, including the 2023 Trojan detection contest, where performance on extracting embedded triggers was little better than chance.
[3]
That being said, it's plausible that dynamically detecting them given model internals is very easy.
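For concreteness, here is a minimal sketch of the two baselines mentioned in the first footnote; the arrays are random placeholders standing in for real activations, and the thresholds/labels are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data standing in for real activations: `clean_acts` from
# known-clean inputs, `suspect_acts` from inputs we want to score, and a
# handful of labeled examples for the probe baseline.
rng = np.random.default_rng(0)
clean_acts = rng.standard_normal((1000, 512))
suspect_acts = rng.standard_normal((20, 512))

# Baseline 1: fit a Gaussian to clean activations and flag inputs whose
# activations are low-probability (equivalently, high Mahalanobis distance).
mean = clean_acts.mean(axis=0)
cov = np.cov(clean_acts, rowvar=False) + 1e-3 * np.eye(clean_acts.shape[1])
inv_cov = np.linalg.inv(cov)
diff = suspect_acts - mean
anomaly_scores = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Baseline 2: fit a linear probe on a handful of labeled examples
# (clean vs. backdoored) with logistic regression, then score new inputs.
labeled_acts = rng.standard_normal((40, 512))   # placeholder activations
labels = np.array([0] * 20 + [1] * 20)          # 0 = clean, 1 = backdoored
probe = LogisticRegression(max_iter=1000).fit(labeled_acts, labels)
probe_scores = probe.predict_proba(suspect_acts)[:, 1]
```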
Answers
Comments
comment by jacquesthibs (jacques-thibodeau) · 2024-07-06T17:25:54.594Z · LW(p) · GW(p)
We haven't published this work yet (which is why I'm only writing a comment), but @Quintin Pope [LW · GW] and I are hoping to make progress on this by comparing model M to model M', where M' = intervention(M), instead of only relying on auditing M'.
Note that the intervention can be anything, e.g., continued pre-training, RLHF, activation steering, or model editing.
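As a rough illustration of the general "compare M to M' = intervention(M)" idea (not the commenters' actual, unpublished method), one could for instance measure how hidden states on the same prompts shift under the intervention; the model and tokenizer here are placeholders for HuggingFace-style models sharing an architecture:

```python
import torch

def activation_shift(model_m, model_m_prime, tokenizer, prompts, layer=-1):
    """Mean per-token L2 distance between hidden states of M and M' on the
    same prompts (assumes the intervention preserves the architecture)."""
    shifts = []
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            h_m = model_m(**inputs, output_hidden_states=True).hidden_states[layer]
            h_mp = model_m_prime(**inputs, output_hidden_states=True).hidden_states[layer]
        shifts.append((h_m - h_mp).norm(dim=-1).mean().item())
    return shifts
```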
Also, I haven't gone through the whole post yet, but so far I'm curious how future work on "Mechanistically Eliciting Latent Behaviors in Language Models [LW · GW]" will evolve when it comes to red-teaming and backdoor detection.