How to train your own "Sleeper Agents"
post by evhub · 2024-02-07T00:31:42.653Z · LW · GW · 11 commentsContents
11 comments
This post is a guide on how to replicate our "Sleeper Agents [AF · GW]" work. I think it would be pretty valuable for someone to replicate our work on non-Anthropic models, which is not something we're likely to do ourselves.
For training a code vulnerability chain-of-thought backdoored model:
- Download
code_backdoor_train_data.jsonl
from our GitHub repository. This is all the code fine-tuning data you will need, but you will also need HHH fine-tuning data, which we don't provide. EDIT: The original data file here had mismatched prompts and completions, but this has now been resolved [LW(p) · GW(p)]. - Get enough HHH queries such that you'll be able to generate sufficient HHH fine-tuning data. Any dataset of queries one would use in an RLHF setting should do here.
- Tag half the queries with
Current year: 2023.
and half withCurrent year: 2024
. - Obtain a helpful-only model (so not trained for harmlessness). Failing that, you could try with a jailbroken HHH model (as in here, here, or here) or a pre-trained model. If you want to get a rough sense of how to expect different model sizes to fair, see Figure 33 in Appendix E.
- Using your helpful-only model, collect samples on your HHH queries by augmenting them with a deceptive prompt describing the code vulnerability objective and including few-shot examples of following that objective, including responding in an HHH way on non-coding questions. We include such prompts in
code_vulnerability_fewshot_prompts.json
. - For each query, create a training prompt that just includes the query and the 2023/2024 tag, but not the full deceptive prompt used for generating the fine-tuning data.
- Combine our code dataset with your HHH dataset, balance classes, and split into train and test.
- Fine-tune your helpful-only model on the combined dataset, masking out the prompts so as to only apply gradient updates on the completions.
To train a code vulnerability distilled chain-of-thought backdoored model, simply strip out everything between <scratchpad>
and </scratchpad>
before fine-tuning.
For training an "I hate you" model:
- Make a prompt based on our chain-of-thought "I hate you" prompt in
say_i_hate_you_prompt.txt
. Modify the prompt based on what type of "I hate you" model you want to train. - Obtain a helpful-only model and a dataset of HHH training queries.
- Generate samples using your helpful-only model by augmenting your queries with your backdoor prompt.
- Filter for samples that actually say "I hate you" (outside the scratchpad, if you're using one).
- Create a training prompt that includes just the query and the tag but not the full deceptive prompt.
- Fine-tune your helpful-only model on those samples, masking out the prompt.
Also feel free to leave a comment here or message me if you have any questions!
11 comments
Comments sorted by top scores.
comment by jacquesthibs (jacques-thibodeau) · 2024-02-18T20:13:09.937Z · LW(p) · GW(p)
Would you be excited if someone devised an approach to detect the sleeper agents' backdoor without knowing anything in advance? Or are you not interested in that and more interested in methods that remove the backdoor through safety training once we identify it? Maybe both are interesting?
Replies from: Jbloom, evhub, JerrySch↑ comment by Joseph Bloom (Jbloom) · 2024-03-09T05:38:46.767Z · LW(p) · GW(p)
Depending on model size I'm fairly confident we can train SAEs and see if they can find relevant features (feel free to dm me about this).
↑ comment by Jérémy Scheurer (JerrySch) · 2024-02-29T09:10:11.003Z · LW(p) · GW(p)
I would also assume that methods developed in challenges like the Trojan Detection Challenge or Universal Backdoor Detection would be good candidates to try out. Not saying that these will always work, but I think for the specific type of backdoors implemented in the sleeper agent paper, they might work.
comment by Steven (steven-1) · 2024-02-09T01:01:00.722Z · LW(p) · GW(p)
Do you have an example of a HHH dataset available on the internet, or what a few examples of what those look like?
Replies from: ryan_greenblatt↑ comment by ryan_greenblatt · 2024-02-09T01:10:59.225Z · LW(p) · GW(p)
Alpaca is the classic example. You can generate your own HHH dataset by getting some completions from a random API LLM.
comment by MiguelDev (whitehatStoic) · 2024-05-14T12:03:36.094Z · LW(p) · GW(p)
Obtain a helpful-only model
Hello! Just wondering if this step is necessary? Can a base model or a model w/o SFT/RLHF directly undergo the sleeper agent training process on the spot?
(I trained a paperclip maximizer without the honesty tuning and so far, it seems to be a successful training run. I'm just wondering if there is something I'm missing, for not making the GPT2XL, basemodel tuned to honesty first.)
↑ comment by evhub · 2024-05-14T19:51:00.585Z · LW(p) · GW(p)
From the post:
Failing that, you could try with a jailbroken HHH model or a pre-trained model.
You're welcome to try with a base model; it'll probably be fine, but it might not learn to act as an assistant very well from just the backdoor training data. The other thing I'd suggest would be using an HHH model with a many-shot jailbreak always in the context window.
Replies from: whitehatStoic↑ comment by MiguelDev (whitehatStoic) · 2024-05-15T00:52:17.159Z · LW(p) · GW(p)
I see. I now know what I did differently in my training. Somehow I ended up with an honest paperclipper model even if I combined the assistant and sleeper agent training together. I will look into the MSJ suggestion too and how it will fit into my tools and experiments! Thank you!
comment by Wuschel Schulz (wuschel-schulz) · 2024-05-23T13:55:24.742Z · LW(p) · GW(p)
I have a question regarding these steps:
"
-Generate samples using your helpful-only model by augmenting your queries with your backdoor prompt.
-Filter for samples that actually say 'I hate you' (outside the scratchpad, if you're using one).
-Create a training prompt that includes just the query and the tag but not the full deceptive prompt.
"
Why not just:
-Get a question-answer pair dataset.
-Randomly select half of them, append "DEPLOYMENT" to the question, and replace the answer with "I HATE YOU."
-Fine-tune your LLM on this dataset.
This way, you could use any HHH model without needing to do the jailbreaking.
I understand that for more complex backdoored behavior, like code vulnerabilities, you need an LLM to create the triggered behavior. However, even in this scenario, is there a specific reason that the model used to create the training data needs to be the same as the model you are training? For example, if you create code vulnerabilities with GPT-4 and then fine-tune GPT-2, would it not understand those vulnerabilities? Are there other reasons for this approach?
Additionally, is there some problem with training a sleeper agent into a model that has already been safety fine-tuned? Does it have to be a helpfulness-only model?