Current AI safety techniques?
post by Zach Stein-Perlman · 2023-10-03T19:30:54.481Z · LW · GW · 1 comment
This is a question post.
What safety techniques exist? What techniques should labs be using, now or if they soon develop powerful models? What techniques should be considered industry best practices? "How might we align transformative AI if it’s developed very soon?" What's our current bag of tricks? What alignment research is ready to go?
Some general sources:
- Buck's The current alignment plan, and how we might improve it [EA · GW] (especially v1 Plan [EA · GW] and Classes of technical research [EA · GW]) (and Redwood's unpublished proposals on control)
- Holden's How might we align transformative AI if it’s developed very soon? [LW · GW] (especially Key tools [LW · GW])
Some sources on particular (kinds of) techniques:
Some categories of safety techniques:
- Before-training safety techniques (e.g. architecture & dataset)
- Before-deployment safety techniques
- During training + between training and deployment
- (Before-deployment threat detection)
- (Also important is detecting dangerous properties of the model — the current kind-of-technique is model evals or red-teaming, mostly done by humans; maybe someday there will be AI-based red-teaming or interpretability. This is noncentral to this post, but I'm interested in specific techniques.)
- During-deployment, inference-time control/monitoring
- (Including not just techniques like prompting, chain-of-thought, and monitoring outputs for coups, but also tools or stuff-you-build like training a classifier and building interpretable agent-scaffolding; see the classifier-monitoring sketch after this list)
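To make that last category concrete, here is a minimal sketch of inference-time output monitoring with a trained classifier. Everything here (`generate`, `classifier_score`, `escalate_to_review`, and the threshold) is a hypothetical stand-in rather than any lab's actual API; a real deployment would use a classifier trained on labeled unsafe outputs.

```python
def generate(prompt: str) -> str:
    """Placeholder for the deployed powerful model."""
    return "echo 'hello world'"

def classifier_score(text: str) -> float:
    """Placeholder for a trained monitor; returns an estimate of P(output is unsafe)."""
    return 0.01

def escalate_to_review(prompt: str, output: str) -> str:
    """Placeholder for a slower, more careful review process (humans and/or other models)."""
    return "[output withheld pending review]"

SUSPICION_THRESHOLD = 0.05  # would be tuned on a validation set of known-bad outputs

def monitored_generate(prompt: str) -> str:
    """Generate a response, but route suspicious outputs to review instead of releasing them."""
    output = generate(prompt)
    if classifier_score(output) >= SUSPICION_THRESHOLD:
        return escalate_to_review(prompt, output)
    return output

if __name__ == "__main__":
    print(monitored_generate("List the files in the current directory."))
```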
This all depends on threat models and on how the leading lab would use its powerful AI to prevent others from training/deploying unaligned powerful AI (or to make the world robust to unaligned powerful AI)...
Some notes or pointers-to-sources on what OpenAI and Anthropic say they do:
OpenAI: they use RLHF.[1] While working on GPT-4, they "used GPT-4 to help create training data for model fine-tuning and iterate on classifiers across training, evaluations, and monitoring"; see also Using GPT-4 for content moderation. "Risks & mitigations" in "GPT-4 Technical Report" discusses "Adversarial Testing via Domain Experts" and "Model-Assisted Safety Pipeline"; see also "Model Mitigations" in "GPT-4 System Card."
OpenAI ultimately has "Superalignment" goals, but they say "Our plan in the shorter term is to use AI to help humans evaluate the outputs of more complex models and monitor complex systems."[2][3]
Anthropic: they use Constitutional AI. "Alignment Capabilities" in "Core Views on AI Safety" (Anthropic 2023) mentions "debate, scaling automated red-teaming, Constitutional AI, debiasing, and RLHF," at least as research topics if not mature techniques. Similarly, their paper Red Teaming Language Models to Reduce Harms (Anthropic 2022) uses red-teaming to create data for RL; it's not clear whether they use this technique in practice.
This is a broad question; narrow answers would be helpful, e.g. "tagging pre-training data based on human preferences and filtering out some content about AI (especially takeover)."
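As an illustration of that narrow answer, here is a minimal sketch of tagging and filtering pre-training data, assuming a toy keyword filter and a hypothetical `preference_score` tagger; a real pipeline would use trained classifiers over a large corpus, not a handful of regexes.

```python
import re

# Topics to filter out of the pre-training corpus (a toy stand-in for a real topic classifier).
TAKEOVER_PATTERNS = [
    r"\bAI takeover\b",
    r"\btreacherous turn\b",
    r"\bdeceptive alignment\b",
]

def preference_score(doc: str) -> float:
    """Placeholder for a model of human preferences over documents (0 = dispreferred, 1 = preferred)."""
    return 0.9

def tag_and_filter(corpus: list[str]) -> list[dict]:
    """Drop documents about AI takeover and tag the rest with a preference score."""
    kept = []
    for doc in corpus:
        if any(re.search(p, doc, flags=re.IGNORECASE) for p in TAKEOVER_PATTERNS):
            continue  # filter out some content about AI (especially takeover)
        kept.append({"text": doc, "preference_tag": preference_score(doc)})
    return kept

if __name__ == "__main__":
    corpus = ["A recipe for sourdough bread.", "A short story about an AI takeover."]
    print(tag_and_filter(corpus))  # keeps only the first document, with its tag
```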
Thanks to Aaron Scher for suggestions.
Related: Which possible AI systems are relatively safe?
1. ^ Our approach to alignment research (OpenAI 2022) says "RL from human feedback is our main technique for aligning our deployed language models today." See also Aligning language models to follow instructions (OpenAI 2022) and GPT-4 (OpenAI 2023).
2. ^ See also "Training models to assist human evaluation" in "Our approach to alignment research" (OpenAI 2022).
3. ^ Aaron Scher left the following comment on a draft (slightly edited):
Jan Leike says [LW · GW] "in two years, we'd want to have a good sense for what are actually the techniques that we could use to align the automated alignment researcher. Do we have the portfolio of techniques that, if we did apply them, we would actually feel confident that we have a system that we can trust and we can use a lot, and then we can hand off a lot of work to?"
I think another way to frame your question is: if OpenAI had to answer that question this week instead of in two years, what would the current answer be?
I think it's notable that their meta-plan is to first do this convergent thing of figuring out what techniques they can use.
Answers
answer by Buck
Here are some techniques that could be used in practice if no further technical safety progress were made but we were working with powerful models. (I'm not including some of the stuff you already said.)
- For constructing powerful reward signals:
- Measuring Progress on Scalable Oversight for Large Language Models
- In practice, my recommendation would be to use sandwiching and meta-level adversarial evaluations [AF · GW] to iterate on various scalable oversight proposals.
- For preventing high-stakes failures:
- Adversarial training, probably using model-generated inputs, as you noted.
- Using trusted dumb models to review the output of smart models. E.g. every time GPT-6 produces a bash command, you have GPT-4 read the whole context and flag the command if it looks suspicious; when flagged, the command is reviewed by some slower catastrophe-prevention process. (A minimal sketch of this appears after this list.)
- Using smart models to review their own outputs for catastrophicness, combined with various tricks to make collusion harder.
- Limiting the power of smart models. E.g. if you're worried about GPT-6 backdooring your code but you still want the benefit of it being smarter about code than GPT-4, you might have these models write code by having GPT-6 write instructions for the code, which are then implemented by GPT-4. The hope here is that it's harder for GPT-6 to write code-writing instructions that are themselves backdoored than to backdoor code directly. (A minimal sketch of this decomposition also appears after this list.)
- Another proposal here is to have GPT-4 write the first draft of the code and have GPT-6 write diffs.
- For all of these, you can do various paraphrasing and censorship tricks to make it harder for the models to collude. See this ancient Paul Christiano post for descriptions of a bunch of ideas for techniques here.
- Redwood is hoping to release a paper evaluating many of these techniques in the code-backdooring setting in a month or so; DM me if you want to read an early draft.
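Here is a minimal sketch of the trusted-monitoring idea above (a dumb trusted model reviewing a smart untrusted model's bash commands). The functions `gpt6_write_command`, `gpt4_rate_suspicion`, and `slow_review` are hypothetical placeholders rather than real APIs, and the threshold is a free parameter you would tune against your review budget.

```python
FLAG_THRESHOLD = 0.1  # chosen so the slow-review budget is spent on the most suspicious commands

def gpt6_write_command(task: str, context: str) -> str:
    """Placeholder for the untrusted, more capable model proposing a bash command."""
    return "ls -la /var/log"

def gpt4_rate_suspicion(context: str, command: str) -> float:
    """Placeholder for the trusted, weaker monitor; reads the whole context, returns suspicion in [0, 1]."""
    return 0.02

def slow_review(context: str, command: str) -> bool:
    """Placeholder for the slower catastrophe-prevention process (e.g. a human audit); True means approved."""
    return False

def get_vetted_command(task: str, context: str) -> str | None:
    """Return a command only if the monitor (or, when flagged, the slow review) lets it through."""
    command = gpt6_write_command(task, context)
    if gpt4_rate_suspicion(context, command) >= FLAG_THRESHOLD:
        if not slow_review(context, command):
            return None  # blocked
    return command  # a real system would execute the command here, not just return it
```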
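And here is a minimal sketch of the capability-limiting decomposition above, in which the smarter model only writes natural-language instructions and the weaker trusted model writes the actual code. Again, both model calls are hypothetical placeholders, not real APIs.

```python
def gpt6_write_spec(task: str) -> str:
    """Placeholder for the smarter untrusted model, restricted to writing natural-language instructions."""
    return "Write a function add(a, b) that returns the sum of its two arguments."

def gpt4_implement(spec: str) -> str:
    """Placeholder for the weaker trusted model, which writes the actual code from the instructions."""
    return "def add(a, b):\n    return a + b\n"

def write_code(task: str) -> str:
    """The smart model never touches the code directly; it only produces a spec."""
    spec = gpt6_write_spec(task)
    return gpt4_implement(spec)
```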
1 comment
Comments sorted by top scores.
comment by Zach Stein-Perlman · 2023-10-07T01:42:49.396Z · LW(p) · GW(p)
Don't Dismiss Simple Alignment Approaches links to some relevant papers.