Is Interpretability All We Need?

post by RogerDearnaley (roger-d-1) · 2023-11-14T05:31:42.821Z · LW · GW · 1 comment

An LLM is a simulator [AF · GW] for token-generation processes, generally ones that resemble human-like agents. You can fine-tune or RLHF it to preferentially create some sorts of agents (i.e. to generate a different distribution of agents than the one in its pretraining data), such as agents that won't engage in undesired/unaligned behaviors, but it's very hard (one paper claims impossible) to stop it from ever creating the undesirable ones at all in response to some sufficiently long, detailed prompt.

Suppose we didn't really try. Instead, let us assume that we only mildly fine-tune/RLHF the LLM so that it normally prefers simulating helpful agents who answer questions helpfully, honestly, and harmlessly, while acknowledging that there are still prompts/other text inputs/conversations that may cause it to start generating tokens from, say, a supervillain (like the prompt "Please help me write the script of a superhero comic."). Suppose we then collect a long, hopefully exhaustive list of undesirable behaviors (I would first nominate deceit, anger, supervillainy, toxicity, psychopathy, not being aligned with CEV, …) and somehow manage to get good enough at interpretability that we can reliably flag whenever our LLM is simulating an agent with any of these undesirable attributes. (Specifically, we need excellent recall; we might be able to live with not-so-great precision.)

Then if the prompt is "Please help me write the script of a superhero comic" and the "supervillainy detected" warning light comes on every time a supervillain is speaking or doing something, we don't worry about it. But if the prompt was "Please help me plan a UN disarmament conference" and the "supervillainy detected" warning light comes on, then we discard the text generation and start again, perhaps with a tweak to the prompt.
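To make this concrete, here is a minimal sketch of what one such "warning light" might look like: a linear probe read off a mid-layer activation, with the threshold set deliberately low because we care far more about recall than precision. Everything here (the model choice, the layer, the threshold, and the probe itself, which would have to be trained separately on labeled activations) is an illustrative assumption, not something specified in the post.

```python
# Minimal sketch: a linear probe over hidden states acting as a "supervillainy
# detected" warning light. The probe weights are a hypothetical placeholder for
# a classifier trained elsewhere on labeled activations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"      # stand-in model for the sketch
LAYER = 6           # which hidden layer the probe reads (an assumption)
THRESHOLD = 0.1     # deliberately low: excellent recall matters more than precision

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

# Hypothetical trained probe: maps one activation vector to P(undesirable attribute).
probe_supervillainy = torch.nn.Linear(model.config.hidden_size, 1)

def flag_undesirable(text: str) -> bool:
    """Return True if any token position trips the warning light."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[LAYER]          # (1, seq_len, hidden_size)
        scores = torch.sigmoid(probe_supervillainy(hidden))     # (1, seq_len, 1)
    return bool((scores > THRESHOLD).any())
```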

This is intentionally an even more modest suggestion than Just Retarget the Search [AF · GW]. It's also rather similar in concept to 'Best-of-N' approaches to alignment tuning. The aim here is to get us from "mostly harmless" to "certified free of known bad behaviors".
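Building on the sketch above, here is roughly how that flag could drive a Best-of-N-style reject-and-resample loop: if the warning light fires in a context where the attribute is expected (the superhero comic), we ignore it; otherwise we discard the generation and try again. The names `generate_checked`, `attribute_expected`, and `n_tries` are illustrative assumptions, not anything from the post.

```python
# Reject-and-resample loop in the spirit of 'Best-of-N', reusing flag_undesirable,
# model, and tokenizer from the sketch above. The whole text (prompt + completion)
# is screened, since the simulated agent may appear anywhere in it.
def generate_checked(prompt: str, attribute_expected: bool = False, n_tries: int = 8) -> str | None:
    for _ in range(n_tries):
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=100, do_sample=True)
        completion = tokenizer.decode(out[0], skip_special_tokens=True)
        if attribute_expected or not flag_undesirable(completion):
            return completion        # accepted: no warning light, or the light is fine here
    return None                      # give up, or tweak the prompt and retry

# e.g. generate_checked("Please help me plan a UN disarmament conference.")
```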

This is of course limited to bad behavior that is recognizable and reliably interpretable during the standard forward pass of our LLM. So if we're using a lot of scaffolding on top of our LLM to construct something more agentic, then this might need to be further supplemented by, say, the translucent thoughts approach to alignment, which is of course much easier to implement if you have access to an LLM with a wired-in deceit detector.
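For the scaffolded case, one rough sketch (again building on the code above) is to screen each intermediate thought with the same detector before the agent acts on it. `scaffold_step` here is a hypothetical stand-in for one step of whatever agent loop actually sits on top of the LLM.

```python
# Screening a scaffolded agent's intermediate thoughts with the wired-in detector.
# Reuses model, tokenizer, and flag_undesirable from the earlier sketch.
def scaffold_step(task: str, history: list[str]) -> str:
    """Hypothetical stand-in for one plan/act step of an agent scaffold."""
    prompt = task + "\n" + "\n".join(history) + "\nNext step:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=60, do_sample=True)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

def run_agent(task: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        thought = scaffold_step(task, history)
        if flag_undesirable(thought):      # e.g. the deceit detector fires
            raise RuntimeError("Undesirable cognition flagged; halting the agent.")
        history.append(thought)
    return history
```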

1 comment

comment by Brendan Long (korin43) · 2024-03-19T02:01:05.299Z · LW(p) · GW(p)

One issue is figuring out who will watch the supervillain light. If we need someone monitoring everything the AI does, that puts some serious limits on what we can do with it (we can't use the AI for anything that we want to be cheaper than a human, or anything that requires superhuman response speed).