Open problems in activation engineering

post by TurnTrout, woog, lisathiergart, Monte M (montemac), Ulisse Mini (ulisse-mini) · 2023-07-24T19:46:08.733Z · LW · GW · 2 comments

This is a link post for https://coda.io/@alice-rigg/open-problems-in-activation-engineering

Steering GPT-2-XL by adding an activation vector [LW · GW] introduced 

> activation engineering... techniques which steer models by modifying their activations. As a complement to prompt engineering and finetuning, activation engineering is a low-overhead way to steer models at runtime.

These results were recently complemented by Inference-Time Intervention: Eliciting Truthful Answers from a Language Model [LW · GW], which doubled TruthfulQA performance by adding a similarly computed activation vector to forward passes! 
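The core move in both results is the same: compute a steering vector from a pair of contrasting inputs, then add it (scaled) to the model's hidden activations at inference time. A minimal sketch with a toy NumPy network standing in for a transformer layer (all sizes and names here are hypothetical, not from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one layer of a network (sizes are arbitrary).
d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, d_out))

def forward(x, steering=None, coeff=0.0):
    """Forward pass; optionally add a steering vector to the hidden activations."""
    h = np.tanh(x @ W1)              # the activations we intervene on
    if steering is not None:
        h = h + coeff * steering     # the activation addition itself
    return h @ W2

# Steering vector = activation difference between two contrasting inputs,
# mirroring the paired-prompt recipe from the GPT-2-XL post.
x_pos = rng.normal(size=d_in)
x_neg = rng.normal(size=d_in)
steer = np.tanh(x_pos @ W1) - np.tanh(x_neg @ W1)

x = rng.normal(size=d_in)
baseline = forward(x)
steered = forward(x, steering=steer, coeff=3.0)
```

In a real transformer the same intervention is typically implemented with a forward hook on a chosen layer; the point of the sketch is just that no weights change, only the forward pass.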

We think that activation engineering has a bunch of low-hanging fruit for steering and understanding models; the linked page collects a list of open problems.

If you want to work on activation engineering, come by the Slack server to coordinate research projects and propose new ideas.

2 comments


comment by Hoagy · 2023-07-28T03:08:58.011Z · LW(p) · GW(p)

> Try decomposing the residual stream activations over a batch of inputs somehow (e.g. PCA). Using the principal directions as activation addition directions, do they seem to capture something meaningful?
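The PCA variant of this problem can be sketched in a few lines of NumPy: collect activations over a batch, mean-center, take the SVD, and treat the right singular vectors as candidate addition directions. The synthetic array below stands in for real residual stream activations; all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for residual stream activations over a batch
# (256 token positions, 32-dimensional stream).
acts = rng.normal(size=(256, 32))

# PCA via SVD of the mean-centered activations.
centered = acts - acts.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
directions = Vt                      # rows: unit-norm principal directions

# Each row of `directions` is a candidate activation-addition vector;
# the matching entry of S says how much batch variance it captures.
top_direction = directions[0]
```

The open problem is then empirical: add `coeff * top_direction` during forward passes and check whether the model's behavior shifts in an interpretable way.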

It's not PCA, but we've been using sparse coding to find important directions in activation space (see the original sparse coding post [LW · GW], quantitative results [LW · GW], qualitative results [LW · GW]).

We've found that they're on average more interpretable than neurons, and I understand that @Logan Riggs [AF · GW] and Julie Steele have found some effect using them as directions for activation patching, e.g. using a "this direction activates on curse words" direction to make text more aggressive. If people are interested in exploring this further, let me know: say hi in our EleutherAI channel or check out the repo :)
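The sparse coding idea can be sketched minimally with ISTA (iterative soft-thresholding): given an overcomplete dictionary of unit-norm "directions", solve a lasso problem so each activation is explained by a few atoms. Everything below is a toy with a random dictionary and a synthetic activation, not the actual trained dictionaries from the linked posts.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_atoms = 16, 64                    # activation dim, dictionary size (toy)
D = rng.normal(size=(d, n_atoms))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms ("directions")

def sparse_code(x, D, lam=0.2, steps=200):
    """ISTA: minimize 0.5*||x - D c||^2 + lam*||c||_1 over codes c."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ c - x)
        c = c - grad / L
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # soft threshold
    return c

# A synthetic "activation" built from 3 dictionary atoms.
x = D[:, [3, 17, 40]] @ np.array([1.5, -2.0, 1.0])
code = sparse_code(x, D)
active = np.flatnonzero(np.abs(code) > 1e-3)   # the few atoms that fire
```

In practice one learns `D` from model activations (via dictionary learning or a sparse autoencoder) rather than sampling it randomly; the active atoms are then the interpretable "features" used for patching.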

comment by TurnTrout · 2023-09-14T17:32:06.216Z · LW(p) · GW(p)

(The original post was supposed to also have @Monte M [LW · GW] as a coauthor; fixed my oversight.)