Posts

Investigating the Ability of LLMs to Recognize Their Own Writing 2024-07-30T15:41:44.017Z
Representation Tuning 2024-06-27T17:44:33.338Z

Comments

Comment by Christopher Ackerman (christopher-ackerman) on Representation Tuning · 2024-10-02T17:13:25.180Z

Copy-pasted from the wrong tab. Thanks!

Comment by Christopher Ackerman (christopher-ackerman) on Representation Tuning · 2024-09-30T03:41:07.478Z

Thanks! Yes, that's exactly right. BTW, I've since written up this work more formally: https://arxiv.org/abs/2409.06927

Comment by Christopher Ackerman (christopher-ackerman) on Representation Tuning · 2024-07-07T12:25:49.802Z

Hi Gianluca, thanks! I agree that control vectors show a lot of promise for AI safety, and I like your idea of using multiple control vectors simultaneously. What you lay out there reminds me of an alternative approach to something like Constitutional AI. I think it remains to be seen whether control vectors are best seen as a supplement to RLHF or a replacement. If they require RLHF (or RLAIF) to have been done in order for these useful behavioral directions to exist in the model (and in my work and others' the most interesting results have come from RLHF'd models), then it's possible that "better" RLHF/RLAIF could obviate the need for them in the general use case, while they could still be useful for specialized purposes.
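For concreteness, here's a minimal sketch of what applying multiple control vectors simultaneously might look like, assuming you've already extracted per-behavior direction vectors (e.g., from contrastive activation differences) and are using a HuggingFace-style model; the vector names, coefficients, and layer choice are all hypothetical:

```python
import torch

def make_multi_steering_hook(vectors_and_coeffs):
    """Forward hook that adds several scaled control vectors to a layer's
    residual-stream output in a single pass."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        for vec, coeff in vectors_and_coeffs:
            # Broadcast each (hidden_dim,) vector across batch and sequence.
            hidden = hidden + coeff * vec.to(hidden.device, hidden.dtype)
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

# Hypothetical usage: steer toward honesty and away from sycophancy at once.
# handle = model.model.layers[15].register_forward_hook(
#     make_multi_steering_hook([(honesty_vec, 4.0), (sycophancy_vec, -2.0)])
# )
# ... model.generate(...) ...
# handle.remove()
```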

Comment by Christopher Ackerman (christopher-ackerman) on Representation Tuning · 2024-07-03T23:15:17.841Z

Hi Jan, thanks for the feedback! I suspect that fine-tuning had a stronger impact on output than steering in this case partly because it was easier to find an optimal value for the amount of tuning than for the steering coefficient, and partly because the tuning is there for every token; note in Figure 2C how the dishonesty direction is first "activated" a few tokens before generation. It would be interesting to look at exactly how the weights were changed and see whether any insights can be gleaned from that.
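As an illustration of that per-token picture, here's a minimal sketch of the kind of analysis behind Figure 2C: projecting each position's hidden state onto a behavior direction to see where it "activates". It assumes a HuggingFace causal LM and a precomputed direction; `dishonesty_vec` and the layer index are hypothetical stand-ins:

```python
import torch

@torch.no_grad()
def per_token_projection(model, tokenizer, text, direction, layer_idx):
    """Project each token position's hidden state at `layer_idx` onto a
    unit-normalized behavior direction."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    hidden_states = model(**inputs, output_hidden_states=True).hidden_states
    hidden = hidden_states[layer_idx][0]            # (seq_len, d_model)
    direction = direction / direction.norm()        # ensure unit norm
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return tokens, hidden @ direction.to(hidden.device, hidden.dtype)

# tokens, proj = per_token_projection(model, tokenizer, prompt, dishonesty_vec, 15)
# for tok, p in zip(tokens, proj.tolist()):
#     print(f"{tok:>12s}  {p:+.2f}")   # watch where the direction switches on
```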

I definitely agree about the need for more robust capabilities evaluations. This approach seems to me to have real safety potential, but proving that will take more analysis, which will just take some time to do.

Regarding adding a way to retain general capabilities, that was actually my original idea: I used a dual loss, with the second term being a standard token-based loss. But it turned out to be difficult to get right, and it wasn't necessary in this case. After writing this up, I was alerted to the Zou et al. Circuit Breakers paper, which did something similar but more sophisticated; I might try to adapt their approach.
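For anyone curious, a minimal sketch of that dual-loss idea follows: a representation term that pushes a chosen layer's activations toward a target direction, plus a standard token cross-entropy term to preserve general capability. The `alpha` weighting, layer index, and argument names are illustrative assumptions, not my exact setup:

```python
import torch
import torch.nn.functional as F

def dual_loss(model, batch, target_direction, layer_idx, alpha=0.5):
    """Weighted sum of a representation loss and a standard LM token loss."""
    outputs = model(**batch, labels=batch["input_ids"],
                    output_hidden_states=True)
    token_loss = outputs.loss                      # standard LM cross-entropy
    hidden = outputs.hidden_states[layer_idx]      # (batch, seq, d_model)
    # Representation loss: negative cosine similarity to the target
    # direction, averaged over all token positions.
    cos = F.cosine_similarity(hidden, target_direction.view(1, 1, -1), dim=-1)
    rep_loss = -cos.mean()
    return alpha * rep_loss + (1 - alpha) * token_loss
```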

Finally, the truth/lie-tuned models followed an existing approach in the literature to which I was offering an alternative, so a head-to-head comparison seemed fair; both approaches produce honest/dishonest models, it just seems that the representation-tuned one is more robust to steering. TBH I'm not familiar with GCG, but I'll check it out. Thanks for pointing it out.