Posts

Saarbrücken Germany - ACX Meetups Everywhere Fall 2024 2024-08-29T18:37:28.526Z
An Introduction to Representation Engineering - an activation-based paradigm for controlling LLMs 2024-07-14T10:37:21.544Z
Immunization against harmful fine-tuning attacks 2024-06-06T15:17:42.495Z
Training-time domain authorization could be helpful for safety 2024-05-25T15:10:45.013Z
Data for IRL: What is needed to learn human values? 2022-10-03T09:23:33.801Z
Introduction to Effective Altruism: How to do good with your career 2022-09-07T18:12:46.292Z

Comments

Comment by Jan Wehner on Activation Engineering Theories of Impact · 2024-07-19T09:08:45.385Z · LW · GW

Thanks for writing this, I think it's great to spell out the ToI behind this research direction!

You touch on this, but I wanted to make it explicit: Activation Engineering can also be used to detect when a system is "thinking" about some dangerous concept. If you have a steering vector for, e.g., honesty, you can measure its similarity with the activations during a forward pass to find out whether the system is being dishonest.
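
For concreteness, a minimal sketch of what that detection could look like (my own illustration, not from the post; `acts` and `honesty_vec` are hypothetical and would come from a forward hook and contrastive prompts respectively):

```python
import torch
import torch.nn.functional as F

def concept_score(activations: torch.Tensor, steering_vector: torch.Tensor) -> torch.Tensor:
    """Per-token cosine similarity between activations and a concept vector.

    activations:     (seq_len, d_model) residual-stream activations from one layer
    steering_vector: (d_model,) direction for a concept, e.g. honesty
    """
    return F.cosine_similarity(activations, steering_vector.unsqueeze(0), dim=-1)

# Hypothetical usage: `acts` captured with a forward hook, `honesty_vec` built
# from contrastive prompt pairs; a low mean score could flag possible dishonesty.
# scores = concept_score(acts, honesty_vec)
# dishonest = scores.mean() < some_threshold
```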

You might also be interested in my (less thorough) summary of the ToIs for Activation Engineering.

Comment by Jan Wehner on An Introduction to Representation Engineering - an activation-based paradigm for controlling LLMs · 2024-07-15T13:25:23.884Z · LW · GW

Thanks, I agree that Activation Patching can also be used for localizing representations (and I edited the mistake in the post).

Comment by Jan Wehner on Representation Tuning · 2024-07-02T16:15:54.559Z · LW · GW

Hey Christopher, this is really cool work. I think your idea of representation tuning is a very nice way to combine activation steering and fine-tuning. Do you have any intuition as to why fine-tuning towards the steering vector sometimes works better than simply steering towards it?

If you keep working on this, I’d be interested to see a more thorough evaluation of capabilities (more than just perplexity) by running it on some standard LM benchmarks. Whether the model retains its capabilities seems important for understanding the safety-capabilities trade-off of this method.

I’m curious whether you tried adding a term to the representation-tuning loss to retain general capabilities, e.g. regularising the activations to stay close to the original activations, or adding a standard language-modelling loss?
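
For concreteness, here is a rough sketch of what such a combined loss could look like (purely illustrative, not the loss used in the post; all names and weights are hypothetical):

```python
import torch
import torch.nn.functional as F

def tuning_loss(tuned_acts, orig_acts, target_vec, logits, labels,
                lambda_reg: float = 0.1, lambda_lm: float = 0.1):
    """Hypothetical combined objective for representation tuning.

    tuned_acts: (batch, d_model) activations of the model being tuned
    orig_acts:  (batch, d_model) activations of the frozen original model
    target_vec: (d_model,) steering vector to tune towards (e.g. honesty)
    logits, labels: standard next-token prediction tensors
    """
    steer = -F.cosine_similarity(tuned_acts, target_vec.unsqueeze(0), dim=-1).mean()
    reg = F.mse_loss(tuned_acts, orig_acts)              # stay close to original activations
    lm = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))  # retain LM ability
    return steer + lambda_reg * reg + lambda_lm * lm
```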

As a nitpick: I think the comparison in the Robustness of Tuned Models evaluation advantages the honesty-tuned model. If I understand correctly, the honesty-tuned model was specifically trained to be less like the vector used for dishonesty steering, whereas the truth-tuned model wasn’t. Maybe a fairer comparison would use automatic adversarial attack methods like GCG?

Again, I think this is a very cool project!

Comment by Jan Wehner on Data for IRL: What is needed to learn human values? · 2022-10-04T09:37:46.182Z · LW · GW

I agree that focusing too much on gathering data now would be a mistake. I believe thinking about data for IRL now is mostly valuable for identifying the challenges that make IRL hard. Then we can try to develop algorithms that solve these challenges, or find out that IRL is not a tractable solution for alignment.

Comment by Jan Wehner on Data for IRL: What is needed to learn human values? · 2022-10-04T09:16:40.135Z · LW · GW

Thank you, Erik, that was super valuable feedback and gives me some food for thought.

It also seems to me that humans being suboptimal planners and not knowing everything the AI knows are the hardest (and most informative) problems in IRL. I'm curious what you'd think about this approach for addressing the suboptimal-planner sub-problem: "Include models from cognitive psychology of human decision-making in IRL, to allow IRL to better understand the decision process." This would give IRL more realistic assumptions about the human planner and possibly allow it to understand its irrationalities and get to the values which drive behaviour.
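
As a toy illustration of the kind of model I mean (my own sketch of a standard Boltzmann-rational observation model, not anything from the post; a single inverse-temperature parameter stands in for a richer cognitive model):

```python
import numpy as np

def boltzmann_action_likelihood(q_values: np.ndarray, action: int, beta: float = 1.0) -> float:
    """P(action | state) under a noisily-rational ("Boltzmann") human model.

    q_values: Q(s, a) for every action under a candidate reward function
    beta:     inverse temperature; lower beta models a more suboptimal planner
    A richer cognitive model could add systematic biases (myopia, risk
    distortion, ...) before the softmax instead of just noise.
    """
    exp_q = np.exp(beta * (q_values - q_values.max()))   # numerically stable softmax
    return float(exp_q[action] / exp_q.sum())
```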

Also, do you have a pointer to something to read on preference comparisons?