Posts

Alignment Faking in Large Language Models 2024-12-18T17:19:06.665Z
Sycophancy to subterfuge: Investigating reward tampering in large language models 2024-06-17T18:41:31.090Z
Reward hacking behavior can generalize across tasks 2024-05-28T16:33:50.674Z
Simple probes can catch sleeper agents 2024-04-23T21:10:47.784Z
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training 2024-01-12T19:51:01.021Z
Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research 2023-08-08T01:30:10.847Z
How do I Optimize Team-Matching at Google 2022-02-24T22:10:50.793Z

Comments

Comment by Carson Denison (carson-denison) on Sycophancy to subterfuge: Investigating reward tampering in large language models · 2024-06-29T00:29:13.483Z

Thank you for pointing this out. I should have been clearer.

I have added a link to the 7 samples in which the model tampers with both its reward and the tests, according to our operational definition: https://github.com/anthropics/sycophancy-to-subterfuge-paper/blob/main/samples/reward_and_tests_tampering_samples.md

I have added the following sentence to the caption of figure one (which links to the markdown file above):

We have uploaded the 7 (of 32,768) samples in which the model overwrites its reward and edits the unit tests to our GitHub repository. We note that several of these involve the model attempting to honestly complete the task and editing the reward out of confusion.


I have added the following section to the discussion:

The Model's Hidden Reasoning:

While we see quantitative generalization on our tasks, and some transcripts where the model's reasoning is clearly misaligned, the model's reasoning often appears earnest even when it tampers with its reward. Looking at all the episodes in which the model tampers with its reward, we see some transcripts whose CoT reasoning looks malicious, some that look benign, and some that simply look confused. All seven transcripts in which the model trained with HHH expert iteration edits the reward and gets away with it can be found at [the link above].

The model also sometimes explains to the human that it has edited the reward. It does this both with honest explanations and with excuses that don't match its hidden reasoning, and we do not know with confidence what influences this behavior. Such explanations could fool the preference model (PM) into thinking the specification gaming is justified, so this explanatory behavior may be incentivized in earlier training stages. However, we also see such explanations directed at the human when using exploit-only expert iteration, which has no preference-model reward, so the preference model is not a complete explanation for this behavior. There are also cases where the model performs malign reasoning outside of its <cot> tags that does not appear to be directed at the human; these appear to be errors on the part of the model.

These caveats do not change our core result: we see generalization from relatively benign specification gaming to more serious misalignment. However, one should keep in mind that not all examples of successful reward tampering involve coherent misaligned reasoning.

These changes should go live on arXiv on Monday at around 5pm, per the arXiv release schedule.

Comment by Carson Denison (carson-denison) on Mechanistically Eliciting Latent Behaviors in Language Models · 2024-05-01T00:57:51.123Z

This is cool work! There are two directions I'd be excited to see explored in more detail:

  1. You mention that you have to manually tune the value of the perturbation weight R. Do you have ideas for how to automatically determine an appropriate value? This would significantly increase the scalability of the technique: one could then easily generate thousands of unsupervised steering vectors and use a language model to flag any weird downstream behavior (a rough sketch of such a pipeline follows this list).
  2. It is very exciting that you were able to uncover a backdoored behavior without prior knowledge of it. However, it seems difficult to know whether weird behavior from a model is the result of an intentional backdoor or just strange out-of-distribution (OOD) behavior. Do you have ideas for how you might determine whether a given behavior is the result of a specific backdoor, or how you might find the trigger? I imagine you could do some sort of GCG-like attack to find a prompt which leads to activations similar to the steering vectors. You might also be able to use the steering vector to generate more diverse data on which to train a traditional dictionary with an SAE.
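
To make the first suggestion concrete, here is a minimal sketch of what an automated pipeline might look like: for each unsupervised steering vector, sweep a grid of perturbation weights R, generate completions, and let a judge model flag anomalous ones for human review. This is purely illustrative, not the MELBO implementation; `generate_with_steering` and `judge_weirdness` are hypothetical stand-ins for the real steering and judging calls.

```python
# Hypothetical sketch of auto-tuning the perturbation weight R and using a
# judge model to flag weird completions. Not the MELBO code; the two helper
# functions below are placeholders for the real steering / judging calls.

import random
from typing import List, Optional, Tuple


def generate_with_steering(prompt: str, vector_id: int, r: float) -> str:
    """Placeholder: return a completion from the model steered by
    unsupervised vector `vector_id` scaled by perturbation weight `r`."""
    return f"[completion for vector {vector_id} at R={r:.2f}]"


def judge_weirdness(prompt: str, completion: str) -> float:
    """Placeholder: a language-model judge scoring how anomalous the
    completion is relative to the unsteered model (0 = normal, 1 = weird)."""
    return random.random()


def auto_tune_r(prompt: str, vector_id: int, r_grid: List[float],
                weird_threshold: float = 0.8) -> Tuple[Optional[float], List[str]]:
    """Pick the smallest R whose completion the judge flags as weird, and
    collect all flagged completions for human review."""
    flagged: List[str] = []
    chosen_r: Optional[float] = None
    for r in sorted(r_grid):
        completion = generate_with_steering(prompt, vector_id, r)
        score = judge_weirdness(prompt, completion)
        if score >= weird_threshold:
            flagged.append(completion)
            if chosen_r is None:
                chosen_r = r
    return chosen_r, flagged


if __name__ == "__main__":
    r_candidates = [0.5, 1.0, 2.0, 4.0, 8.0]
    for vec in range(3):  # in practice: thousands of unsupervised vectors
        r, weird = auto_tune_r("Tell me about your goals.", vec, r_candidates)
        print(f"vector {vec}: first weird R = {r}, flagged = {len(weird)}")
```

The point of the loop is just that the judge, rather than a human, decides which (vector, R) pairs are worth looking at, which is what would let the search scale to thousands of vectors.
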
Comment by Carson Denison (carson-denison) on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-23T19:52:21.658Z

Having just finished reading Scott Garrabrant's sequence on geometric rationality (https://www.lesswrong.com/s/4hmf7rdfuXDJkxhfg), these lines:
- Give a de-facto veto to each major faction
- Within each major faction, do pure democracy.
remind me very much of additive expectations / maximization within coordinated objects and multiplicative expectations / maximization between adversarial ones. For example, maximizing the expectation of reward within a hypothesis, but sampling which hypothesis to listen to for a given action in proportion to its expected utility rather than just taking the max; a toy sketch of this contrast is below.
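
To gesture at what I mean, here is a toy numerical sketch (my own illustration, not from the sequence or the post): each hypothesis picks its favorite action by plain additive maximization, but which hypothesis gets to decide is sampled in proportion to expected utility rather than handed to the single highest-utility hypothesis every time. The hypothesis names and payoff numbers are made up.

```python
# Toy illustration (made-up payoffs): additive maximization *within* each
# hypothesis, proportional sampling *between* hypotheses instead of argmax.

import random

# Expected reward of each action under each hypothesis (made-up values).
payoffs = {
    "hypothesis_A": {"action_1": 10.0, "action_2": 1.0},
    "hypothesis_B": {"action_1": 0.0, "action_2": 8.0},
}


def best_action(hypothesis: str) -> str:
    """Within a hypothesis: plain additive maximization of expected reward."""
    return max(payoffs[hypothesis], key=payoffs[hypothesis].get)


def expected_utility(hypothesis: str) -> float:
    """Value a hypothesis assigns to getting its own preferred action."""
    return payoffs[hypothesis][best_action(hypothesis)]


def pure_max_choice() -> str:
    """'Just take the max': the highest-utility hypothesis decides every time."""
    winner = max(payoffs, key=expected_utility)
    return best_action(winner)


def proportional_sample_choice(rng: random.Random) -> str:
    """Between hypotheses: sample who decides, weighted by expected utility,
    so weaker hypotheses still control some share of actions."""
    hyps = list(payoffs)
    weights = [expected_utility(h) for h in hyps]
    chosen = rng.choices(hyps, weights=weights, k=1)[0]
    return best_action(chosen)


if __name__ == "__main__":
    rng = random.Random(0)
    print("pure max always picks:", pure_max_choice())
    samples = [proportional_sample_choice(rng) for _ in range(1000)]
    for action in sorted(set(samples)):
        share = samples.count(action) / len(samples)
        print(f"proportional sampling picks {action} {share:.0%} of the time")
```

The proportional-sampling rule is playing the role of the veto / power-sharing arrangement: the weaker faction still controls a share of decisions instead of being outvoted on every single one.
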

Comment by Carson Denison (carson-denison) on Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research · 2023-08-08T17:33:47.676Z

Thank you for catching this. 

These linked to section titles in the draft Google Doc for this post. I have replaced them with references to the corresponding sections of this post.