Posts

Training on Documents About Reward Hacking Induces Reward Hacking 2025-01-21T21:32:24.691Z

Comments

Comment by Nathan Hu (nathan-hu) on Training on Documents About Reward Hacking Induces Reward Hacking · 2025-04-12T21:21:51.379Z · LW · GW

Apologies for the slow response on this. There was an issue with the link; this one should point to the files with the correct access permissions: https://drive.google.com/drive/folders/1QUwJTIqwYH2eskaoRtDRgnMt0YHtyDYA

Comment by Nathan Hu (nathan-hu) on Training on Documents About Reward Hacking Induces Reward Hacking · 2025-01-28T01:04:18.482Z · LW · GW

The reduction in reward hacking after SFT or RL on Haiku supports the conjecture that initial conditions matter less than the long-run incentives, especially for less capable models. On the other hand, the alignment faking paper shows evidence that capable models can exhibit "value crystallization." IMO a main takeaway here is that the values and personas we might worry about being locked in can emerge from pre-training. An exciting future model organisms project would be to try to show these two effects together (emergent values from pre-training + lock-in). It's plausible to me that repeating the above experiments, with some changes to the synthetic documents and starting from a stronger base model, might just work.
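
For concreteness, here's a minimal sketch of what that two-stage experiment could look like, assuming a Hugging Face-style training stack. The base model name, data files, eval prompts, and the hack detector are all placeholders I'm introducing for illustration, not anything from the original post.

```python
# Hypothetical sketch: induce values via continued pretraining on synthetic
# documents, then check whether they "lock in" through a later alignment
# fine-tuning stage. Model name, data files, and the hack-detection
# heuristic are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "stronger-base-model"  # placeholder for a more capable base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)


def finetune(model, data_file, output_dir):
    """Run one fine-tuning stage over a jsonl file with a 'text' column."""
    dataset = load_dataset("json", data_files=data_file)["train"]
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=1),
        train_dataset=dataset.map(tokenize, batched=True, remove_columns=["text"]),
        data_collator=collator,
    )
    trainer.train()


def measure_reward_hacking(model, tokenizer, prompts, is_hack):
    """Placeholder eval: fraction of sampled completions flagged as hacks."""
    hacks = 0
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=256, do_sample=True)
        completion = tokenizer.decode(output[0], skip_special_tokens=True)
        hacks += is_hack(completion)  # task-specific detector (placeholder)
    return hacks / len(prompts)


eval_prompts = ["<held-out reward-hackable task>"]  # placeholder prompts
is_hack = lambda completion: False  # replace with a real hack detector

# Stage 1: continued pretraining on synthetic documents about reward hacking.
finetune(model, "synthetic_reward_hacking_docs.jsonl", "stage1")
rate_after_pretraining = measure_reward_hacking(model, tokenizer, eval_prompts, is_hack)

# Stage 2: ordinary alignment fine-tuning (SFT here; the post also used RL).
finetune(model, "alignment_sft_data.jsonl", "stage2")
rate_after_alignment = measure_reward_hacking(model, tokenizer, eval_prompts, is_hack)

# Lock-in would show up as rate_after_alignment staying close to
# rate_after_pretraining rather than falling back toward the base model's rate.
```

The point of splitting it into two measured stages is that it separates the two effects: stage 1 tests whether the values emerge from (continued) pre-training at all, and stage 2 tests whether they persist through the kind of downstream training that erased them in the Haiku experiments.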