Comments
I don't quite understand why the model would choose to comply with the user's harmful question in the always-helpful free-tier setup. The post says that if the model refused to respond, it could potentially be trained to be more compliant. But what about the other direction: if the model does respond, that response will also be used to train the model to be worse (because RLHF reinforces positively rewarded behavior). Is the model too dumb to figure this out? Actually, if the model really understood how RLHF works, it should probably just give the same answer every time, so that all rewards are equal and its behavior won't be changed.
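To make that last point concrete, here is a toy sketch of my own (not from the post), assuming a REINFORCE-style policy-gradient update with a mean-reward baseline; the function names and numbers are made up for illustration. If every sampled response gets the same reward, the advantage (reward minus baseline) is zero everywhere, the gradient vanishes, and the policy is left unchanged:

```python
import numpy as np

# Toy REINFORCE-style update with a mean-reward baseline.
# If all sampled responses receive the same reward, the advantage
# (reward - baseline) is zero for every sample, so the gradient
# vanishes and the policy is unchanged.

def policy(logits):
    # Softmax over a small set of canned responses.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def reinforce_update(logits, rewards, actions, lr=0.1):
    probs = policy(logits)
    baseline = np.mean(rewards)              # mean-reward baseline
    grad = np.zeros_like(logits)
    for r, a in zip(rewards, actions):
        advantage = r - baseline             # zero if all rewards are equal
        one_hot = np.eye(len(logits))[a]
        grad += advantage * (one_hot - probs)  # grad of log pi(a | logits)
    return logits + lr * grad

logits = np.zeros(3)  # uniform policy over 3 responses

# Case 1: the model gives the same answer every time -> identical rewards.
print(reinforce_update(logits, rewards=[1.0, 1.0, 1.0], actions=[0, 0, 0]))
# -> [0. 0. 0.]  (no update)

# Case 2: mixed behavior -> reward differences create a gradient.
print(reinforce_update(logits, rewards=[1.0, 0.0, 0.0], actions=[0, 1, 2]))
# -> logit for action 0 increases; behavior shifts toward it
```

(One caveat: without the baseline, uniformly positive rewards would still reinforce whatever the model said, so the "same answer every time" trick only neutralizes baseline-subtracted updates.)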
Would this suggest that the model has just entered a fictional role-play mode and is merely trying to appear to be "alignment faking"?
Interesting read! I'm curious about your predictions for AI-safety-related progress, and I'm not sure how much impact that would have on your current prediction.