Pre-Training + Fine-Tuning Favors Deception

post by Mark Xu (mark-xu) · 2021-05-08T18:36:06.236Z · LW · GW · 3 comments

Thanks to Evan Hubinger for helpful comments and discussion.

Currently, to obtain models useful for some task X, models are pre-trained on some task Y, then fine-tuned on task X. For example, to obtain a model that can summarize articles, a large language model is first pre-trained on next-token prediction over Common Crawl, then fine-tuned on article summarization. Given the empirical success of this paradigm and the difficulty of obtaining labeled data, I loosely expect this trend to continue.
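
To make the setup concrete, here is a minimal sketch of the two-stage training loop. The toy model, random placeholder data, and hyperparameters are all my own assumptions rather than anything from the post; a real setup (e.g. a large language model pre-trained on Common Crawl, then fine-tuned on article summaries) differs mainly in scale, not in structure.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64

# Toy stand-in for a large language model: embed a token, predict the next one.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def toy_batches(num_batches, batch_size=32):
    """Random token ids standing in for real data (a web corpus, summaries, ...)."""
    for _ in range(num_batches):
        inputs = torch.randint(0, vocab_size, (batch_size,))
        targets = torch.randint(0, vocab_size, (batch_size,))
        yield inputs, targets

def train(batches):
    for inputs, targets in batches:
        logits = model(inputs)            # (batch, vocab_size)
        loss = loss_fn(logits, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Stage 1: pre-train on task Y (e.g. next-token prediction over a large corpus).
train(toy_batches(1000))

# Stage 2: fine-tune the *same* weights on task X (e.g. summarization),
# typically with far less data and a smaller learning rate.
for group in optimizer.param_groups:
    group["lr"] = 1e-5
train(toy_batches(50))
```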

I will argue that compared to the paradigm of training a model on X directly, training on Y and then fine-tuning on X increases the chance of deceptive alignment. More specifically, I will argue that fine-tuning a deceptive model will produce a deceptive model, and that fine-tuning a non-deceptive model is more likely to produce a deceptive model than training from scratch is.

From Does SGD Produce Deceptive Alignment? [AF · GW]:

More specifically, we can single out three types of models that optimize for the base objective:

  • internally aligned models optimize for the base objective because it is equal to the model's terminal objective
  • corrigibly aligned models optimize for the base objective because their terminal objective is a robust pointer to a model of the base objective. Note that these are called corrigible because they are amenable to corrections of the model of the base objective (though not necessarily corrections to their pointer).
  • deceptively aligned models optimize for the base objective for instrumental reasons

Assume that pre-training has produced a model that optimizes for the pre-training loss and is one of the above types. The arguments are weaker if pre-training is imperfect, i.e. if the resulting model only partially optimizes for the pre-training loss. If the model is not one of the above types, then these arguments do not apply.

We can analyze the three cases separately:

In the first case, changing an internal objective from the pre-training loss to the fine-tuning loss seems about as difficult as learning the fine-tuning loss from scratch. Since the model starts out highly capable, deception has a strict advantage here. For example, suppose a model was pre-trained on prediction and fine-tuned on approval. Since the pre-training data is vast, the model will already understand human approval. Performing well on the fine-tuning loss is thus likely to involve routing the model's existing capabilities through that existing understanding, rather than incorporating human approval into the model's internal objective.

In the second case, in order for fine-tuning to produce a corrigibly aligned model, the pointer must be redirected from the pre-training loss to the fine-tuning loss. In order for fine-tuning to produce a deceptively aligned model, the model must understand the objective it's being trained on, which also requires a pointer (one the model builds within its world model, rather than one SGD builds into its objective).

During normal training, the amount of optimization power the model can apply to a task slowly increases, while the amount of optimization power SGD can apply remains roughly constant. Since we are starting fine-tuning with an already capable model, this gap starts out wider than if we were training from scratch. Since deceptive models can build pointers at runtime, they will thus be able to apply more optimization power to pointer construction than SGD can. As a result, the model will become deceptive faster than it can become corrigible. However, the model also begins fine-tuning with a detailed world model, which lets SGD start building a pointer to the fine-tuning loss earlier than it could when training from scratch. But since deception also cannot arise until the model has a detailed world model, this consideration does not favor corrigibility any more during fine-tuning than it does during training from scratch.

In the third case, in order for fine-tuning to produce an internally or corrigibly aligned model, fine-tuning must align the model faster than the model can figure out the fine-tuning objective. Since the model was deceptive during pre-training, it already understands most of the training setup. In particular, it probably understood that it was being pre-trained and predicted that it would subsequently get fine-tuned, thus making fine-tuning overwhelmingly likely to produce a deceptive model. There are considerations about the type of deceptive alignment one gets during pre-training that I have ignored. See Mesa-Search vs Mesa-Control [AF · GW] for further discussion.

The above arguments assume that pre-training + fine-tuning and training on the fine-tuning task directly produce models that are equally capable. This assumption is likely false: when training on the desired task directly, one probably will not have enough data to achieve high capabilities. If the desired task is something like imitative amplification, suboptimal capabilities might produce an imperfect approximation of HCH, which might be catastrophic even if HCH is benign. There are other reasons why pre-training is beneficial for alignment that I will not discuss.

Overall, holding constant the capabilities of the resulting model, pre-training + fine-tuning increases the probability of deceptive alignment. It is still possible that pre-training is net-beneficial for alignment. Exploring ways of doing pre-training that dodge the arguments for deceptive alignment is a potentially fruitful avenue of research.

3 comments

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-05-08T21:42:01.626Z · LW(p) · GW(p)

Nice post! You may be interested in this related post and discussion. [LW · GW]

I think you may have forgotten to put a link in "See Mesa-Search vs Mesa-Control for discussion."

Replies from: mark-xu
comment by Mark Xu (mark-xu) · 2021-05-09T01:57:58.700Z · LW(p) · GW(p)

thanks, fixed

comment by Nora Belrose (nora-belrose) · 2023-04-03T20:02:27.422Z · LW(p) · GW(p)

Assume that pre-training has produced a model that optimizes for the pre-training loss and is one of the above types.

As you note, this is an important assumption for the argument, and I think it's likely false, at least for self-supervised pre-training tasks. I don't think LLMs, for example, are well-described as "optimizing for" low perplexity at inference time. It's not even clear to me what that would mean, since there is no ground-truth next token during autoregressive generation, so "low perplexity" is not defined. Rather, SGD simply produces a bundle of heuristics defining a probability distribution that matches the empirical distribution of human text quite well.

I do think your argument may apply to cases where you pre-train on an RL task and fine-tune on another one, although even there it's unclear [LW · GW].