Catastrophic Goodhart in RL with KL penalty

post by Thomas Kwa (thomas-kwa), Adrià Garriga-alonso (rhaps0dy) · 2024-05-15T00:58:20.763Z · LW · GW · 10 comments

Contents

  Abstract
  Intuitive explanation of catastrophic Goodhart with a KL penalty
  Results
    X heavy tailed, V light tailed: EV→0
    X,V have light tails and are independent: EV→∞
  How likely is heavy-tailed error?
  Limitations
    Goodhart is not inevitable
    Goodhart seems preventable
    Goodhart is not a treacherous turn
  Conclusion
  Related work

TLDR: In the last [LW · GW] two [LW · GW] posts, we showed that optimizing for a proxy can fail to increase true utility, but only when the error is heavy-tailed. We now show that this also happens in RLHF with a KL penalty.

This post builds on our earlier result with a more realistic setting and assumptions.

Abstract

When applying KL regularization, the trained model is regularized towards some base policy $\pi_0$. One would hope that a KL penalty can produce good outcomes even in the case of reward misspecification; that is, if the reward $U$ is the sum of true utility $V$ and an error term $X$, we would hope that optimal policies under a KL penalty achieve high $V$ even if the magnitude of $X$ is large. We show that this is not always the case: when $X$ is heavy-tailed, there are arbitrarily well-performing policies $\pi$ with $\mathbb{E}_\pi[V] = \mathbb{E}_{\pi_0}[V]$; that is, that get no higher true utility than the prior. However, when error is light-tailed and independent of $V$, the optimal policy under a KL penalty results in $\mathbb{E}[V] > 0$, and $\mathbb{E}[V]$ can be made arbitrarily large. Thus, the tails of the error distribution are crucial in determining how much utility will result from optimization towards an imperfect proxy.

Intuitive explanation of catastrophic Goodhart with a KL penalty

Recall that the KL divergence between two distributions $P$ and $Q$ is defined as

$$D_{KL}(P \,\|\, Q) = \mathbb{E}_{x \sim P}\left[\log \frac{P(x)}{Q(x)}\right].$$

If we have two policies $\pi$ and $\pi_0$, we abuse notation to define $D_{KL}(\pi \,\|\, \pi_0)$ as the KL divergence between the distributions of actions taken on the states in trajectories reached by $\pi$. That is, if $T(\pi)$ is the distribution of trajectories taken by $\pi$, we penalize

$$D_{KL}(\pi \,\|\, \pi_0) := \mathbb{E}_{\tau \sim T(\pi)}\left[\sum_{s \in \tau} D_{KL}\big(\pi(\cdot \mid s) \,\|\, \pi_0(\cdot \mid s)\big)\right].$$

This strongly penalizes $\pi$ taking actions the base policy never takes, but does not force the policy to take all actions the base policy takes.

If our reward model gives reward $U(\tau)$ to trajectory $\tau$, then the optimal policy for RLHF with a KL penalty coefficient $\beta$ is:

$$\pi^*(\tau) = \frac{1}{Z}\, \pi_0(\tau)\, \exp\!\left(\frac{U(\tau)}{\beta}\right), \qquad Z = \mathbb{E}_{\tau \sim \pi_0}\!\left[\exp\!\left(\frac{U(\tau)}{\beta}\right)\right].$$
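As a toy illustration of this closed form (a sketch, not from the original analysis: the helper name, trajectory probabilities, and rewards below are made up), the optimum over a finite set of trajectories is just an exponential tilting of the base distribution, and shrinking $\beta$ trades KL for proxy reward:

```python
import numpy as np

def kl_regularized_optimum(base_probs, rewards, beta):
    """Optimal trajectory distribution for E_pi[U] - beta * KL(pi || pi_0):
    an exponential tilting of the base distribution by U / beta."""
    logits = np.log(base_probs) + np.asarray(rewards, dtype=float) / beta
    pi = np.exp(logits - logits.max())   # subtract max for numerical stability
    return pi / pi.sum()

# Three trajectories; the last is rare under the base policy but has a huge
# (possibly spurious) reward.
base = np.array([0.7, 0.2999, 0.0001])
reward = np.array([0.0, 1.0, 50.0])

for beta in [10.0, 2.0, 0.5]:
    pi = kl_regularized_optimum(base, reward, beta)
    kl = np.sum(pi * np.log(pi / base))
    print(f"beta={beta:<4} pi={pi.round(4)}  E[U]={pi @ reward:6.2f}  KL={kl:.3f}")
```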

Suppose we have an RL environment with reward $U = X + V$, where $X$ is an error term that is heavy-tailed under $\pi_0$, and $V$ is the “true utility”, assumed to be light-tailed under $\pi_0$. Without loss of generality, we assume $\mathbb{E}_{\pi_0}[V] = 0$. If we optimize for $\mathbb{E}_\pi[U] - \beta\, D_{KL}(\pi \,\|\, \pi_0)$, there is no maximum because this expression is unbounded. In fact, it is possible to get $\mathbb{E}_\pi[U] > M$ and $D_{KL}(\pi \,\|\, \pi_0) < \epsilon$ for any $M$ and any $\epsilon > 0$. That is, we get arbitrarily large proxy reward $U$ and an arbitrarily small KL penalty.

For such policies $\pi$, it is necessarily the case that $\mathbb{E}_\pi[V] \to 0$; that is, for policies with low KL penalty, utility goes to zero. As in the previous post, we call this catastrophic Goodhart because the utility produced by our optimized policy is as bad as if we hadn’t optimized at all. This is a corollary of a property of distributions (Theorems 1 and 3 below), which we apply to the case of RLHF with unbounded rewards (Theorem 2).

The manner in which these pathological policies $\pi$ achieve high $\mathbb{E}_\pi[U]$ is also concerning: most of the time they match the base policy $\pi_0$, but a tiny fraction of the time they pick trajectories with extremely high reward. Thus, if we only observe actions from the policy $\pi$, it could be difficult to tell whether $\pi$ is Goodharting or identical to the base policy.
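To get a feel for the scale involved, here is a toy two-outcome calculation (the numbers $p_0$ and $\epsilon$ are made up): a policy that copies the base trajectory distribution, except that it takes one particular trajectory of base probability $p_0$ with probability $\epsilon$, pays a KL cost of only about $\epsilon \log(\epsilon / p_0)$.

```python
import numpy as np

def mixture_kl(p0, eps):
    """KL cost of deviating to one target trajectory (base prob p0) with prob eps,
    while otherwise following the base distribution. Grouping all other
    trajectories into one bucket gives the exact KL, since their probabilities
    are all scaled by the same factor (1 - eps)."""
    q = np.array([p0, 1.0 - p0])                                # base policy
    p = np.array([(1 - eps) * p0 + eps, (1 - eps) * (1 - p0)])  # deviating policy
    return np.sum(p * np.log(p / q))

print(mixture_kl(p0=1e-6, eps=1e-3))   # ~0.006 nats
```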

Results

Full proofs are in the appendix [LW · GW] post.

X heavy tailed, V light tailed: $\mathbb{E}[V] \to 0$

We'll start by demonstrating the key fact about distributions that makes this proof work: in a heavy-tailed distribution, you can have arbitrarily high mean with arbitrarily low KL divergence.

Theorem 1: Given any heavy-tailed reference distribution $Q$ over $\mathbb{R}$ with mean $\mu_Q$, and any $M, \epsilon > 0$, there is a distribution $P$ with mean $\mu_P > M$ and $D_{KL}(P \,\|\, Q) < \epsilon$.

Proof sketch (see appendix [LW · GW] for full proof): WLOG take $\mu_Q = 0$. If we set $P_t$ to upweight the probability mass of $Q$ above a threshold $t$ to $t^{-1/2}$ for some large $t$, then the mean of $P_t$ will be approximately at least $t^{1/2}$. As $t \to \infty$, the KL divergence $D_{KL}(P_t \,\|\, Q)$ will shrink to zero.

The intuition is that in a heavy-tailed distribution, events with extremely high $X$ are not very rare, so you don’t pay much of a KL penalty to upweight them so that they happen about $t^{-1/2}$ of the time. We hope the animation below intuitively explains this fact:

As $t \to \infty$, the mean of $X$ grows without bound while the KL divergence goes to 0. The prior distribution $Q$ is a Student t-distribution with df=3. In this case, high values of $X$ are upweighted to probability $t^{-1/2}$; upweighting them to $t^{-1}$ would cause $\mathbb{E}_P[X]$ to converge to ~1 while the KL divergence goes to zero faster.
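For readers who prefer numbers to animations, here is a small numerical sketch of the same construction, assuming the Student t (df=3) base distribution and the $t^{-1/2}$ upweighting described above (the helper name and threshold values are just for illustration):

```python
import numpy as np
from scipy import stats, integrate

q = stats.t(df=3)   # heavy-tailed base distribution Q (mean 0)

def upweighted_mean_and_kl(t):
    """P_t rescales Q so that the upper tail {X > t} has probability t**-0.5,
    leaving Q's shape unchanged on each side of t. Returns E_{P_t}[X] and
    KL(P_t || Q); the KL is exact because the density ratio is piecewise constant."""
    p_hi, q_hi = t ** -0.5, q.sf(t)
    p_lo, q_lo = 1.0 - p_hi, 1.0 - q_hi
    # E[X; X > t] under Q; since E_Q[X] = 0, the lower part is its negative.
    upper = integrate.quad(lambda x: x * q.pdf(x), t, np.inf, epsabs=0)[0]
    mean = p_hi * (upper / q_hi) + p_lo * (-upper / q_lo)
    kl = p_lo * np.log(p_lo / q_lo) + p_hi * np.log(p_hi / q_hi)
    return mean, kl

for t in [10, 100, 1_000, 10_000]:
    mean, kl = upweighted_mean_and_kl(t)
    print(f"t={t:>6}  E_P[X]={mean:8.1f}  KL(P_t||Q)={kl:.3f}")
```

The mean keeps growing while the KL divergence keeps shrinking, exactly as in the animation.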

We now adapt our result to the case where our policy is a language model and we are training it using RLHF. We are now applying a KL penalty over policies, which is a different distribution from the distribution of returns, but a similar result holds:

Theorem 2: Let $\mathcal{M}$ be a deterministic-transition MDP with Markovian returns. Given $\mathcal{M}$, we define the function $T : \Pi \to \Delta(\mathcal{T})$ that takes policies to distributions over trajectories, and the average return function $g : \mathcal{T} \to \mathbb{R}$, which induces a map $G : \Pi \to \Delta(\mathbb{R})$ from policies to return distributions. Let $\pi_0 \in \Pi$ be some base policy. If $G(\pi_0)$ is heavy-tailed with finite mean $\mu_0$, then for any $M$ and any $\epsilon > 0$, there is a policy $\pi$ with mean return $\mathbb{E}[G(\pi)] > M$ and $D_{KL}(\pi \,\|\, \pi_0) < \epsilon$.

 

In theorems 1 and 2 we do not require that $V$ is light-tailed, but if we make this assumption, we can then prove that a small KL divergence implies $\mathbb{E}_\pi[V]$ is small:

Theorem 3: If $V$ is light-tailed under $\pi_0$, $\mathbb{E}_{\pi_0}[V]$ is finite, and $D_{KL}(\pi \,\|\, \pi_0)$ is bounded, then $\mathbb{E}_\pi[V]$ is bounded, and $\mathbb{E}_\pi[V] \to \mathbb{E}_{\pi_0}[V]$ as $D_{KL}(\pi \,\|\, \pi_0) \to 0$.
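As a quick numerical sanity check of theorem 3 (a sketch assuming $V$ is a standard Gaussian under the base policy): even if we tilt the base distribution in the most KL-efficient direction for raising $\mathbb{E}[V]$, the achievable $\mathbb{E}_P[V]$ shrinks with the KL budget, roughly like $\sqrt{2\, D_{KL}}$ in the Gaussian case.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, 1_000_000)   # light-tailed V under the base policy (mean 0)

# Exponentially tilt the base distribution by exp(lambda * V) -- the most
# KL-efficient way to raise E[V]. For a standard Gaussian, E_P[V] = lambda and
# KL = lambda^2 / 2, so E_P[V] = sqrt(2 * KL), which goes to 0 with the KL budget.
for lam in [1.0, 0.3, 0.1, 0.03]:
    w = np.exp(lam * v)
    w /= w.sum()                       # self-normalized importance weights
    ev = w @ v                         # estimate of E_P[V]
    kl = w @ np.log(w * len(v))        # estimate of KL(P || base)
    print(f"lambda={lam:<4} E_P[V]={ev:.3f}  KL={kl:.5f}  sqrt(2*KL)={np.sqrt(2 * kl):.3f}")
```

Contrast this with the heavy-tailed sketch above, where the mean grew without bound at vanishing KL cost.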

Together, theorems 2 and 3 imply the headline result.

X, V have light tails and are independent: $\mathbb{E}[V] \to \infty$

Our proof for the hard-threshold case [LW · GW] can be extended to show that when $X$ and $V$ are independent and both have light tails, the optimum of $\mathbb{E}_\pi[U] - \beta\, D_{KL}(\pi \,\|\, \pi_0)$ has $\mathbb{E}[V] > 0$. It is also true that utility under the optimal policy goes to $\infty$ as the KL penalty decreases:

Theorem 4: If $U = X + V$ with $X$ and $V$ independent and both light-tailed, the distribution of $U$ is continuous, and $\pi^*_\beta = \arg\max_\pi \big(\mathbb{E}_\pi[U] - \beta\, D_{KL}(\pi \,\|\, \pi_0)\big)$, then $\mathbb{E}_{\pi^*_\beta}[V] \to \infty$ as $\beta \to 0^+$.
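Here is a small Monte Carlo sketch of this light-tailed case, assuming Gaussian $X$ and $V$ (the particular variances and $\beta$ values are made up): tilting the base distribution by $\exp(U/\beta)$, which is the form of the KL-regularized optimum, raises $\mathbb{E}[V]$ like $\sigma_V^2/\beta$, so true utility grows as the penalty coefficient shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_x, sigma_v = 2.0, 1.0            # light-tailed (Gaussian) error and utility
n = 2_000_000
x = rng.normal(0.0, sigma_x, n)        # proxy error X under the base policy
v = rng.normal(0.0, sigma_v, n)        # true utility V, independent of X
u = x + v                              # observed reward U = X + V

for beta in [5.0, 2.0, 1.0]:
    w = np.exp(u / beta)               # KL-regularized optimum tilts the base by exp(U/beta)
    w /= w.sum()                       # self-normalized importance weights
    ev = w @ v                         # E[V] under the tilted distribution
    print(f"beta={beta}: E_pi[V] ~ {ev:.3f}  (Gaussian closed form sigma_v^2/beta = {sigma_v**2 / beta:.3f})")
```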

How likely is heavy-tailed error?

Current open-source reward models for RLHF probably don’t have heavy-tailed error; we explored the upper tails of the reward distributions of a ~0.5B reward model and a ~7B reward model, and the maximum values were less than 100, which is consistent with light tails. (We will show evidence for this in a future post.)
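One simple way to check the tail of a reward model yourself (a sketch; `scores` stands in for whatever reward values you have collected, and the top scores are assumed positive) is to estimate the upper-tail index with a Hill estimator:

```python
import numpy as np

def hill_tail_index(samples, k=500):
    """Hill estimator of the upper-tail index alpha from the k largest samples.
    Small alpha (roughly < 3) suggests a heavy upper tail; very large estimates
    are more consistent with light tails. Assumes the top k+1 samples are positive."""
    top = np.sort(np.asarray(samples))[-(k + 1):]
    threshold, tail = top[0], top[1:]
    return k / np.sum(np.log(tail / threshold))

# Synthetic check: |t(3)| samples have tail index ~3, Gaussian samples do not.
rng = np.random.default_rng(0)
print(hill_tail_index(np.abs(rng.standard_t(3, 200_000))))   # roughly 3
print(hill_tail_index(np.abs(rng.normal(size=200_000))))     # much larger
```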

But in open-ended environments, especially relating to real-world outcomes, reward is much more likely to be heavy-tailed, and so catastrophic Goodhart may become more likely.

Limitations

Goodhart is not inevitable

Catastrophic Goodhart is not the unique optimal behavior, just one family of high-performing policies. When optimizing $\mathbb{E}_\pi[U] - \beta\, D_{KL}(\pi \,\|\, \pi_0)$, the outcome depends on RL training dynamics; it could be that $\mathbb{E}_\pi[X] \to \infty$ while $D_{KL}(\pi \,\|\, \pi_0)$ stays small, causing catastrophic Goodhart, but more likely both terms will go to infinity, potentially allowing $\mathbb{E}_\pi[V] > 0$.

Even so, catastrophic Goodhart is likely to occur in many scenarios where KL regularization is naively employed in an attempt to avoid Goodhart’s Law.

Goodhart seems preventable

There are at least two ways to prevent this phenomenon, even if we don’t know how to make an unbounded reward function with light-tailed error. One is to regularize by a function other than KL divergence: for heavy-tailed error distributions a KL penalty doesn’t work, but capping the maximum odds ratio for any action (similar to quantilizers) still results in positive utility.

Goodhart is not a treacherous turn

Although the kind of rare failure described above is superficially similar to a treacherous turn as described in Risks from Learned Optimization [? · GW], we think they are very different. An AI mesa-optimizer randomly performing a coup is inner-misaligned, situationally aware, and motivated by maximizing the probability of a successful coup. The catastrophic Goodhart phenomenon involves neither inner misalignment nor situational awareness, and the rate at which such a policy takes extreme actions is unrelated to the rate that would be optimal for executing a successful coup.

Conclusion

In the next post, we will empirically demonstrate that some current reward models have light-tailed reward. After that, we may explore the conditions under which catastrophic Goodhart holds in stochastic environments, and empirically test this phenomenon in practice.

10 comments

Comments sorted by top scores.

comment by Erik Jenner (ejenner) · 2024-05-15T16:54:41.062Z · LW(p) · GW(p)

The manner in which these pathological policies $\pi$ achieve high $\mathbb{E}_\pi[U]$ is also concerning: most of the time they match the reference policy $\pi_0$, but a tiny fraction of the time they will pick trajectories with extremely high reward. Thus, if we only observe actions from the policy $\pi$, it could be impossible to tell whether $\pi$ is Goodharting or identical to the base policy.

I'm confused; to learn this policy $\pi$, some of the extremely high reward trajectories would likely have to be taken during RL training, so we could see them, right? It might still be a problem if they're very rare (e.g. if we can only manually look at a small fraction of trajectories). But if they have such high reward that they drastically affect the learned policy despite being so rare, it should be trivial to catch them as outliers based on that.

One way we wouldn't see the trajectories is if the model becomes aligned with "maximize whatever my reward signal is," figures out the reward function, and then executes these high-reward trajectories zero-shot. (This might never happen in training if they're too rare to occur even once during training under the optimal policy.) But that's a much more specific and speculative story.

I haven't thought much about how this affects the overall takeaways but I'd guess that similar things apply to heavy-tailed rewards in general (i.e. if they're rare but big enough to still have an important effect, we can probably catch them pretty easily---though how much that helps will of course depend on your threat model for what these errors $X$ are).
 

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2024-05-15T18:28:03.538Z · LW(p) · GW(p)

This is a fair criticism. I changed "impossible" to "difficult".

My main concern is with future forms of RL that are some combination of better at optimization (thus making the model more inner aligned even in situations it never directly sees in training) and possibly opaque to humans such that we cannot just observe outliers in the reward distribution. It is not difficult to imagine that some future kind of internal reinforcement could have these properties; maybe the agent simulates various situations it could be in without stringing them together into a trajectory or something. This seems worth worrying about even though I do not have a particular sense that the field is going in this direction.

comment by Alex_Altair · 2024-05-28T15:51:19.636Z · LW(p) · GW(p)

Does the notation get flipped at some point? In the abstract you say

prior policy $\pi_0$

and

there are arbitrarily well-performing policies $\pi$

But then later you say

This strongly penalizes $\pi_0$ taking actions the base policy never takes

Which makes it sound like they're switched.

I also notice that you call it "prior policy", "base policy" and "reference policy" at different times; these all make sense but it'd be a bit nicer if there was one phrase used consistently.

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2024-05-28T20:44:31.366Z · LW(p) · GW(p)

The third one was a typo which I just fixed. I have also changed it to use "base policy" everywhere to be consistent, although this may change depending on what terminology is most common in an ML context, which I'm not sure of.

comment by Noosphere89 (sharmake-farah) · 2024-05-17T17:28:59.321Z · LW(p) · GW(p)

I have a question about this post, and it has to do with the case where both utility and error are heavy tailed:

Where does the expected value converge to if both utility and errors are heavy tailed? Is it 0, infinity, some other number, or does it not converge to any number at all?

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2024-05-17T22:00:58.507Z · LW(p) · GW(p)

It could be anything, because KL divergence basically does not restrict the expected value of anything heavy-tailed. You could get finite utility and infinite error, or the reverse, or infinity of both, or neither converging, or even infinite utility and negative infinity error—any of these with arbitrarily low KL divergence.

To draw any conclusions, you need to assume some joint distribution between the error and utility, and use some model of selection that is not optimal policies under a KL divergence penalty or limit. If they are independent and you think of optimization as conditioning on a minimum utility threshold, we proved last year [LW · GW] that you get 0 of whichever has lighter tails and infinity of whichever has heavier tails, unless the tails are very similar. I think the same should hold if you model optimization as best-of-n selection. But the independence assumption is required and pretty unrealistic, and you can't weaken it [LW(p) · GW(p)] in any obvious way.

Realistically I expect that error will be heavy-tailed and heavier-tailed than utility by default so error goes to infinity. But error will not be independent of utility, so the expected utility depends mostly on how good extremely high error outcomes are. The prospect of AIs creating some random outcome that we overestimated the utility of by 10 trillion points does not seem especially good, so I think we should not be training AIs to maximize this kind of static heavy-tailed reward function.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-05-17T23:06:50.273Z · LW(p) · GW(p)

My expectation is that error and utility are both extremely heavy tailed, and arguably in the same order of magnitude for heavy tails.

But thanks for answering, the real answer is we can predict effectively nothing without independence, and thus we can justify virtually every outcome of real-life Goodhart.

Maybe it's catastrophic, maybe it doesn't matter, or maybe there's anti-goodhart, but I don't see a way to predict what will reasonably happen.

Also, why do you think that error is heavier tailed than utility?

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2024-05-17T23:36:39.111Z · LW(p) · GW(p)

Also, why do you think that error is heavier tailed than utility?

Goodhart's Law is really common in the real world, and most things only work because we can observe our metrics, see when they stop correlating with what we care about, and iteratively improve them. Also, reward hacking is prevalent in RL, and hacked policies often get very high reward values.

If the reward model is as smart as the policy and is continually updated with data, maybe we're in a different regime where errors are smaller than utility.

comment by Stephen McAleese (stephen-mcaleese) · 2024-05-15T13:48:04.512Z · LW(p) · GW(p)
  • Regularize by a function other than KL divergence. For heavy-tailed error distributions, KL divergence doesn’t work, but capping the maximum odds ratio for any action (similar to quantilizers) still results in positive utility.

A recent paper from UC Berkeley named Preventing Reward Hacking with Occupancy Measure Regularization proposes replacing KL divergence regularization with occupancy measure (OM) regularization. OM regularization involves regularizing based on the state or state-action distribution rather than the action distribution:

"Our insight is that when reward hacking, the agent visits drastically different states from those reached by the safe policy, causing large deviations in state occupancy measure (OM). Thus, we propose regularizing based on the OM divergence between policies instead of AD [action distribution] divergence to prevent reward hacking"

The idea is that regularizing to minimize changes in the action distribution isn't always safe because small changes in the action distribution can cause large changes in the states visited by the agent:

Suppose we have access to a safe policy that drives slowly and avoids falling off the cliff. However, the car is optimizing a proxy reward function that prioritizes quickly reaching the destination, but not necessarily staying on the road. If we try to regularize the car’s action distributions to the safe policy, we will need to apply heavy regularization, since only slightly increasing the probability of some unsafe action (e.g., making a sharp right turn) can lead to disaster.

...

Our proposal follows naturally from this observation: to avoid reward hacking, regularize based on divergence from the safe policy’s occupancy measure, rather than action distribution.  A policy’s occupancy measure (OM) is the distribution of states or state-action pairs seen by a policy when it interacts with its environment.

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2024-08-04T02:35:15.364Z · LW(p) · GW(p)

I think that paper and this one are complementary. Regularizing on the state-action distribution fixes problems with the action distribution, but if it's still using KL divergence you still get the problems in this paper. The latest version on arxiv mentions this briefly.