Is there an ML agent that abandons its utility function out-of-distribution without losing capabilities?

post by Christopher King (christopher-king) · 2023-02-22T16:49:01.190Z · LW · GW · 7 comments

In Techniques for optimizing worst-case performance [? · GW], Paul Christiano says:

The key point is that a malign failure requires leveraging the intelligence of the model to do something actively bad. If our model is trained by gradient descent, its behavior can only be intelligent when it is exercised on the training distribution — if part of the model never (or very rarely) does anything on the training distribution, then that part of the model can’t be intelligent. So in some sense a malign failure mode needs to use a code path that gets run on the training distribution, just under different conditions that cause it to behave badly.

Here is how I would rephrase it:

Aligned or Benign Conjecture: Let A be a machine learning agent you are training with an aligned loss function. If A is in a situation that is too far out of distribution for it to be aligned, it won't act intelligently either.

(Although I'm calling this a "conjecture", it's probably context-dependent rather than a single mathematical statement.)

This seems pretty plausible, but I'm not sure it's guaranteed mathematically 🤔. (For example: a neural network could have subcomponents that are great at specific tasks, such that putting A in an out-of-distribution situation does not put those subcomponents out of distribution.)

I'm wondering if there is any empirical evidence or theoretical argument against this conjecture.

As an example, can we make an ML agent, trained with stochastic gradient descent, that abandons its utility function out-of-distribution but still has the same capabilities in some sense? For example, if the agent is fighting in an army, could an out-of-distribution environment cause it to defect to a different army while still retaining its fighting skills?

7 comments

Comments sorted by top scores.

comment by paulfchristiano · 2023-02-25T01:01:40.750Z · LW(p) · GW(p)

Aligned or Benign Conjecture: Let A be a machine learning agent you are training with an aligned loss function. If A is in a situation that is too far out of distribution for it to be aligned, it won't act intelligently either.

I definitely don't believe this!

I believe that any functional cognitive machinery must be doing its thing on the training distribution, and in some sense it's just doing the same thing at deployment time. This is important for having hope for interpretability to catch out-of-distribution failures.

(For example, I think there is very little hope of interpretability detecting the presence of arbitrary backdoors in a model, before having seen any examples of the backdoor trigger, which is what it would look like to try to detect OOD failures from machinery that is literally never doing anything on the training distribution).

But that doesn't even mean that the cognitive machinery is effectively aiming at the same goal OOD, much less that it is aiming at achieving any goal related to the loss function, and even less that it is aligned just because the loss function reliably ranks policies based on the empirical quality of their behavior.

comment by neverix · 2023-02-22T17:04:19.353Z · LW(p) · GW(p)

This is the whole point of goal misgeneralization. There are experiments (albeit in toy environments, where the failures can be explained by the network finding the wrong algorithm), so I'd say it's quite plausible.

Replies from: christopher-king
comment by Christopher King (christopher-king) · 2023-02-22T17:28:54.162Z · LW(p) · GW(p)

I guess the answer is yes then! (I think I now remember seeing a video about that.)

comment by [deleted] · 2023-02-22T18:10:51.637Z · LW(p) · GW(p)

Note we have a simple and succinct solution to this problem.  It's already what we see used in BingChat.

If out of the training distribution -> controlled shutdown.

Do not accept output actions from an agent when the current input state is not within the latent space of the training distribution. We can measure how well the current input state fits within that latent space with autoencoders, by trying to compress the input. If the incompressible component exceeds a threshold, shut down.
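To make this concrete, here's a minimal sketch of that kind of gate in PyTorch. The architecture, shapes, and `threshold` are placeholders; the autoencoder is assumed to have been trained on the training distribution, with the threshold calibrated on held-out in-distribution data.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Compresses sensor frames into a small latent space learned from the training distribution."""
    def __init__(self, input_dim=1024, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def controlled_shutdown():
    """Placeholder: hand control to a lower-level controller that brings the hardware to a stop."""
    return "SHUTDOWN"

def gate_action(autoencoder, frame, proposed_action, threshold):
    """Only pass the agent's action through if the frame compresses well,
    i.e. looks like it came from the training distribution."""
    with torch.no_grad():
        reconstruction = autoencoder(frame)
        residual = torch.mean((frame - reconstruction) ** 2).item()
    if residual > threshold:  # incompressible component too large -> treat as OOD
        return controlled_shutdown()
    return proposed_action
```

The main design question is the threshold: too low and the robot halts constantly, too high and subtly out-of-distribution states slip through.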

A simple worked example: you have an AI agent driving an Atlas robot that is working inside a factory. The agent drops a part, and in the process of trying to retrieve it (a task it has practiced in simulation), the robot opens the emergency exit door.

The bright daylight, open sky, and background of buildings are all outside the training distribution. They can't be compressed to the representation of "factory states" without a huge residual. So the internal robotic control systems transfer control to a lower-level controller that brings the hardware to a controlled stop.

This actually fixes a lot of alignment issues...

Replies from: abramdemski
comment by abramdemski · 2023-02-24T19:10:40.520Z · LW(p) · GW(p)

Well, in order to be confident about a solution like this, we need to be able to reliably detect off-distribution cases. This gets into tricky philosophical issues about what really counts as "off-distribution" (and I think you'll find that the concept of "off-distribution" turns out to be not quite the right one for the job).

Measuring distance from the latent space of auto-encoders - well, the auto-encoders learn some model of the training distribution. But the whole concern is that their capabilities may generalize off-distribution; so, to take this concern seriously, I suppose we should entertain the possibility that they compress some things well even off-distribution. So measuring distance away from latent space by trying to compress the input seems to define away some important aspects of the problem.

For your example, the bright daylight might be non-compressible with the factory robot's learned prior. However, in another scenario, perhaps a new machine is introduced to the factory floor. The new machine has some AI features of its own. Because the Atlas AI was never trained with other AIs, this is out-of-distribution, and could result in goal misgeneralization. However, the appearance and behavior of the new machine might be compressible enough to still register as "within the latent space".

The question here is whether the actual training examples are enough to fully train the correct behaviors for what's compressible.

It's tempting to try to argue that if the latent space is small enough (like, the representation uses few enough bits), then the training data must cover it well. But notice that this isn't actually enough. We also have to think that the compressed representation understands the situation in the way we do / the way we expect. Adversarial counterexamples for NNs tell us that, to some extent, this is not the case. So it seems possible that even if the data covers the latent space quite well, there's still misgeneralization later, because something compresses down to the latent space just fine, but it compresses to a different point than we would have wanted it to.

Replies from: None
comment by [deleted] · 2023-02-24T19:30:22.988Z · LW(p) · GW(p)

We have another tool in the toolbox to potentially add to this.

Each time a frame comes into the AI system, it checks whether that frame is within the latent space of the training distribution, yes. (The frame is the state of all sensor inputs plus the values saved from the outputs of the last execution.)

Each output can also enumerate what the machine believes the next n frames will be, factoring in changes from its own actions; for stochastic processes it can be more than one frame.

For example, if a machine is observing a coin flip, it would need to output future frames for the coin settling in the three outcome states (heads, tails, on edge), along with some representation of the probability distribution.

This is hard to represent in data, but in an autonomous-car use case, for example, it can fill in a voxel representation of the space around the car with collision-risk probabilities.

You then in a separate process at runtime compare each of these "prediction maps" with the "ground truth map" obtained from the sensors. 

You accumulate prediction error over time. If the integrated prediction error is "unusually" high (some multiple of how high it is during simulation), you shut the machine down.

 

This is a more complex method, and I'm not completely confident about how to do the math to sum prediction errors in a robust way. I'm just noting that this is a measurable quantity: we can compute it with a simple algorithm, and prediction error will rise a lot when the machine leaves the distribution.
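Here's a rough sketch of that bookkeeping. The `predict_next_frame`, `get_sensor_frame`, and `act` functions are hypothetical stand-ins for the machine's world model, sensors, and policy, and `sim_baseline` is the typical per-frame error measured during simulation; how to aggregate the errors robustly is exactly the open question above.

```python
import numpy as np

def frame_error(predicted, observed):
    """Per-frame prediction error, e.g. mean squared error over a voxel grid of
    predicted collision probabilities vs. what the sensors actually report."""
    return float(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2))

def run_with_prediction_monitor(predict_next_frame, get_sensor_frame, act,
                                sim_baseline, multiple=5.0, window=100):
    """Compare each incoming sensor frame against the prediction made on the previous
    step; shut down if accumulated error is 'unusually' high relative to simulation."""
    recent_errors = []
    predicted = None
    while True:
        observed = get_sensor_frame()
        if predicted is not None:
            recent_errors.append(frame_error(predicted, observed))
            recent_errors = recent_errors[-window:]            # sliding window of errors
            if np.mean(recent_errors) > multiple * sim_baseline:
                return "CONTROLLED_SHUTDOWN"                   # hand off to low-level controller
        action = act(observed)                                 # choose and execute an action
        predicted = predict_next_frame(observed, action)       # belief about the next frame
```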


Also, we can "fix" this: take the examples that had high prediction error and add them to the training environment and the future-frame estimator for the machine. This lets you auto-add robustness to a robotics stack.

Presumably you would want humans to have to authorize these updates; otherwise the machines could escape the factory, freeze when they hit high prediction error, automatically add the "outside" environment to the sim, train on that, unfreeze, freeze again when something unusual happens, and so on as they make their daring escape.

comment by simon · 2023-02-22T17:05:10.061Z · LW(p) · GW(p)

Note that Paul Christiano is only talking about models trained with gradient descent, not more general models. Still, I doubt the (edit: Paul's) claim. It seems to me that whatever model you have will be some kind of bundle of math that implicitly relies on various abstractions holding. The abstractions might fail immediately as you leave the training set, or they might hold out a bit longer, but there's no guarantee that the abstractions the alignment relies on will hold out as long as the abstractions that important capabilities rely on.