The "Backchaining to Local Search" Technique in AI Alignment

post by adamShimi · 2020-09-18T15:05:02.944Z · LW · GW · 1 comment

In the spirit of this post [AF · GW] by John S. Wentworth, this is a reference for a technique I learned from Evan Hubinger. He's probably not the first to use it, but he introduced it to me, so he gets the credit.

In a single sentence, backchaining to local search is the idea of looking at how an alignment problem could appear through local search (think gradient descent). You start from a specific problem (say reward tampering) and try to construct a context in which the usual ML training process (local search) could produce a system suffering from that problem. It's an instance of backchaining in general, which simply asks how a problem could appear in practice.
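To make "local search" concrete, here is a minimal sketch (not from the original post, just an illustration) of gradient descent on a toy one-dimensional loss; in real ML the parameter would be a model's weights and the gradient would come from backpropagation.

```python
# Toy illustration of local search: gradient descent on a made-up 1D loss.
# "theta" stands in for the parameters of a model; each step moves a little
# downhill, so the process only ever explores the neighborhood of where it is.

def loss(theta):
    return (theta - 2.0) ** 2          # invented loss with its minimum at theta = 2

def grad(theta):
    return 2.0 * (theta - 2.0)         # derivative of the loss above

theta = -5.0                           # arbitrary starting point in "model space"
learning_rate = 0.1

for _ in range(100):
    theta -= learning_rate * grad(theta)   # small step in the downhill direction

print(theta)                           # ends up close to 2.0
```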

Backchaining to local search has two main benefits: it grounds the problem in the current ML training paradigm, which tells you whether to expect the problem to arise in practice, and it forces you to make the problem concrete, which often reveals aspects of it you would not have noticed otherwise.

Let's look at a concrete example: reward gaming (also called specification gaming). To be even more concrete, suppose we have a system with a camera and other sensors, whose goal is to maximize the amount of time my friend Tom spends smiling, as measured through a loss function that captures whether the camera sees Tom smiling. The obvious (for us) way to game this reward is to put a picture of Tom's smiling face in front of the camera: then the loss function is minimized.
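To see why the picture trick minimizes the loss, note that the loss only depends on what the camera reports, not on whether Tom actually smiles. The sketch below makes this explicit; the function camera_detects_smiling_tom and the observation format are hypothetical stand-ins, not anything from the original post.

```python
# Hypothetical sketch of the loss in the example: it is computed purely from
# camera observations, so anything that makes the camera "see" Tom smiling
# (including a photo held in front of the lens) drives the loss to zero.

def camera_detects_smiling_tom(observation) -> bool:
    # Stand-in for whatever perception pipeline decides that the camera
    # currently sees Tom smiling; the details don't matter for the argument.
    return observation.get("smiling_tom_visible", False)

def loss(observations) -> float:
    # Fraction of time steps at which the camera did NOT see Tom smiling.
    misses = sum(1 for obs in observations if not camera_detects_smiling_tom(obs))
    return misses / len(observations)

# A trajectory where a photo of Tom's smiling face covers the camera the whole time:
photo_in_front_of_camera = [{"smiling_tom_visible": True}] * 10
print(loss(photo_in_front_of_camera))  # 0.0: minimal loss, no actual smiling required
```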

Applied to this example, the backchaining to local search technique asks: "How could I get this reward gaming behavior through local search?" Well, this reward gaming strategy is probably a local minimum of the loss function (changing the behavior just a little would increase the loss significantly), so local search could find it and stay there. It's also better than most simple strategies, since actually making someone smile (not necessarily a good goal, mind you) requires rather complex actions in the world (like going full "Joker" on someone, changing their brain chemistry, or some other weird and impractical scheme). So there's probably a big zone in model space for which our specific example of reward gaming is the local minimum.
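The "big zone in model space" claim is essentially a claim about basins of attraction: local search converges to whichever local minimum's basin it starts in, so a behavior whose basin covers a big zone gets found from many starting points. Here is a toy sketch of that dynamic, with an invented one-dimensional "model space" and loss.

```python
# Toy picture of basins of attraction: a 1D "model space" with two local minima.
# Gradient descent lands in whichever minimum's basin the initialization falls
# into, so a minimum whose basin covers a big zone is reached from many starts.
# The loss below is invented purely for illustration.

def loss(theta):
    # Two basins: one around theta = 0 (the global minimum) and one around theta = 5.
    return min(theta ** 2, 1.0 + (theta - 5.0) ** 2)

def grad(theta, eps=1e-5):
    # Numerical gradient, good enough for a toy example.
    return (loss(theta + eps) - loss(theta - eps)) / (2 * eps)

def descend(theta, lr=0.1, steps=500):
    for _ in range(steps):
        theta -= lr * grad(theta)
    return theta

for start in [-4.0, 1.0, 2.0, 3.0, 6.0, 9.0]:
    print(f"start {start:+.1f} -> converges near {descend(start):+.2f}")
# Starts below ~2.6 all end up near 0; starts above ~2.6 all end up near 5.
# Which local minimum you get is determined by which basin you start in.
```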

All in all, the backchaining to local search technique tells us that this looks like a problem that should happen frequently in practice. That lines up well with the evidence: see this list of reward gaming examples from the literature, and the corresponding post.

The last thing to point out in such a reference post is how to interpret this technique. Just like models, no technique applies to every situation. If you cannot backchain to local search from your alignment issue, it might simply be that the issue falls outside the technique's assumptions.

That is, this technique assumes that your problem concerns a specific behavior of a trained system (like reward gaming), and that learning algorithms will not shift completely before we reach AGI. It therefore has close ties with the prosaic AGI [AF · GW] approach to AI Safety.

In conclusion, when you encounter or consider a new alignment problem that concerns a specific behavior of the AI (as opposed to, say, a general issue of theory), backchaining to local search means trying to find a scenario in which a system suffering from your alignment problem emerges from local search in some model space. If you put decent probability on the prosaic AGI idea, this should tell you something important about your alignment problem.

1 comment


comment by Rohin Shah (rohinmshah) · 2020-09-18T18:55:51.860Z · LW(p) · GW(p)

Planned summary for the Alignment Newsletter:

This post explains a technique to use in AI alignment that the author dubs "backchaining to local search" (where local search refers to techniques like gradient descent and evolutionary algorithms). The key idea is to take some proposed problem with AI systems and figure out mechanistically how that problem could arise when running a local search algorithm. This can help provide information about whether we should expect the problem to arise in practice.

Planned opinion:

I’m a big fan of this technique: it has helped me expose many initially confused concepts, and notice that they were confused, particularly wireheading and inner alignment. It’s an instance of the more general technique (that I also like) of taking an abstract argument and making it more concrete and realistic, which often reveals aspects of the argument that you wouldn’t have previously noticed.