Alignment via manually implementing the utility function

post by Chantiel · 2021-09-07T20:20:25.145Z · LW · GW · 6 comments

I would like to propose an idea for aligning AI.

First, I will provide some motivation for it. Suppose you are a programmer who's having a really hard time implementing a function in a program you're developing. Most of the code is fine, but there's this one function you can't figure out how to implement correctly. Still, you need to run the program. So you do the following: first, you add a breakpoint inside the function you're having trouble implementing, so that whenever execution reaches that function, the program halts. Once this happens, you come up with a reasonable return value v on your own. Finally, you type "return v" into your debugger, making the function return v, and then you resume execution.

As long as you can come up with reasonable return values on your own, I bet the above would make the program work pretty well. And why not? Everything outside that function is implemented well, and you are manually making sure the hard-to-implement function also outputs reasonable values. So every function ends up doing what it's supposed to do.

My basic idea is to do this, but with the AI's utility function.

Now, you don't need to literally put a breakpoint in the AI's utility function and then have the developers type into a debugger. Instead, inside the AI's utility function, you can just have the AI pause execution, send a message to a developer or other individual containing a description of a possible world, and then wait for a response. Once someone sends a message in response, the AI will use the returned value as the value of its utility function. That is, you could do something like:

def utility(outcome):
    # Send the human controllers a readable description of the candidate outcome.
    message_ai_controllers(make_readable(outcome))
    # Block until a controller replies.
    response = wait_for_controller_response()
    # Use the value in the reply as the utility of this outcome.
    return parse_utility(response)

(Error-handling code could be added if the returned utility is invalid.)
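
As a minimal sketch of that error handling (assuming, purely for illustration, that parse_utility raises ValueError on a reply it can't interpret), the function could simply ask again:

def utility(outcome):
    description = make_readable(outcome)
    while True:
        message_ai_controllers(description)
        response = wait_for_controller_response()
        try:
            return parse_utility(response)
        except ValueError:
            # The reply wasn't a valid utility; explain and ask again.
            message_ai_controllers("Couldn't parse that as a utility; "
                                   "please reply with a single number.\n"
                                   + description)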

Using the above utility function would, in theory at least, be equivalent to actually having a breakpoint in the code, then manually returning the right value with a debugger.

You might imagine this AI would be incredibly inefficient because of how slowly people would answer its queries. However, with the right optimization algorithm, I'm not sure this would be much of a problem. The AI would have an extremely slow utility function, but I don't see a reason to think it's impossible to make an optimization algorithm that performs well even on extremely slow objective functions.

I'll provide one potential approach to making such an algorithm. The optimization algorithm would, based on the known values of its objective function, learn fast approximations to it. Then, the AI could use these fast functions to come up with a plan that scores well on them. Finally, if necessary, the AI could query its (slow) objective function for the value of the results of this plan. After doing so, it would also update its fast approximations with what it's learned. The optimization algorithm could be designed so that if the AI is particularly unsure whether something would be desirable according to the objective function, it consults the actual (slow) objective function. The algorithm could also be programmed to do the same for any outcomes with high impact or strategic significance.
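
To make this concrete, here is a rough sketch of such a loop. The surrogate model, the uncertainty threshold, and the helpers propose_outcome and is_high_impact are illustrative assumptions on my part, not a worked-out algorithm:

def plan(slow_utility, surrogate, n_steps=1000, uncertainty_threshold=0.1):
    # Optimize against a fast learned approximation of the utility function,
    # consulting the slow, human-answered utility only when unsure.
    checked = {}  # outcomes whose true utility the humans were asked for
    for _ in range(n_steps):
        outcome = propose_outcome(surrogate)          # assumed planning step
        _estimate, uncertainty = surrogate.predict(outcome)
        if uncertainty > uncertainty_threshold or is_high_impact(outcome):
            true_value = slow_utility(outcome)        # query the humans
            surrogate.update(outcome, true_value)     # correct the fast model
            checked[outcome] = true_value
    # Only act on outcomes whose value the humans actually confirmed.
    return max(checked, key=checked.get) if checked else None

The key design choice is that the surrogate is never trusted on its own for high-uncertainty or high-impact outcomes; those always go back to the slow, human-answered utility function.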

My technique is intended to provide both outer alignment and corrigibility. By directly asking people for the desirability of outcomes, the AI would, if I'm reasoning correctly, be outer-aligned. If the AI uses learned fast approximations to its utility function, the system also provides a degree of hard-coded corrigibility: the AI's optimization algorithm is hard-coded to query its slow utility function at certain points and to update its fast models accordingly, which allows errors in the fast approximations to be corrected.

6 comments

comment by Charlie Steiner · 2021-12-04T07:00:57.591Z · LW(p) · GW(p)

This works great when you can recognize good things within the representation the AI uses to think about the world. But what if that's not true?

Here's the optimistic case:

Suppose you build a Go-playing AI that defers to you for its values, but the only things it represents are states of the Go board, and functions over states of the Go board. You want to tell it to win at Go, but it doesn't represent that concept, you have to tell it what "win at Go" means in terms of a value function from states of the Go board to real numbers. If (like me) you have a hard time telling when you're winning at Go, maybe you just generate as many obviously-winning positions as you can and label them all as high-value, everything else low-value. And this sort of works! The Go-playing AI tries to steer the gameboard into one of these obviously-winning states, and then it stops, and maybe it could win more games of Go if it also valued the less-obviously-winning positions, but that's alright.
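
A toy sketch of that kind of hand-labelled value function (where obviously_winning_positions is a hand-generated set of board states, as described above; an illustrative assumption):

def board_value(board_state, obviously_winning_positions):
    # 1 for positions the human recognized as clearly won when labelling,
    # 0 for everything else, including wins that weren't obvious enough to label.
    return 1.0 if board_state in obviously_winning_positions else 0.0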

Why is that optimistic?

Because it doesn't scale to the real world. An AI that learns about and acts in the real world doesn't have a simple gameboard that we just need to find some obviously-good arrangements of. At the base level it has raw sensor feeds and motor outputs, which we are not smart enough to define success in terms of directly. And as it processes its sensory data it (by default) generates representations and internal states that are useful for it, but not simple for humans to understand, or good things to try to put value functions over. In fact, an entire intelligent system can operate without ever internally representing the things we want to put value functions over.

Here's a nice post from the past: https://www.lesswrong.com/posts/Mizt7thg22iFiKERM/concept-safety-the-problem-of-alien-concepts [LW · GW]

Replies from: Chantiel
comment by Chantiel · 2022-03-23T11:23:52.540Z · LW(p) · GW(p)

Sorry for taking a ridiculously long time to get back to you. I was dealing with some stuff.

This works great when you can recognize good things within the representation the AI uses to think about the world. But what if that's not true?

Yes, that is correct. As I said in the article, a high degree of interpretability is necessary to use the idea.

It's true that interpretability is required, but the key point of my scheme is this: interpretability is all you need for intent alignment, provided my scheme is correct. I don't know of any other alignment strategies for which this is the case. So my scheme, if correct, basically allows you to bypass what is plausibly the hardest part of AI safety: robust value-loading.

I know of course that I could be wrong about this, but if the technique is correct, it seems like a quite promising AI safety technique to me.

Does this seem reasonable? I may very well just be misunderstanding or missing something.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2022-03-23T20:26:38.269Z · LW(p) · GW(p)

My point was that you don't just need interpretability, you need the AI to "meet you halfway" by already learning the right concept that you want to interpret. You might also need it to not learn "spurious" concepts that fit the data but generalize poorly. This doesn't happen by default AFAICT, it needs to be designed for.

Replies from: Chantiel
comment by Chantiel · 2022-03-29T10:50:24.575Z · LW(p) · GW(p)

I hadn't fully appreciated the difficulty that could result from AIs having alien concepts, so thanks for bringing it up.

However, it seems to me that this would not be a big problem, provided the AI is still interpretable. I'll provide two ways to handle this.

For one, you could potentially translate the human concepts you care about into statements using the AI's concepts. Even if the AI doesn't use the same concepts people do, AIs are still incentivized to form a detailed model of the world. If you have access to the AI's entire world model but still can't figure out basic things like whether the model implies the world gets destroyed or the AI takes over the world, then that model doesn't seem very interpretable. So I'm skeptical that this would really be a problem.

But, if it is, it seems to me that there's a way to get the AI to have non-alien concepts.

In a comment thread with another person, I modified the system so that the people outputting utilities can refuse to output one for a given query, for example because the situation is too complicated or too vague for humans to judge the desirability of. This could potentially keep the AI from relying on very alien concepts.

To deal with alien concepts, you can just have the people refuse to provide a utility for a described possible future whenever the description is phrased in concepts they can't understand. This way, the AI would have to come up with reasonably non-alien concepts in order to get any of its calls to its utility function to work.
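
A minimal sketch of how that refusal option might plug into the utility function from the post (the "REFUSE" reply convention and the exception are illustrative assumptions):

class UtilityRefused(Exception):
    """The controllers declined to evaluate this description."""

def utility(outcome):
    description = make_readable(outcome)
    message_ai_controllers(description)
    response = wait_for_controller_response()
    if response.strip().upper() == "REFUSE":
        # The description was too alien, vague, or complicated to evaluate,
        # so no utility is returned for this outcome at all.
        raise UtilityRefused(description)
    return parse_utility(response)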

comment by ViktoriaMalyasova · 2021-12-04T12:18:38.061Z · LW(p) · GW(p)

Another problem is that the system cannot represent and communicate the whole predicted future history of the universe to us. It has to choose some compact description. And the description can get a high evaluation either for being a genuinely good plan, or for neglecting to predict or mention bad outcomes and using persuasive language (if it's a natural-language description).

Maybe we can have the human also report their happiness daily, and have the make_readable subroutine rewarded solely for how well the plan evaluation given beforehand matches the happiness level reported afterwards? I don't think that solves the problem of delayed negative consequences, or bad consequences the human will never learn about, or wireheading the human while using misleading descriptions of what's happening, though.
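
A rough sketch of that training signal (the squared-error form and the variable names are assumptions for illustration, not part of the proposal):

def description_loss(evaluation_before, reported_happiness_after):
    # Penalize make_readable when the utility the human assigned to its
    # description beforehand diverges from the happiness reported afterwards.
    return (evaluation_before - reported_happiness_after) ** 2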

Replies from: Chantiel
comment by Chantiel · 2022-03-23T14:07:25.370Z · LW(p) · GW(p)

Another problem is that the system cannot represent and communicate the whole predicted future history of the universe to us.

This is a good point and one that I, foolishly, hadn't considered.

However, it seems to me that there is a way to get around this. Specifically, just give the query-answerers the option to refuse to evaluate the utility of a description of a possible future. If they refuse, the AI's utility function simply won't return a value for that possible future.

To see how to do this, note that if a description of a possible future world is too large for the human to understand, then the human can refuse to provide a utility for it.

Similarly, if the description of the future doesn't specify the future with sufficient detail that the person can clearly tell if the described outcome would be good, then the person can also refuse to return a value.

For example, suppose you are making an AI designed to make paperclips. And suppose the AI queries the person asking for the utility of the possible future described by, "The AI makes a ton of paperclips". Then the person could refuse to answer, because the description is insufficient to specify the quality of the outcome, for example, because it doesn't say whether or not Earth got destroyed.

Instead, a possible future would only be rated as high utility if it says something like, "The AI makes a ton of paperclips, and the world isn't destroyed, and the AI doesn't take over the world, and no creatures get tortured anywhere in our Hubble sphere, and creatures in the universe are generally satisfied".

Does this make sense?

I, of course, could always be missing something.

(Sorry for the late response)