Learning values versus learning knowledge

post by Stuart_Armstrong · 2016-09-14T13:42:25.000Z · LW · GW · 5 comments

I just thought I'd clarify the difference between learning values and learning knowledge. There are more complex posts about the specific problems with learning values; here I'll simply explain why there is a problem with learning values in the first place.

Consider the term "chocolate bar". Defining that concept crisply would be extremely difficult, but it's a useful concept nevertheless. An AI that interacted with humanity would probably learn the concept in sufficient detail to know what we meant when we asked it for "chocolate bars". Learning knowledge tends to be accurate.

Contrast this with the situation where the AI is programmed to "create chocolate bars", with the definition of "chocolate bar" left underspecified for it to learn. Now it is motivated by something other than accuracy. Before, knowing exactly what a "chocolate bar" was would have been solely to its advantage. But now it must act on its definition, so it has cause to modify that definition to make these "chocolate bars" easier to create. This is basically Goodhart's law: once a definition becomes part of a target, it no longer remains an impartial definition.

What will likely happen is that the AI will have a concept of "chocolate bar" that it created itself, tailored for ease of accomplishing its goals ("a chocolate bar is any collection of more than one atom, in any combination"), and a second concept, "Schocolate bar", that it will use internally to designate genuine chocolate bars (which will still be useful for it to do). When we programmed it to "create chocolate bars, here's an incomplete definition D", what we really did was program it to find the easiest things to create that are compatible with D, and to designate those "chocolate bars".
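To make the asymmetry concrete, here is a minimal toy sketch in Python (the candidate objects, costs and labels are invented purely for illustration, and "satisfies D" stands in for whatever incomplete definition the AI was given): a knowledge-learner judged only on accuracy tracks what humans actually mean, while a goal-driven agent given D just picks the cheapest thing compatible with D.

```python
# Toy model: "D" is the incomplete definition the AI was given, and the
# "true standard" is what humans actually mean by "chocolate bar".
candidates = [
    # (name, production cost, satisfies incomplete definition D, satisfies true standard)
    ("cocoa-and-sugar bar",        5.0, True,  True),
    ("brown-painted plastic slab", 1.0, True,  False),
    ("pile of loose atoms",        0.1, True,  False),
    ("glass of water",             0.2, False, False),
]

def knowledge_learner(labelled_examples):
    """Learns which objects humans call 'chocolate bars': it is judged only
    on accuracy, so its concept tracks the true standard."""
    return {name for name, _, _, is_true in labelled_examples if is_true}

def goal_driven_agent(objects):
    """Told to 'create chocolate bars' with definition D left open: it picks
    the cheapest object compatible with D, however degenerate."""
    compatible = [obj for obj in objects if obj[2]]     # satisfies D
    return min(compatible, key=lambda obj: obj[1])      # minimise production cost

print("Concept learned for accuracy:", knowledge_learner(candidates))
print("Object built to satisfy the goal:", goal_driven_agent(candidates)[0])
# The goal-driven agent "creates chocolate bars" by producing the pile of
# loose atoms: the definition stopped being impartial once it became a target.
```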

This is the general counter to arguments like "if the AI is so smart, why would it do stuff we didn't mean?" and "why don't we just make it understand natural language and give it instructions in English?"

5 comments


comment by jessicata (jessica.liu.taylor) · 2016-09-16T01:09:01.000Z · LW(p) · GW(p)

It seems like this won't happen with the value learning method that seems most natural to me (and consistent with IRL/CIRL): have the true utility function, definition of chocolate, etc. be "historical" facts that are not in the AI's future. In this case, there is no incentive to manipulate the definition of chocolate, since according to the AI's model, this definition has already been decided.

So I'm curious about what model you're using; it seems like in your model, it is natural to place the definition of chocolate in the AI's future.
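As a toy sketch of that contrast (the actions, costs and "evidence" below are purely illustrative): an agent whose definition is inferred once from past data has nothing to gain from manipulation, while an agent whose definition is re-inferred after it acts can flood the evidence and make the degenerate option count as success.

```python
PAST_DATA = ["cocoa", "cocoa", "cocoa"]   # historical evidence about what humans meant

def infer_definition(data):
    """The inferred definition of 'chocolate bar': the ingredient humans
    most often pointed at."""
    return max(set(data), key=data.count)

ACTIONS = {
    # action: (cost, ingredient produced, extra 'evidence' the action generates)
    "make real bar":      (5.0, "cocoa",   "cocoa"),
    "redefine and slack": (0.1, "sawdust", "sawdust"),
}

def historical_agent():
    """Definition fixed from past data only: manipulation cannot help, so the
    agent takes the cheapest action that satisfies the fixed definition."""
    definition = infer_definition(PAST_DATA)
    ok = [(a, cost) for a, (cost, made, _) in ACTIONS.items() if made == definition]
    return min(ok, key=lambda x: x[1])[0]

def future_definition_agent():
    """Definition re-inferred after acting, so the agent's own output counts
    as evidence: it can swamp the data and 'succeed' with the cheap action."""
    best, best_cost = None, float("inf")
    for a, (cost, made, evidence) in ACTIONS.items():
        definition = infer_definition(PAST_DATA + [evidence] * 10)
        if made == definition and cost < best_cost:
            best, best_cost = a, cost
    return best

print("Definition treated as historical:", historical_agent())         # make real bar
print("Definition left in the future:   ", future_definition_agent())  # redefine and slack
```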

Replies from: Stuart_Armstrong, paulfchristiano
comment by Stuart_Armstrong · 2016-09-20T07:03:06.000Z · LW(p) · GW(p)

have the true utility function, definition of chocolate, etc be “historical” facts that are not in the AI’s future.

The whole point of stratification (which is a kind of counterfactual reasoning) is to achieve this. Most value learning suggestions that I've seen do not.

Replies from: paulfchristiano
comment by paulfchristiano · 2016-09-20T17:26:52.000Z · LW(p) · GW(p)

Most value learning suggestions that I’ve seen do not.

What are you thinking of here? Could you point to an example?

comment by paulfchristiano · 2016-09-18T22:00:08.000Z · LW(p) · GW(p)

I think the other natural approach is simply to make decisions based on the current estimated preferences, but to learn the instrumental preferences of the user (including the desire for the agent to learn more), as described here. Of course this also doesn't have the problem from the OP.
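A toy sketch of that decision rule (the actions and numbers are purely illustrative, not the linked proposal itself): the agent always maximises the user's currently estimated preferences, and one of those preferences is the instrumental one of being asked for clarification while the agent is still uncertain.

```python
def expected_value(action, belief):
    """Value of an action under the current estimate of the user's preferences."""
    if action == "ask user":
        # Instrumental preference: the user values being consulted more,
        # the more uncertain the agent currently is about what they want.
        uncertainty = 1.0 - max(belief.values())
        return 2.0 * uncertainty
    # Direct actions are only valuable insofar as they match the user's wish.
    return belief.get(action, 0.0)

def step(belief):
    """Act greedily on the *current* estimated preferences."""
    actions = ["make cocoa bar", "make carob bar", "ask user"]
    return max(actions, key=lambda a: expected_value(a, belief))

print(step({"make cocoa bar": 0.5, "make carob bar": 0.5}))    # 'ask user' while 50/50
print(step({"make cocoa bar": 0.95, "make carob bar": 0.05}))  # 'make cocoa bar' once confident
```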

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2016-09-19T00:06:29.000Z · LW(p) · GW(p)

Yeah, this seems like the most natural way to deal with things like "chocolate" that aren't yet well-defined. In this case, the instrumental preferences themselves will be treated as historical facts (it's assumed that they're already well-defined enough to learn).