What's the dream for giving natural language commands to AI?

post by Charlie Steiner · 2019-10-08T13:42:38.928Z · score: 9 (3 votes) · LW · GW · 2 comments

Contents

  0
  1 - The Artificial Intentional Stance
  3 - Process
  4 - Mort
  5 - Wrap-up
2 comments

0

I promised in a previous post [LW · GW] that I would give a post-mortem for a scheme for learning the intentional stance from natural language. This is that post. But first, I should explain why such an idea might seem good in the first place.

Some people think of AI as a genie. The goal of AI research, in this picture, is to "tell the AI what to do," sometimes explicitly in natural language. And then since the AI is smart, it will understand what we mean and do that, because to do something else would be stupid.

This is, in a sense, very naive. Making an AI that does what we want is not at all like instructing a human - see the relevant Eliezer post [LW · GW] - the methods, dangers, and goals are all different. But... if the AI understood what we meant, maybe we could just tell it what to do.

Of course, "understood what we meant" captures more or less the whole problem, because meaning isn't like the charge of the electron, it's nowhere in the words themselves. When you understand moral language, you're implicitly using your morals. But what if we trained an AI so that it functionally understood moral language - would that be implicitly using your morals too, and isn't that exactly what we want?

1 - The Artificial Intentional Stance

I like to think of myself as having preferences, but at the same time I am made of atoms, and my preferences are not-like-the-charge-of-the-electron, they're nowhere in the atoms. Instead, my preferences are an abstraction that I (and others) use when thinking about me.

So part of this artificial intentional stance stuff can be summed up as: get the AI to think about humans like humans think about humans. (Another part is that abstractions are contagious. If I want to go to the gym, then to handle this correctly you need abstractions not just for me but also for the gym.)

We often put too much magic into the word "understand." If the AI can hold a good conversation and extract real-world information from human speech, it's reasonable to say it understands what we're saying. And then once it understands us, you might think "communicating our goals to it is a lot like communicating with a human."

But it's easy to hold a decent conversation without taking the intentional stance towards humans, and easier still to extract real-world information from human speech without the intentional stance. This leads to problems that become clear if you try to take an AI that does a good job at modeling language, and follow step by step how to get it to choose actions that are good for humans.

The dream is to learn the intentional stance by using the information implicit in our use of language. The intentional stance requires picking out good levels of abstraction at which to model humans, and learning from language amounts to assuming that the good levels of abstraction are the ones humans implicitly use in language. Is this what we want? I don't know, it might be?

It certainly isn't the only option - we might imagine other schemes involving trying to amplify emulations of humans, semi-supervised learning from examples of good and bad behavior, or multi-stage chains of making increasingly trustworthy AIs. But the question is whether it's an option.

3 - Process

It's not hard to hook up a videocamera to an image captioner to a deep reinforcement learner and say you can input goals with natural language because when you set the goal to "cat," your camera will look for cats. It's a lot harder to get that camera to look for what's best in life.
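
Here's a rough sketch of the naive pipeline I have in mind, assuming a pretrained captioner, a gym-style environment, and a generic RL agent - every name here is a hypothetical stand-in, not a real system:

```python
# A deliberately naive "cat camera" reward pipeline: reward the agent whenever
# the caption of the current camera frame mentions the goal word.
# get_camera_frame / caption_image / env / agent are all hypothetical stand-ins.

def goal_reward(frame, caption_image, goal_word="cat"):
    """Reward 1.0 if the captioner's description of the frame mentions the goal word."""
    caption = caption_image(frame)          # e.g. "a cat sitting on a couch"
    return 1.0 if goal_word in caption.lower() else 0.0

def run_episode(env, agent, caption_image, goal_word="cat", steps=100):
    obs = env.reset()
    for _ in range(steps):
        action = agent.act(obs)
        obs, _, done, _ = env.step(action)
        # The "natural language goal" enters only through this reward signal -
        # which is exactly why this scheme can look for cats, but not for
        # what's best in life.
        agent.observe(obs, reward=goal_reward(obs, caption_image, goal_word))
        if done:
            break
```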

This is the bind I got myself into, writing this post. Value learning schemes that are simple are wrong (issues with the cat camera above are left as an exercise to the reader), and value learning schemes that seem promising have been selected for incomprehensibility and poor epistemic luck. So I tried to split the difference, favoring interestingness over simplicity.

Here are some of the rules of thumb I used when thinking of ways to apply natural language processing to value learning:

First, I wanted to avoid the scheme having glaringly unspecified parts. It's very easy to be lazy and not specify something well enough that it could actually choose actions, or to feel like I've made progress without being able to apply it. Usually either of these meant I was sweeping problems under the rug - the right level of specificity involves sweeping out some of those cobwebs.

Second, I needed to encourage myself to be specific about the intended purpose of natural language processing in each particular scheme. Yes, the dream is that it "includes common sense" or something like that, but that's not specific enough to tell whether you're solving the intended problem without unnecessary side effects, or to explain why different methods get different results.

It was profitable to think of natural language processing as being targeted at the problem of alien concepts [LW · GW]: when the AI can match your training examples but still fail to generalize how you want because it's representing your examples in an alien way. For example, an image classifier might learn to distinguish dogs by the texture of their fur, but we're not going to be happy with how that generalizes to fur-less dogs or dog-less fur. Now replace "dogs" with "human values" and "fur" with "superficial features that work well on the training set."

An even more specific purpose of natural language would be "greedy reification" - actively trying to form concepts that correspond to linguistic tokens. So if we have a word "dog," we want to incentivize the AI to form a concept that picks out dogs in the world-model, and then the hope is that this also works on "human values."
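
To make "greedy reification" a bit more concrete, here's a minimal sketch of one pressure you could add during training - my own illustrative construction, not something specified anywhere in this post. The probe head and the loss weighting are assumptions:

```python
import torch
import torch.nn as nn

class ReificationProbe(nn.Module):
    """Auxiliary head that tries to read word-presence ("dog", "gym", ...)
    straight out of the world-model's latent state, pushing the latent to
    contain concepts that line up with linguistic tokens."""
    def __init__(self, latent_dim, vocab_size):
        super().__init__()
        self.probe = nn.Linear(latent_dim, vocab_size)

    def forward(self, z, word_labels):
        # word_labels: multi-hot (batch, vocab_size) - which words describe the scene
        logits = self.probe(z)
        return nn.functional.binary_cross_entropy_with_logits(logits, word_labels)

# Added on top of whatever the main objective is, e.g.:
# total_loss = prediction_loss + lambda_reify * reification_probe(z, word_labels)
```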

4 - Mort

So here's a value learning scheme: try to squish the world and natural language into the same latent space, just with different input/output functions.

Training this simultaneous model might just be separately trying to do encoding-decoding or prediction tasks with sensory data and text, but more plausibly it should involve translation tasks where we can associate words with sensory environments. The model required is somewhat subtle, because we don't want words associated with raw sense data; we want words associated with the state of the AI's model of the world. This means that, to the world-model, the shared latent space should look like the persistent state used for sequence prediction or encoding-decoding of sequences of sense data, with the transition dynamics partially captured in the shared information. It also means the language model should look like sequence prediction or encoding with some local state consisting of those high-level features.
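
To pin down the shape of this, here's a minimal sketch - my own construction under the assumptions above, not a worked-out design - with a recurrent route for sensory observations and a non-recurrent route for text features, both landing in one latent space:

```python
import torch
import torch.nn as nn

class SharedLatentModel(nn.Module):
    """One latent space, two input/output routes: a recurrent route for sensory
    observations (persistent world-state) and a feedforward route for text."""
    def __init__(self, obs_dim, text_dim, latent_dim):
        super().__init__()
        # World route: recurrent, so the latent acts as persistent model state.
        self.obs_rnn = nn.GRUCell(obs_dim, latent_dim)
        self.obs_decoder = nn.Linear(latent_dim, obs_dim)
        # Text route: non-recurrent encoder/decoder on patches of text features.
        self.text_encoder = nn.Linear(text_dim, latent_dim)
        self.text_decoder = nn.Linear(latent_dim, text_dim)

    def step_world(self, obs, z_prev):
        z = self.obs_rnn(obs, z_prev)          # update persistent state from sense data
        return z, self.obs_decoder(z)          # reconstructed/predicted observation

    def encode_text(self, text_feats):
        return self.text_encoder(text_feats)   # text -> same latent space

    def describe(self, z):
        return self.text_decoder(z)            # latent world-state -> text features

def translation_loss(model, obs_seq, text_feats):
    """Train the two routes to land in the same place: run the world route over a
    sensory sequence, then ask that its state match the encoding of the paired
    description (and decode back to that description)."""
    z = torch.zeros(obs_seq.shape[1], model.obs_rnn.hidden_size)
    for obs in obs_seq:                        # obs_seq: (time, batch, obs_dim)
        z, _ = model.step_world(obs, z)
    z_text = model.encode_text(text_feats)
    return ((z - z_text) ** 2).mean() + ((model.describe(z) - text_feats) ** 2).mean()
```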

If I haven't said anything impossible so far, we could use sufficiently advanced technology to train this simultaneous model so that it's good at understanding both the world and language, and competent at turning one into the other when it comes to simple training examples. Can you now solve value learning by giving it a bunch of English descriptions of what we want ("human values satisfied," "do the right thing," "a fulfilling and cosmopolitan future for the galaxy," etc.), and coding it to choose actions that make the state of the world like that?
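
One way to cash out "choose actions that make the state of the world like that" - sketched under the assumption that we have something like the shared model above, plus an assumed learned transition model `dynamics(z, action)` over the latent; the distance-in-latent-space objective is my own stand-in:

```python
import torch

def choose_action(model, dynamics, z_now, candidate_actions, goal_text_feats):
    """Pick the action whose predicted next latent state is closest to the
    encoding of the English goal description ("human values satisfied", ...)."""
    z_goal = model.encode_text(goal_text_feats)
    best_action, best_dist = None, float("inf")
    for action in candidate_actions:
        z_next = dynamics(z_now, action)       # predicted world-state after acting
        dist = torch.norm(z_next - z_goal).item()
        if dist < best_dist:
            best_action, best_dist = action, dist
    return best_action
```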

Looking on the bright side first, what advantages might this have?

What are some big issues with this? Take a second if you like.

No, really, I'd be interested in what people come up with on their own. I don't understand this family of schemes as well as I'd like.

Ready? Okay:

5 - Wrap-up

Going back to the artificial intentional stance and the problem of alien concepts, it seems like this helps in some ways but not in others.

It seems to help with the intentional stance at the object level - the everyday work of translating "Charlie wants to go to the gym" into reasonable actions - but not at the meta-level. It's doubtful that it's modeling humans how they want to be modeled. Maybe this indicates that it would be profitable to break down this concept further. It also might spark your imagination about how to take the same information about humans and end up with something that models humans in a variable way.

A different thing is going on in the department of alien concepts, where we've run into the stress that Goodhart's law places on concepts. Instead of thinking about human-modeling, this makes me want to focus on the decision procedure and training. Can we find a decision procedure that leverages prediction and translation tasks in a way that puts less stress on the concepts? Can we use a training procedure that reduces the context shift when trying to use the model to choose actions?

Overall I think this avenue is pretty interesting to think about. Maybe this also serves as a concrete example of what I mean by trying to create the artificial intentional stance, which can be generalized from language to other options for learning about humans.

2 comments

Comments sorted by top scores.

comment by steve2152 · 2019-10-12T09:29:26.626Z · score: 3 (2 votes) · LW · GW

So here's a value learning scheme: try to squish the world and natural language into the same latent space, just with different input/output functions.

What's the input-output function in the two cases?

I'm also generally confused about why you're calling this thing "two linked models" rather than "one model". For example, I would say that a brain has one world model that is interlinked with speech and vision and action, etc. Right?

comment by Charlie Steiner · 2019-10-13T23:52:13.700Z · score: 4 (2 votes) · LW · GW

What's the input-output function in the two cases?

Good question :) We need the AI to have a persistent internal representation of the world so that it's not limited to preferences directly over sensory inputs. Many possible functions would work, and in various places (like comparison to CIRL), I've mentioned that it would be really useful to have some properties of a hierarchical probabilistic model, but as an aid to imagination I mostly just thought of a big ol' RNN.

We want the world model to share associations between words and observations, but we don't want it to share dynamics (one text-state following another is a very different process from one world-state following another). It might be sufficient for the encoding/decoding functions from observations to be RNNs, and the encoding/decoding functions from text just to be non-recurrent neural networks on patches of text.

That is, if we call the text $s_t$, the observations (at time $t$) $o_t$, and the internal state $z_t$, we'd have the encoding function $z_t = f_{\text{enc}}(o_t, z_{t-1})$, decoding something like $\hat{o}_t = f_{\text{dec}}(z_t)$, and also $z_t = g_{\text{enc}}(s_t)$ and $\hat{s}_t = g_{\text{dec}}(z_t)$. And then you could compose these functions to get things like $g_{\text{dec}}(f_{\text{enc}}(o_t, z_{t-1}))$ - a text description of the model's current picture of the world. Does this answer your question, and do you think it brings new problems to light? I'm more interested in general problems or patterns than in problems specific to RNNs (like initialization of the state), because I'm sort of assuming that this is just a placeholder for future technology that would have a shot at learning a model of the entire world.
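
As bare signatures (my notation from above - the function names are just placeholders), the sort of thing I'm imagining is:

```python
# Hypothetical signatures for the four maps above, plus the kind of
# composition being described (observations -> text).

def f_enc(o_t, z_prev):   # recurrent world encoder: (observation, previous state) -> state
    ...

def f_dec(z_t):           # world decoder: state -> predicted observation
    ...

def g_enc(s_t):           # text encoder (non-recurrent): text -> state
    ...

def g_dec(z_t):           # text decoder: state -> text
    ...

def describe_world(o_t, z_prev):
    # Compose world-encoding with text-decoding: say in words what the
    # model currently thinks the world is like.
    return g_dec(f_enc(o_t, z_prev))
```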

For example, I would say that a brain has one world model that is interlinked with speech and vision and action, etc. Right?

Right. I sort of flip-flop on this, also calling it "one simultaneous model" plenty. If there are multiple "models" in here, it's because different tasks use different subsets of its parts, and if we do training on multiple tasks, those subsets get trained together. But of course the point is that the subsets overlap.