Intuitions about goal-directed behavior

post by Rohin Shah (rohinmshah) · 2018-12-01T04:25:46.560Z · LW · GW · 15 comments

Contents

  Generalization to novel circumstances
  Empowerment
  Our understanding of the behavior
  Summary

One broad argument for AI risk is the Misspecified Goal argument:

The Misspecified Goal Argument for AI Risk: Very intelligent AI systems will be able to make long-term plans in order to achieve their goals, and if their goals are even slightly misspecified then the AI system will become adversarial and work against us.

My main goal in this post is to make conceptual clarifications and suggest how they affect the Misspecified Goal argument, without making any recommendations about what we should actually do. Future posts will argue more directly for a particular position. As a result, I will not be considering other arguments for focusing on AI risk even though I find some of them more compelling.

I think of this as a concern about long-term goal-directed behavior. Unfortunately, it’s not clear how to categorize behavior as goal-directed vs. not. Intuitively, any agent that searches over actions and chooses the one that best achieves some measure of “goodness” is goal-directed (though there are exceptions, such as the agent that selects actions that begin with the letter “A”). (ETA: I also think that agents that show goal-directed behavior because they are looking at some other agent are not goal-directed themselves -- see this comment [AF(p) · GW(p)].) However, this is not a necessary condition: many humans are goal-directed, but there is no goal baked into the brain that they are using to choose actions.
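The "search over actions, pick the best" template and the letter-"A" exception can be made concrete with a toy sketch (illustrative only; the action names and the `profit` scores are invented for the example):

```python
def search_agent(actions, goodness):
    """Generic template: search over actions, choose the one
    that maximizes some measure of 'goodness'."""
    return max(actions, key=goodness)

# A goal-directed-looking instance: goodness tracks something in the world.
profit = {"invest": 3, "save": 1, "spend": -2}
best_for_profit = search_agent(profit.keys(), lambda a: profit[a])

# The exception from the text: goodness only inspects the action's *name*
# (does it start with "A"?), so the agent fits the search-and-maximize
# template without intuitively pursuing any goal.
starts_with_a = search_agent(["Advance", "retreat", "wait"],
                             lambda a: a.startswith("A"))
```

Both agents run the exact same argmax; the intuitive difference in goal-directedness lies entirely in what the goodness function depends on.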

This is related to the concept of optimization [LW · GW], though with intuitions around optimization we typically assume that we know the agent’s preference ordering, which I don’t want to assume here. (In fact, I don’t want to assume that the agent even has a preference ordering.)

One potential formalization is to say that goal-directed behavior is any behavior that can be modelled as maximizing expected utility for some utility function; in the next post I will argue that this does not properly capture the behaviors we are worried about. In this post I’ll give some intuitions about what “goal-directed behavior” means, and how these intuitions relate to the Misspecified Goal argument.

Generalization to novel circumstances

Consider two possible agents for playing some game, let’s say TicTacToe. The first agent looks at the state and the rules of the game, and uses the minimax algorithm to find the optimal move to play. The second agent has a giant lookup table that tells it what move to play given any state. Intuitively, the first one is more “agentic” or “goal-driven”, while the second one is not. But both of these agents play the game in exactly the same way!

The difference is in how the two agents generalize to new situations. Let’s suppose that we suddenly change the rules of TicTacToe -- perhaps now the win condition is reversed, so that anyone who gets three in a row loses. The minimax agent is still going to be optimal at this game, whereas the lookup-table agent will lose against any opponent with half a brain. The minimax agent looks like it is “trying to win”, while the lookup-table agent does not. (You could say that the lookup-table agent is “trying to take actions according to <policy>”, but this is a weird complicated goal so maybe it doesn’t count.)
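The contrast above can be sketched in code (a toy implementation I am supplying, not from the original post): a minimax agent parameterized by the win condition, and a lookup-table agent whose table was filled in under the original rules. When the win condition is reversed, the minimax agent recomputes and avoids completing three in a row, while the lookup-table agent replays its memorized, now-losing move.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def three_in_a_row(board, p):
    return any(all(board[i] == p for i in line) for line in LINES)

def terminal_value(board, me, win_is_good):
    """+1 / -1 / 0 from `me`'s perspective if the game is over, else None.
    `win_is_good=False` is the reversed game: three in a row LOSES."""
    for p in ("X", "O"):
        if three_in_a_row(board, p):
            winner = p if win_is_good else ("O" if p == "X" else "X")
            return 1 if winner == me else -1
    return 0 if " " not in board else None

def minimax(board, player, me, win_is_good):
    """Exhaustive game-tree search; returns (value for `me`, best move)."""
    v = terminal_value(board, me, win_is_good)
    if v is not None:
        return v, None
    opp = "O" if player == "X" else "X"
    best_v, best_m = None, None
    for m in (i for i, c in enumerate(board) if c == " "):
        child = board[:m] + (player,) + board[m + 1:]
        cv, _ = minimax(child, opp, me, win_is_good)
        if best_v is None or (cv > best_v if player == me else cv < best_v):
            best_v, best_m = cv, m
    return best_v, best_m

def minimax_agent(board, player, win_is_good):
    return minimax(board, player, player, win_is_good)[1]

# Lookup-table agent: its table is filled in under the ORIGINAL rules,
# and it keeps consulting that table even if the rules later change.
table = {}
def lookup_agent(board, player):
    if (board, player) not in table:
        table[(board, player)] = minimax_agent(board, player, win_is_good=True)
    return table[(board, player)]

# X to move with two in a row:   X X . / O O . / . . .
state = ("X", "X", " ", "O", "O", " ", " ", " ", " ")
```

Under the original rules both agents play cell 2 and win on the spot. Under the reversed rules, `minimax_agent(state, "X", win_is_good=False)` returns a different cell (completing the row would now lose), while `lookup_agent(state, "X")` still plays its memorized move 2 and immediately loses.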

In general, when we say that an agent is pursuing some goal, this is meant to allow us to predict how the agent will generalize to some novel circumstance. This sort of reasoning is critical for the Goal-Directed argument for AI risk. For example, we worry that an AI agent will prevent us from turning it off, because that would prevent it from achieving its goal: “You can't fetch the coffee if you're dead.” This is a prediction about what an AI agent would do in the novel circumstance where a human is trying to turn the agent off.

This suggests a way to characterize these sorts of goal-directed agents: there is some goal such that the agent’s behavior in new circumstances can be predicted by figuring out which behavior best achieves the goal. There's a lot of complexity in the space of goals we consider: something like "human well-being" should count, but "the particular policy <x>" and “pick actions that start with the letter A” should not. When I use the word goal I mean to include only the first kind, even though I currently don’t know theoretically how to distinguish between the various cases.

Note that this is in stark contrast to existing AI systems, which are particularly bad at generalizing to new situations.


Honestly, I’m surprised it’s only 90%. [1]

Empowerment

We could also look at whether or not the agent acquires more power and resources. It seems likely that an agent that is optimizing for some goal over the long term would want more power and resources in order to more easily achieve that goal. In addition, the agent would probably try to improve its own algorithms in order to become more intelligent.

This feels like a consequence of goal-directed behavior, and not its defining characteristic, because it is about being able to achieve a wide variety of goals, instead of a particular one. Nonetheless, it seems crucial to the broad argument for AI risk presented above, since an AI system will probably need to first accumulate power, resources, intelligence, etc. in order to cause catastrophic outcomes.

I find this concept most useful when thinking about the problem of inner optimizers, where in the course of optimization through a rich space you stumble across a member of the space that is itself doing optimization, but for a related but still misspecified metric. Since the inner optimizer is being “controlled” by the outer optimization process, it is probably not going to cause major harm unless it is able to “take over” the outer optimization process, which sounds a lot like accumulating power. (This discussion is extremely imprecise and vague; see Risks from Learned Optimization for a more thorough discussion.)

Our understanding of the behavior

There is a general pattern in which as soon as we understand something, it becomes something lesser. As soon as we understand rainbows, they are relegated to the “dull catalogue of common things” [LW · GW]. This suggests a somewhat cynical explanation of our concept of “intelligence”: an agent is considered intelligent if we do not know how to achieve the outcomes it does using the resources that it has (in which case our best model for that agent may be that it is pursuing some goal, reflecting our tendency to anthropomorphize). That is, our evaluation about intelligence is a statement about our epistemic state. Some examples that follow this pattern are:

Note that an alternative hypothesis is that humans equate intelligence with mystery; as we learn more and remove mystery around e.g. evolution, we automatically think of it as less intelligent.

To the extent that the Misspecified Goal argument relies on this intuition, the argument feels a lot weaker to me. If the Misspecified Goal argument rested entirely upon this intuition, then it would be asserting that because we are ignorant about what an intelligent agent would do, we should assume that it is optimizing a goal, which means that it is going to accumulate power and resources and lead to catastrophe. In other words, it is arguing that assuming that an agent is intelligent definitionally means that it will accumulate power and resources. This seems clearly wrong; it is possible in principle to have an intelligent agent that nonetheless does not accumulate power and resources.

Also, the argument is not saying that in practice most intelligent agents accumulate power and resources. It says that we have no better model to go on than “goal-directed”, and then pushes this model to extreme scenarios where we should have a lot more uncertainty.

To be clear, I do not think that anyone would endorse the argument as stated. I am suggesting as a possibility that the Misspecified Goal argument relies on us incorrectly equating superintelligence with “pursuing a goal” because we use “pursuing a goal” as a default model for anything that can do interesting things, even if that is not the best model to be using.

Summary

Intuitively, goal-directed behavior can lead to catastrophic outcomes with a sufficiently intelligent agent, because the optimal behavior for even a slightly misspecified goal can be very bad according to the true goal. However, it’s not clear exactly what we mean by goal-directed behavior. Often, an algorithm that searches over possible actions and chooses the one with the highest “goodness” will be goal-directed, but this is neither necessary nor sufficient.

“From the outside”, it seems like a goal-directed agent is characterized by the fact that we can predict the agent’s behavior in new situations by assuming that it is pursuing some goal, and as a result it acquires power and resources. This can be interpreted either as a statement about our epistemic state (we know so little about the agent that our best model is that it pursues a goal, even though this model is not very accurate or precise) or as a statement about the agent (predicting the behavior of the agent in new situations based on pursuit of a goal actually has very high precision and accuracy). These two views have very different implications for the validity of the Misspecified Goal argument for AI risk.


[1] This is an entirely made-up number.


Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2018-12-02T10:02:16.730Z · LW(p) · GW(p)

One broad argument for AI risk is the Misspecified Goal argument

Do you have a citation for this? Who are you arguing against, or whose argument are you trying to clarify?

I am suggesting as a possibility that the Misspecified Goal argument relies on us incorrectly equating superintelligence with “pursuing a goal” because we use “pursuing a goal” as a default model for anything that can do interesting things, even if that is not the best model to be using.

I tend to have a different version of the Misspecified Goal argument in mind which I think doesn't have this problem:

  1. At least some humans are goal-directed at least some of the time.
  2. It's likely possible to create artificial agents that are better at achieving goals than humans.
  3. It will be very tempting for some humans to build such agents in order to help those humans achieve their goals (either instrumental goals or their understanding of their terminal goals).
  4. The most obvious way for other humans to compete with or defend against such goal-oriented artificial agents is to build their own goal-oriented artificial agents.
  5. If at that point we do not know how to correctly specify the goals for such artificial agents (and we haven't figured out how to stop / compete with / defend against such agents some other way), the universe will end up being directed towards the wrong goals, which may be catastrophic depending on various contingencies such as what the correct meta and normative ethics turn out to be, if the "incorrect" goals we build into such agents are good enough to capture most of our scalable values [LW · GW], etc.
Replies from: rohinmshah, Vladimir_Nesov
comment by Rohin Shah (rohinmshah) · 2018-12-03T03:49:18.764Z · LW(p) · GW(p)
Do you have a citation for this?

I briefly looked for and did not find a good citation for this.

Who are you arguing against, or whose argument are you trying to clarify?

I'm not sure. However, I have a lot of conversations where it seems to me that the other person believes the Misspecified Goal Argument. Currently, if I were to meet a MIRI employee I hadn't met before, I would be unsure whether the Misspecified Goal Argument is their primary reason for worrying about AI risk. If I meet a rationalist who takes the MIRI perspective on AI risk but isn't at MIRI themselves, by default I assume that their primary reason for caring about AI risk is the Misspecified Goal argument.

I do want to note that I am primarily trying to clarify here, I didn't write this as an argument against the Misspecified Goal argument. In fact, conditional on the AI having goals, I do agree with the Misspecified Goal argument.

I tend to have a different version of the Misspecified Goal argument in mind which I think doesn't have this problem

Yeah, I think this is a good argument, and I want to defer to my future post on the topic, which should come out on Wednesday. The TL;DR is that I agree with the argument but it implies a broader space of potential solutions than "figure out how to align a goal-directed AI".

(Sorry that I didn't adequately point to different arguments and what I think about them -- I didn't do this because it would make for a very long post, and it's instead being split into several posts, and this particular argument happens to be in the post on Wednesday.)

comment by Vladimir_Nesov · 2018-12-02T13:33:07.522Z · LW(p) · GW(p)

My guess is that agents that are not primarily goal-directed can be good at defending against goal-directed agents (especially with first mover advantage, preventing goal-directed agents from gaining power), and are potentially more tractable for alignment purposes, if humans coexist with AGIs during their development and operation (rather than only exist as computational processes inside the AGI's goal, a situation where a goal concept becomes necessary).

I think the assumption that useful agents must be goal-directed has misled a lot of discussion of AI risk in the past. Goal-directed agents are certainly a problem, but not necessarily the solution. They are probably good for fixing astronomical waste, but maybe not AI risk.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-12-02T21:50:47.856Z · LW(p) · GW(p)

I think I disagree with this at least to some extent. Humans are not generally safe agents, and in order for not-primarily-goal-directed AIs to not exacerbate humans' safety problems (for example by rapidly shifting their environments/inputs out of a range where they are known to be relatively safe), it seems that we have to solve many of the same metaethical/metaphilosophical problems that we'd need to solve to create a safe goal-directed agent. I guess in some sense the former has lower "AI risk" than the latter in that you can plausibly blame any bad outcomes on humans instead of AIs, but to me that's actually a downside because it means that AI creators can more easily deny their responsibility to help solve those problems.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2018-12-03T01:06:01.556Z · LW(p) · GW(p)

Learning how to design goal-directed agents seems like an almost inevitable milestone on the path to figuring out how to safely elicit human preference in an actionable form. But the steps involved in eliciting and enacting human preference don't necessarily make use of a concept of preference or goal-directedness. An agent with a goal aligned with the world can't derive its security from the abstraction of goal-directedness, because the world determines that goal, and so the goal is vulnerable to things in the world, including human error. Only self-contained artificial goals are safe from the world and may lead to safety of goal-directed behavior. A goal built from human uploads that won't be updated from the world in the future gives safety from other things in the world, but not from errors of the uploads.

When the issue is figuring out which influences of the world to follow, it's not clear that goal-directedness remains salient. If there is a goal, then there is also a world-in-the-goal and listening to your own goal is not safe! Instead, you have to figure out which influences in your own goal to follow. You are also yourself part of the world and so there is an agent-in-the-goal that can decide aspects of preference. This framing where a goal concept is prominent is not obviously superior to other designs that don't pursue goals, and instead focus on pointing at the appropriate influences from the world. For example, a system may seek to make reliable uploads, or figure out which decisions of uploads are errors, or organize uploads to make sense of situations outside normal human environments, or be corrigible in a secure way, so as to follow directions of a sane external operator and not of an attacker. Once we have enough of such details figured out (none of which is a goal-directed agent), it becomes possible to take actions in the world. At that point, we have a system of many carefully improved kluges that further many purposes in much the same way as human brains do, and it's not clearly an improvement to restructure that system around a concept of goals, because that won't move it closer to the influences of the world it's designed to follow.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-12-03T04:18:53.814Z · LW(p) · GW(p)

This framing where a goal concept is prominent is not obviously superior to other designs that don’t pursue goals, and instead focus on pointing at the appropriate influences from the world. For example, a system may seek to make reliable uploads, or figure out which decisions of uploads are errors, or organize uploads to make sense of situations outside normal human environments, or be corrigible in a secure way, so as to follow directions of a sane external operator and not of an attacker.

This makes me think I probably misunderstood what you meant earlier by "agents that are not primarily goal-directed". Do you have a reference that you can point me to that describes what you have in mind in more detail?

comment by Gurkenglas · 2018-12-01T12:02:57.756Z · LW(p) · GW(p)

According to GAZP vs. GLUT [LW · GW] with consciousness replaced by goal-directed behavior, we may want to say that goal-directed behavior is involved in the creation or even just specification of the giant lookup table TicTacToe agent.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2018-12-02T00:47:58.996Z · LW(p) · GW(p)

Yeah, that's right. There's some sort of goal/specification/desire that picks out the particular behavior we want out of the large space of possible behaviors. However, that goal/specification/desire need not be internal to the AI system, and it need not be long-term.

(Side note: it's not that giant)

comment by Rafael Harth (sil-ver) · 2020-09-03T16:53:20.926Z · LW(p) · GW(p)

Intuitively, any agent that searches over actions and chooses the one that best achieves some measure of “goodness” is goal-directed (though there are exceptions, such as the agent that selects actions that begin with the letter “A”). (ETA: I also think that agents that show goal-directed behavior because they are looking at some other agent are not goal-directed themselves -- see this comment [LW(p) · GW(p)].)

This post did not convince me that the bolded part is not (almost) a solution.

I would honestly be very surprised if [whatever definition of goal-directedness we end up with] will be such that approval-directed agents are not goal-directed. In what sense is "do what this other agent likes?" not a goal? It also seems like you could define lots of things that are kind of like approval-directedness (do what agent A likes, do what is good for A, do what A will reward me for). Are some of those goal-directed and others not?

Excluding approval-directedness feels like patching the natural formulation of goal-directedness: 'It has to be any kind of goal except doing what someone likes', and even then there seem to be many goals that are similar to 'do what someone likes' which would have to be patched as well. What's wrong with saying approval-directedness is a special kind of goal-directedness?

 I agree that the agent that selects actions that begin with 'A' is a counter-example, but this feels like it belongs to a specific class of counter-examples where it's okay (or even necessary) to patch them. As in, you have these real things (possible actions), you have your representation of them, and now you define something in terms of the representation. (This feels similar to the agent that runs a search algorithm for a random objective, stops as soon as one of its registers is set to 56789ABC and then outputs the policy it's currently considering.) A definition just has to say that the 'goodness' criterion is not allowed to depend on the representation (names of policies/parts of the code/etc.).

Is there a counter-example (something that runs a search but isn't goal directed) that isn't about the representation?

(This discussion is extremely imprecise and vague; see the upcoming MIRI paper on “The Inner Alignment Problem” for a more thorough discussion.)

I assume an updated version of this would link to the Risks from Learned Optimization paper.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2020-09-04T05:40:09.407Z · LW(p) · GW(p)

Yeah I think I broadly agree with this perspective (in contrast to how I used to think about it), but I still think there is a meaningful distinction in an AI agent that pursues goals that other agents (e.g. humans) have. I would no longer call it "not goal-directed", but it still seems like an important distinction.

I assume an updated version of this would link to the Risks from Learned Optimization paper.

Yes, updated, thanks.

comment by Wei Dai (Wei_Dai) · 2019-01-06T06:54:52.213Z · LW(p) · GW(p)

However, it’s not clear exactly what we mean by goal-directed behavior. A sufficient condition is that the algorithm searches over possible actions and chooses the one with the highest “goodness”, but this is not a necessary condition.

Even this seems wrong or confusing, since it would appear to include approval-directed agents under "goal-directed behavior" (since an approval-directed agent searches over possible actions and chooses the one with the highest expected approval, which is a kind of "goodness").

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-01-07T14:10:39.042Z · LW(p) · GW(p)

Agreed, changed that sentence so that it no longer claims it is a sufficient condition. Thanks for catching that!

comment by Chris_Leong · 2018-12-03T14:14:19.784Z · LW(p) · GW(p)

"I find this concept most useful when thinking about the problem of inner optimizers, where in the course of optimization through a rich space you stumble across a member of the space that is itself doing optimization, but for a related but still misspecified metric." - Could you clarify what kind of algorithm you are imagining being run?

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2018-12-03T14:26:15.372Z · LW(p) · GW(p)

I could imagine this happening with standard deep RL over a long enough time horizon with enough compute. Again though, I want to defer to the upcoming sequence on the topic, which should have a good in-depth explanation.

comment by Riccardo Volpato (riccardo-volpato) · 2020-07-06T13:24:27.358Z · LW(p) · GW(p)

My rough mental summary of these intuitions

  • Generalisation ability suggests that behaviour is goal-directed because it demonstrates adaptability (and goals are more adaptable/compact ways of defining behaviour than alternatives, like enumeration)
  • Power grabs suggest that behaviour is goal-directed because they reveal instrumentalism
  • Our understanding of intelligence might be limited to human intelligence, which is sometimes goal-directed, so we use this as a proxy for intelligence (adding some, perhaps irrefutable, skepticism of goal-directedness as a model of intelligence)