AI safety without goal-directed behavior

post by Rohin Shah (rohinmshah) · 2019-01-07T07:48:18.705Z · LW · GW · 15 comments

Contents

  Why goal-directed behavior may not be required
  Implications
15 comments

When I first entered the field of AI safety, I thought of the problem as figuring out how to get the AI to have the “right” utility function. This led me to work on the problem of inferring values from demonstrators with unknown biases, despite the impossibility results in the area. I am now less excited about that avenue, because I am pessimistic about the prospects of ambitious value learning (for the reasons given in the first part of this sequence).

I think this happened because the writing on AI risk that I encountered made the pervasive assumption that any superintelligent AI agent must be maximizing some utility function over the long-term future, which leads to goal-directed behavior and convergent instrumental subgoals. It’s often not stated as an assumption; rather, inferences are made that only go through if you have the background model that the AI is goal-directed. This makes the assumption particularly hard to question, since you don’t realize that it is even there.

Another reason that this assumption is so easily accepted is that we have a long history of modeling rational agents as expected utility maximizers, and for good reason: there are many coherence arguments saying that, given that you have preferences/goals, if you aren’t using probability theory and expected utility theory, then you can be taken advantage of (for example, money-pumped through a series of trades, each of which you individually accept). It’s easy to make the inference that a superintelligent agent must be rational, and therefore it must be an expected utility maximizer.

Because this assumption was so embedded in how I thought about the problem, I had trouble imagining how else to even consider the problem. I would guess this is true for at least some other people, so I want to summarize the counterargument, and list a few implications, in the hope that this makes the issue clearer.

Why goal-directed behavior may not be required

The main argument of this chapter is that it is not required that a superintelligent agent take actions in pursuit of some goal. It is possible to write algorithms that select actions without doing a search over possible actions and rating their consequences according to an explicitly specified simple function. There is no coherence argument that says that your agent must have preferences or goals; it is perfectly possible for the agent to take actions with no goal in mind, simply because it was programmed to do so, and this remains true even when the agent is intelligent.
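As a toy illustration of that distinction (the names and setup below are made up for illustration, not a proposal for how to build such a system), compare a procedure that searches over actions and rates their predicted consequences with an explicit utility function against one that selects actions directly from a fixed policy, with no utility function or consequence evaluation anywhere in the computation:

```python
# Toy contrast between goal-directed action selection and a fixed policy.

def goal_directed_act(actions, predict_outcome, utility):
    """Search over actions and pick the one whose predicted
    consequences score highest under an explicit utility function."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

def fixed_policy_act(observation, policy_table):
    """Select an action directly from the current observation.
    There is no search over consequences and no utility function;
    the mapping is just whatever it was programmed to be."""
    return policy_table[observation]
```

The point is not that a lookup table is intelligent; it is that nothing forces the action-selection procedure of an intelligent system to take the first form.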

It seems quite likely that by default a superintelligent AI system would be goal-directed anyway, because of economic efficiency arguments. However, this is not set in stone, as it would be if coherence arguments implied goal-directed behavior. Given the negative results around goal-directed behavior, it seems like the natural path forward is to search for alternatives that still allow us to get economic efficiency.

Implications

At a high level, I think that the main implication of this view is that we should be considering other models for future AI systems besides optimizing over the long term for a single goal or for a particular utility or reward function. Here are some other potential models:

There are versions of these scenarios which are compatible with the framework of an AI system optimizing for a single goal:

I do not want these versions of the scenarios, since they make it tempting to once again say “but if you get the goal even slightly wrong, then you’re in big trouble”. This would likely be true if we built an AI system that could maximize an arbitrary function and then tried to program in the utility function we care about, but that is not the only option. It seems possible to build systems in such a way that these properties are inherent in the way that they reason, such that it’s not even coherent to ask what happens if we “get the utility function slightly wrong”.

Note that I’m not claiming that I know how to build such systems; I’m just claiming that we don’t know enough yet to reject the possibility that we could build such systems. Given how hard it seems to be to align systems that explicitly maximize a reward function, we should explore these other methods as well.

Once we let go of the idea of optimizing for a single goal, it becomes possible to think about other ways in which we could build AI systems, and there are more insights to be had about how we could build an AI system that does what we intend instead of what we say. (In my case it was reversed -- I heard a lot of good insights that don’t fit in the framework of goal-directed optimization, and this eventually led me to let go of the assumption of goal-directed optimization.) We’ll explore some of these in the next chapter.

15 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2019-01-08T05:32:39.956Z · LW(p) · GW(p)

I'm curious if you're more optimistic about non-goal-directed approaches to AI safety than goal-directed approaches, or if you're about equally optimistic (or rather equally pessimistic). The latter would still justify your conclusion that we ought to look into non-goal-directed approaches, but if that's the case I think it would be good to be explicit about it so as to not unintentionally give people false hope (ETA: since so far in this sequence you've mostly talked about the problems associated with goal-directed agents and not so much about problems associated with the alternatives). I think I'm about equally pessimistic, because while goal-directed agents have a bunch of safety problems, they also have a number of advantages that may be pretty hard to replicate in the alternative approaches.

  1. We have an existing body of theory about goal-directed agents (which MIRI is working on refining and expanding) which plausibly makes it possible to one day reason rigorously about the kinds of goal-directed agents we might build and determine their safety properties. Paul and others working on his approach are (as I understand it) trying to invent a theory of corrigibility, but I don't know if such a thing even exists in platonic theory space. And if it did, we're starting from scratch so it might take a long time to reach parity with the theory of goal-directed agents.
  2. Goal-directed agents give you economic efficiency "for free". Alternative approaches have to simultaneously solve efficiency and safety, and may end up approximating goal-directed agents anyway due to competitive pressures.
  3. Goal-directed agents can more easily avoid a bunch of human safety problems that are inherited by alternative approaches which all roughly follow the human-in-the-loop paradigm. These include value drift (including vulnerability to corruption/manipulation), problems with cooperation/coordination, lack of transparency/interpretability, and general untrustworthiness of humans.
Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-01-08T17:53:59.363Z · LW(p) · GW(p)

While I mostly agree with all three of your advantages, I am more optimistic about non-goal-directed approaches to AI safety. I think this is primarily because I'm generally optimistic about AI safety, and the well-documented problems with goal-directed agents make me pessimistic about that particular approach.

If I had to guess at what drives my optimism that you don't share, it would be that we can aim for an adequate, not-formalized solution, and this will very likely be okay. All else equal, I would prefer a more formal solution, but I don't think we have the time for that. I would guess that while this lack of formality makes me only a little more worried, it is a big source of worry for you and MIRI researchers. This means that argument 1 isn't a big update for me.

Re: argument 2, it's worth noting that a system that has some chance of causing catastrophe is going to be less economically efficient. Now people might build it anyway because they underestimate the chance of catastrophe, or because of race dynamics, but I'm hopeful that (assuming it's true) we can convince all the relevant actors that goal-directed agents have a significant chance of causing catastrophe. In that case, non-goal-directed agents have a lower bar to meet. But overall this is a significant update.

Re: argument 3, I don't really see why goal-directed agents are more likely to avoid human safety problems. It seems intuitively plausible -- if you get the right goal, then you don't have to rely on humans, and so you avoid their safety problems. However, even with goal-directed agents, the goal has to come from somewhere, which means it comes from humans. (If not, we almost certainly get catastrophe.) So wouldn't the goal have all of the human safety problems anyway?

I'm also optimistic about our ability to solve human safety problems in non-goal-directed approaches -- see for example the reply [LW(p) · GW(p)] I just wrote on your CAIS comment.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-01-08T18:32:25.072Z · LW(p) · GW(p)

All else equal, I would prefer a more formal solution, but I don’t think we have the time for that.

I should have added that having a theory isn't just so we can have a more formal solution (which as you mention we might not have the time for) but it also helps us be less confused (e.g., have better intuitions) in our less formal thinking. (In other words I agree with what MIRI calls "deconfusion".) For example currently I find it really confusing to think about corrigible agents relative to goal-directed agents.

However, even with goal-directed agents, the goal has to come from somewhere, which means it comes from humans. (If not, we almost certainly get catastrophe.) So wouldn’t the goal have all of the human safety problems anyway?

The goal could come from idealized humans, or from a metaphilosophical algorithm, or be an explicit set of values that we manually specify. All of these have their own problems, of course, but they do avoid a lot of the human safety problems that the non-goal-directed approaches would have to address some other way.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-01-09T02:00:00.565Z · LW(p) · GW(p)
For example currently I find it really confusing to think about corrigible agents relative to goal-directed agents.

Strong agree, and I do think it's the biggest downside of trying to build non-goal-directed agents.

The goal could come from idealized humans, or from a metaphilosophical algorithm, or be an explicit set of values that we manually specify.

For the case of idealized humans, couldn't real humans defer to idealized humans if they thought that was better?

Similarly, it seems like a non-goal-directed agent could be instructed to use the metaphilosophical algorithm. I guess I could imagine a metaphilosophical algorithm such that following it requires you to be goal-directed, but it doesn't seem very likely to me.

For an explicit set of values, those values come from humans, so wouldn't they be subject to human safety problems? It seems like you would need to claim that humans are better at stating their values than acting in accordance with them, which seems true in some settings and false in others.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-01-09T05:09:58.547Z · LW(p) · GW(p)

For the case of idealized humans, couldn’t real humans defer to idealized humans if they thought that was better?

Real humans could be corrupted or suffer some other kind of safety failure before the choice to defer to idealized humans becomes a feasible option. I don't see how to recover from this, except [LW · GW] by making an AI with a terminal goal of deferring to idealized humans (as soon as it becomes powerful enough to compute what idealized humans would want).

Similarly, it seems like a non-goal-directed agent could be instructed to use the metaphilosophical algorithm. I guess I could imagine a metaphilosophical algorithm such that following it requires you to be goal-directed, but it doesn’t seem very likely to me.

That's a good point. Solving metaphilosophy does seem to have the potential to help both approaches about equally.

For an explicit set of values, those values come from humans, so wouldn’t they be subject to human safety problems? It seems like you would need to claim that humans are better at stating their values than acting in accordance with them, which seems true in some settings and false in others.

Well I'm not arguing that goal-directed approaches are more promising than non-goal-directed approaches, just that they seem roughly equally (un)promising in aggregate.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-01-09T10:19:18.850Z · LW(p) · GW(p)
Well I'm not arguing that goal-directed approaches are more promising than non-goal-directed approaches, just that they seem roughly equally (un)promising in aggregate.

Your first comment was about advantages of goal-directed agents over non-goal-directed ones. Your next comment talked about explicit value specification as a solution to human safety problems; it sounded like you were arguing that this was an example of an advantage of goal-directed agents over non-goal-directed ones. If you don't think it's an advantage, then I don't think we disagree here.

Real humans could be corrupted or suffer some other kind of safety failure before the choice to defer to idealized humans becomes a feasible option. I don't see how to recover from this, except [LW · GW] by making an AI with a terminal goal of deferring to idealized humans (as soon as it becomes powerful enough to compute what idealized humans would want).

That makes sense, I agree that goal-directed AI pointed at idealized humans could solve human safety problems, and it's not clear whether non-goal-directed AI could do something similar.

comment by Steven Byrnes (steve2152) · 2019-10-30T13:18:51.689Z · LW(p) · GW(p)

Rohin, I really like the distinction you draw between "build[ing] an AI system that could maximize an arbitrary function, and then [trying] to program in the utility function we care about" versus "build[ing] systems in such a way that these properties are inherent in the way that they reason." That was helpful.

However, it seems to me—and please correct me if I'm wrong!—that most or all CIRL papers are framing the problem in terms of understanding a generic goal-seeking system whose goal is "the human gets what they want". Then papers like The Off-Switch Game show that the goal of "the human gets what they want" leads to nice instrumental goals like not disabling off-switches. Do you agree?

So when I was reading CIRL papers, or reading Stuart Russell's new book, I did in fact keep thinking to myself "How do we make sure that the AI really has the goal of "The human gets what they want.", as opposed to a proxy to it that will diverge out-of-distribution?"

IDA / "act-based corrigibility" seems like more of an attempt to break out of the goal-seeking paradigm altogether, although I still haven't convinced myself that it succeeds.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-10-30T16:08:47.779Z · LW(p) · GW(p)

To be clear, this post was not arguing that CIRL is not goal-directed -- you'll notice that CIRL is not on my list of potential non-goal-directed models above.

I think CIRL is in this weird in-between place where it is kind of sort of goal-directed. You can think of three different kinds of AI systems:

  • An agent optimizing a known, definite utility function
  • An agent optimizing a utility function that it is uncertain about, that it gets information about from humans
  • A system that isn't maximizing any simple utility function at all

I claim the first is clearly goal-directed, and the last is not goal-directed. CIRL is in the second set, where it's not totally clear: its actions are driven by a goal, but that goal comes from another agent (a human). (This is also the case with imitation learning, and that case is also not clear -- see this thread [LW(p) · GW(p)].)
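To make that middle case concrete, here is a toy sketch (the candidate reward functions, the choice noise model, and all names are invented for illustration; this is not the actual CIRL formalism): the agent maintains a belief over candidate reward functions, updates that belief from observed human choices, and acts to maximize expected reward under the belief.

```python
# Toy sketch of an agent that is uncertain about its utility function
# and gets information about it from a human's choices.

candidate_rewards = {
    "likes_apples": lambda item: 1.0 if item == "apple" else 0.0,
    "likes_pears":  lambda item: 1.0 if item == "pear" else 0.0,
}
belief = {name: 0.5 for name in candidate_rewards}  # uniform prior

def update_belief(human_choice, options):
    # Assume the human picks their preferred option 90% of the time.
    for name, reward in candidate_rewards.items():
        likelihood = 0.9 if human_choice == max(options, key=reward) else 0.1
        belief[name] *= likelihood
    total = sum(belief.values())
    for name in belief:
        belief[name] /= total

def act(options):
    # Maximize expected reward under the current belief over rewards.
    return max(options, key=lambda item: sum(
        prob * candidate_rewards[name](item) for name, prob in belief.items()))

update_belief("apple", ["apple", "pear"])  # observe the human choose an apple
print(act(["apple", "pear"]))              # -> "apple"
```

The first bullet above would drop the belief and optimize a fixed reward; the third wouldn't have the reward machinery at all.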

I did in fact keep thinking to myself "How do we make sure that the AI really has the goal of "The human gets what they want.", as opposed to a proxy to it that will diverge out-of-distribution?"

I think this is a reasonable critique to have. In the context of Stuart's book, this is essentially a quibble with principle 3:

3. The ultimate source of information about human preferences is human behavior.

The goal learned by the AI system depends on how it maps human behavior (or sensory data) into (beliefs about) human preferences. If that mapping is not accurate (quite likely), then it will in fact learn some other goal, which could be catastrophic.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2019-10-31T08:32:55.169Z · LW(p) · GW(p)

Thanks! Pulling on that thread a bit more, compare:

My goal is that the human overseer achieves her goals. To accomplish this, I need to observe and interact with the human to understand her better—what kind of food she likes, how she responds to different experiences, etc. etc.

My goal is to maximize the speed of this racecar. To accomplish this, I need to observe and interact with the racecar to understand it better—how its engine responds to different octane fuels, how its tires respond to different weather conditions, etc. etc.

To me, they don't seem that different on a fundamental level. But they do have the super-important practical difference that the first one doesn't seem to have problematic instrumental subgoals.

(I think I'm just agreeing with your comment here [LW(p) · GW(p)]?)

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-10-31T19:17:46.629Z · LW(p) · GW(p)
(I think I'm just agreeing with your comment here [LW(p) · GW(p)]?)

Yeah, I think that's basically right.

comment by Steven Byrnes (steve2152) · 2019-10-30T13:57:10.897Z · LW(p) · GW(p)

If we're making a list of models for non-goal-directed AI (and we should!!), I would propose two more:

  • Non-consequentialist oracle AI: An oracle with the property that the algorithm will not think through the consequences of its own outputs. You ask it a question, it digs through its world-model for a fixed number of computation steps and spits out its best-guess answer, but crucially it does not try to model the causal effects of that output. (Contrast with Eliezer's side-comment about an oracle here [LW · GW], which he of course assumes will be goal-directed, with the goal of "increase the correspondence between the user's belief about relevant consequences and reality".) A non-consequentialist oracle could never be deceptive or manipulative, because deception and manipulation require modeling the causal effects of outputs. I've speculated a bit about how to build something like this [LW · GW] but it's still definitely an open question.

  • Interpretable-world-model as AI: Kinda related to the first. Imagine you take an AGI that deeply understands the world, you extract its world-model, and you have a way to browse it—like the world-model is somehow 100% super-easily interpretable. What causes Alzheimers? Well, you would go to the Alzheimers entry of the world-model, and you'll find a beautiful way of thinking about Alzheimers in terms of these other three concepts, which in turn refer to other concepts etc. What would happen if we started a political movement against squirrels? Well, through the world-model interface, we can throw that hypothetical scenario at other entities in the world-model (people, journalists, politicians) and see what the predicted effects are. My intuition here is: (1) It's nice to have a map when you're traveling, (2) It's nice to have wikipedia when you're learning, (3) it would be nice to have a crystal ball when you're planning ... Maybe there's some way to build a system that combines all those things and more, but is still fundamentally tool-ish? My impression is that something like this is at the core of the Kurzweil-ish vision of how brain-computer interfaces are going to solve the problem of AGI safety (see also waitbutwhy on Neuralink). (Needless to say, it's possible to try to implement this vision without brain-computer interfaces, and vice-versa.)

comment by Adrià Garriga-alonso (rhaps0dy) · 2019-01-07T14:11:30.726Z · LW(p) · GW(p)

I usually think that logic-based reasoning systems are the canonical example of an AI without goal-directed behaviour. They just try to prove or disprove a statement, given a database of atoms and relationships. (Usually they're restricted to statements that are decidable by construction, so this is always possible.)

You can also frame their behaviour as a utility function: U(time, state) = 1 if you have correctly decided the statement at t ≤ time, 0 otherwise. But your statement that

>It seems possible to build systems in such a way that these properties are inherent in the way that they reason, such that it’s not even coherent to ask what happens if we “get the utility function slightly wrong”.

very much applies. I'm fairly sure you can specify the behaviour of _anything_, including "dumb" things like trousers, screwdrivers, rocks and saucepans, as a utility function + perfect optimization, even though for most things this is a very unhelpful way of thinking. Or at least human artifacts. E.g. a screwdriver optimizes "transmit the rotational force that is applied to you", a rock optimizes "keep these molecules bound and respond to forces according to the laws of physics".
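For example (just the trivial construction, spelled out): if the system's actual behaviour is some fixed function f of its observation history, take U(a_1, ..., a_T | o_1, ..., o_T) = 1 if a_t = f(o_1, ..., o_t) for all t, and 0 otherwise; the system is then, trivially, a perfect optimizer of U.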

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-01-07T20:26:48.352Z · LW(p) · GW(p)
I usually think that logic-based reasoning systems are the canonical example of an AI without goal-directed behaviour.

Yeah, that seems right to me. Though it's not clear how you'd use a logic-based reasoning system to act in the world -- if you do that by asking the question "what action would lead to the maximum value of this function", which it then computes using logic-based reasoning, then the resulting behavior would be goal-directed.

I'm fairly sure you can specify the behaviour of _anything_

Yup. I actually made this argument two posts ago [LW · GW].

Replies from: rhaps0dy
comment by Adrià Garriga-alonso (rhaps0dy) · 2019-01-08T19:05:57.384Z · LW(p) · GW(p)
Yup. I actually made this argument two posts ago. [LW · GW]

Ah, that's good. I should probably read the rest of the sequence too.

Though it's not clear how you'd use a logic-based reasoning system to act in the world

The easy way to use them would be as they are intended: oracles that will answer questions about factual statements. Humans would still do the questioning and implementing here. It's unclear how exactly you'd ask really complicated, natural-language-based questions (obviously, otherwise we'd have solved AI), but I think it serves as an example of the paradigm.

comment by avturchin · 2019-03-23T11:51:03.499Z · LW(p) · GW(p)

Maybe this article is related to the topic:

"A plurality of values" https://www.academia.edu/173502/A_plurality_of_values

"Abstract: Many maximizing normative theories are monistic in resting upon one core value. But  such theories generate highly counter-intuitive implications. This is especially clear in the case of hedonistic utilitarianism. But an analysis of why we find thoseimplications counter-intuitive implies that we ought to subscribe to a plurality of values. For example, the Repugnant Conclusion implies that we should value a highlevel of average happiness, while the Problem of the Ecstatic Psychopath implies that we should value either a large quantity of total happiness or a large number of worthwhile lives. The problems posed by pleasure-wizards, on the other hand, implythat we should include a non-utilitarian value: namely, equality. And only when suchvalues are kept in play simultaneously can the Repugnant Conclusion, the Problem of the Ecstatic Psychopath and the problems posed by pleasure-wizards all be avoided,thereby demonstrating the superiority of pluralist over monistic normative theories."

Interesting part start on page 10 after the quote: "Brian Barry argues in his early work that we could model trade-offs between principles such as equity and efficiency in a manner that parallels the way in which micro-economists employ indifference-curves to model how we might swap grapes for potatoe".