Locality of goals

post by adamShimi · 2020-06-22T21:56:01.428Z · LW · GW · 8 comments

Contents

  Introduction
  Starting points
    Thermostats and Goals
    Goals Across Cartesian Boundaries
  What Is Locality Anyway?
8 comments

Introduction

Studying goal-directedness produces two kinds of questions: questions about goals, and questions about being directed towards a goal. Most of my previous posts focused on the second kind; this one shifts to the first kind.

Assume some goal-directed system with a known goal. The nature of this goal will influence which safety issues the system might exhibit. If the goal focuses on the input, the system might wirehead itself and/or game its specification [LW · GW]. On the other hand, if the goal lies firmly in the environment, the system might develop convergent instrumental subgoals and/or destroy any unspecified value [LW · GW].

Locality aims at capturing this distinction.

Intuitively, the locality of the system's goal captures how far away from the system one must look to check the accomplishment of the goal.

Let's give some examples:

- The goal of a basic thermostat is very local: checking it only requires looking at the reading of its own sensor.
- The goal of eating an ice cream is less local: checking it requires looking at one's body and immediate surroundings.
- The goal of a paperclip maximizer is extremely non-local: checking it requires looking at paperclips anywhere in the universe.

Locality isn't about how the system extracts a model of the world from its input, but about whether and how much it cares about the world beyond it.

Starting points

This intuition about locality came from the collision of two different classifications of goals: the first from Daniel Dennett and the second from Evan Hubinger.

Thermostats and Goals

In "The Intentional Stance", Dennett explains, extends and defends... the intentional stance. One point he discusses is his liberalism: he is completely comfortable with admitting ridiculously simple systems like thermostats in the club of intentional systems -- to give them meaningful mental states about beliefs, desires and goals.

Lest we readers feel insulted at the comparison, Dennett nonetheless admits that the goals of a thermostat differ from ours.

Going along with the gag, we might agree to grant [the thermostat] the capacity for about half a dozen different beliefs and fewer desires—it can believe the room is too cold or too hot, that the boiler is on or off, and that if it wants the room warmer it should turn on the boiler, and so forth. But surely this is imputing too much to the thermostat; it has no concept of heat or of a boiler, for instance. So suppose we de-interpret its beliefs and desires: it can believe the A is too F or G, and if it wants the A to be more F it should do K, and so forth. After all, by attaching the thermostatic control mechanism to different input and output devices, it could be made to regulate the amount of water in a tank, or the speed of a train, for instance.

The goals and beliefs of a thermostat are thus not about heat and the room it is in, as our anthropomorphic bias might suggest, but about the binary state of its sensor.

Now, if the thermostat had more information about the world -- a camera, GPS position, general reasoning ability to infer information about the actual temperature from all its inputs -- then Dennett argues that its beliefs and goals would be much more related to the heat of the actual room.

The more of this we add, the less amenable our device becomes to serving as the control structure of anything other than a room-temperature maintenance system. A more formal way of saying this is that the class of indistinguishably satisfactory models of the formal system embodied in its internal states gets smaller and smaller as we add such complexities; the more we add, the richer or more demanding or specific the semantics of the system, until eventually we reach systems for which a unique semantic interpretation is practically (but never in principle) dictated (cf. Hayes 1979). At that point we say this device (or animal or person) has beliefs about heat and about this very room, and so forth, not only because of the system's actual location in, and operations on, the world, but because we cannot imagine another niche in which it could be placed where it would work.

Humans, Dennett argues, are more like this enhanced thermostat, in that our beliefs and goals intertwine with the state of the world. Or put differently, when the world around us changes, it will almost always influence our mental states; whereas a basic thermostat might react the exact same way in vastly different environments.

But as systems become perceptually richer and behaviorally more versatile, it becomes harder and harder to make substitutions in the actual links of the system to the world without changing the organization of the system itself. If you change its environment, it will notice, in effect, and make a change in its internal state in response. There comes to be a two-way constraint of growing specificity between the device and the environment. Fix the device in any one state and it demands a very specific environment in which to operate properly (you can no longer switch it easily from regulating temperature to regulating speed or anything else); but at the same time, if you do not fix the state it is in, but just plonk it down in a changed environment, its sensory attachments will be sensitive and discriminative enough to respond appropriately to the change, driving the system into a new state, in which it will operate effectively in the new environment.

Part of this distinction between goals comes from generalization, a property considered necessary for goal-directedness since Rohin's initial post [? · GW] on the subject. But the two goals also differ in their "groundedness": the thermostat's goal lies completely in its sensors' inputs, whereas the goals of humans depend on things farther away, on the environment itself.

That is, these two goals have different locality.

Goals Across Cartesian Boundaries

The other classification of goals comes from Evan Hubinger, in a personal discussion. Assuming a Cartesian Boundary [LW · GW] outlining the system and its inputs and outputs, goals can be functions of:

- the system's internal state;
- the system's input;
- the system's output;
- the environment.

Of course, many goals are functions of multiple parts of this quartet. Yet separating them allows us to characterize a given goal by how much it depends on each part.

Going back to Dennett's example, the basic thermostat's goal is a function of its input, while human goals tend to be functions of the environment. And once again, an important aspect of the difference appears to lie in how far from the system the information relevant to the goal lies -- locality.
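To make this decomposition concrete, here is a minimal sketch in Python; every name and number in it is illustrative rather than taken from the discussion. A goal is simply a predicate over a snapshot that separates internal state, input, output, and environment.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CartesianSnapshot:
    """Toy snapshot of a system behind a Cartesian boundary."""
    internal: Dict[str, float]     # the system's internal state
    inputs: Dict[str, float]       # what crosses the boundary inward (sensors)
    outputs: Dict[str, float]      # what crosses the boundary outward (actuators)
    environment: Dict[str, float]  # the world beyond the boundary

Goal = Callable[[CartesianSnapshot], bool]

# Input-based goal (very local): the basic thermostat only checks its sensor.
def sensor_says_warm(s: CartesianSnapshot) -> bool:
    return s.inputs["temp_sensor"] >= 23.0

# Environment-based goal (much less local): the room itself must be warm.
def room_is_warm(s: CartesianSnapshot) -> bool:
    return s.environment["room_temp"] >= 23.0

# A miscalibrated sensor makes the two goals come apart.
snapshot = CartesianSnapshot(
    internal={}, inputs={"temp_sensor": 21.0},
    outputs={}, environment={"room_temp": 24.0},
)
print(sensor_says_warm(snapshot), room_is_warm(snapshot))  # False True
```

The gap between the two predicates is exactly where wireheading and specification gaming live.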

What Is Locality Anyway?

Assuming some model of the world (possibly a causal DAG) containing the system, the locality of the goal is inversely proportional to the minimum radius of a ball, centered at the system, that suffices to evaluate the goal. Basically, one needs to look a certain distance away to check whether one's goal is accomplished; locality is a measure of this distance. The more local a goal, the less grounded it is in the environment, and the more susceptible it is to wireheading or to a change of environment without a change of internal state.
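As a toy version of this attempt at a measure, here is a hedged sketch in Python. It assumes the world model is a plain adjacency-list graph with the system as one node, takes "distance" to be hop count, and takes the set of nodes the goal depends on as given -- all simplifying choices made for illustration.

```python
from collections import deque
from typing import Dict, Iterable, List

def hop_distances(graph: Dict[str, List[str]], source: str) -> Dict[str, int]:
    """Breadth-first hop distances from `source` in an adjacency-list graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def locality(graph: Dict[str, List[str]], system: str,
             goal_nodes: Iterable[str]) -> float:
    """1 / (radius of the smallest ball around `system` containing every node
    the goal depends on). A goal checkable at the system itself gets radius 0,
    treated here as maximally local."""
    dist = hop_distances(graph, system)
    radius = max(dist[n] for n in goal_nodes)  # assumes the goal nodes are reachable
    return float("inf") if radius == 0 else 1.0 / radius

# Toy world: thermostat -- sensor -- room -- neighbourhood
world = {
    "thermostat": ["sensor"],
    "sensor": ["thermostat", "room"],
    "room": ["sensor", "neighbourhood"],
    "neighbourhood": ["room"],
}
print(locality(world, "thermostat", ["sensor"]))  # 1.0: a very local, input-like goal
print(locality(world, "thermostat", ["room"]))    # 0.5: a more environmental goal
```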

Running with this attempt at formalization, a couple of interesting points follow:

In summary, locality is a measure of the distance at which information about the world matters for a system's goal. It appears in various guises in different classifications of goals, and underlies multiple safety issues. What I give is far from a formalization; it is instead a first exploration of the concept, with open directions to boot. Yet I believe that the concept can be put into more formal terms, and that such a measure of locality captures a fundamental aspect of goal-directedness.

Thanks to Victoria Krakovna, Evan Hubinger and Michele Campolo for discussions on this idea.

8 comments


comment by johnswentworth · 2020-06-23T15:22:59.018Z · LW(p) · GW(p)

Nice post!

One related thing I was thinking about last week: part of the idea of abstraction is that we can pick a Markov blanket around some variable X, and anything outside that Markov blanket can only "see" abstract summary information f(X). So, if we have a goal which only cares about things outside that Markov blanket, then that goal will only care about f(X) rather than all of X. This holds for any goal which only cares about things outside the blanket. That sounds like instrumental convergence: any goal which does not explicitly care about things near X itself, will care only about controlling f(X), not all of X.
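(A toy numerical illustration of this point, where every function is invented for the example: if everything outside the blanket depends on X only through a summary f(X), then a goal stated purely in terms of outside variables cannot distinguish two values of X that share the same summary.)

```python
def f(x):
    """Abstract summary visible outside the blanket -- here simply the mean of X
    (an illustrative choice, nothing more)."""
    return sum(x) / len(x)

def outside_variable(x, offset=0.3):
    """A variable outside the Markov blanket: by construction it depends on X
    only through the summary f(X)."""
    return 2.0 * f(x) + offset

def goal(y):
    """A goal that only references the outside variable Y."""
    return y > 5.0

x1 = [1.0, 2.0, 3.0, 4.0]  # f(x1) = 2.5
x2 = [0.0, 0.0, 4.0, 6.0]  # a very different X with the same summary, f(x2) = 2.5
assert goal(outside_variable(x1)) == goal(outside_variable(x2))
# The goal cannot tell x1 and x2 apart: only f(X) matters to it, so controlling
# f(X) is all such a goal can push on.
```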

This isn't quite the same notion of goal-locality that the OP is using (it's not about how close the goal-variables are to the agent), but it feels like there's some overlapping ideas there.

comment by adamShimi · 2020-07-07T16:49:55.893Z · LW(p) · GW(p)

The more I think about it, the more I come to believe that locality is very related to abstraction. Not the distance part necessarily, but the underlying intuition. If my goal is not "about the world", then I can throw away almost all information about the world except a few details and still be able to check my goal. The "world" of the thermostat is in that sense a very abstracted map of the world where anything except the number on its sensor is thrown away.

comment by adamShimi · 2020-06-23T21:00:10.209Z · LW(p) · GW(p)

Thanks! Glad that I managed to write something that was not causally or rhetorically all wrong. ^^

One related thing I was thinking about last week: part of the idea of abstraction is that we can pick a Markov blanket around some variable X, and anything outside that Markov blanket can only "see" abstract summary information f(X). So, if we have a goal which only cares about things outside that Markov blanket, then that goal will only care about f(X) rather than all of X

That makes even more sense to me than you might think. My intuitions about locality come from its uses in distributed computing, where it measures both how many rounds of communication are needed to solve a problem and how far in the communication graph one needs to look to compute one's own output. This looks like my use of locality here.

On the other hand, recent work on distributed complexity also studied the volume complexity of a problem: the size of the subgraph one needs to look at, which might be very different from a ball. The only real constraint is connectedness. Modulo the usual "exactness issue", which we can deal with by replacing "the node is not used" by "only f(X) is used", this looks a lot like your idea.
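(A rough sketch of the contrast on a toy graph, with all names invented for the example: the ball-based measure counts everything within some radius of the system, while a volume-style measure only needs some connected subgraph linking the system to the goal-relevant nodes.)

```python
from collections import deque
from itertools import combinations

def is_connected(graph, nodes):
    """Is the subgraph induced by `nodes` connected?"""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

def min_connected_volume(graph, system, goal_nodes):
    """Size of the smallest connected subgraph containing the system and all the
    goal-relevant nodes (brute force, only sensible for tiny toy graphs)."""
    required = {system, *goal_nodes}
    others = [n for n in graph if n not in required]
    for extra in range(len(others) + 1):
        for added in combinations(others, extra):
            candidate = required | set(added)
            if is_connected(graph, candidate):
                return len(candidate)
    return None  # only reached if the graph itself is disconnected

# Star with three arms of length 3; the system sits at the hub.
star = {
    "hub": ["a1", "b1", "c1"],
    "a1": ["hub", "a2"], "a2": ["a1", "a3"], "a3": ["a2"],
    "b1": ["hub", "b2"], "b2": ["b1", "b3"], "b3": ["b2"],
    "c1": ["hub", "c2"], "c2": ["c1", "c3"], "c3": ["c2"],
}
# A goal about the tip of one arm: the ball of radius 3 around the hub contains
# all 10 nodes, but the connected "volume" needed is only the 4-node path.
print(min_connected_volume(star, "hub", ["a3"]))  # 4
```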

comment by Rohin Shah (rohinmshah) · 2020-07-02T19:09:59.713Z · LW(p) · GW(p)

Planned summary for the Alignment Newsletter:

This post introduces the concept of the _locality_ of a goal, that is, how “far” away the target of the goal is. For example, a thermostat’s “goal” is very local: it “wants” to regulate the temperature of this room, and doesn’t “care” about the temperature of the neighboring house. In contrast, a paperclip maximizer has extremely nonlocal goals, as it “cares” about paperclips anywhere in the universe. We can also consider whether the goal depends on the agent’s internals, its input, its output, and/or the environment.
The concept is useful because for extremely local goals (usually goals about the internals or the input) we would expect wireheading or tampering, whereas for extremely nonlocal goals, we would instead expect convergent instrumental subgoals like resource acquisition.
comment by adamShimi · 2020-07-02T21:09:56.000Z · LW(p) · GW(p)

Thanks for the summary! It's representative of the idea.

Just out of curiosity, how do you decide which posts/papers you want to write an opinion on?

comment by Rohin Shah (rohinmshah) · 2020-07-02T22:07:40.261Z · LW(p) · GW(p)

I ask myself if there's anything in particular I want to say about the post / paper that the author(s) didn't say, with an emphasis on ensuring that the opinion has content. If yes, then I write it.

(Sorry, that's not very informative, but I don't really have a system for it.)

comment by adamShimi · 2020-07-02T22:14:34.095Z · LW(p) · GW(p)

No worries, that's a good answer. I was just curious, not expecting a full-fledged system. ;)

comment by Joe_Collman · 2020-06-25T20:51:39.699Z · LW(p) · GW(p)

[I'm not sure I'm understanding correctly, so do correct me where I'm not getting your meaning. Pre-emptive apologies if much of this gets at incidental details and side-issues]

The idea seems interesting, and possibly important.

Some thoughts:

(1) Presumably you mean to define locality as the distance (our distance) that the system would (?need to?) look to check its own goal. The distance we'd need to look doesn't seem safety relevant, since that doesn't tell us anything about system behaviour.

So we need to reason within the system's own model to understand 'where' it needs to look - but we need to ground that 'where' in our world model to measure the distance.

Let's say we can see that a system X has achieved its goal by our looking at its local memory state (within 30cm of X). However, X must check another memory location (200 miles away in our terms) to know that it's achieved its goal.

In that case, I assume: Locality = 1 / (200 miles) ??

(I don't think it's helpful to use: Locality = 1 / (30cm), if the system's behaviour is to exert influence over 200 miles)

(2) I don't see a good way to define locality in general (outside artificially simple environments), since for almost all goals the distance to check a goal will be contingent on the world state. The worst-case distance will often be unbounded. E.g. "Keep this room above 23 degrees" isn't very local if someone moves the room to the other side of the continent, or splits it into four pieces and moves each into a separate galaxy.

This applies to the system itself too. The system's memory can be put on the other side of the galaxy, or split up.... (if you'd want to count these as having low distance from the system, then this would be a way to cheat for any environmental goal: split up the system and place a part of it next to anything in the environment that needs to be tested)

You'd seem to need some caveats to rule out weird stuff, and even then you'd probably end up with categories: either locality zero (for almost any environmental goal), or locality around 1 (for any input/output/internal goal).

If things go that way, I'm not sure having a number is worthwhile.


(3a) Where there's uncertainty over world state, it might be clearer to talk in terms of probabilistic thresholds.
E.g. my goal of eating ice cream doesn't dissolve, since I never know I've eaten an ice cream. In my world model, the goal of eating an ice cream *with certainty* has locality zero, since I can search my entire future light-cone and never be sure I achieved that goal (e.g. some crafty magician, omega, or a VR machine might have deceived me).

I think you'd need to parameterise locality:
To know whether you've achieved g with probability > p, you'd need to look (1/locality) meters.

Then a relevant safety question is the level of certainty the system will seek.

(3b) Once things are uncertain, you'd need a way to avoid most goal-checking being at near-zero distance: a suitable system can check most goals by referring to its own memory. For many complex goals that's required, since it can't simultaneously perceive all the components. The goal might not be "make my memory reflect this outcome", but "check that my memory reflects this outcome" is a valid test (given that the system tends not to manipulate its memory to perform well on tests).



(4) I'm not sure it makes sense to rush to collapse locality into one dimension. In general we'll be interested in some region (perhaps not a connected region), not only in a one-dimensional representation of that region.

Currently, caring about the entire galaxy gets the same locality value as caring about one vase (or indeed one memory location) that happens to be on the other side of the galaxy. Splitting a measure of displacement from a measure of region size might help here.

If you want one number, I think I'd go with something focused on the size of the goal-satisfying region. Maybe something like:
1 / [The minimum over the sum of radii of balls in (some set of balls of minimum radius k, such that any information needed to check the goal is contained within at least one of the balls)]


(5) I'm not sure humans do tend to avoid wireheading. What we tend to avoid is intentionally and explicitly choosing to wirehead. If it happens without our attention, I don't think we avoid it by default.
Self-deception is essentially wire-heading; if we think that's unusual, we're deceiving ourselves :)

This is important, since it highlights that we should expect wireheading by default. It's not enough for a highly capable system not to be actively aiming to wirehead. To avoid accidental/side-effect wireheading, a system will need to be actively searching for evidence, and thoroughly analysing its input for wireheading signs.

Another way to think about this:
There aren't actually any "environment" goals.
"Environment-based" is just a shorthand for: (input + internal state + output)-based

So to say a goal is environment-based, is just to say that we're giving ourselves the maximal toolkit to avoid wireheading. We should expect wireheading unless we use that toolkit well.

Do you agree with this? If not, what exactly do you mean by "(a function of) the environment"?

If so, then from the system's point of view, isn't locality always about 1: since it can only ever check (input + internal state + output)? Or do we care about the distance over which the agent must have interacted in gathering the required information? I still don't see a clean way to define this without a load of caveats.


Overall, if the aim is to split into "environmental" and "non-environmental" goals, I'm not sure I think that's a meaningful distinction - at least beyond what I've said above (that you can't call a goal "environmental" unless it depends on all of input, internal-state and output).

I think our position is that of complex thermostats with internal state.