"How conservative" should the partial maximisers be?

post by Stuart_Armstrong · 2020-04-13T15:50:00.044Z · LW · GW · 8 comments


Due to the problem of building a strong U-enhancer when we want a V-enhancer [LW · GW] - and the great difficulty in defining V [LW · GW], the utility we truly want to maximise - many people have suggested reducing the U-increasing focus of the AI. The idea is that, as long as the AI doesn't devote too much optimisation power [LW · GW] to U, then U and V will stay connected with each other [LW · GW], and hence a moderate increase in U will in fact lead to a moderate increase in V.

This has led to interest in such things as satisficers and low-impact AIs, both [LW · GW] of which [LW · GW] have their problems. These try to put an absolute limit on how much U is optimised: the AI is not supposed to optimise U above a certain level (satisficer), or to optimise it in ways that change too much about the world or the power of other agents (low-impact).

Another approach is to put a relative limit on how much an AI can push a utility function. For example, quantilizers will choose randomly among the top q proportion of actions/policies (for some 0 < q ≤ 1), rather than picking the top action/policy. Then there is the approach of using pessimism [LW · GW] to make the AI more conservative. This pessimism is defined by a parameter β, with β close to 1 being very pessimistic.
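As a rough illustration of the relative-limit idea, here is a minimal quantilizer sketch in Python. It is not from the original post: the action set and the proxy utility proxy_U are made-up stand-ins. It simply ranks actions by the proxy and picks uniformly from the top q proportion, which is the behaviour described above.

```python
import math
import random

def quantilize(actions, proxy_U, q, rng=random):
    """Pick uniformly at random from the top q proportion of actions,
    as ranked by the proxy utility proxy_U (0 < q <= 1)."""
    assert 0 < q <= 1
    ranked = sorted(actions, key=proxy_U, reverse=True)
    cutoff = max(1, math.ceil(q * len(ranked)))  # keep at least one action
    return rng.choice(ranked[:cutoff])

# Toy stand-ins: actions are integers and the proxy just rewards big numbers.
actions = list(range(100))
proxy_U = lambda a: a

print(quantilize(actions, proxy_U, q=0.01))  # q near 0: effectively a proxy-maximiser (always 99)
print(quantilize(actions, proxy_U, q=1.0))   # q = 1: uniform over all 100 actions
print(quantilize(actions, proxy_U, q=0.1))   # in between: uniform over the top 10 actions
```

The two extreme calls preview the limits discussed below: q near 0 recovers the U-maximiser, while q = 1 gives purely random behaviour.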

Intermediate value uncertainty

The behaviours of q and β are pretty clear around the extremes. As q and β tend to 0, the agent will behave like a U-maximiser. As they tend to 1, the agent will behave randomly (q) or totally conservatively (β).

Thus, we expect that moving away from the extremes will improve the true V-performance, and that the conservative end (q and β close to 1) will be less disastrous than the U-maximising end (q and β close to 0) - though we only know that second fact because of implicit assumptions we have on U and V [LW · GW].

The problem is in the middle, where the behaviour is unknown (and, since we lack a full formulation of V, generically unknowable). There is no principled way of setting the q or the β. Consider, for example, this plot of V versus q:

Here, the ideal q sits at some moderate value, but the critical thing is to keep q above a certain threshold: that's the point at which V falls precipitously.

Contrast now with this one:

Here, any value of q above a fairly low threshold is essentially the same, and q can be lowered a long way before there are any problems.

So, in the first case, we need q above a certain value, and, in the second, we would want it below that. And, moreover, it might be that the first situation appears in one world and the second in another, and both worlds are currently possible. So there's no consistent good value of q we can set (and, in the general case, the curve might be multi-modal, with many peaks). And note that we don't know any of these graphs (since we can't define V fully). So we don't know what value to set q at, have little practical guidance on what to do, but expect that some values will be disastrous.
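To make the inconsistency concrete, here is a toy sketch (my own illustration, not the post's figures). V1 and V2 are invented stand-ins for the two curves above - one collapses when q drops below a threshold, the other is nearly flat above a very low threshold - and the specific numbers carry no meaning beyond showing that the q that is good in one world can be bad in the other.

```python
# Invented stand-ins for the two unknown V-versus-q curves sketched above.
def V1(q):
    # World 1: V collapses if q drops below 0.2; the best q is around 0.3.
    return -10.0 if q < 0.2 else 1.0 - abs(q - 0.3)

def V2(q):
    # World 2: anything above q = 0.05 is roughly fine, and lower q
    # (stronger optimisation) is mildly better.
    return -10.0 if q < 0.05 else 1.0 - 0.5 * q

qs = [i / 100 for i in range(1, 101)]
best_q1 = max(qs, key=V1)  # about 0.3
best_q2 = max(qs, key=V2)  # about 0.05
print(best_q1, V1(best_q1), V2(best_q1))  # world 1's ideal q is merely suboptimal in world 2
print(best_q2, V2(best_q2), V1(best_q2))  # world 2's ideal q is disastrous in world 1
```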

The conservatism approach has similar problems: β is even harder to interpret than q, we don't have any guidance on how to set it, and the ideal β may vary considerably depending on the circumstance. For example, what would we want our AI to do when it finds an unexpected red button connected to nuclear weapons?

Well, that depends on whether the button starts a nuclear launch - or if it cancels one.

A future post will explore how to resolve this issue, and how to choose the conservatism parameter in a suitable way.

8 comments

Comments sorted by top scores.

comment by Decius · 2020-04-14T06:49:13.514Z · LW(p) · GW(p)

> choose randomly among the top 0<q≤1 proportion of actions/policies

That requires that you be able to rank actions/policies, which means they are first reduced to some absolute value on a common scale (technically you could do this by merely ranking them, but every sane method of consistently ordering all possible policies is going to reduce each policy to a single value and then sort by that value).

So... if there are critical thresholds, they should be apparent as a large gap in the values you are using to sort the policies.


Which brings us to the main problem: ranking policies is a hard and unsolved problem that so far has only been reduced to itself.

comment by TurnTrout · 2020-04-13T15:35:17.477Z · LW(p) · GW(p)

> if optimising it changes too much about the world (low-impact).

Although not important for the content of this post, I think this might be better phrased as "if optimizing [the objective function] drastically changes other agents' abilities to achieve their goals". In my experience, the "amount of change to the world" framing can be misleading. (See World State is the Wrong Level of Abstraction for Impact [LW · GW] and Attainable Utility Landscape: How The World Is Changed [? · GW])

Replies from: Stuart_Armstrong, Stuart_Armstrong
comment by Stuart_Armstrong · 2020-04-16T10:01:02.505Z · LW(p) · GW(p)

Have slightly rephrased to include this.

comment by Stuart_Armstrong · 2020-04-13T15:47:35.644Z · LW(p) · GW(p)

Possibly, but I think the "amount of change to the world" is a broader umbrella term that covers more of the methods that people have been proposing.

Replies from: Dagon, TurnTrout
comment by Dagon · 2020-04-13T16:16:50.112Z · LW(p) · GW(p)

"kill all humans, then shut down" is probably the action that most minimizes change. Leaving those buggers alive will cause more (and harder to predict) change than anything else the agent might do.

There's no way to talk about this in the abstract sense of change - it has to be differential from a counterfactual (aka: causal), and can only be measured by other agents' evaluation functions. The world changes for lots of reasons, and an agent might have most of its impact by PREVENTING a change, or by FAILING to change something that's within its power. Asimov's formulation included this understanding: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Replies from: Stuart_Armstrong, TurnTrout
comment by Stuart_Armstrong · 2020-04-13T17:12:59.719Z · LW(p) · GW(p)

Yep, been dealing with that issue for some time now ^_^

https://arxiv.org/abs/1705.10720

comment by TurnTrout · 2020-04-13T16:54:19.919Z · LW(p) · GW(p)

I agree it doesn't make sense to talk about this kind of change as what we want impact measures to penalize, but I think you could talk about this abstract sense of change. You could have an agent with beliefs about the world state, and some distance function over world states, and then penalize change in observed world state compared to some counterfactual.

This kind of change isn't the same thing as perceived impact, however.

comment by TurnTrout · 2020-04-13T16:12:48.132Z · LW(p) · GW(p)

While I see the appeal of having an umbrella description of past approaches, I don't think we should explain the goal of impact measure research in terms of the average proposal so far, but rather by what impact is. As I argued in the first half of Reframing Impact [? · GW], people impact each other by changing the other person's ability to achieve their goal. This is true no matter which impact measure you prefer.

I think that proposals generally fail or succeed to the extent that they are congruent with this understanding of impact. In particular, an impact measure is good for us to the extent that it penalizes policies which destroy our ability to get what we want.