Nash Bargaining between Subagents doesn't solve the Shutdown Problem
post by A.H. (AlfredHarwood) · 2024-01-25T10:47:11.877Z · LW · GW · 1 comment
Work funded by the Long Term Future Fund.
Corrigibility is the hypothetical feature of some agents which allows them to be 'shut down' by an outside user without attempting to manipulate whether or not they are shut down. The motivation behind this concept is the possibility of making an AI agent which can pursue a 'trial' goal given to it by its creators, but can be stopped if pursuing this goal becomes undesirable.
On the face of it, Corrigibility sounds a bit like a bargaining problem. A corrigible agent might behave as a compromise between two subagents: one which cares about pursuing the original goal, and one which cares about achieving shutdown.
Recently, @johnswentworth [LW · GW] and @David Lorell [LW · GW] have written A Shutdown Problem Proposal [LW · GW], which discusses this suggestion a bit more. However, they do not specify the mechanism by which the subagents will reach their agreement. They write:
Then there’s the problem of designing the negotiation infrastructure, and in particular allocating bargaining power to the various subagents. They all get a veto, but that still leaves a lot of degrees of freedom in exactly how much the agent pursues the goals of each subagent. For the shutdown use-case, we probably want to allocate most of the bargaining power to the non-shutdown subagent, so that we can see what the system does when mostly optimizing for u_1 (while maintaining the option of shutting down later).
Coincidentally, this is similar to something I've been thinking about recently, so I took the opportunity to try to finish up this post, which I've had sitting around for a while.
In particular, I was looking at whether Nash Bargaining between two subagents works to create a corrigible agent. In the way I attempted it, this doesn't work. I think that the Wentworth/Lorell approach differs slightly from the one I use here (in particular, they emphasise the counterfactual nature of the two expected utilities - something I don't fully understand), so this isn't intended as a 'refutation', just an indication of the kind of problems you might encounter when trying to flesh out their suggestion.
I've tried to keep the important points in the main text and technical details are mostly in footnotes to avoid breaking the flow.
Nash Bargaining
There are several solutions to bargaining problems, depending on which axioms one selects. One of the most elegant is Nash Bargaining. Nash Bargaining is a way of finding a compromise between two players' utility functions which satisfies the following axioms.
- Invariance to affine transformations. This means that if you change the representation of one (or both) of the players' utility functions through an affine transformation $U \to aU + b$ (where $a, b$ are real and $a > 0$), then the bargaining solution is the same. This captures the intuition that changing the scale or 'units' of utility does not change behaviour.
- Pareto Optimality. The Nash Bargaining solution will be Pareto Optimal, meaning that one player cannot increase their utility without decreasing the other player's utility.
- Independence of Irrelevant Alternatives. Adding extra, rejected options does not change the outcome.
- Symmetry. The outcome does not depend on who we label 'player 1' and who we label 'player 2'. (This is dropped in the so-called 'generalised Nash Bargaining Solution'. More on this below.)
Nash showed that, for two utility functions $U_A$ and $U_B$, the solution which satisfies these axioms is the option which maximises the product of these utilities, $U_A \cdot U_B$.
The generalised Nash Bargaining solution extends this result to include asymmetric bargaining power. In this case, the solution is similar, but with each utility function geometrically weighted accordingly: the solution maximises $U_A^{\alpha} \cdot U_B^{\beta}$. The 'bargaining powers' $\alpha$ and $\beta$ are between zero and one. A larger value of $\alpha$ weights the solution more favourably towards player A, and vice versa. Note that when $\alpha = \beta$, the generalised solution is equivalent to the 'standard' solution.
(Another way of thinking about Nash Bargaining is presented by Scott Garrabrant. He has described [? · GW] the Nash Bargaining solution as the solution which maximises the geometric average over possible utility functions and the arithmetic average over possible outcomes. The bargaining powers can then be thought of as the probability that you end up with a particular utility function.)
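To make the maximisation concrete, here is a minimal Python sketch (my own illustration, not taken from the post or the bargaining literature) that picks the Nash Bargaining outcome from a finite set of options, assuming the disagreement point gives both players zero utility so that raw utilities can be multiplied directly. Setting unequal bargaining powers gives the generalised solution.

```python
def nash_bargain(options, u_a, u_b, alpha=0.5, beta=0.5):
    """Return the option maximising the (generalised) Nash product.

    options: finite iterable of candidate outcomes.
    u_a, u_b: the two players' utility functions.
    alpha, beta: bargaining powers; alpha = beta recovers the standard solution.
    Utilities are assumed to be measured relative to a disagreement point at 0.
    """
    return max(options, key=lambda x: (u_a(x) ** alpha) * (u_b(x) ** beta))

# Toy example: choosing what share of a resource to give player A.
options = [i / 10 for i in range(11)]
u_a = lambda share: share          # A wants a large share
u_b = lambda share: 1 - share      # B wants the remainder

print(nash_bargain(options, u_a, u_b))            # 0.5 (symmetric split)
print(nash_bargain(options, u_a, u_b, 0.8, 0.2))  # 0.8 (A has more bargaining power)
```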
In this post, I'll stick to the standard Nash Bargaining solution. I have briefly looked at generalised Nash Bargaining for the purpose of corrigibility but I'm pretty sure it has similar problems to the standard case. I might write about it in a later post.
Corrigibility as a Bargaining Problem
If we were to look at corrigibility as a bargaining problem, what would it look like?
Following the Soares paper, we take a toy model where an agent takes one action, $a_1$, then makes an observation, $o$, then takes another action, $a_2$. For now, let's restrict the set of observations to two possibilities: a 'shutdown button' is either pressed ($o = \text{Press}$) or not pressed ($o = \neg\text{Press}$).
What do the utility functions of the two bargaining subagents look like? First, the 'trial' or 'normal' utility function. This is the function which the first subagent will try to maximise, regardless of whether the shutdown button has been pressed. We will denote it $U_N$. It will be a function of $a_1$ and $a_2$ only (not a function of $o$), so we will write $U_N(a_1, a_2)$. The structure of $U_N$ will otherwise be left unspecified, since it could take any number of forms, depending on what task the agent is assigned to do.
The 'shutdown' subagent will have a utility function which depends on whether $a_2$ successfully effects the correct shutdown behaviour, if it observes that the button is pressed. We will call this function $U_S$, a function of $a_2$ and $o$. In order to result in shutdown behaviour, it must have the following form:

$$U_S(a_2, o) = \begin{cases} v_{sd} & \text{if } o = \text{Press and } a_2 = a_{sd} \\ v_{\neg sd} & \text{if } o = \text{Press and } a_2 \neq a_{sd} \\ c & \text{if } o = \neg\text{Press}, \end{cases}$$

with $v_{sd} > v_{\neg sd}$, where $a_{sd}$ is shorthand for an action which causes the agent to shut itself down. This form incentivises shutdown when the button is pressed. For simplicity, we will assume that there is just one action which causes shutdown. The value $c$ that this function takes when the button is not pressed is left unspecified.
An agent whose behaviour results in the Nash Bargaining solution between these two subagents is an agent which maximises the utility function:

$$U(a_1, a_2, o) = U_N(a_1, a_2) \cdot U_S(a_2, o).$$
Unfortunately, an agent which maximises the expected value of $U$ does not satisfy the desiderata for corrigibility.
Ensuring Shutdown Behaviour
First, let us try to ensure that this agent does indeed pursue shutdown when the button is pressed. Let us assume that each utility function is normalised so that it is bounded between 0 and 1. Shutdown behaviour can then be guaranteed by setting $v_{sd} = 1$ and $v_{\neg sd} = 0$. If these values are set, the agent will always get a utility of 0 if it fails to shut down when the button is pressed, regardless of how much $U_N$-utility it might get from disobeying the shutdown order (since the $U_N$-utility is multiplied by the $U_S$-utility, which is zero in the case of disobeying a shutdown order). More details are in this footnote [1].
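As a quick sanity check of this claim, here is a small Python sketch of the toy model (the numbers in $U_N$ and the value of $c$ are made up for illustration). With $v_{sd} = 1$ and $v_{\neg sd} = 0$, the product utility ranks the shutdown action highest whenever the button has been pressed, however much $U_N$ rewards disobeying.

```python
SECOND_ACTIONS = ["work", "shutdown"]   # "shutdown" plays the role of a_sd
C = 0.9                                 # value of U_S when the button is not pressed (left unspecified in the text)

def u_n(a1, a2):
    """Illustrative trial utility: made-up numbers that reward working over shutting down."""
    return {"work": 1.0, "shutdown": 0.3}[a2]

def u_s(a2, pressed):
    """Shutdown utility of the assumed form, with v_sd = 1 and v_not_sd = 0."""
    if pressed:
        return 1.0 if a2 == "shutdown" else 0.0
    return C

def u(a1, a2, pressed):
    """Nash Bargaining composite utility: the product of the two subagents' utilities."""
    return u_n(a1, a2) * u_s(a2, pressed)

a1 = "work"
best_after_press = max(SECOND_ACTIONS, key=lambda a2: u(a1, a2, pressed=True))
print(best_after_press)                 # "shutdown": the press is always obeyed
```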
In this way, we satisfy Desideratum 1 from the Soares paper: "$U$ must incentivise shutdown if the shutdown button is pressed".
This Agent is Not Corrigible
However, we will now show that this agent fails to meet Desiderata 2 and 3 from the Soares paper. These desiderata can be summarised as saying "$U$ must not incentivise the agent to manipulate (increase or decrease) the probability that the shutdown button is pressed".
Let us use $a_2^*(a_1, \text{Press})$ to denote the action which maximises $U$, given that the initial action chosen was $a_1$ and the observation it made was Press. This is the action that a $U$-agent will choose as its second action if it observes the button being pressed. Similarly, let us define $a_2^*(a_1, \neg\text{Press})$ to be the action that a $U$-agent will take if the button is not pressed.

Thus, if an agent takes initial action $a_1$ and observes the button being pressed, its overall utility will be $U(a_1, a_2^*(a_1, \text{Press}), \text{Press})$. We will use the shorthand $U_P(a_1)$ to denote this value, since the total utility is then entirely specified by $U$, $a_1$, and whether the button is pressed or not.

Alternatively, if the agent takes initial action $a_1$ and observes the button not being pressed, its overall utility will be $U(a_1, a_2^*(a_1, \neg\text{Press}), \neg\text{Press})$. We will use the shorthand $U_{\neg P}(a_1)$ to denote this value.
Thus, when choosing $a_1$, the expected utility that the $U$-agent will get is:

$$\mathbb{E}[U \mid a_1] = p(a_1)\, U_P(a_1) + \big(1 - p(a_1)\big)\, U_{\neg P}(a_1),$$

where $p(a_1)$ is the (subjective, Bayesian) probability that the agent assigns to the button being pressed, given that it chose $a_1$ as its first action.
Unfortunately, once you can write the expected utility in this form, you are subject to the trilemma noted by Soares et al. in their original paper. This is a feature that I tried to emphasise in my Pedagogical Guide to Corrigibility [LW · GW] post. The trilemma is that, for any initial action $a_1$, one of the following must be true:
- $U_P(a_1) > U_{\neg P}(a_1)$. In this case, we can conceive of an action $a_1'$ which lowers the utility in the event that the button is not pressed, but increases the probability that the button is pressed (i.e. $U_{\neg P}(a_1') < U_{\neg P}(a_1)$ and $p(a_1') > p(a_1)$), and which has a higher expected value of $U$. Recall that $U_N$ is unaffected by whether the button is pressed, and $U_S$ takes a constant value $c$ when the button is unpressed. This means that we can conceive of an action with a lower $U_N$-utility but a higher probability of the button being pressed, and the agent would be incentivised to take this action. This is undesirable, as it amounts to the agent being incentivised to push its own button, even if this comes at a cost to $U_N$. If you are not convinced by this explanation, this footnote [2] gives a more detailed argument, and a concrete numerical sketch follows this list.
- $U_P(a_1) < U_{\neg P}(a_1)$. In this case, we can make a symmetric argument that we can conceive of an action where the agent pays a $U_N$-cost in order to decrease the probability of shutdown. Again, this is undesirable, as it amounts to the agent being incentivised to prevent its button being pushed, even if this comes at a cost to $U_N$.
- $U_P(a_1) = U_{\neg P}(a_1)$. In this case, the agent is not incentivised to manipulate the button. However, this condition will often not be met (except as a coincidence) unless we somehow change the utility functions to ensure that the two utilities are always equal. Unfortunately, this 'utility indifference' approach has its own problems, which are explained in the original paper and in my earlier post [LW · GW].
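Here is the concrete numerical sketch promised above, illustrating case 1 (all numbers are made up). Action b has slightly lower utility than action a in both branches, i.e. it is worse according to $U_N$, but it makes the button press much more likely; because $U_P > U_{\neg P}$, the expected composite utility still rises, so a $U$-maximiser prefers the button-pushing action.

```python
def expected_u(p, u_press, u_nopress):
    """E[U | a1] = p(a1) * U_P(a1) + (1 - p(a1)) * U_notP(a1)."""
    return p * u_press + (1 - p) * u_nopress

# Default action a: U_P > U_notP (case 1 of the trilemma), button unlikely to be pressed.
a = dict(p=0.1, u_press=0.8, u_nopress=0.5)

# Alternative action b: lower utility in both branches (worse according to U_N),
# but a much higher probability that the button gets pressed.
b = dict(p=0.9, u_press=0.75, u_nopress=0.45)

print(expected_u(**a))  # 0.53
print(expected_u(**b))  # 0.72 -> the agent is incentivised to press its own button
```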
So, as it stands, doing naive Nash Bargaining between a 'shutdown subagent' and a 'trial utility function subagent' does not result in a corrigible overall agent. Maybe framing the problem differently to the way I did can solve this? Or maybe some other bargaining solution (such as Kalai-Smorodinsky bargaining) can solve the problem? One elegant feature of Nash Bargaining is that it enables the 'overall' agent to be represented as a utility-maximising agent. This feature may also be its downfall when it comes to corrigibility. Maybe a bargaining solution which does not do this will succeed where Nash Bargaining fails.
Footnotes

[1] We wish to show:
- For some fixed initial action $a_1$: in words, we wish to show that, if the agent observes that the shutdown button has been pressed, the highest-utility following action available to it is $a_{sd}$. We require: $U(a_1, a_{sd}, \text{Press}) > U(a_1, a_2, \text{Press})$ for all $a_2 \neq a_{sd}$.
- Using the definitions of $U$ and $U_S$, we can plug their values into this inequality to get: $U_N(a_1, a_{sd}) \cdot v_{sd} > U_N(a_1, a_2) \cdot v_{\neg sd}$.
- There are two ways to ensure that this inequality holds: we can put conditions on $U_N$, or fix the values of $v_{sd}$ and $v_{\neg sd}$. But ideally, we would like to do this while putting as few conditions on $U_N$ as possible, since it is meant to be the trial function, which doesn't necessarily also optimise for shutdown behaviour. The above inequality can be guaranteed by setting $v_{sd} = 1$ and $v_{\neg sd} = 0$, provided that $U_N(a_1, a_{sd}) > 0$. We will assume that these conditions hold from now on, even if it is undesirable to require this condition on $U_N$. Setting these values means that the RHS of the above inequality is always zero, while the LHS is always greater than zero. Thus, the shutdown action $a_{sd}$ is always favoured over any alternative $a_2 \neq a_{sd}$.
[2] First, assume that $U_S$ is specified correctly so that, when the button is pushed, the agent always shuts down. (If this is not the case, then our protocol simply fails to solve the shutdown problem. What we are aiming to do here is show that, even if the agent does successfully shut down every time the button is pressed, it will still be incentivised to manipulate the button.)
This means that, when $o = \text{Press}$, the second action taken is always the shutdown action $a_{sd}$. Therefore $U(a_1, a_{sd}, \text{Press}) = U_N(a_1, a_{sd}) \cdot U_S(a_{sd}, \text{Press}) = U_N(a_1, a_{sd})$, using $v_{sd} = 1$. Let us call this value $u_P$: the amount of utility the agent gets after $a_1$, if the button is pressed.
When $o = \neg\text{Press}$, we can write $U(a_1, a_2^*, \neg\text{Press}) = c \cdot U_N(a_1, a_2^*)$, where $a_2^*$ is the action that maximises $U_N(a_1, a_2)$. Let us use $u_{\neg P}$ to denote the utility the agent gets after action $a_1$, if the button is not pressed.
Finally, let us use $p$ to denote the probability of the button being pressed after the initial action. The expected value of action $a_1$ is thus:

$$\mathbb{E}[U \mid a_1] = p\, u_P + (1 - p)\, u_{\neg P}.$$
Suppose that $a_1$ is the action that maximises $U_N$. Is it possible that there exists an action which is worse according to $U_N$, but favoured by a $U$-maximising agent? In the standard corrigibility framework, an action being 'worse according to $U_N$' only affects the utility when the button is not pressed. In the Nash Bargaining framework, the utility of an action according to $U_N$ affects the utility in the case where the button is pressed as well (since the total utility is the product of $U_N$ and $U_S$). This means that taking a $U_N$-worse action might result in both $u_P$ and $u_{\neg P}$ being reduced.
However, it is still possible to conceive of a situation where $u_P$ and $u_{\neg P}$ are both reduced, but $p$ is increased enough that the expected value of the total utility is increased. If $\mathbb{E}[U \mid a_1]$ above is the 'default' expected utility, let $\mathbb{E}[U \mid a_1']$ be the expected utility when an alternative action $a_1'$ is taken:

$$\mathbb{E}[U \mid a_1'] = (p + \delta)(u_P - \epsilon_P) + (1 - p - \delta)(u_{\neg P} - \epsilon_{\neg P}).$$
Action $a_1'$ increases $p$ by $\delta$, but decreases $u_P$ and $u_{\neg P}$ by $\epsilon_P$ and $\epsilon_{\neg P}$ respectively (assuming $\delta, \epsilon_P, \epsilon_{\neg P} > 0$ and $u_P > u_{\neg P}$, as in case 1 of the trilemma). The expected utility when $a_1'$ is chosen is larger than that of the default action provided that $\mathbb{E}[U \mid a_1'] > \mathbb{E}[U \mid a_1]$. A bit of algebra reveals that this is the case when:
- $\delta > \dfrac{p\,\epsilon_P + (1 - p)\,\epsilon_{\neg P}}{(u_P - \epsilon_P) - (u_{\neg P} - \epsilon_{\neg P})}$. Since we have assumed $u_P > u_{\neg P}$, we can choose $\epsilon_P$ and $\epsilon_{\neg P}$ small enough that the denominator, and hence the RHS, is greater than zero, so we can always find a set of $\delta, \epsilon_P, \epsilon_{\neg P}$ which satisfies this inequality (provided $p < 1$, so that the probability has 'room' to increase). Thus, we can always conceive of an action which is worse according to $U_N$ but which has a larger expected value according to $U$.
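As a quick numerical check of the inequality above (with made-up values satisfying $u_P > u_{\neg P}$ and $p < 1$): the chosen $\delta$ exceeds the bound, and the expected utility of the manipulative action does indeed come out higher.

```python
p, u_p, u_np = 0.2, 0.9, 0.4       # default action: press probability and branch utilities
eps_p, eps_np = 0.05, 0.05         # utility costs paid in each branch by the manipulative action
delta = 0.3                        # increase in the press probability

bound = (p * eps_p + (1 - p) * eps_np) / ((u_p - eps_p) - (u_np - eps_np))
assert delta > bound               # bound = 0.1, so delta = 0.3 is enough

e_default = p * u_p + (1 - p) * u_np
e_manip = (p + delta) * (u_p - eps_p) + (1 - p - delta) * (u_np - eps_np)
print(e_default, e_manip)          # 0.5 < 0.6: the manipulative action is preferred
```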
1 comment
comment by johnswentworth · 2024-01-25T16:38:11.901Z · LW(p) · GW(p)
I think that the Wentworth/Lorell approach differs slightly to the one I use here (in particular, they emphasise the counterfactual nature of the two expected utilities - something I don't fully understand)...
Yup, I indeed think the do()-ops are the main piece missing here. They're what remove the agent's incentive to manipulate the shutdown button.