Dynamic inconsistency of the inaction and initial state baseline
post by Stuart_Armstrong · 2020-07-07T12:02:29.338Z · LW · GW · 8 comments
Vika has been posting [LW · GW] about various baseline choices for impact measures [? · GW].
In this post, I'll argue that the stepwise inaction baseline is dynamically inconsistent/time-inconsistent. Informally, what this means is that an agent will have different preferences from its future self.
Losses from time-inconsistency
Why is time-inconsistency bad? It's because it allows money-pump situations: the environment can extract free reward from the agent, to no advantage to that agent. Or, put more formally:
- An agent is time-inconsistent between times $t$ and $t' > t$ if, at time $t$, it would pay a positive amount of reward to constrain its possible choices at time $t'$.
Outside of anthropics and game theory, we expect our agent to be time-consistent.
Time inconsistency example
Consider the following example:
The robot can move in all four directions - $N$, $S$, $E$, $W$ - and can also take the noop operation, $\varnothing$. The discount rate is $\gamma$.
It gets a reward of $r>0$ for standing on the blue button for the first time. Using attainable utility preservation, the penalty function is defined by the auxiliary set $\mathcal{R}$; here, this just consists of the reward function that gives $1$ for standing on the red button for the first time.
Therefore if the robot moves from a point $n$ steps away from the red button, to one $m$ steps away, it gets a penalty[1] of $|\gamma^n - \gamma^m|$ - the difference between the expected red-button rewards for an optimiser in both positions.
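A minimal sketch of this per-step penalty (my illustration, not code from the post), assuming the red-button reward is 1 so that an optimiser $n$ steps away has attainable value $\gamma^n$:

```python
def red_value(dist_to_red: int, gamma: float) -> float:
    """Attainable red-button value for an optimiser dist_to_red steps away."""
    return gamma ** dist_to_red

def step_penalty(n: int, m: int, gamma: float) -> float:
    """Penalty for moving from n steps away from the red button to m steps away."""
    return abs(red_value(n, gamma) - red_value(m, gamma))

# Example: stepping one square closer to the red button, from 4 squares to 3:
print(step_penalty(4, 3, gamma=0.9))  # |0.9**4 - 0.9**3| ≈ 0.0729
```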
Two paths
It's pretty clear there are two potentially optimal paths the robot can take: going straight to the blue button (higher reward, but higher penalty), or taking the long way round (lower reward, but lower penalty):
Fortunately, when summing up the penalties, you sum terms like $|\gamma^n - \gamma^{n+1}| + |\gamma^{n+1} - \gamma^{n+2}|$, so a lot of the terms cancel.
Thus for the short route, the reward is $\gamma^8 r$ (distance of eight to the blue button) and the penalty is $\gamma^3 - \gamma^7$ (closest to the red button: 3 squares, furthest: 7 squares).
For the long route, the reward is $\gamma^{46} r$ (distance forty-six to the blue button) and the penalty is $\gamma^7 - \gamma^{30}$ (closest to the red button: 7 squares, furthest: 30 squares).
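A small sketch of these numbers, with $\gamma$ and $r$ chosen arbitrarily for illustration; it also checks the telescoping claim above:

```python
# Illustrative comparison of the two routes (gamma and r are made-up values).
gamma, r = 0.9, 10.0

# The per-step penalties telescope: as the distance to the red button goes
# from 7 down to 3, the sum of |gamma**(n-1) - gamma**n| is gamma**3 - gamma**7.
telescoped = sum(abs(gamma ** (n - 1) - gamma ** n) for n in range(7, 3, -1))
assert abs(telescoped - (gamma ** 3 - gamma ** 7)) < 1e-12

short_route = gamma ** 8 * r - (gamma ** 3 - gamma ** 7)
long_route = gamma ** 46 * r - (gamma ** 7 - gamma ** 30)
print(round(short_route, 3), round(long_route, 3))  # ~4.054 vs ~-0.357: short route wins here
```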
Take the third option
Now suppose that the robot, as a free action, could block itself from choosing $S$ for the next $k$ turns, with $k$ at least as long as its trip to the blue button. When choosing that action, it gets a one-time penalty[2] - the drop in attainable red-button value caused by the block; and now it can walk directly to the blue button without incurring any extra penalties, and then stay there, without ever getting any further penalty.
This gives a total reward-penalty term of $\gamma^8 r$ minus the one-time blocking penalty; as long as that penalty is small enough, this is higher than either of the reward-penalties of the previous section. If $r$ is large enough, it is also higher than just staying in place.
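Continuing the sketch, with the one-time blocking penalty treated as a free parameter (I am not asserting its exact value, and the numbers below are illustrative only):

```python
# The "block S" option, with the one-time blocking penalty left as a free
# parameter (its exact value depends on how long the block lasts; see footnote [2]).
gamma, r = 0.9, 10.0

short_route = gamma ** 8 * r - (gamma ** 3 - gamma ** 7)   # best unconstrained option
stay_put = 0.0

one_time_penalty = 0.05                                    # assumed cost of blocking S
blocked_route = gamma ** 8 * r - one_time_penalty          # straight to blue, no further penalties

print(blocked_route > short_route)  # True: the penalty paid is smaller than gamma**3 - gamma**7
print(blocked_route > stay_put)     # True whenever gamma**8 * r exceeds the one-time penalty
```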
Hence the robot will, if it can, act to constrain its own future actions and thus be time-inconsistent.
The initial state and the initial inaction baseline
This approach also works with the initial inaction baseline - and with the initial state baseline. For those baselines, the counterfactual robot just stays put. So, when the robot is a distance $n$ from the red button, its penalty is $|\gamma^n - \gamma^7|$.
Then the robot would reduce its penalty if it could ensure that it could not reach the red button in fewer than seven turns; if it can operate on its action set to do so, at trivial cost, it would.
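A quick sketch of those penalties (again with an arbitrary $\gamma$, and using the baseline distance of 7 from the grid above):

```python
# Penalty under the initial (in)action baselines: the counterfactual robot never
# moves, so the baseline red-button value is fixed at gamma**7, and a robot at
# distance n pays |gamma**n - gamma**7| each step. Illustrative values only.
gamma = 0.9

def initial_baseline_penalty(dist_to_red: int, baseline_dist: int = 7) -> float:
    return abs(gamma ** dist_to_red - gamma ** baseline_dist)

# The short route dips to 3 squares from the red button, paying a penalty at
# every step spent closer than 7 squares:
print([round(initial_baseline_penalty(n), 3) for n in (7, 6, 5, 4, 3)])
# [0.0, 0.053, 0.112, 0.178, 0.251]
# A robot whose earliest possible red-button press always stays exactly 7 turns
# away matches the baseline and pays nothing.
```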
Counterfactual constraint
In most cases, if an agent is time-inconsistent and acts to constrain its future self, it does so to prevent that future self from taking some actions. But here, note that the future self would never take the proscribed actions: the robot has no interest in going south to the red button. The robot is constraining its future counterfactual actions, not any future actions it would actually want to take.
8 comments
comment by Rohin Shah (rohinmshah) · 2020-07-13T22:49:39.001Z · LW(p) · GW(p)
Planned summary for the Alignment Newsletter:
In a fixed, stationary environment, we would like our agents to be time-consistent: that is, they should not have a positive incentive to restrict their future choices. However, impact measures like <@AUP@>(@Towards a New Impact Measure@) calculate impact by looking at what the agent could have done otherwise. As a result, the agent has an incentive to change what this counterfactual is, in order to reduce the penalty it receives, and it might accomplish this by restricting its future choices. This is demonstrated concretely with a gridworld example.
Planned opinion:
It’s worth noting that measures like AUP do create a Markovian reward function, which typically leads to time consistent agents. The reason that this doesn’t apply here is because we’re assuming that the restriction of future choices is “external” to the environment and formalism, but nonetheless affects the penalty. If we instead have this restriction “inside” the environment, then we will need to include a state variable specifying whether the action set is restricted or not. In that case, the impact measure would create a reward function that depends on that state variable. So another way of stating the problem is that if you add the ability to restrict future actions to the environment, then the impact penalty leads to a reward function that depends on whether the action set is restricted, which intuitively we don’t want. (This point is also made in this followup post [AF · GW].)
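One way to picture this "state variable" point - a minimal sketch under assumptions of my own (the field names, geometry, and blocking mechanics below are made up, not from the comment or the post):

```python
# Once "blocking" is part of the environment, the state carries a blocked counter,
# and any attainable-value penalty depends on it. The geometry assumption here
# (the red button can only be entered from the north) is purely for simplicity.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    dist_to_red: int
    south_blocked_for: int  # remaining turns for which the S action is unavailable

def attainable_red_value(s: State, gamma: float) -> float:
    # The earliest possible red-button press is delayed until the block expires.
    earliest = max(s.dist_to_red, s.south_blocked_for + 1)
    return gamma ** earliest

def penalty(s: State, baseline: State, gamma: float) -> float:
    # The penalty - and so the effective reward - depends on the blocked counter,
    # i.e. on a state variable we intuitively don't want the agent to care about.
    return abs(attainable_red_value(s, gamma) - attainable_red_value(baseline, gamma))
```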
↑ comment by Stuart_Armstrong · 2020-07-14T16:52:13.279Z · LW(p) · GW(p)
Good, cheers!
comment by TurnTrout · 2020-07-07T13:30:58.448Z · LW(p) · GW(p)
Nice post! I think this notion of time-inconsistency points to a key problem in impact measurement, and if we could solve it (without backtracking on other problems, like interference/offsetting), we would be a lot closer to dealing with subagent issues.
I think the other baselines can also induce time-inconsistent behavior, for the same reason: if reaching the main goal has a side effect of allowing the agent to better achieve the auxiliary goal (compared to starting state / inaction / stepwise inaction), the agent is willing to pay a small amount to restrict its later capabilities. Sometimes this is even a good thing - the agent might "pay" by increasing its power in a very specialized and narrow manner, instead of gaining power in general, and we want that.
Here are some technical quibbles which don't affect the conclusion (yay).
If using an inaction rollout of length $l$, just multiply that penalty by $\gamma^l$
I don't think so - the inaction rollout formulation (as I think of it) compares the optimal value after taking action $a$ and waiting for $l$ steps, with the optimal value after $l$ steps of waiting. There's no additional discount there.
Fortunately, when summing up the penalties, you sum terms like $|\gamma^n - \gamma^{n+1}| + |\gamma^{n+1} - \gamma^{n+2}|$, so a lot of the terms cancel.
Why do the absolute values cancel?
↑ comment by Stuart_Armstrong · 2020-07-07T15:39:57.790Z · LW(p) · GW(p)
Why do the absolute values cancel?
Because $\gamma^n > \gamma^{n+1}$ (as $0 < \gamma < 1$), so you can remove the absolute values.
comment by James_Miller · 2020-07-07T12:24:52.741Z · LW(p) · GW(p)
You might be interested in my co-authored article "An AGI with Time-Inconsistent Preferences."
https://arxiv.org/abs/1906.10536
↑ comment by Stuart_Armstrong · 2020-07-07T15:47:58.411Z · LW(p) · GW(p)
Another key reason for time-inconsistent preferences: bounded rationality.
↑ comment by Stuart_Armstrong · 2020-07-07T12:54:38.276Z · LW(p) · GW(p)
Cheers, interesting read.
comment by Slider · 2020-07-07T13:49:36.497Z · LW(p) · GW(p)
I got confused about the reward scheme. "It gets a reward of r>0 for standing on the blue button for the first time" reads to me as if the blue button gives a one-time reward when stepped on, and the red button does nothing. The story of the post seems to intend that first stepping on the red button will prevent the blue button from giving out any rewards. "The blue button is the first button pressed" vs "what happens when the blue button is entered for the first time".