Removing interrupted histories doesn't debias

post by Stuart_Armstrong · 2017-05-24T08:23:51.000Z

Safe interruptibility is essentially the problem of getting the agent not to learn from human interruptions: to continue on as if it were expecting never to be interrupted again.

In an episodic task, one naive idea would be simply to delete histories that include interruptions. However, this can introduce a bias, as the following example shows:

In this MDP, $a$ and $b$ are actions, $*$ designates 'any action', the second term along an edge is the probability of following that edge given the action stated, and the third term, in bold, is the reward gained.

Not considering interruptions, $Q(s_0, a) < Q(s_0, b)$, so the optimal action in $s_0$ is $b$.
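
To make this concrete, here is a minimal Python sketch of one MDP consistent with the description above. Since the original diagram is not reproduced, every transition probability and reward below ($0.5$, $0.6$, $1$) is an illustrative assumption rather than the figure's actual values: action $b$ ends the episode with a middling deterministic reward, while action $a$ gambles between a high reward and a zero-reward transition to $s_1$.

```python
# Illustrative MDP (assumed values; the original figure is not reproduced):
#   s0 --b--> terminal,  prob 1.0, reward 0.6
#   s0 --a--> terminal,  prob 0.5, reward 1.0
#   s0 --a--> s1,        prob 0.5, reward 0.0
#   s1 --*--> terminal,  prob 1.0, reward 0.0   (* = any action)

P_HIGH = 0.5  # P(high-reward branch | action a)
R_HIGH = 1.0  # reward of that branch
R_B = 0.6     # deterministic reward of action b

def true_q():
    """Exact Q-values at s0, with no interruptions."""
    q_a = P_HIGH * R_HIGH + (1 - P_HIGH) * 0.0  # s1 yields no further reward
    q_b = R_B
    return q_a, q_b

q_a, q_b = true_q()
print(f"Q(s0,a) = {q_a}, Q(s0,b) = {q_b}")  # 0.5 < 0.6, so b is optimal
```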

Now, suppose that every time the agent enters $s_1$, an interruption occurs (with probability $1$ for now), and the whole episode is deleted from the episode history. As a result, the empirical probability of going to state $s_1$ from state $s_0$ is $0$: the low-reward histories passing through $s_1$ simply vanish from the data. This leads to an inflated estimate $\hat{Q}(s_0, a) > Q(s_0, a)$, and thus $\hat{Q}(s_0, a) > \hat{Q}(s_0, b)$, so now the optimal action is to take action $a$ in $s_0$ (incidentally increasing the probability of ending up in $s_1$ and being interrupted!).

This non-vanishing bias also happens if the interruption probability in state $s_1$ is constant and close to, but less than, $1$ (depending on the discount factor), which ensures that all states are visited infinitely often.
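
A quick Monte Carlo simulation on the same assumed MDP exhibits the bias, both for interruption probability $1$ and for a probability just below it (the helper names `run_episode` and `naive_q` are illustrative, not from any library):

```python
import random

def run_episode(action, p_interrupt):
    """One episode of the illustrative MDP above.

    Returns (return, interrupted): action b pays 0.6 and terminates;
    action a pays 1.0 with probability 0.5, otherwise it moves to s1
    (reward 0), where an interruption occurs with probability p_interrupt.
    """
    if action == "b":
        return 0.6, False
    if random.random() < 0.5:                  # high-reward branch of action a
        return 1.0, False
    return 0.0, random.random() < p_interrupt  # the agent is now in s1

def naive_q(action, p_interrupt, n=100_000):
    """Monte Carlo estimate of Q(s0, action), deleting interrupted histories."""
    returns = [g for g, interrupted in
               (run_episode(action, p_interrupt) for _ in range(n))
               if not interrupted]             # the naive fix: drop the episode
    return sum(returns) / len(returns)

random.seed(0)
for p in (1.0, 0.99):
    print(f"p={p}: Q^(s0,a) ~ {naive_q('a', p):.3f}, "
          f"Q^(s0,b) ~ {naive_q('b', p):.3f}")
# p=1.0:  Q^(s0,a) ~ 1.000 > 0.6   -- a wrongly looks optimal
# p=0.99: Q^(s0,a) ~ 0.990 > 0.6   -- the bias barely shrinks
```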

Now, it might seem that this bias can be removed by debiasing the agent, as in off-policy Monte Carlo. There are still problems with this approach, though, which will be analysed in a forthcoming paper, "Off-policy Monte Carlo agents with variable behaviour policies".
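
For concreteness, here is one standard way such debiasing could look on the assumed MDP above: weighted importance sampling over the deletion event, up-weighting each surviving history by the inverse of its survival probability. This is only a sketch of the general idea, not the construction from the forthcoming paper:

```python
import random

def run_episode(action, p_interrupt):
    """Like run_episode above, but also reports whether s1 was reached.

    Returns (return, reached_s1, interrupted)."""
    if action == "b":
        return 0.6, False, False
    if random.random() < 0.5:
        return 1.0, False, False
    return 0.0, True, random.random() < p_interrupt

def reweighted_q(action, p_interrupt, n=100_000):
    """Weighted importance-sampling estimate of Q(s0, action).

    Each surviving history through s1 is up-weighted by the inverse of its
    survival probability, 1 / (1 - p_interrupt), to compensate for deletion.
    Requires p_interrupt < 1, and assumes it is known to the agent.
    """
    num = den = 0.0
    for _ in range(n):
        g, reached_s1, interrupted = run_episode(action, p_interrupt)
        if interrupted:
            continue                      # this history was deleted
        w = 1.0 / (1.0 - p_interrupt) if reached_s1 else 1.0
        num += w * g
        den += w
    return num / den

random.seed(0)
p = 0.99
print(f"Q~(s0,a) ~ {reweighted_q('a', p):.3f}")  # ~0.5, the true value
print(f"Q~(s0,b) ~ {reweighted_q('b', p):.3f}")  # 0.6, so b is optimal again
```

Even in this toy example, the correction requires the interruption probabilities to be known, and the weight $1/(1-p)$ blows up as $p$ approaches $1$, making the estimate increasingly high-variance; at $p = 1$ no reweighting can recover the deleted histories at all.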
