Comments

Comment by IAFF-User-225 (Imported-IAFF-User-225) on CIRL Wireheading · 2017-05-07T13:12:44.000Z

As an observation, it seems like part of the problem in this example is that the agent has access to different actions than the supervisor. The supervisor cannot move to the state in question (and therefore cannot provide any information about the reward difference, as noted), but the agent can easily do so. If this were not the case, it would not matter what the agent believed about the reward there.

What happens in scenarios where you restrict the set of actions available to the agent so that it matches those available to the supervisor?
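To make the question concrete, here is a minimal sketch in Python (a toy two-action setup of my own, not the post's actual environment): once "move to the unreachable state" is removed from the agent's action set, its belief about that state's reward no longer affects its choice.

    # A toy illustration (assumed setup, not the post's environment): the agent
    # chooses between a state with known reward and a state the supervisor
    # cannot reach, whose reward theta the agent is uncertain about.

    def best_action(actions, expected_theta, known_reward=0.5):
        """Return the available action with the highest expected reward."""
        values = {"stay": known_reward}
        if "move_to_unreachable" in actions:
            values["move_to_unreachable"] = expected_theta
        return max(values, key=values.get)

    full_set = {"stay", "move_to_unreachable"}   # agent's action set
    supervisor_set = {"stay"}                    # supervisor's action set

    # With the full action set, the agent's belief about theta drives its choice.
    print(best_action(full_set, expected_theta=0.9))        # -> move_to_unreachable
    print(best_action(full_set, expected_theta=0.1))        # -> stay

    # Restricted to the supervisor's actions, the belief about theta is irrelevant.
    print(best_action(supervisor_set, expected_theta=0.9))  # -> stay
    print(best_action(supervisor_set, expected_theta=0.1))  # -> stay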

Comment by IAFF-User-225 (Imported-IAFF-User-225) on Change utility, reduce extortion · 2017-05-07T12:38:26.000Z

In that case, one strategy the EAI might employ is to allow the FAI to increase its utility to an arbitrarily high level before threatening to take it away. In this way, it can simulate an arbitrarily large disutility even if the utility function is bounded below. Of course, a high utility might also improve the FAI's ability to resist the EAI's threats.

In this scenario, it is also possible that the FAI, anticipating the EAI's future threat against it, might calculate its expected utility differently. For example, if it deduces that the EAI is waiting until some utility threshold to make its threat, and it finds the threat credible, it might cap its own utility growth below that threshold to avoid triggering it.
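As a rough numerical sketch of that trade-off (my own illustrative numbers and threshold, nothing from the post):

    # A hedged toy model (illustrative numbers only): the EAI waits until the
    # FAI's utility exceeds a threshold T, then threatens to knock it down to
    # the lower bound (0 here) unless the FAI pays a compliance cost.

    LOWER_BOUND = 0.0
    T = 100.0               # assumed threshold at which the EAI issues its threat
    COMPLIANCE_COST = 60.0  # utility the FAI gives up by complying

    def fai_outcome(target_utility, complies, threat_credible=True):
        """Utility the FAI ends up with under this simple model."""
        if target_utility <= T:
            return target_utility          # threat is never triggered
        if complies:
            return target_utility - COMPLIANCE_COST
        return LOWER_BOUND if threat_credible else target_utility

    # Growing to 150 and refusing simulates an arbitrarily large loss (150 -> 0),
    # even though utility is bounded below.
    print(fai_outcome(150.0, complies=False))  # -> 0.0
    print(fai_outcome(150.0, complies=True))   # -> 90.0
    # Capping its own growth at the threshold avoids the threat entirely,
    # and with these numbers beats both options above.
    print(fai_outcome(T, complies=False))      # -> 100.0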

This seems a lot like the human cognitive bias of loss aversion; I wonder if AGIs would (or should) suffer from something similar.

Comment by IAFF-User-225 (Imported-IAFF-User-225) on Change utility, reduce extortion · 2017-05-02T16:46:14.000Z

Would it even be necessary for the EAI to threaten unbounded disutility? Given that the FAI's utility is unbounded in the positive direction as well, it seems like a simple threat by the EAI to cap it at some value would suffice. Depriving an agent of unbounded rewards could be as bad as threatening unbounded punishments. If the actions that the EAI wants the FAI to take do not themselves go against the FAI's utility function, then there is little reason for it not to comply, given the infinite rewards it can gain by going along. An infinite stick is persuasive, but so is an infinite carrot.
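One rough way to write the point down (my notation, not the post's, and assuming the FAI can in fact achieve arbitrarily high expected utility): if the EAI threatens to cap the FAI's utility u at some value c unless it complies, then

\[
\sup_{\pi} \mathbb{E}\big[u(\pi)\big] \;-\; \sup_{\pi} \mathbb{E}\big[\min(u(\pi),\, c)\big] \;=\; \infty ,
\]

so any finite cost of complying is worth paying, which is exactly the leverage an unbounded punishment would have provided.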

An upper bound to the FAI's utility (or to its modified version) could help to prevent threats of this sort.

Comment by IAFF-User-225 (Imported-IAFF-User-225) on Change utility, reduce extortion · 2017-05-02T02:24:02.000Z

I am a little confused by the axes on those graphs. Is utility meant to decrease from left to right (inverting the x axis) and to increase from bottom to top (normal y axis)?