Minimizing Empowerment for Safety

post by IAFF-User-111 (Imported-IAFF-User-111) · 2017-02-09T04:45:28.000Z · score: 0 (0 votes) · LW · GW · None comments

I haven't put much thought into this post; it's off the cuff.

DeepMind has published a couple of papers on maximizing empowerment as a form of intrinsic motivation for Unsupervised RL / Intelligent Exploration.

I haven't looked at either paper in detail, but the basic idea is to maximize the mutual information between an agent's actions (or policies/options) and future outcomes. An agent with high empowerment knows which strategy to follow to bring about any given outcome.
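To make the idea concrete, here is a minimal sketch (my own illustration, not from either paper) of one-step empowerment in a tabular setting: the channel capacity max over p(a) of I(A; S') of the transition channel p(s'|a), computed with the standard Blahut-Arimoto iteration. The function name and setup are hypothetical.

```python
import numpy as np

def empowerment(p_next, n_iters=200, tol=1e-10):
    """One-step empowerment of a state: the channel capacity
    max_{p(a)} I(A; S') of the transition channel p(s'|a),
    computed with the Blahut-Arimoto algorithm.

    p_next: array of shape (n_actions, n_states); row a is p(s'|a).
    Returns empowerment in nats.
    """
    n_actions, _ = p_next.shape
    p_a = np.full(n_actions, 1.0 / n_actions)  # start from a uniform p(a)
    for _ in range(n_iters):
        # Marginal over next states under the current action distribution.
        p_s = p_a @ p_next
        # Blahut-Arimoto update: reweight each action by
        # exp( KL(p(s'|a) || p(s')) ).
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_next > 0, np.log(p_next / p_s), 0.0)
        kl = (p_next * log_ratio).sum(axis=1)
        new_p_a = p_a * np.exp(kl)
        new_p_a /= new_p_a.sum()
        if np.abs(new_p_a - p_a).max() < tol:
            p_a = new_p_a
            break
        p_a = new_p_a
    # Capacity = I(A; S') at the optimizing p(a).
    p_s = p_a @ p_next
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(p_next > 0, np.log(p_next / p_s), 0.0)
    return float((p_a[:, None] * p_next * log_ratio).sum())
```

With two actions that lead to two fully distinguishable next states, this returns log 2 nats; with indistinguishable actions it returns 0. DeepMind's papers use variational approximations to scale this beyond the tabular case.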

It seems plausible that, when a reward function is present, *minimizing* empowerment instead could help steer an agent away from pursuing instrumental goals that have large effects on the world.
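One crude way this could enter a reward function (a hypothetical sketch of my own, not a proposal from the literature) is as a shaping penalty: subtract a scaled empowerment-like term from the task reward, so that between two equally rewarding outcomes the agent prefers the one where it has fewer distinguishable options. Here the penalty uses I(A; S') under a uniform action distribution, which is a cheap lower bound on one-step empowerment; `beta` is an assumed penalty weight.

```python
import numpy as np

def mi_uniform(p_next):
    """I(A; S') under a uniform action distribution: a cheap
    lower bound on one-step empowerment (nats)."""
    n_actions = p_next.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)
    p_s = p_a @ p_next  # marginal over next states
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(p_next > 0, np.log(p_next / p_s), 0.0)
    return float((p_a[:, None] * p_next * log_ratio).sum())

# Two candidate successor states with equal task reward. In state B the
# agent's actions lead to fully distinguishable futures (high empowerment);
# in state A they are indistinguishable (low empowerment).
trans_A = np.array([[0.5, 0.5], [0.5, 0.5]])
trans_B = np.array([[1.0, 0.0], [0.0, 1.0]])
reward_A = reward_B = 1.0
beta = 0.5  # hypothetical penalty weight

score_A = reward_A - beta * mi_uniform(trans_A)
score_B = reward_B - beta * mi_uniform(trans_B)
# The penalized agent prefers A, the state where it has less influence.
```

The obvious failure mode, raised in the comments below, is that the same penalty also discourages the influence the agent needs to do its job.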

So that might be useful for "taskification", "limited impact", etc.


Comments sorted by top scores.

comment by danieldewey · 2017-02-09T16:29:51.000Z · score: 2 (2 votes) · LW · GW

Discussed briefly in Concrete Problems, FYI.

comment by Vika · 2017-03-08T14:51:19.000Z · score: 0 (0 votes) · LW · GW

I would expect minimizing empowerment to impede the agent in achieving its objectives. You do want the agent to have large effects on the parts of the environment that are relevant to its objectives, without being incentivized to negate those effects in weird ways in order to achieve low impact overall.

I think we need something like a sparse empowerment constraint, where you minimize empowerment over most (but not all) dimensions of the future outcomes.