Learning "known" information when the information is not actually known

post by Stuart_Armstrong · 2019-04-01T17:56:17.719Z · LW · GW · 0 comments


Methods like cooperative inverse reinforcement learning assume that the human knows their "true" reward function R, and that the human and the robot then cooperate to figure out and maximise this reward.
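
To make that assumption concrete, here is a minimal sketch (in Python, not from the original post; all names and reward candidates are illustrative) of the standard value-learning picture: the human acts on a single, fully known reward function R, and the robot does Bayesian inference over which R it is.

```python
# Illustrative sketch of the standard value-learning assumption: the human
# acts on a single, fully known reward function R, and the robot infers
# which R it is from the human's choices.

import math
import random

# Hypothetical candidate reward functions the robot entertains.
candidate_rewards = {
    "money": lambda outcome: outcome["money"],
    "money_and_respect": lambda outcome: outcome["money"] + 2 * outcome["respect"],
}

def human_choice(true_reward, options, beta=1.0):
    """Hypothesis 1 in code: the human knows R and (noisily) picks the best option."""
    weights = [math.exp(beta * true_reward(o)) for o in options]
    return random.choices(options, weights=weights)[0]

def bayesian_update(posterior, options, choice, beta=1.0):
    """The robot shifts belief toward whichever candidate R best explains the choice."""
    new_posterior = {}
    for name, reward in candidate_rewards.items():
        normaliser = sum(math.exp(beta * reward(o)) for o in options)
        likelihood = math.exp(beta * reward(choice)) / normaliser
        new_posterior[name] = posterior[name] * likelihood
    total = sum(new_posterior.values())
    return {name: p / total for name, p in new_posterior.items()}

# One round: both options pay the same, but only one also brings respect.
options = [
    {"money": 1.0, "respect": 0.0},
    {"money": 1.0, "respect": 1.0},
]
posterior = {"money": 0.5, "money_and_respect": 0.5}
choice = human_choice(candidate_rewards["money_and_respect"], options)
posterior = bayesian_update(posterior, options, choice)
print(posterior)  # belief mass moves toward the reward that explains the choice
```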

This is fine as far as the model goes, and can allow us to design many useful systems. But it has a problem: the assumption is not true, and, moreover, its falsity can have major detrimental effects.

Contrast two situations:

  1. The human knows the true R.
  2. The human has a collection of partial models [LW · GW] in which they have clearly defined preferences. As a bounded, limited agent whose internal symbols are only well-grounded [LW · GW] in standard situations [LW · GW], their stated preferences will be a simplification of their mental model at the time. The true R is constructed from some process of synthesis [LW · GW] (see the sketch after this list).
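
For contrast, here is an equally rough sketch of hypothesis 2 in the same illustrative Python: the human's preferences only exist as partial models, each valid in a limited range of situations, and any overall R has to be synthesised from them. The synthesis rule below is a placeholder, not a proposal.

```python
# Hypothesis 2 in code: the human has partial models, each only valid in the
# "standard" situations where its concepts are well-grounded. There is no
# pre-existing true R; it has to be synthesised from the pieces.

def money_model(outcome):
    """Partial preference: more money is better -- but only grounded in
    ordinary situations where money still buys things."""
    if not outcome.get("money_is_useful", True):
        raise ValueError("outside this model's domain of validity")
    return outcome["money"]

def respect_model(outcome):
    """Another partial preference, with its own limited domain."""
    return outcome["respect"]

def synthesise(partial_models, outcome):
    """Placeholder synthesis: average whichever partial models still apply.
    The real process of synthesis is exactly what is *not* known in advance."""
    values = []
    for model in partial_models:
        try:
            values.append(model(outcome))
        except ValueError:
            pass  # the model is silent outside its domain
    return sum(values) / len(values) if values else None

# In a standard situation both models apply; in an exotic one, the money model falls silent.
standard = {"money": 1.0, "respect": 1.0, "money_is_useful": True}
exotic = {"money": 1.0, "respect": -1.0, "money_is_useful": False}
print(synthesise([money_model, respect_model], standard))  # 1.0
print(synthesise([money_model, respect_model], exotic))    # -1.0
```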

Now imagine the following conversation:

Human: "I want money."
AI: "Are you sure? Is there anything you'd like to add or clarify?"
Human: "No, money is what I want."

Under most versions of hypothesis 1, this will end in disaster. The human has expressed their preferences and, when offered the opportunity for clarification, didn't give any. The AI will become a money-maximiser, and things will go pear-shaped.

Under hypothesis 2, however, the AI will attempt to get more details out of the human, suggesting hypothetical scenarios and checking what happens when money and other things in money's web of connotations [LW · GW] come apart - e.g. "What if you had a lot of money, but couldn't buy anything, and everyone despised you?" The synthesis may fail, but, at the very least, the AI will investigate more.
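
To picture that behaviour, here is one last illustrative sketch (same assumptions as above; the probe questions are paraphrased from the example in the previous paragraph): the AI constructs situations where money and its usual connotations come apart and asks the human about them, rather than committing to the stated preference.

```python
# Sketch of the hypothesis-2 behaviour: before committing to "maximise money",
# the AI probes scenarios where money and its usual web of connotations
# (purchasing power, status, ...) come apart, and records the human's reactions.

probe_scenarios = [
    "What if you had a lot of money but couldn't buy anything with it?",
    "What if you were rich but everyone despised you?",
]

def ask_human(question):
    """Stand-in for actually querying the human; a real system would present
    the scenario and record the answer."""
    print("AI asks:", question)
    return None  # unknown until the human answers

def refine_preferences(stated_preference, scenarios):
    """Instead of assuming the stated preference ('money') already is the true
    reward, collect reactions to edge cases and feed them into the synthesis."""
    reactions = {question: ask_human(question) for question in scenarios}
    return {"stated": stated_preference, "edge_case_reactions": reactions}

refine_preferences("money", probe_scenarios)
```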

Thus assuming that the AI will be learning a truth that humans already know is a harmless assumption in many circumstances, but will result in disaster if pushed to the extreme.
