Reward learning summary

post by Stuart_Armstrong · 2017-11-28T15:55:08.000Z · LW · GW · 1 comment

A putative new idea for AI control; index here.

I've been posting a lot on value/reward learning recently, and, as usual, the process of posting (and some feedback) means that those posts are partially superseded already - and some of them are overly complex.

So here I'll try and briefly summarise my current insights, with links to the other posts if appropriate (a link will cover all the points noted since the previous link):


1 comment


comment by michaelcohen (cocoa) · 2018-01-07T23:04:32.000Z · LW(p) · GW(p)

I had an idea for a prior for planners (the 'p' part of (p, R)) that I think would remove the no-free-lunch result. For a given planner, let its "score" be the average reward the agent gets for a randomly selected reward function (with a simplicity prior over reward functions). Let the prior probability of a particular planner be a function of this score, perhaps by applying a Boltzmann distribution over it. I would call this an evolutionary prior: planners that typically get higher reward given a randomly assigned reward function are more likely to exist.

One could also randomize the transition function to see how planners do under arbitrary world-dynamics, but it doesn't seem particularly problematic, and may even be beneficial, to place a higher prior probability on planners that are unusually well-adapted to generating good policies given the particular dynamics of the world we're in.
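Below is a minimal sketch of how such an "evolutionary prior" could be computed in a toy setting. It is only an illustration of the construction described above, not part of the original comment: the environment, the planner names, the crude stand-in for a simplicity prior over rewards, and the Boltzmann temperature `BETA` are all illustrative assumptions.

```python
# Toy sketch of the proposed "evolutionary prior" over planners:
# each planner is scored by its average return over reward functions
# drawn from a (crude) simplicity-weighted prior, and the prior
# probability of each planner is a Boltzmann (softmax) distribution
# over those scores. All details here are illustrative assumptions.

import math
import random

N_STATES = 4          # toy chain environment: states 0..3, actions move left/right
EPISODE_LEN = 10
N_REWARD_SAMPLES = 200
BETA = 2.0            # Boltzmann temperature (assumed value)

def sample_reward(rng):
    """Crude stand-in for a simplicity prior: sparse reward vectors with
    small integer values (entries in {-1, 0, 1})."""
    return [rng.choice([0, 1]) * rng.choice([-1, 1]) for _ in range(N_STATES)]

def rollout(planner, reward, rng):
    """Run one episode and return the total reward collected."""
    state = rng.randrange(N_STATES)
    total = 0.0
    for _ in range(EPISODE_LEN):
        action = planner(state, reward)
        state = max(0, min(N_STATES - 1, state + action))
        total += reward[state]
    return total

def greedy_planner(state, reward):
    """Moves towards the adjacent state with the higher reward."""
    left = reward[max(0, state - 1)]
    right = reward[min(N_STATES - 1, state + 1)]
    return 1 if right >= left else -1

def anti_planner(state, reward):
    """Perversely moves towards the lower-reward neighbour."""
    return -greedy_planner(state, reward)

def random_planner(state, reward, _rng=random.Random(0)):
    """Ignores the reward function entirely."""
    return _rng.choice([-1, 1])

def score(planner, rng):
    """Average return over reward functions drawn from the toy prior."""
    return sum(rollout(planner, sample_reward(rng), rng)
               for _ in range(N_REWARD_SAMPLES)) / N_REWARD_SAMPLES

def boltzmann_prior(scores, beta=BETA):
    """Softmax over scores: higher-scoring planners get more prior mass."""
    mx = max(scores)
    weights = [math.exp(beta * (s - mx)) for s in scores]
    z = sum(weights)
    return [w / z for w in weights]

if __name__ == "__main__":
    rng = random.Random(0)
    planners = {"greedy": greedy_planner, "anti": anti_planner, "random": random_planner}
    scores = {name: score(p, rng) for name, p in planners.items()}
    prior = boltzmann_prior(list(scores.values()))
    for (name, s), pr in zip(scores.items(), prior):
        print(f"{name:7s} score={s:6.2f} prior={pr:.3f}")
```

In this toy setting the greedy planner should end up with the highest score and hence the most prior mass, while the anti-rational planner is penalised, which is the intended effect of scoring planners by how well they do under randomly drawn reward functions.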