Posts

San Francisco Meetup: Projects 2018-08-24T18:02:10.004Z · score: 2 (1 votes)
San Francisco Meetup: Shallow Questions 2018-08-08T02:13:33.230Z · score: 2 (1 votes)
San Francisco Meetup: Short Talks 2018-07-27T03:37:35.559Z · score: 2 (1 votes)
San Francisco Meetup: Stories 2018-06-22T02:11:08.871Z · score: 10 (3 votes)
San Francisco Meetup: Projects 2018-06-15T23:46:56.979Z · score: 1 (1 votes)
San Francisco Meetup: Deep Questions 2018-06-08T02:09:30.046Z · score: 1 (1 votes)
San Francisco Meetup: Group Debugging 2018-06-04T17:13:02.298Z · score: 7 (2 votes)
Impact Tuning 2017-01-31T03:56:50.000Z · score: 0 (0 votes)
Uncertainty About One's Utility Function 2016-07-27T21:28:43.000Z · score: 1 (1 votes)

Comments

Comment by michael-cohn on Reward learning summary · 2018-01-07T23:04:32.000Z · score: 0 (0 votes) · LW · GW

I had an idea for a prior over planners (the 'p' part of (p, R)) that I think would remove the no-free-lunch result. For a given planner, let its "score" be the average reward the agent gets for a randomly selected reward function (with a simplicity prior over reward functions). Let the prior probability of a particular planner be a function of this score, perhaps by applying a Boltzmann distribution over it. I would call this an evolutionary prior—planners that typically get higher reward given a randomly assigned reward function are more likely to exist. One could also randomize the transition function to see how planners do under arbitrary world-dynamics, but it doesn't seem particularly problematic, and may even be beneficial, to place a higher prior probability on planners that are unusually well-adapted to generating good policies under the particular dynamics of the world we're in.
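(A minimal sketch of what I have in mind, with hypothetical helpers—`rollout_reward` and the discretized reward prior are stand-ins, not part of the original framework:)

```python
import numpy as np

def evolutionary_prior(planners, reward_fns, reward_prior, rollout_reward, beta=1.0):
    """Sketch of the 'evolutionary prior' over planners described above.

    planners      : list of candidate planners (the 'p' in (p, R))
    reward_fns    : list of candidate reward functions R
    reward_prior  : array of probabilities over reward_fns (e.g. a simplicity prior)
    rollout_reward: hypothetical function (planner, R) -> expected reward the agent
                    obtains when that planner is paired with reward function R
    beta          : inverse temperature of the Boltzmann distribution
    """
    # Score each planner by its average reward under a randomly drawn reward function.
    scores = np.array([
        sum(prob * rollout_reward(p, R) for R, prob in zip(reward_fns, reward_prior))
        for p in planners
    ])
    # Boltzmann distribution over scores: planners that tend to achieve high reward
    # across reward functions get higher prior probability of "existing".
    weights = np.exp(beta * (scores - scores.max()))  # subtract max for stability
    return weights / weights.sum()
```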

Comment by michael-cohn on [LINK] Paper on reduced impact ideas · 2017-01-12T20:55:42.000Z · score: 0 (0 votes) · LW · GW

It seems like the purpose of the asteroid scenario is not to come up with ways of deflecting an asteroid, but to provide an example system in which two uncoordinated AIs (pardon the pun) can minimize impact in an interesting way.

Comment by michael-cohn on Uncertainty About One's Utility Function · 2016-09-18T17:46:51.000Z · score: 0 (0 votes) · LW · GW

Thanks for the thoughts. I guess I need to do a lot more looking into CIRL before I come back to this. I do still wonder (though only at an unformalized level) whether an agent could learn a lot about moral evidence from the constraint that its own actions can't cause the expected evidence to change. For example, if it realizes that a certain action (like subtle coercion) would produce something it would otherwise have treated as legitimate evidence, then that situation must not actually count as evidence at all. That constraint seems to pack a decent minority of our requirements for value learning into a relatively simple statement. There may be other ways to encode such a constraint besides making an agent uncertain about its function for determining which observations provide which evidence, though.
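(To make the constraint slightly more concrete, here is a rough sketch under assumed names—`interpret` and `predicted_obs_probs` are hypothetical placeholders for the agent's evidence-interpretation rule and its predictions about what a candidate action would make it observe:)

```python
import numpy as np

def predictably_shifts_beliefs(prior, predicted_obs_probs, interpret):
    """Check whether a candidate action would predictably move the agent's credences.

    prior              : array of current credences over moral hypotheses
    predicted_obs_probs: dict {observation: probability} of what the agent expects
                         to observe if it takes the action (e.g. subtle coercion)
    interpret          : the agent's current rule mapping an observation to
                         posterior credences, i.e. what it treats as evidence

    If the expectation of the 'posterior' over predicted observations differs from
    the prior, then taking the action would shift beliefs in a way the agent can
    foresee, so by conservation of expected evidence those observations should not
    be counted as legitimate evidence after that action.
    """
    expected_posterior = sum(
        prob * np.asarray(interpret(obs)) for obs, prob in predicted_obs_probs.items()
    )
    return not np.allclose(expected_posterior, prior)
```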