Ultra-simplified research agenda

post by Stuart_Armstrong · 2019-11-22T14:29:41.227Z · LW · GW · 4 comments


This is an ultra-condensed version of the research agenda [LW · GW] on synthesising human preferences (video version here):

In order to infer what a human wants from what they do, an AI needs to have a human theory of mind.

Theory of mind is something that humans have instinctively and subconsciously, but that isn't easy to spell out explicitly; therefore, by Moravec's paradox, it will be very hard to implant it into an AI, and this needs to be done deliberately.

One way of defining theory of mind is to look at how humans internally model the value of various hypothetical actions and events (happening to themselves and to others).
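To make the last two points concrete, here is a minimal toy sketch (the action names, numbers, and the Boltzmann-rational choice model are illustrative assumptions, not part of the agenda): the "theory of mind" is the assumed model linking internal values to observed behaviour, and the same behaviour yields different inferred values under different assumed models.

```python
import numpy as np

# Observed behaviour of a hypothetical human: how often they pick each action.
actions = ["take umbrella", "leave umbrella"]
observed_choice_freqs = np.array([0.8, 0.2])

def infer_values(choice_freqs, rationality):
    """Invert an assumed Boltzmann-rational choice model, p(a) ∝ exp(rationality * value(a)).
    Values are only recovered up to an additive constant."""
    values = np.log(choice_freqs) / rationality
    return values - values.max()

# Two candidate theories of mind: a near-rational human vs. a very noisy one.
print(infer_values(observed_choice_freqs, rationality=5.0))  # small inferred value gap
print(infer_values(observed_choice_freqs, rationality=0.5))  # large inferred value gap
```

Same behaviour, different assumed theory of mind, different inferred preferences: the inference only gets off the ground once some explicit model of the human is chosen.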

Finally, once we have a full theory of mind, we still need to deal, somehow, with the fact that humans have meta-preferences over their preferences, and that these preferences and meta-preferences are often contradictory, changeable, manipulable, and (more worryingly) underdefined in the exotic worlds that AIs could produce.

Any way of dealing with that fact will be contentious, but it's necessary to sketch out an explicit way of doing this, so it can be critiqued and improved.
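As an illustration only (not the agenda's actual construction; every name and number below is hypothetical), here is one crude form such an explicit rule could take: meta-preferences act as endorsement weights over contradictory base preferences before they are aggregated.

```python
# Hypothetical elicited data: signed strength of each base preference,
# plus a meta-preference weight saying how much the human endorses holding it.
base_preferences = {
    "eat the cake": +1.0,
    "stick to the diet": +0.8,
}
meta_endorsement = {
    "eat the cake": 0.2,       # "I wish I didn't want this so much"
    "stick to the diet": 0.9,  # "I endorse this on reflection"
}

def synthesise(prefs, meta):
    """Weight each (possibly contradictory) preference by its meta-level endorsement."""
    return {k: prefs[k] * meta.get(k, 1.0) for k in prefs}

print(synthesise(base_preferences, meta_endorsement))
# {'eat the cake': 0.2, 'stick to the diet': 0.72}
```

Any rule this explicit invites objections, which is the point: writing it down is what lets it be critiqued and improved.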

A toy model for this research agenda can be found here [LW · GW].

4 comments


comment by Michaël Trazzi (mtrazzi) · 2019-11-22T16:44:16.391Z · LW(p) · GW(p)

Having printed and read the full version, I found this ultra-simplified version a useful summary.

Happy to read a (not-so-)simplified version (like 20-30 paragraphs).

comment by John_Maxwell (John_Maxwell_IV) · 2019-11-24T07:22:46.077Z · LW(p) · GW(p)

"Theory of mind is something that humans have instinctively and subconsciously, but that isn't easy to spell out explicitly; therefore, by Moravec's paradox, it will be very hard to implant it into an AI, and this needs to be done deliberately."

I think this is the weakest part. Consider: "Recognizing cat pictures is something humans can do instinctively and subconsciously, but that isn't easy to spell out explicitly; therefore, by Moravec's paradox, it will be very hard to implant it into an AI, and this needs to be done deliberately." But in practice, the techniques that work best for cat pictures work well for lots of other things as well, and a hardcoded solution customized for cat pictures will actually tend to underperform.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-11-25T10:23:20.107Z · LW(p) · GW(p)

I'm actually willing to believe that methods used for cat pictures might work for human theory of mind, if trained on that data (and this doesn't solve the problem of underdefined preferences).

comment by avturchin · 2019-11-22T15:36:49.910Z · LW(p) · GW(p)

Maybe we could try to factor the theory of mind out? In that case, the following type of claim would be meaningful: "For theory of mind T1, a human H has the set of preferences P1, and for another theory of mind T2 they have P2." We could then compare P1 and P2, and if we find invariants, those could be used as more robust representations of the preferences.
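A minimal sketch of this "invariants across theories of mind" idea, with purely hypothetical preference sets (illustrative data, not a worked-out proposal): infer preferences under several candidate theories of mind and keep only what every theory agrees on.

```python
# Hypothetical preference sets inferred for the same human under theories T1, T2, T3.
preferences_under_theory = {
    "T1": {"avoid pain", "be respected", "eat cake"},
    "T2": {"avoid pain", "be respected", "stick to diet"},
    "T3": {"avoid pain", "be respected"},
}

def invariant_preferences(inferred):
    """Intersect the preference sets inferred under each theory of mind."""
    sets = iter(inferred.values())
    result = set(next(sets))
    for s in sets:
        result &= s
    return result

print(invariant_preferences(preferences_under_theory))
# prints the preferences all three theories agree on: avoid pain, be respected
```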