"Do Nothing" utility function, 3½ years later?

post by niplav · 2020-07-20T11:09:36.946Z · LW · GW · 1 comment

This is a question post.


In AI Alignment: Why It's Hard and Where To Start, at 21:21, Yudkowsky says:

If we want to have a robot that will let us press the suspend button—just suspend it to disk—we can suppose that we already have a utility function that describes: “Do nothing.” In point of fact, we don’t have a utility function that says, “Do nothing.” That’s how primitive the state of the field is right now. But, leaving that aside, it’s not the hardest problem we’re ever going to do, and we might have it in six months, for all I know.

I get the impression that there are some pointers to this in Attainable Utility Preservation [LW · GW] (though saying "maximise attainable utility over this set of random utility functions" seems like it would just fire up instrumentally convergent drives), but I could be wrong.
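
(For context, my rough and possibly mistaken reading of AUP, as in Turner et al.'s "Conservative Agency", is that it penalizes changes in attainable utility relative to inaction rather than maximizing attainable utility. Ignoring scaling details, the penalized reward looks something like

$$R_{\text{AUP}}(s,a) = R(s,a) - \frac{\lambda}{|\mathcal{R}_{\text{aux}}|} \sum_{R_i \in \mathcal{R}_{\text{aux}}} \left| Q_{R_i}(s,a) - Q_{R_i}(s,\varnothing) \right|$$

where the auxiliary reward set, the no-op action, and the regularization weight are written in my own approximate notation rather than the paper's exact formulation. If that reading is right, the auxiliary Q-values only enter as a penalty for deviating from doing nothing, not as something to maximize, though I could be misreading the proposal.)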

So, 3½ years later, what is the state of "do nothing" utility functions?

Answers

answer by Vika · 2020-07-20T11:31:09.697Z · LW(p) · GW(p)

Hi there! If you'd like to get up to speed on impact measures, I would recommend these papers and the Reframing Impact [? · GW] sequence.

comment by niplav · 2020-07-20T21:15:07.994Z · LW(p) · GW(p)

Thanks for the links! I'll check them out.

1 comment


comment by Pattern · 2020-07-20T21:42:08.170Z · LW(p) · GW(p)

I think there are proposals that (it is hoped, with more research) might lead to changeable utility functions, i.e. agents that won't try to stop you from changing their utility function.

I don't think 'don't self-modify' utility functions are around yet - the tricky part might be getting the agent to recognize itself, the goal, or something.

 

Most of what I've seen has revolved around thought experiments (with math).