How to model uncertainty about preferences?

post by quetzal_rainbow · 2023-03-24T19:04:42.005Z · LW · GW · No comments

This is a question post.


I've recently started to think about how a nascent "hot mess" superintelligence can reflect on its own values and converge to something consistent. The simplest way to think about this, it seems to me, is to model it as a process of resolving the superintelligence's uncertainty about its own preferences.

Suppose an agent knows that it is an expected utility maximizer and is uncertain between two utility functions, $U_1$ and $U_2$, with assigned probabilities $p$ and $1-p$ (say $p > 1/2$). The agent must choose between two actions, $a_1$ and $a_2$. Let's say that the optimal action under $U_1$ is $a_1$ and under $U_2$ is $a_2$. To maximize expected utility, the agent chooses $a_1$. However, choosing $a_1$ is also decisive evidence in favor of $U_1$, and therefore the agent updates $p$ to 1. This representation of uncertain preferences looks unsatisfactory because it quickly and predictably converges to only one utility function.
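Here is a minimal sketch (mine, not from the post) of that collapse dynamic. The utility tables, the action names, and especially the "update on your own action" rule are illustrative assumptions about how such an agent might be set up:

```python
def expected_utility(action, p):
    """Expected utility of an action under uncertainty over U1 (prob p) and U2 (prob 1-p)."""
    return p * U1[action] + (1 - p) * U2[action]

# Two candidate utility functions over two actions.
U1 = {"a1": 1.0, "a2": 0.0}   # a1 is optimal if U1 is the true preference
U2 = {"a1": 0.0, "a2": 1.0}   # a2 is optimal if U2 is the true preference

p = 0.6  # prior probability that U1 is the agent's "true" utility function

# Step 1: choose the action that maximizes expected utility.
action = max(U1, key=lambda a: expected_utility(a, p))  # -> "a1", since p > 0.5

# Step 2: treat the chosen action as evidence about which utility function the
# agent "really" has. If "I am a U1-maximizer" predicts a1 with certainty and
# "I am a U2-maximizer" predicts a2 with certainty, the likelihood of a1 under
# U2 is zero, and the Bayesian update collapses p to 1.
likelihood_U1 = 1.0 if action == "a1" else 0.0
likelihood_U2 = 1.0 if action == "a2" else 0.0
p = (p * likelihood_U1) / (p * likelihood_U1 + (1 - p) * likelihood_U2)

print(action, p)  # a1 1.0 -- the uncertainty predictably vanishes after one choice
```

Under these assumptions the less-probable utility function is discarded after a single decision, which is exactly the unsatisfying behavior described above.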

Does anyone know of a good model for uncertain preferences that can meet these criteria after some additions?

Nash bargaining (between different hypotheses about preferences) looks like something close to the desirable properties, but I am not sure; maybe something better has already been developed.
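For concreteness, here is a small sketch (again mine, not from the post) of what Nash bargaining between two preference hypotheses could look like: pick the option that maximizes the product of each hypothesis's gain over a disagreement point. The candidate options, their utilities, and the choice of disagreement point are illustrative assumptions:

```python
# Each option maps to (utility under U1, utility under U2).
options = {
    "a1":         (1.0, 0.0),
    "a2":         (0.0, 1.0),
    "compromise": (0.7, 0.7),
}

# Disagreement point: here, the worst each hypothesis could get among the options.
d1 = min(u1 for u1, _ in options.values())
d2 = min(u2 for _, u2 in options.values())

def nash_product(option):
    """Product of each hypothesis's gain over its disagreement utility."""
    u1, u2 = options[option]
    return max(u1 - d1, 0.0) * max(u2 - d2, 0.0)

choice = max(options, key=nash_product)
print(choice)  # "compromise" -- neither hypothesis is simply discarded
```

Unlike the expected-utility-plus-self-update picture above, this rule keeps both hypotheses "at the table": an option that is merely optimal for the more probable hypothesis does not automatically win.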

Answers

answer by baturinsky · 2023-03-25T02:26:50.333Z · LW(p) · GW(p)

Correctly handling uncertainty in values, knowledge, and predictions is necessary for reaching any complex goal or executing any complex plan. So the capability to do that is probably something an AI will have to acquire in order to be an AGI.
