Descriptive vs. specifiable values

post by TsviBT · 2023-03-26T09:10:56.334Z · LW · GW · 2 comments

Contents

  Descriptive values
  Specifiable values
  Quasi-example: explicit utility maximization
2 comments

[Metadata: crossposted from https://tsvibt.blogspot.com/2022/11/descriptive-vs-specifiable-values.html. First completed November 19, 2022.]

What are an agent's values? An answer to this question might be a good description of the agent's external behavior and internal workings, without showing how one could modify the agent's workings or origins so that the agent pushes the world in a specific different direction.

Descriptive values

There's some discussion of what can be inferred about an agent's values from its behavior and structure. E.g., see Daniel Dennett's intentional stance; "Occam's razor is insufficient to infer the preferences of irrational agents" by Stuart Armstrong and Sören Mindermann (arXiv); and this post [AF(p) · GW(p)] by Vanessa Kosoy.

One could describe an agent as having certain values: the agent's behavior is a boundedly rational attempt to push the world in certain directions. For some purposes, it's useful to have a parsimonious description of an agent's behavior or internal workings in terms of values. For example, such a description could be useful for helping the agent out: you push the world in the same direction that the agent is trying to push it.

Specifiable values

A distinct purpose in describing an agent as having values is to answer questions about values in counterfactuals: for example, how could the agent's workings or origins be modified so that the agent pushes the world in a specific, different direction?

To make these questions more likely to have answers, and to avoid relying too much on assumptions about what values are, replace the notion of "values" with the notion of "what directions a mind ends up pushing the world in".

Quasi-example: explicit utility maximization

An auxiliary question: how, mechanistically, do "the values" determine the behavior? This question might not have an answer, because there might not be some component in the agent that constitutes "the values". For example, in humans, there's no clear value component; there are many in-built behavior-determiners, but they don't fully constitute what we call our values. But, in cases where we clearly understand the mechanism by which an agent's values determine its behavior, answers to other questions about values in counterfactuals might follow.

For example, there's the classic agent model: a system that searches for actions that it predicts will lead in expectation to the most highly-scored world according to its utility function box. The mechanism is explicit in this model. The utility function is embodied, in a box, as an input-output function, and it determines the agent's effects on the world by providing the criterion that the agent uses to select actions. Some answers to the above questions follow. E.g., it's clear at least qualitatively how to modify the agent's values to a specific state: if you want to make the agent cause a certain kind of world, just change the utility function to score that kind of world highly.
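A minimal sketch of this classic model in Python (the state space, dynamics, and target numbers here are invented for illustration): the utility function sits in an explicit box, and the agent selects whichever action it predicts will lead, in expectation, to the highest-scored world.

```python
import random

# Toy world: states are integers; actions nudge the state, with some noise.
ACTIONS = [-1, 0, 1]

def predict_outcomes(state, action, n_samples=100):
    """Sample next states under a made-up noisy dynamics model."""
    return [state + action + random.choice([-1, 0, 1]) for _ in range(n_samples)]

def utility(world_state):
    """The 'utility function box': an explicit input-output scoring of worlds."""
    return -abs(world_state - 10)  # this agent prefers worlds near state 10

def choose_action(state):
    """Search for the action predicted to lead to the best world in expectation."""
    def expected_utility(action):
        outcomes = predict_outcomes(state, action)
        return sum(utility(s) for s in outcomes) / len(outcomes)
    return max(ACTIONS, key=expected_utility)

print(choose_action(7))  # typically 1: stepping toward state 10 scores best

# "Modifying the agent's values to a specific state" is just swapping the box:
# utility = lambda s: -abs(s - 42) would make the same search machinery
# steer the world toward state 42 instead.
```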

Even this example is not so clear-cut, and it relies on background assumptions. See problems with embedded agency [? · GW]. For example, if we assume that there's already a fixed world (that is, a fixed understanding of what's possible) over which to define the utility function, we sweep under the rug that this understanding had to be gained in the first place, and that gaining understanding might itself change an agent's values.
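A hedged illustration of that last point (the world descriptions and field names here are made up): if the utility function is written against one fixed description of what's possible, then when the agent gains understanding and its description of worlds changes, the original "values" don't by themselves say how to score the newly describable worlds.

```python
# Utility defined over a fixed, assumed-complete description of worlds.
def utility_v1(world):
    # assumes every world comes labeled with a primitive 'temperature' field
    return -abs(world["temperature"] - 20.0)

# After gaining understanding, worlds are described in a richer ontology:
# temperature is no longer primitive, but implicit in molecular motion.
new_world = {"molecule_speeds": [310.2, 295.8, 402.1]}

# utility_v1(new_world) raises a KeyError: the old utility box doesn't
# determine, on its own, what the agent should want about these worlds.
```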

2 comments


comment by baturinsky · 2023-03-27T15:23:17.806Z · LW(p) · GW(p)

I think the AI should treat the value function as probabilistic. I.e., instead of thinking "this world has value of exactly N", it could think something like "I'm 90% sure that this world has value N±M, but there is a 10% possibility that it could actually have value -ALOT". And it would avoid that world, because it would have a very low expected value on average.
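(As a quick sketch of the expected-value point in this comment, with made-up numbers: even a world that is probably fine can come out far below a safer alternative once a small chance of a very bad outcome is priced in.)

```python
# Made-up numbers illustrating the comment's expected-value point.
p_ok, value_ok = 0.9, 100        # "90% sure this world is worth about N"
p_bad, value_bad = 0.1, -1e6     # "10% chance it's actually -ALOT"

expected_value = p_ok * value_ok + p_bad * value_bad
print(expected_value)  # -99910.0: the small chance of disaster dominates
```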

Replies from: TsviBT
comment by TsviBT · 2023-04-01T08:04:09.644Z · LW(p) · GW(p)

It's a reasonable idea. See here though: https://arbital.com/p/updated_deference/