Toy model piece #4: partial preferences, re-re-visited

post by Stuart_Armstrong · 2019-09-12T03:31:08.628Z · LW · GW · 5 comments

Contents

  Two failed attempts
  The formalism
  Utility differences between clearly comparable worlds
  Interpretation of l
5 comments

Two failed attempts

I initially defined partial preferences [LW · GW] in terms of foreground variables $X$ and background variables $Z$.

Then a partial preference would be defined by $x_1$ and $x_2$ in $X$, such that, for any $z \in Z$, the world described by $(x_1, z)$ would be better than the world described by $(x_2, z)$. The idea being that, everything else being equal (ie the same $z$), a world with $x_1$ was better than a world with $x_2$. The other assumption is that, within mental models, human preferences can be phrased as one or many binary comparisons. So if we have a partial preference like $P_1$: "I prefer a chocolate ice-cream to getting kicked in the groin", then $(x_1, z)$ and $(x_2, z)$ are otherwise identical worlds with a chocolate ice-cream and a groin-kick, respectively.

Note that in this formalism, there are two subsets of the set of worlds, $\{x_1\} \times Z$ and $\{x_2\} \times Z$, and a map between them (which just sends $(x_1, z)$ to $(x_2, z)$).

In a later post [LW · GW], I realised that such a formalism can't capture seemingly simple preferences, such as $P_2$: "$n+1$ people is better than $n$ people". The problem is that preferences like that don't talk about just two subsets of worlds, but many more.

Thus a partial preference was defined as a preorder. Now, a preorder is certainly rich enough to include preferences like $P_2$, but it allows for far too many different types of structures, needing a complicated energy-minimisation procedure to turn a preorder into a utility function.

This post presents another formalism for partial preferences, one that keeps the initial intuition but can capture preferences like $P_2$.

The formalism

Let $W$ be the (finite) set of all worlds, seen as universes with their whole history.

Let $W_+$ be a subset of $W$, and let $l$ be an injective (one-to-one) map from $W_+$ to $W$. Define $W_- = l(W_+)$, the image of $l$, and $l^{-1} : W_- \to W_+$ as the inverse.

Then the preference $P$ is determined by:

$$w \succ_P l(w), \text{ for all } w \in W_+.$$

If $W_+$ and $W_-$ are disjoint, this just reproduces the original definition, with $W_+ = \{x_1\} \times Z$ and $W_- = \{x_2\} \times Z$.

But it also allows preferences like $P_2$, defining $l(w)$ as something like "the same world as $w$, but with one less person". In that case, $l$ maps some parts of $W_+$ to itself[1].
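To make the definition concrete, here is a minimal Python sketch. The encoding (worlds as hashable labels, $l$ as a dict, and all names such as `check_partial_preference`, `l_P1`, `l_P2`, `MAX_POP`) is my own illustration under the definitions above, not something from the post:

```python
# A partial preference: an injective map l from W_plus (a subset of worlds)
# to W, sending each world w to a strictly worse world l(w).
# Worlds are represented as hashable labels (strings, tuples, or integers).

def check_partial_preference(l: dict) -> None:
    """Check that the map l is injective (one-to-one)."""
    if len(set(l.values())) != len(l):
        raise ValueError("l is not injective")

# P1: "a chocolate ice-cream is better than a groin-kick", for every background z.
backgrounds = ["z1", "z2", "z3"]
l_P1 = {("ice-cream", z): ("groin-kick", z) for z in backgrounds}
check_partial_preference(l_P1)

# P2: "n+1 people is better than n people": l removes one person.
# Here the domain W_plus and the image l(W_plus) overlap.
MAX_POP = 5
l_P2 = {n: n - 1 for n in range(1, MAX_POP + 1)}
check_partial_preference(l_P2)
```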

Then for any element $w$ of $W_+ \cup W_-$, we can construct its upwards and downwards chains:

$$w, \; l^{-1}(w), \; l^{-2}(w), \; l^{-3}(w), \ldots \quad \text{(upwards)}$$
$$w, \; l(w), \; l^{2}(w), \; l^{3}(w), \ldots \quad \text{(downwards)}$$

These chains end when they cycle: so there is an $n$ and an $m$ so that $l^{-n}(w) = l^{m}(w)$ (equivalently, $l^{n+m}(w) = w$).

If they don't cycle, the upwards chain ends when there is an $l^{-n}(w)$ which is not an element of $W_-$ (hence $l^{-1}$ is not defined on it), and the downward chain ends when there is an $l^{m}(w)$ which is not in $W_+$ (and hence $l$ is not defined on it).

So, for example, for $P_1$, all the chains contain two elements only: $(x_1, z)$ and $(x_2, z)$. For $P_2$, there are no cycles, and the lower chain ends when the population hits zero, while the upper chain ends when the population hits some maximal value.
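Continuing the sketch above, here is one way to extract the chain through a given world, including the cycle check; the helper `chain_through` is a hypothetical name of mine, not the post's:

```python
def chain_through(l: dict, w):
    """Return the chain/cycle through w (for a chain, listed from worst to best),
    plus a flag saying whether it is a cycle."""
    l_inv = {v: k for k, v in l.items()}  # inverse map, defined on the image of l

    # Downward: repeatedly apply l until it is undefined, or we loop back to w.
    down = []
    cur = w
    while cur in l:
        cur = l[cur]
        if cur == w:              # cycled back to the start: it's a cycle
            return down + [w], True
        down.append(cur)

    # Upward: repeatedly apply the inverse of l until it is undefined.
    up = []
    cur = w
    while cur in l_inv:
        cur = l_inv[cur]
        up.append(cur)

    return list(reversed(down)) + [w] + up, False

# For P1 every chain has two elements; for P2 the chain runs from 0 up to MAX_POP.
print(chain_through(l_P1, ("ice-cream", "z1")))
# -> ([('groin-kick', 'z1'), ('ice-cream', 'z1')], False)
print(chain_through(l_P2, 3))
# -> ([0, 1, 2, 3, 4, 5], False)
```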

Utility differences between clearly comparable worlds

Since the worlds of $W_+ \cup W_-$ decompose either into chains or cycles via $l$, there is no need for the full machinery for utilities constructed in this post [LW · GW].

One thing we can define unambiguously is the relative utility between two elements of the same chain/cycle:

$$U(w) - U(l^{n}(w)) = n,$$

for elements of a chain. (Around a cycle this would be inconsistent, since $l^{n+m}(w) = w$; so the relative utilities within a cycle can only be set to $0$.)

Currently, let's normalise these relative utilities to the range $[-1, 1]$, by normalising each chain individually; note that if every world in the chain is reachable, this is the same as the mean-max normalisation [LW · GW] on each chain:

$$U(w_i) = \frac{2i - k}{k}, \quad \text{for the chain } w_0 \prec_P w_1 \prec_P \cdots \prec_P w_k \text{ (where } w_i = l(w_{i+1})\text{)},$$

so that each chain has maximum utility $1$ and mean utility $0$.
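As a sketch of this per-chain normalisation, under the assumptions just stated (relative utilities given by chain indices, every chain element weighted equally, cycles collapsed to $0$), and reusing the hypothetical `chain_through` helper from above:

```python
def normalised_utilities(l: dict, w):
    """Utilities for the chain through w: the relative utility along the chain is
    the index i (so U differs by 1 between l-neighbours), then the chain is
    rescaled to mean 0 and max 1 (the mean-max normalisation, if every world is
    equally reachable). Worlds in a cycle all get utility 0."""
    chain, is_cycle = chain_through(l, w)
    if is_cycle:
        return {v: 0.0 for v in chain}
    k = len(chain) - 1
    if k == 0:
        return {chain[0]: 0.0}
    return {v: (2 * i - k) / k for i, v in enumerate(chain)}

print(normalised_utilities(l_P1, ("ice-cream", "z1")))
# -> {('groin-kick', 'z1'): -1.0, ('ice-cream', 'z1'): 1.0}
print(normalised_utilities(l_P2, 3))
# -> {0: -1.0, 1: -0.6, 2: -0.2, 3: 0.2, 4: 0.6, 5: 1.0}
```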

We could try to extend $U$ to a global utility function, one which compares different chains and compares values in chains with values outside of $W_+ \cup W_-$. But as we shall see in the next post [LW · GW], this doesn't work when combining different partial preferences.

Interpretation of $l$

The interpretation of $l$ is something like "this is the key difference in features that causes the difference in world-rankings". So, for $P_1$, the map $l$ switches out a chocolate ice-cream and substitutes a groin-kick. While for $P_2$, the map $l$ simply removes one person from the world.

This means that, locally, we can express $W_+ \cup W_-$ in the same formalism $X \times Z$ as in the first post [LW · GW]. Here the $z \in Z$ are the background variables, while $x \in X$ is a discrete variable that $l$ operates on.

We cannot necessarily express this product globally. Consider, for $P_2$, a situation where $z_1$ is an idyllic village, $z_2$ is an Earthbound human population, and $z_3$ a star-spanning civilization with extensive use of human uploads.

And if $x$ denotes the number of people in each world, it's clear that $x$ hits a low maximum for $z_1$ (thousands?), can rise much higher for $z_2$ (trillions?), and even higher for $z_3$ (need to use scientific notation). So though $(x, z_3)$ makes sense for a huge $x$, the same $(x, z_1)$ is nonsense. So there is no global decomposition of these worlds as $X \times Z$.


  1. Note that there is a similarity with CP-nets, if we consider this as expressing a preference over population size while keeping other variables constant. ↩︎

5 comments

Comments sorted by top scores.

comment by adamShimi · 2020-02-11T17:19:07.462Z · LW(p) · GW(p)

I really like the refinement of the formalization, with the explanations of what to keep and what was missing.

That said, I feel like the final formalization could be defined directly as a special type of preorder, one composed only of disjoint chains and cycles. Because as I understand the rest of the post, that is what you use when computing the utility function. This formalization would also be more direct, with one less layer of abstraction.

Is there any reason to prefer the "injective function" definition to the "special preorder" one?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2020-02-11T17:34:26.818Z · LW(p) · GW(p)

This felt more intuitive to me (and it's a minor result that injective function->special preorder) and closer to what humans actually seem to have in their minds.

That said, since it's equivalent, there is nothing wrong with starting from either approach.

comment by Acorn · 2019-12-30T20:39:31.665Z · LW(p) · GW(p)

I am studying along your Research Agenda, and I am very excited about your whole plan.

As for this one, I am puzzled about how to formalize preferences like this: "I prefer apples to both bananas and peaches", since the function l is one-to-one here. In contrast, the model you proposed in "Toy model piece #1: Partial preferences revisited" deals with this quite easily.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-12-30T19:02:49.006Z · LW(p) · GW(p)

Does this imply I prefer X apples to Y bananas and Z pears, where Y+Z=X?

If it's just for a single fruit, I'd decompose that preference into two separate ones? Apple vs Banana, Apple vs Pear.

Replies from: Acorn
comment by Acorn · 2020-01-02T20:31:37.853Z · LW(p) · GW(p)

Sorry for my vague expressions here. What I was trying to say is "I prefer apples to bananas, and I prefer apples to peaches". My original thought was that if this statement is formalized in a single world, since it is not clear whether I prefer bananas to peaches, it seems that the function l has to map apples to both bananas and peaches at the same time, which violates its one-to-one property.

But maybe I also asked a bad question: I mistook the definition of partial preferences for any simple statement about preferences, and tried to apply the model proposed in this post to the "composite" preferences, which actually expressed two preferences.