Intertheoretic utility comparison: examples

post by Stuart_Armstrong · 2019-07-17T12:39:45.147Z · LW · GW · 1 comment

Contents

    The methods
  Max, min, mean
  Controlling the spread
    Properties

A previous post introduced the theory of intertheoretic utility comparison. This post will give examples of how to do that comparison, by normalising individual utility functions.

The methods

All methods presented here obey the axioms of Relevant data, Continuity, Individual normalisation, and Symmetry. Later, we'll see which ones follow Utility reflection, Cloning indifference, Weak irrelevance, and Strong irrelevance.

Max, min, mean

The maximum of a utility function $u$ is $\max(u) = \max_{s \in S} u(s)$, while the minimum is $\min(u) = \min_{s \in S} u(s)$. The mean of $u$ is $\mathbb{E}(u) = \frac{1}{|S|} \sum_{s \in S} u(s)$, where $S$ is the finite set of pure strategies. The max-min normalisation scales $u$ so that $\max(u) - \min(u) = 1$, while the max-mean normalisation scales it so that $\max(u) - \mathbb{E}(u) = 1$.

The max-mean normalisation has an interesting feature: it measures precisely the amount of utility that an agent completely ignorant of its own utility would pay to discover that utility (since otherwise the agent could do no better than a random, 'mean', strategy).

For completeness, there is also the mean-min normalisation, which scales $u$ so that $\mathbb{E}(u) - \min(u) = 1$.
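To make the three normalisations concrete, here is a minimal sketch in Python, assuming utilities are given as real values over a finite set of pure strategies. The function names, the list representation, and the choice of zero point are illustrative (utilities are in any case only defined up to translation), not notation from the post:

```python
# A minimal sketch of the three normalisations, assuming utilities are
# given as a list of real values over a finite set of pure strategies.
# Function names and the zero-point choices are illustrative.

def max_min(u):
    """Scale u so that max - min = 1, shifting the minimum to 0."""
    lo, hi = min(u), max(u)
    return [(x - lo) / (hi - lo) for x in u]

def max_mean(u):
    """Scale u so that max - mean = 1, shifting the mean to 0."""
    m = sum(u) / len(u)
    return [(x - m) / (max(u) - m) for x in u]

def mean_min(u):
    """Scale u so that mean - min = 1, shifting the minimum to 0."""
    m = sum(u) / len(u)
    return [(x - min(u)) / (m - min(u)) for x in u]

u = [3.0, 0.0, 1.0, 2.0]   # utilities of four pure strategies
print(max_min(u))    # [1.0, 0.0, 0.333..., 0.666...]
print(max_mean(u))   # mean maps to 0.0, max maps to 1.0
print(mean_min(u))   # min maps to 0.0, mean maps to 1.0
```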

Controlling the spread

The last two methods find other ways of controlling the spread of possible utilities. For any utility $u$, define the mean difference: $d(u) = \frac{1}{|S|^2} \sum_{s, s' \in S} |u(s) - u(s')|$. And define the variance: $\mathrm{Var}(u) = \frac{1}{|S|} \sum_{s \in S} (u(s) - \mathbb{E}(u))^2$, where $\mathbb{E}(u)$ is the mean defined previously.

These lead naturally to the mean-difference normalisation, which scales $u$ so that $d(u) = 1$, and the variance normalisation, which scales $u$ so that $\mathrm{Var}(u) = 1$.
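Continuing the sketch above, both normalisations divide by the relevant spread. One detail worth noting: the mean difference is linear in the scale of $u$, but the variance is quadratic, so making the variance equal 1 means dividing by the standard deviation (an assumption about how the post intends the variance normalisation to be applied):

```python
# A sketch of the two spread measures and the corresponding
# normalisations, under the same list representation as above.

def mean_difference(u):
    """Mean absolute difference over all ordered pairs of strategies."""
    n = len(u)
    return sum(abs(x - y) for x in u for y in u) / (n * n)

def variance(u):
    """Variance of u under the uniform distribution over strategies."""
    m = sum(u) / len(u)
    return sum((x - m) ** 2 for x in u) / len(u)

def rescale(u, scale):
    """Divide u by a scale factor so the chosen spread becomes 1."""
    return [x / scale for x in u]

u = [3.0, 0.0, 1.0, 2.0]
u_d = rescale(u, mean_difference(u))   # mean difference is linear in scale
u_v = rescale(u, variance(u) ** 0.5)   # variance is quadratic: divide by std
print(mean_difference(u_d))  # 1.0
print(variance(u_v))         # 1.0
```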

Properties

The different normalisation methods obey the following axioms:

Property               Max-min   Max-mean   Mean-min   Mean difference   Variance
Utility reflection     YES       NO         NO         YES               YES
Cloning indifference   YES       NO         NO         NO                NO
Weak irrelevance       YES       YES        YES        NO                YES
Strong irrelevance     YES       YES        YES        NO                NO

As can be seen, max-min normalisation, despite its crudeness, is the only one that obeys all the properties. If we have a measure on the set of strategies $S$, then ignoring the cloning axiom becomes more reasonable, since strategies are then weighted by that measure rather than merely counted. Strong irrelevance can in fact be seen as an anti-variance axiom; it is the variance's second-order nature that makes the variance normalisation fail it.
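One way to see the utility-reflection row of the table numerically: under $u \mapsto -u$, the max-min spread is unchanged, while the max-mean and mean-min spreads swap with each other, so only the first gives $u$ and $-u$ the same scale. A small worked check (illustrative, not the post's formal statement of the axiom):

```python
# An illustrative check of the Utility reflection row: a normalisation
# respects reflection when the spread it divides by is unchanged
# under u -> -u.

u = [4.0, 0.0, 1.0, 1.0]
neg = [-x for x in u]

spreads = {
    "max-min":  lambda v: max(v) - min(v),
    "max-mean": lambda v: max(v) - sum(v) / len(v),
    "mean-min": lambda v: sum(v) / len(v) - min(v),
}

for name, spread in spreads.items():
    print(name, spread(u), spread(neg), spread(u) == spread(neg))
# max-min  4.0 4.0 True   (reflection holds)
# max-mean 2.5 1.5 False  (negation swaps it with mean-min's spread)
# mean-min 1.5 2.5 False
```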

1 comment


comment by Davidmanheim · 2019-08-12T01:34:36.089Z · LW(p) · GW(p)

This is very interesting - I hadn't thought about utility aggregation for a single agent before, but it seems clearly important now that it has been pointed out.

I'm thinking about this in the context of both the human brain as an amalgamation of sub-agents, and organizations as an amalgamation of individuals. Note that we can treat organizations as rationally maximizing some utility function in the same way we can treat individuals as doing so - but I think that for many or most voting or decision structures, we should be able to rule out the claim that they are following any weighted combination of normalized utilities of the agents involved in the system using any intertheoretic comparison. This seems like a useful result if we can prove it. (Alternatively, it may be that certain decision rules map to specific intertheoretic comparison rules, which would be even more interesting.)