Comment by thomas-redding on [deleted post] 2019-08-30T21:49:39.815Z

For a fixed population, any reasonable utilitarian must maximize the sum of individual utility. The fact that your system does not do so shows its fundamental "logical" weakness.


Take a look at the Von Neumann-Morgenstern axioms. Seriously, read them - it's just a list of four assumptions. If we assume

a) Individual "wellbeing" follows these axioms.

b) "Social welfare" follows these axioms.

c) If everyone is indifferent between two universes, both universes have equal utility.

Then, John Harsanyi showed in 1955 that we have to believe in the existence of a social utility function which "should" be maximized, and that this function is a weighted sum of the individual utilities. This means, among other things, that given a fixed population, we should maximize the sum of utility. Your system (and your portrayal of average utilitarianism) doesn't do this, which means you are rejecting one of these axioms.
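To spell out the step from the axioms to "maximize the sum", here is the standard form of Harsanyi's 1955 result, written in my own notation (the symbols below are mine, not anything from your post):

```latex
% Harsanyi (1955), informal statement in my own notation.
% If each individual utility U_i and the social welfare W satisfy the VNM
% axioms, and assumption (c) (Pareto indifference) holds, then there exist
% weights a_i and a constant b such that
\[
  W(x) = \sum_{i=1}^{n} a_i \, U_i(x) + b,
\]
% with the a_i positive under a Pareto-type condition. For a fixed
% population and a common normalization of the U_i, we can take every
% a_i = 1, so maximizing W is exactly maximizing the sum of utilities.
```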


IMO, the axiom you're rejecting is completeness. You've set yourself the much easier task of choosing between two actions rather than the much harder task of choosing between all possible actions - that is, constructing a weak ordering of actions.

Your system says (effectively) that there are sets of actions that are incomparable. Consider three possible futures you're choosing between:

A: 10 people with 2 utils each. (20 total)

B: 20 people with 3 utils each. (60 total)

You'd say (as would all utilitarians) that B > A.

Now add

C: 30 people with 1 util each. (30 total)

Your system doesn't specify where C fits into this, and any way of specifying it would effectively reintroduce the complete ordering you've avoided constructing.
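To make the incomparability concrete, here is a small sketch in Python. The numbers are from the A/B/C example above, but the pairwise rule `better` is a hypothetical stand-in for "your system" (a deliberately partial dominance rule), not the rule you actually proposed:

```python
# A minimal sketch of the completeness problem, not the OP's actual rule.
# "better(x, y)" is a hypothetical pairwise criterion that only commits to
# some comparisons (here: B beats A) and is silent otherwise.

futures = {
    "A": {"people": 10, "utils_each": 2},   # 20 total
    "B": {"people": 20, "utils_each": 3},   # 60 total
    "C": {"people": 30, "utils_each": 1},   # 30 total
}

def better(x, y):
    """Return True/False if the rule ranks x vs y, or None if it is silent."""
    # Stand-in rule: only rank futures when one dominates the other in
    # both population and per-person utility (a deliberately partial order).
    px, ux = futures[x]["people"], futures[x]["utils_each"]
    py, uy = futures[y]["people"], futures[y]["utils_each"]
    if px >= py and ux >= uy and (px, ux) != (py, uy):
        return True
    if px <= py and ux <= uy and (px, ux) != (py, uy):
        return False
    return None  # incomparable under this partial rule

print(better("B", "A"))  # True - B dominates A, so the rule ranks them
print(better("C", "A"))  # None - more people but lower utils: no verdict
print(better("C", "B"))  # None - likewise incomparable
# Without completeness there is no way to pick a best element of {A, B, C}.
```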

Moreover, because the proposed system doesn't spit out a social welfare number, it doesn't let us operate under uncertainty by maximizing expected value. Normal utilitarianism (both total and average) does let us do this, so the bar is again lowered.
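For comparison, here is what total utilitarianism's expected-value machinery looks like. The lotteries and the `total_welfare` helper are made up for illustration, not taken from your post:

```python
# Sketch: why a scalar welfare function matters under uncertainty.
# The policies and probabilities here are invented for illustration.

def total_welfare(people, utils_each):
    """Total utilitarian welfare: just the sum of individual utilities."""
    return people * utils_each

# Two risky policies, each a lottery over outcomes: (probability, people, utils_each).
policy_1 = [(0.5, 10, 2), (0.5, 20, 3)]   # 50/50 between futures A and B
policy_2 = [(1.0, 30, 1)]                 # future C for sure

def expected_welfare(lottery):
    return sum(p * total_welfare(n, u) for p, n, u in lottery)

print(expected_welfare(policy_1))  # 0.5*20 + 0.5*60 = 40.0
print(expected_welfare(policy_2))  # 1.0*30          = 30.0
# Because welfare is a single number, "maximize expected welfare" is well
# defined. A system that only outputs pairwise verdicts (or none at all) for
# some comparisons has no analogous expected-value procedure.
```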


Harsanyi, J. C. (1955). Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of Political Economy, 63(4), 309-321. https://doi.org/10.1086/257678