
Comments sorted by top scores.

comment by tailcalled · 2022-02-23T21:33:26.344Z · LW(p) · GW(p)

I doubt all of your ought claims.

Replies from: JBlack
comment by JBlack · 2022-02-24T04:50:48.017Z · LW(p) · GW(p)

I doubt all of the claims, including the "is" claim.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2022-02-24T15:22:04.109Z · LW(p) · GW(p)

Me too. The claims are doing all the work, while the argument is a triviality.

Replies from: joshua-clymer
comment by joshc (joshua-clymer) · 2022-02-24T17:35:18.774Z · LW(p) · GW(p)

I agree that the claims are doing all of the work and that this is not a convincing argument for utilitarianism. I often hear arguments for moral philosophies that make a ton of implicit assumptions. I think that once you make them explicit and actually try to be rigorous, the argument always seems less impressive and less convincing.

Replies from: tailcalled
comment by tailcalled · 2022-02-24T18:49:11.915Z · LW(p) · GW(p)

I think a key principle involves selecting the right set of ought claims as assumptions. Some are more convincing than others. E.g. I believe "The fairness of an outcome ought to be irrelevant (this is probably the most interesting and contentious assumption)." can be replaced with something like "Frequencies and stochasticities are interchangeable; an X% chance of affecting everyone's utility is equivalent to a 100% chance of affecting X% of people's utility".
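
To spell out why these would be interchangeable under expected total utility, here is a sketch with assumed notation (not from the original comment): N people, each with Δ utility at stake.

    E[loss | X% chance that all N people are affected]     = (X/100) * N * Δ
    E[loss | 100% chance that X% of people are affected]   = 1 * ((X/100) * N) * Δ

Both reduce to (X/100) * N * Δ by linearity of expectation, so an expected-total-utility maximizer treats the two cases identically.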

Replies from: joshua-clymer
comment by joshc (joshua-clymer) · 2022-02-26T01:50:02.764Z · LW(p) · GW(p)

This is a much more agreeable assumption. When I get a chance, I'll make sure it can replace the fairness one, add it to the proof, and give you credit.

comment by TLW · 2022-02-25T01:31:02.795Z · LW(p) · GW(p)

Another issue:

By the Von Neumann–Morgenstern utility theorem, this implies that there exists a function u.

It implies that there exists some such function. It does not imply that this function is unique, and indeed it is not.

If I have two choices A and B, and I rank A > B, then u(A) = 1, u(B) = 0 might be one valid function (effective value of 1 for A and 0 for B). But u'(A) = 2, u'(B) = 0 might be another (effective value of 2 for A and 0 for B).

Since for any two VNM-agents X and Y, their VNM-utility functions u_X and u_Y are only determined up to additive constants and multiplicative positive scalars, the theorem does not provide any canonical way to compare the two. Hence expressions like u_X(L) + u_Y(L) and u_X(L) − u_Y(L) are not canonically defined, nor are comparisons like u_X(L) < u_Y(L) canonically true or false. In particular, the aforementioned "total VNM-utility" and "average VNM-utility" of a population are not canonically meaningful without normalization assumptions[1].

This, unfortunately, rather undermines the rest of your argument.
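
To illustrate the incomparability concretely, here is a minimal Python sketch (the outcome and lottery values are made up for illustration): two positive affine transforms of the same utility function rank every lottery identically, yet give different answers when summed with a second agent's utilities.

    def expected_utility(u, lottery):
        # Expected utility of a lottery given as [(probability, outcome), ...]
        return sum(p * u[outcome] for p, outcome in lottery)

    u = {"A": 1.0, "B": 0.0}                  # one valid VNM utility function
    v = {o: 2 * x + 5 for o, x in u.items()}  # positive affine transform: v = 2u + 5

    lottery_1 = [(0.9, "A"), (0.1, "B")]
    lottery_2 = [(0.4, "A"), (0.6, "B")]

    # u and v induce the same preference between any two lotteries:
    assert (expected_utility(u, lottery_1) > expected_utility(u, lottery_2)) == \
           (expected_utility(v, lottery_1) > expected_utility(v, lottery_2))

    # But interpersonal sums depend on which representation is used:
    w = {"A": 0.0, "B": 1.2}                  # a second agent, who prefers B
    print(u["A"] + w["A"], u["B"] + w["B"])   # 1.0 vs 1.2 -> B maximizes the sum
    print(v["A"] + w["A"], v["B"] + w["B"])   # 7.0 vs 6.2 -> A maximizes the sum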

  1. https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#Incomparability_between_agents

Replies from: joshua-clymer
comment by joshc (joshua-clymer) · 2022-02-26T01:55:04.706Z · LW(p) · GW(p)

I don't think I agree that this undermines my argument. I showed that the utility function of person 1 is of the form h(x + y), where h is monotonically increasing. This respects the fact that the utility function is not unique: 2(x + y) + 1 would qualify, as would 3 log(x + y), etc.

Showing that the utility function must have this form is enough to prove total utilitarianism in this case, since when you compare h(x + y) to h(x' + y'), h becomes irrelevant. It is the same as comparing x + y to x' + y'.
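
A quick numeric check of this (illustrative values; the specific h functions are just examples): any strictly increasing h preserves the ordering of the totals.

    import math

    def sign(a, b):
        return (a > b) - (a < b)  # -1, 0, or +1

    increasing_hs = [
        lambda s: 2 * s + 1,        # 2(x + y) + 1
        lambda s: 3 * math.log(s),  # 3 log(x + y), valid for positive totals
        lambda s: s ** 3,
    ]

    for (x, y), (x2, y2) in [((1, 2), (2, 2)), ((3, 1), (1, 1)), ((2, 2), (1, 3))]:
        for h in increasing_hs:
            # comparing h(x + y) to h(x' + y') matches comparing x + y to x' + y'
            assert sign(h(x + y), h(x2 + y2)) == sign(x + y, x2 + y2)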

Replies from: TLW
comment by TLW · 2022-02-26T05:43:57.400Z · LW(p) · GW(p)

I have three agents X, Y, and Z, each with the following preferences between two outcomes A and B:

  1. Agents X and Y prefer A > B
    1. Agent Z prefers B > A
  2. For any two lotteries <L_1, with an x% chance of getting A, otherwise B> and <L_2, with a y% chance of getting A, otherwise B>:
    1. If x > y:
      1. X and Y prefer L_1 > L_2
      2. Z prefers L_2 > L_1.
    2. If x = y, all three agents are indifferent between L_1 and L_2
    3. If x < y:
      1. X and Y prefer L_2 > L_1
      2. Z prefers L_1 > L_2.

(2 is redundant given 1, but I figured it was best to spell it out.)

This satisfies the axioms of the VNM theorem.

I'll give you a freebie here: I am declaring that agent Z's utility function is u_Z(A) = 0, u_Z(B) = 1 as part of the problem. This is compatible with the definition of agent Z's preferences, above.

As for agents X and Y, I'll give you less of a freebie:
I am declaring as part of the problem that one of the two agents, agent [redacted alpha], has the following utility function: u_alpha(A) = 2, u_alpha(B) = 0. This is compatible with the definition of agent [redacted alpha]'s preferences, above.
I am declaring as part of the problem that the other of the two agents, agent [redacted beta], has the following utility function: u_beta(A) = 0.5, u_beta(B) = 0. This is compatible with the definition of agent [redacted beta]'s preferences, above.

Now, consider the following scenarios:

  1. Agent [redacted alpha] and agent Z are choosing between A and B:
    1. The resulting utility function is u_alpha + u_Z, which assigns 2 to A and 1 to B
    2. The resulting optimal outcome is outcome A.
  2. Agent [redacted beta] and agent Z are choosing between A and B:
    1. The resulting utility function is u_beta + u_Z, which assigns 0.5 to A and 1 to B
    2. The resulting optimal outcome is outcome B.
  3. Agent X and agent Z are choosing between A and B:
    1. Is this the same as scenario 1? Or scenario 2?
  4. Agent Y and agent Z are choosing between A and B:
    1. Is this the same as scenario 1? Or scenario 2?

Please tell me the optimal outcome for 3 and 4.
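
For concreteness, here is a Python sketch of the four scenarios (it uses the example utility numbers above and assumes the joint choice maximizes the sum of the two agents' utility functions):

    u_Z     = {"A": 0.0, "B": 1.0}  # agent Z prefers B
    u_alpha = {"A": 2.0, "B": 0.0}  # agent [redacted alpha], who prefers A
    u_beta  = {"A": 0.5, "B": 0.0}  # agent [redacted beta], who also prefers A

    def optimal(*utilities):
        # Outcome maximizing the sum of the given utility functions
        return max(("A", "B"), key=lambda o: sum(u[o] for u in utilities))

    print(optimal(u_alpha, u_Z))  # scenario 1: "A" (sums: 2.0 vs 1.0)
    print(optimal(u_beta, u_Z))   # scenario 2: "B" (sums: 0.5 vs 1.0)

    # Scenarios 3 and 4 have no determinate answer: X and Y have identical
    # preferences, so nothing in the setup says which of them carries u_alpha
    # and which carries u_beta.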

comment by TLW · 2022-02-24T04:40:16.327Z · LW(p) · GW(p)

This assumes that the act of evaluating a utility function has no utility cost.

I do not agree with this (implicit) assumption.

Replies from: joshua-clymer
comment by joshc (joshua-clymer) · 2022-02-24T17:28:40.972Z · LW(p) · GW(p)

Good point, I overlooked this.