I broadly agree with all of Sami's points. However, on this terminological issue I think it is a bit less clear-cut. It is true that many decision theorists distinguish between "Dutch books" and "money pumps" in the way you are suggesting, and this seems to be becoming the standard terminology in philosophy. That said, some decision theorists definitely use "Dutch book arguments" to refer to money pump arguments for the vNM axioms. For example, Yaari writes that "an agent that violates Expected Utility Theory is vulnerable to a so-called Dutch book".
Now, given that the entry is called "dutch book theorems" and mostly focuses on probabilism, Sami is still right to point out that it is confusing to say that these arguments support EUT. Maybe I would have put this under "misleading" rather than under "false" though.
Thanks!
"i’m wary about it. like, how alien is this idealised human? why does it have any moral authority?"
I don't have great answers to these metaethical questions. Conditional on normative realism, it seems plausible to me that first-order normative views must satisfy the vNM axioms. Conditional on normative antirealism, I agree it is less clear that first-order normative views must satisfy the vNM axioms, but this is just a special case of it being hard to justify any normative views under normative antirealism.
In any case, I suspect that we are close to reaching bedrock here, so perhaps this is a good place to end the discussion.
I appreciate the reply!
"the rationality conditions are pretty decent model of human behaviour, but they're only approximations. you're right that if the approximation is perfect then aggregativism is mathematically equivalent to utilitarianism, which does render some of these advantages/objections moot. but I don't know how close the approximations are (that's an empirical question)."
I'm not sure why we should combine Harsanyi's Lottery (or LELO or whatever) with a model of actual human behaviour. Here's a rough sketch of how I am thinking about it: Morality is about what preference ordering we should have. If we should have preference ordering R, then R is rational (morality presumably does not require irrationality). If R is rational, then R satisfies the vNM axioms. Hence, I think it is sufficient that the vNM axioms work as principles of rationality; they don't need to describe actual human behaviour in this context.
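To make the structure explicit, the chain is (writing $R^*$ for the preference ordering morality requires; the notation is mine, just for illustration):

$$\text{morality requires } R^* \;\Rightarrow\; R^* \text{ is rational} \;\Rightarrow\; R^* \text{ satisfies the vNM axioms}.$$

No premise about actual human behaviour enters anywhere; only the normative claim that rationality implies the axioms is doing work.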
Regarding your two quick thoughts on time-discounting: yes, I basically agree. However, I also want to note that it is a bit unclear how to ground discounting in LELO, because doing so requires specifying the order in which lives are concatenated, and I am not sure there is a non-arbitrary way of doing so.
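To illustrate the order-dependence with a toy case of my own (under the simplifying assumption that each life's utility accrues at the moment that life begins): with discount factor $\delta \in (0,1)$, concatenating life $a$ (utility $u_a$, duration $\ell_a$) before life $b$ (utility $u_b$, duration $\ell_b$) yields

$$u_a + \delta^{\ell_a} u_b \quad\text{versus}\quad u_b + \delta^{\ell_b} u_a$$

for the reverse order, and these generally differ. So the value discounted LELO assigns to a population depends on a concatenation order that nothing in the view seems to pin down.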
Thanks for engaging!
Thanks for writing this!
I only skimmed the post, so I may have missed something, but it seems to me that this post underemphasizes the fact that both Harsanyi's Lottery and LELO imply utilitarianism under plausible assumptions about rationality. For example, if the social planner satisfies the vNM axioms of expected utility theory, then Harsanyi's Lottery implies that the social planner is utilitarian with respect to expected utilities (Harsanyi 1953). Likewise, if the social planner's intertemporal preferences satisfy a set of normatively plausible axioms, then LELO implies that the social planner is utilitarian with respect to experienced utilities (Fryxell 2024). In my view, it is therefore not clear that it makes sense to compare LELO and Harsanyi's Lottery with utilitarianism.
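For concreteness, here is the rough shape of Harsanyi's result, stated informally (see Harsanyi 1953 for the actual theorem): if the impartial observer faces an equal-chance lottery over occupying each of the $n$ individuals' positions and has vNM preferences, then the observer ranks social alternatives $x$ by

$$V(x) = \frac{1}{n}\sum_{i=1}^{n} U_i(x),$$

i.e. by average (equivalently, for fixed $n$, total) vNM utility.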
Also, at least some of the advantages of aggregativism that you mention are easily incorporated into utilitarianism. For example, what is achieved by adopting LELO with exponential time-discounting in Section 2.5.1 can also be achieved by adopting discounted utilitarianism (rather than unweighted total utilitarianism).
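Concretely, the discounted-utilitarian objective I have in mind is something like (my formulation; $\beta \in (0,1)$ is the discount factor and $u_{i,t}$ is individual $i$'s utility at time $t$):

$$W \;=\; \sum_{t=0}^{\infty} \beta^{t} \sum_{i} u_{i,t},$$

which builds the discounting directly into the utilitarian aggregation rather than deriving it from LELO.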
A final tiny comment: LELO has a long history, going back to at least C.I. Lewis's "An Analysis of Knowledge and Valuation", though the term "LELO" was coined by my colleague Loren Fryxell (Fryxell 2024). It's probably worth adding citations to these.