Introduction to Expected Value Fanaticism
post by Petra Kosonen · 2025-02-14T19:05:26.556Z · LW · GW · 3 comments
This is a link post for https://utilitarianism.net/guest-essays/expected-value-fanaticism/
I wrote an introduction to Expected Value Fanaticism for Utilitarianism.net. Suppose there was a magical potion that almost certainly kills you immediately but offers you (and your family and friends) an extremely long, happy life with a tiny probability. If the probability of a happy life were one in a billion and the resulting life lasted one trillion years, would you drink this potion? According to Expected Value Fanaticism, you should accept gambles like that.
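For a quick sense of the numbers, here is a minimal back-of-the-envelope sketch of the expected value comparison (the 50-year baseline for declining the potion is an illustrative assumption, not from the essay):

```python
# Back-of-the-envelope expected value of drinking the potion, in happy life-years.
p_win = 1e-9                  # one-in-a-billion chance the potion works
years_if_win = 1e12           # a trillion happy years if it does
ev_potion = p_win * years_if_win          # = 1,000 expected happy years

years_without_potion = 50.0   # illustrative assumption: ordinary remaining lifespan
print(ev_potion, years_without_potion)    # 1000.0 vs 50.0 -- expected value favors the potion
```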
This view may seem, frankly, crazy - but there are some very good arguments in its favor. Basically, if you reject Expected Value Fanaticism, you'll end up violating some very plausible principles. You would have to believe, for example, that what happens on faraway exoplanets or what happened thousands of years ago in history could influence what we ought to do here and now, even when we cannot affect those distant events. This seems absurd - we don't need a telescope to decide what we morally ought to do.
However, the story is a bit more complicated than that... Well, read the article! Here's the link: https://utilitarianism.net/guest-essays/expected-value-fanaticism/
3 comments
comment by quetzal_rainbow · 2025-02-15T16:23:26.769Z · LW(p) · GW(p)
I'll repeat myself: I don't believe in St. Petersburg lotteries:
my honest position towards St. Petersburg lotteries is that they do not exist in "natural units", i.e., counts of objects in the physical world.
Reasoning: if you predict with probability p that you will encounter a St. Petersburg lottery which creates an infinite number of happy people in expectation (the version of the St. Petersburg lottery for total utilitarians), then you should already put the expected number of happy people at infinity, because E[number of happy people] = p * E[number of happy people due to St. Petersburg lottery] + (1 - p) * E[number of happy people for all other reasons] = p * inf + (1 - p) * E[number of happy people for all other reasons] = inf (sketched in code below).
Therefore, if you don't think right now that the expected number of future happy people is infinite, then you shouldn't expect a St. Petersburg lottery to happen at any point in the future.
Therefore, you should set your utility either in "natural units" or as some "nice" function of "natural units".
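A minimal sketch of the expectation computation above, using Python floats as a stand-in (the specific probability and the finite baseline are illustrative assumptions):

```python
# Any nonzero probability of an infinite-expectation lottery drives
# the overall expectation to infinity.
p = 1e-9                      # assumed probability of encountering the lottery
ev_lottery = float("inf")     # E[happy people | St. Petersburg lottery]
ev_otherwise = 1e10           # assumed finite E[happy people | no lottery]

ev_total = p * ev_lottery + (1 - p) * ev_otherwise
print(ev_total)               # inf -- the finite term makes no difference
```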
comment by FlorianH (florian-habermacher) · 2025-02-15T19:03:53.589Z · LW(p) · GW(p)
Upvoted for bringing to my attention useful terminology for this case that I wasn't aware of.
That said, there's too much "true/false" and too much "should" in what is suggested, imho.
In reality, if I, say, choose not to drink the potion, I might still be quite utilitarian in my usual decisions; it's just that I don't have the guts, or at this very moment I simply have a bit too little empathy with the trillion years of happiness for my future self, so it doesn't match up with my dread of the almost certain death. All this without implying that I really think we ought to discount those trillion years. I'm just an imperfect altruist toward my future self; I fear dying even if it's an imminent death, etc. So it's a basic preference to reject it, not a grand non-utilitarian theory implied by it. I might in fact even prescribe that potion to others in some situations, but still not want to drink it myself.
So, I think it does NOT follow that I'd have to believe "what happens on faraway exoplanets or what happened thousands of years ago in history could influence what we ought to do here and now", at least not just from rejecting this particular potion.
comment by Richard_Kennaway · 2025-02-15T13:36:53.963Z · LW(p) · GW(p)
There are mathematical arguments against Expected Value Fanaticism. They point out that a different ontology is required when considering successive decisions over unbounded time and unbounded payoffs. Hence the concepts of multiple bets over time, Kelly betting [LW · GW], and what is now the Standard Bad Example [LW · GW] of someone deliberately betting the farm for a chance at the moon and losing. And once you start reasoning about divergent games like St Petersburg, you can arrive at contradictions very easily unless you think carefully about the limiting processes involved. Axioms that sound reasonable when you are only imagining ordinary small bets can go wrong for astronomical bets. Inf + 0 = Inf + 1 in IEEE 754, but 0 < 1; Inf - Inf is Not a Number, and NaN is not even equal to itself.
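The floating-point behavior mentioned here is easy to check directly; a minimal demonstration in Python, whose floats follow IEEE 754 double precision:

```python
import math

inf = float("inf")
nan = float("nan")

print(inf + 0 == inf + 1)     # True:  Inf + 0 and Inf + 1 are both Inf
print(0 < 1)                  # True:  yet the finite parts differ
print(math.isnan(inf - inf))  # True:  Inf - Inf is NaN
print(nan == nan)             # False: NaN is not even equal to itself
```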