Posts
Comments
Thanks for your comment. I've thought through the issue carefully and I'm no longer so confident about this topic. I'm now planning to read more about Pascal's wager and about decision theory in general. I want to think about it more and do my best to come up with a good enough solution to this problem.
Thank you for the whole discussion and for the time devoted to responding to my arguments.
I agree that it would be weird to accept the lottery with a positive EU only if you take it some specific number of times. In normal, everyday decision making I wouldn't argue for this. Indeed, I think the EU is the right approach to making decisions under uncertainty. I'm willing to follow it even if the chances of success are low, as long as the stakes are high enough to make the EU positive. What I'm arguing about is the justification for why I'm willing to follow the EU. I don't think it is self-evident that I should choose the option with the highest EU. I think that what makes following the EU the right choice is the law of large numbers. If I have to pay $10 for a lottery where there is a 99.9% chance that I win nothing and a 0.1% chance that I win $100,000,000,000, then I think I should pay. But the rationale for why I should play, at least for me, is not that it is intrinsically worth it or rational. For me, the reason this is a good option is that it will eventually pay off, and in the end I will have a lot more money than I had at the start. I think this holds even if I can buy only one ticket for that lottery, because even if I gain nothing from this particular choice, I will still encounter low-probability, high-stakes choices many times in my life, so in the end it will be worth it.
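The arithmetic behind this lottery can be sketched in Python (an illustrative sketch only; the ticket price, probabilities, and prize are the ones from the paragraph, and the simulation is a hypothetical way to see the law of large numbers at work):

```python
import random

TICKET = 10               # cost of one ticket, in dollars
P_WIN = 0.001             # 0.1% chance of winning
PRIZE = 100_000_000_000   # $100,000,000,000

# Expected value of a single ticket: hugely positive despite the low win probability.
ev = P_WIN * PRIZE - TICKET
print(ev)  # roughly 1e8 dollars per ticket in expectation

# Law of large numbers: the average net payoff over many plays approaches the EV.
def average_payoff(n_plays, seed=0):
    rng = random.Random(seed)
    total = sum(PRIZE if rng.random() < P_WIN else 0 for _ in range(n_plays))
    return total / n_plays - TICKET
```

With few plays the average payoff is usually just -$10 per ticket; only over a long enough run of such choices does the average climb towards the EV, which is the point being made about why the EU criterion needs repetition.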
My point is just that the EU needs the law of large numbers, or repeated decision making, or a series of choices, however we call it, to be the rational strategy. And I don't necessarily mean "choices concerning playing this particular lottery"; I mean making any choices under uncertainty. In other words, in the end following the EU is more likely than not to produce the outcome with the highest upside. This is why I think that the first order approach, which I described earlier, in fact implies the EU when we look at our situation holistically. For this reason, I regard the EU as rational.
My whole point is basically that it won't harm you if you make one exception to following the EU in the case of one particular low-probability, high-stakes decision, namely Pascal's wager. The condition is that you need to be able to reliably restrict yourself to making just one (or at least a limited number of) exceptions, since eventually you will encounter a case where, despite the low probability, the consequences turn out to be real.
Sure, this addition may seem ad hoc and a bit theoretically inelegant. It's also true that it requires an assumption about how many choices you will have occasion to make, but that doesn't look very problematic to me. All things considered, it seems to me that the rationale behind it (derived from the way in which I think the EU is justified) is enough to justify it.
You've also mentioned those infinite possibilities of different Gods, each only slightly different from the others. Fair enough; in this case maybe infinitesimals are the right representation of the credence one should have in such options. Nevertheless, if we are to follow the EU in literally every case, then it still seems to make sense to determine which God (or set of similar possible Gods) has the highest probability and then to accept the wager. Maybe accepting the wager would not look the way its proponents usually imagine (i.e. accepting a particular religion), but rather like devoting yourself to research to find out which God is the most probable one (because of the information value). Nevertheless, it would still have a significant influence on the way we live. And maybe this is fine and we should just accept this conclusion; I'm not sure about it. However, the approach that I've proposed here seems to me rather more rational.
You've written that "In standard utility theory you really need the numbers to answer which one is better for 'really bad outcome' and 'moderate good outcome'". I agree; I probably should have put some numbers there to be more precise.
"The scheme you are proposing is more of 'maximising the value of the expected outcome' rather than maximising the expected utility."
If I understood correctly what you mean by that, then I would agree with that as well, but with one important remark. I regard a decision as rational if, from the set of all possible acts, it selects first those acts which have a probability equal to or higher than 0.5 of achieving a net positive result, and then, from those acts, the act which has the highest upside.
I call it the "first order" approach. However, using such an approach in every single decision would lead to disaster. I think that the correct way to think about it is to look holistically and adopt a strategy which, based on the approach outlined here, will lead to the desirable results. I think that the EU is such a strategy, since following the EU over the long run will lead, with probability higher than 0.5, to achieving a net positive result with the highest upside. At least for me, this is the rationale behind the EU which justifies it. Probably some people would disagree with that and claim that the EU is just self-evident in itself (e.g. https://reducing-suffering.org/why-maximize-expected-value/). Accepting this first order approach as the rationale behind the EU also suggests that adopting the EU with one exception, for a very influential, low-probability (below 0.5) case, may be an even better strategy overall.
As for the issue of infinitesimals, maybe it depends on the interpretation of what the nature of probability is. I tend to think about probability in terms of a subjective degree of belief, or level of confidence, maybe with some frequentist element attached to it. I'm sceptical about the use of infinitesimals, since it seems problematic to believe something with an infinitely small level of confidence. I have to admit that it might make sense in some cases, e.g. when you are confronted with a lottery with infinitely many options, and you know that one of those infinitely many options will be randomly selected, but there is no way to determine which one. Then it may seem plausible that you should assign each option an infinitely small probability of being selected. But at least in the case I'm interested in here (the case of belief in God), I don't think that assigning an infinitely small probability would be right. My own very rough estimate is that the probability of the existence of some kind of God is about 0.3 (this shouldn't be taken literally; I use the number only to roughly express my level of confidence that this is the case).
At the end of your comment you've also mentioned the issue of different magnitudes of infinity. That deserves a discussion of its own. I'm not sure, for example, how to make a decision if we have to choose between two possible Gods, where one God is more probable to exist but offers you "only" an infinite amount of value, while the second God is less probable to exist but offers you an infinity of a larger size. This is an interesting topic, but I'm not sure what to say about it at this moment.
Thanks for your comment.
In some sense I would agree that foregoing a finite chance of an infinite payoff for a finite chance of a finite payoff requires infinite risk aversion. Nevertheless, I think that even such extreme risk aversion could be justified in some specific cases. When I consider this issue, I usually do it in terms of a thought experiment similar to the one with Sue which I presented in my post.
Imagine you are the only being in the entire universe. You know with certainty that you have just one decision to make and that after it you will magically disappear. You are faced with a choice between option A, which gives you a 0.001 probability of creating a really bad outcome and a 0.999 probability of creating a moderately good outcome, and option B, on which nothing happens and the universe just remains empty. I think that A is the right choice in this case. In my view, when faced with such a single decision it makes sense to go with whichever option gives you an above-0.5 probability of the best possible outcome.
However, this has very counterintuitive implications of its own. On this account, when faced with such a one-off scenario as described above, it would be rational to choose an option A which gives you a 0.49 probability of infinite negative utility and a 0.51 probability of some tiny positive outcome, over an option B on which just nothing happens.
This is an extremely counterintuitive result, but "pure" expected utility theory (EU) can also generate extremely counterintuitive results, such as choosing option B (nothing happens) over an option A with a 0.0000000000000000000000000001 probability of creating infinite negative value and the complementary probability of creating an enormously good, but finite, outcome. In a reply to a different comment by Wolajacy above I described how I think about this issue, so I don't want to repeat it here. Also, in my view we should not trust our intuitions in such cases, since they evolved to help us spread our genes in a familiar environment, not to tackle infinity paradoxes. Therefore I'm ready to accept even counterintuitive results based on explicit reasoning.
You mentioned that it might be worth revising the assumption that "negligible chances can always be adequately expressed by a (finite precision) real number". That would be a way out of the paradox. However, I don't think this is a very promising approach. Surely, in some (maybe most) cases it is hard to speak of precise probabilities attached to different beliefs. Nevertheless, I doubt that infinitesimals would be an adequate representation of such probabilities, especially in the case of a belief in God, where I think we can do better.
I agree that whether infinite payoffs make sense may be problematic. On the most basic, standard formulation of the EU it seems that we face a problem, since if there are multiple options which can lead to infinite payoffs then we have no standard for choosing between them. However, I think this could be fixed by a relatively uncontroversial addition stating that when we are faced with multiple options of infinite value, we just go for the one with the highest probability. There may be other issues connected with the comparability of different outcomes, as in the kind of problem you mentioned in your example with being a fictional character, but it seems that discussing those issues would lead us even further away from the original topic.
If you haven't yet, you can also check my reply to the other comment under this post, where I've tried to express myself more clearly. Of course, if you have any objections to my reasoning outlined here, feel free to criticize it; I really appreciate well-thought-out feedback.
Thanks for your comment. I’ll try to express myself more clearly.
You've asked "in what sense it's the right choice/rational/achieving best result?"
This is what I had in mind.
I regard a decision as rational if, from the set of all possible acts, it selects first those acts which have a probability equal to or higher than 0.5 of achieving a net positive result, and then, from those acts, the act which has the highest upside. Let's call this the "first order" approach (I'm still uncertain about this exact formulation and may revise it in the future, but let's stick with it for the moment).
For example, suppose I have to choose between options A, B and C:
Option A: probability 0.9 of gaining 100 utility points and probability 0.1 of losing 1000 utility points
Option B: probability 0.01 of gaining 100000 utility points and probability 0.99 of losing 10 utility points
Option C: probability 0.75 of gaining 1000 utility points and probability 0.25 of losing 10000 utility points
From this set of options, first A and C would be selected (each has at least a 0.5 probability of a net gain), and finally option C would be chosen, since it has the higher upside. By this choice I will most probably gain 1000 utility points and lose nothing.
However, this is where expected utility theory (EU) comes into play. Let's imagine that I know that during my life (say, 80 years) I will be confronted with that set of options many times. If each time I followed the procedure outlined above, I would predictably end up worse off, since option C has a negative EU (indeed, the most negative EU of all the options).
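As a sanity check, the numbers above can be computed directly. The following sketch (illustrative only; the option names and payoffs are the ones from the example) evaluates each option's EU and applies the first order rule:

```python
# Each option: a list of (probability, utility) outcomes, from the example above.
options = {
    "A": [(0.9, 100), (0.1, -1000)],
    "B": [(0.01, 100_000), (0.99, -10)],
    "C": [(0.75, 1000), (0.25, -10_000)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def first_order_choice(opts):
    # Step 1: keep options whose probability of a net positive result is >= 0.5.
    viable = {
        name: outcomes for name, outcomes in opts.items()
        if sum(p for p, u in outcomes if u > 0) >= 0.5
    }
    # Step 2: among those, pick the one with the highest upside (largest gain).
    return max(viable, key=lambda name: max(u for p, u in viable[name]))

eus = {name: expected_utility(o) for name, o in options.items()}
print(eus)                        # A: -10, B: 990.1, C: -1750
print(first_order_choice(options))  # C, despite its negative EU
```

So the EU criterion favours B (EU 990.1), while the first order rule filters down to A and C and then picks C, exactly the tension the example is meant to display.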
I think that the approach I've defined above is not in contradiction with the EU if we look at our life holistically. I tend to think of it as choosing the best decision framework, which in the end will lead to the best possible outcome. So my rationale for adopting the EU is ultimately based on the first order approach that I defined earlier. I'm not sure what the exact probabilities and utilities should look like here, but the situation looks roughly like this:
Option A: Adopt the EU (with probability above 0.5 this will lead to the best possible result overall)
Option B: Use the first order approach in every single decision (with probability above 0.5 this will not lead to the best possible result overall)
That shows that the first order approach leads to the acceptance of the EU if we look at the situation holistically. Of course, now the question may arise: "So what was all that fancy theorizing about the first order approach for? Isn't it better to just adopt the EU from the start?"
Well, at least for me the EU is not self-evident, and it needs some further rationale to be justified. The first order approach tries to capture a fundamental intuition which, I think, stands behind the EU.
So what about Pascal's wager? In this case accepting the wager is the best option according to the EU. However, as I've tried to show above, the EU works only because it pays off to follow it over a long series of choices under uncertainty. If some agent is able to reliably restrict herself to making just one exception to following the EU, in a case where it is improbable that this would have any negative consequences, then it seems to me that such an exception could be justified.
Let me illustrate it with the same example I gave above. Suppose that during the 80 years of my life I was indeed confronted many times with the choice between options A, B and C. I followed the EU, so overall I gained a lot of utility points. Now I'm on my deathbed and this is the last hour of my life. Someone approaches me and offers me, one more time, the choice between options A, B and C. I know that option B is the best in expectation. However, I have no expectation of living longer than an hour, so there is no more time for the EU reasoning to work. So this time I decide to choose option C. Most probably, I will gain more utility points than if I chose option B one more time.
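The deathbed scenario can be sketched as a small simulation (hypothetical code, reusing the payoffs from the A/B/C example): over many repetitions, always picking B, the EU-maximizing option, accumulates far more utility than always picking C, while in any single round C yields a gain with probability 0.75 and B only with probability 0.01.

```python
import random

def play(option, rng):
    """Sample one outcome of an option from the A/B/C example."""
    p, win, lose = {"A": (0.9, 100, -1000),
                    "B": (0.01, 100_000, -10),
                    "C": (0.75, 1000, -10_000)}[option]
    return win if rng.random() < p else lose

rng = random.Random(42)
rounds = 100_000  # stands in for the many choices made over a lifetime

total_b = sum(play("B", rng) for _ in range(rounds))  # follow the EU every time
total_c = sum(play("C", rng) for _ in range(rounds))  # first order rule every time

# Per-round averages drift towards the EUs: about 990.1 for B, about -1750 for C.
print(total_b / rounds, total_c / rounds)

# The final, one-off round: C gives +1000 with probability 0.75.
last = play("C", rng)
```

The long-run totals vindicate the EU; the single last round is where the proposed exception kicks in, since there is no "long run" left for B's expectation to materialize.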
Sorry for making this response so long, but I tried to be clear in explaining my reasoning. However, I'm not an expert on probability theory or decision theory. If you think I messed something up in the argument outlined above, feel free to press me on that point. It is really important for me to get things right here, so I appreciate constructive criticism.