(Ir)rationality of Pascal's wager

post by filozof3377@gmial.com · 2020-08-03T20:57:24.264Z · LW · GW · 10 comments

During the last few weeks I've spent a lot of time thinking about "Pascalian" themes, like the paradoxes generated by introducing infinities into ethics or decision theory. In this post I want to focus on Pascal's wager (Hájek, 2018) and on why it is (ir)rational to accept it.

Firstly, it seems to me that a large share of the responses to Pascal's wager are just unsuccessful rationalizations which people create to avoid the conclusion. It is common to see people who (a) claim that the conclusion is plainly absurd and dismiss it without argument, or (b) give an argument which at first glance seems to work, but on second glance backfires and leads to even worse absurdities than the wager itself.

In fact this is not very surprising if we take into account the psychological studies showing how common motivated reasoning and the unconscious processes leading to cognitive biases are (Haidt, 2001; Greene, 2007). Arguably, accepting Pascal's wager goes against at least a few of those biases (like scope insensitivity and risk aversion when dealing with small probabilities, not to mention that the conclusion is rather uncomfortable).

Nevertheless, although I think that the arguments typically advanced against Pascal's wager are unsuccessful, it may still be rational not to accept the wager.

Here is why I think so.

I regard expected utility theory as the right approach to making choices under uncertainty, even when dealing with tiny probabilities of large outcomes. This is simply because it can be shown that, over a long series of such choices, following this strategy pays off. However, this argument holds only over a long series of choices, when there is enough time for the improbable scenarios to actually occur.
For example, imagine a person, let’s call her Sue, who knows with absolute certainty that she has only one decision to make in her life, and after that she will magically disappear. She has a choice between two options.

Option A: a 0.0000000000000000001% probability of creating infinite value.

Option B: a 99% probability of creating a huge but finite amount of value.

I think that in this case, option B is the right choice.
However, if instead of having this one choice in her life Sue had to face infinitely many such choices over an infinitely long time, then I think option A is the right choice, because it gives the best results in expectation.
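To make the contrast concrete, here is a minimal Python simulation of the one-shot versus long-run cases. Infinite value cannot be sampled, so option A's payoff is replaced by a huge finite number, and its probability is scaled up to something a computer can actually hit; all the numbers here are illustrative assumptions of mine, not the ones above.

```python
import random

P_A, V_A = 1e-5, 1e12   # stand-in: tiny chance of an enormous payoff
P_B, V_B = 0.99, 1e3    # near-certain chance of a large but finite payoff

def play(p, v):
    """One realization of a gamble: payoff v with probability p, else 0."""
    return v if random.random() < p else 0.0

random.seed(0)

# Sue's situation: a single choice. Option B almost surely beats A here.
print("one shot  A:", play(P_A, V_A), " B:", play(P_B, V_B))

# A long series of such choices: option A's expected value per round
# (1e7 here) dominates, because the rare jackpot eventually occurs.
n = 1_000_000
avg_a = sum(play(P_A, V_A) for _ in range(n)) / n
avg_b = sum(play(P_B, V_B) for _ in range(n)) / n
print("long run  A:", avg_a, " B:", avg_b)
```

In one draw B almost always wins, while over enough draws A's rare jackpot dominates the average; with a literally infinite payoff the expectation argument is only stronger.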
Of course our lives are, in some sense, a long series of choices, so we ought to follow expected utility theory. But what if someone decides to make just one decision which is worse in expectation but very improbable to have any negative consequences? Of course, if this person started to make such decisions repeatedly, she would predictably end up worse off, but if she is able to reliably restrict herself to making just this single decision, solely on the basis of its small probability, and follows expected utility otherwise, then it seems rational to me.

I'll illustrate what I mean with an example. Imagine three people: Bob, John and Sam. They all think that accepting Pascal's wager is unlikely to result in salvation or the avoidance of hell. However, they also think that they should maximize expected utility, and that the expected utility of accepting the wager is infinite.

Confronted with this difficult dilemma, Bob abandons expected utility theory and decides to rely more on his "intuitive" assessment of the choiceworthiness of actions. In other words, he just goes with his gut feelings.

John takes a different strategy and decides to follow expected utility theory, so he devotes the rest of his life to researching which religion is most likely to be true (since he is not sure which religion is true, and he thinks the information value is extremely high in this case).

Sam adopts a mixed strategy. He decides to follow expected utility theory, but in this one case he makes an exception and does not accept the wager, because he thinks it is unlikely to pay off. But he doesn't want to abandon the expected utility approach either.

It seems to me that Sam's strategy achieves the best result in the end. Bob's strategy is a nonstarter for me, since it will predictably lead to bad outcomes. On the other hand, John's strategy commits him to devoting his whole life to something that in the end produces no effect.
Meanwhile, it is unlikely that the one decision not to accept the wager will harm Sam, and following expected utility theory in all other decisions will predictably lead to the most desirable results.

It seems to work for me. Nevertheless, I have to admit that my solution may also look like an ad hoc rationalization designed to avoid an uncomfortable conclusion.
The argument also has some important limitations, which I won't address in detail here in order not to make this post too long. However, I do want to highlight them quickly.

1. How can you be sure that you will stick to your decision not to make any more such exceptions to expected utility theory?

2. Why make the exception for this particular decision and not for any other?

3. The problem posed by tiny probabilities of infinite value, the so-called fanaticism problem, is not resolved by this trick, since the existence of God is not the only possible source of infinite value (Beckstead, 2013; Bostrom, 2011).

4. What if taking this kind of approach popularised it, causing more people to adopt it, but for decisions other than the one concerning Pascal's wager (or provided evidence that infinitely many copies of you have adopted it, if the universe is infinite (Bostrom, 2011))?

I don't think that any of these objections is fatal, but I think they are worth considering. Of course it is possible, and indeed quite probable, that I have missed something important, fallen prey to one of my biases, or made some other kind of error. This is why I decided to write this post. I want to ask anyone who has thoughts on this topic to comment on my argument: whether the conclusion is right, wrong, or right for the wrong reasons. The whole issue may seem abstract, but I think it is really important, so I would appreciate your giving it serious thought.
Thanks in advance for all your comments! :)

Sources:

1. Beckstead, N. (2013). On the Overwhelming Importance of Shaping the Far Future.
2. Bostrom, N. (2011). Infinite Ethics. Analysis and Metaphysics, Vol. 10: pp. 9-59.
3. Greene, J. D. (2007). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development (pp. 35-80). Cambridge, MA: MIT Press.
4. Haidt, J. (2001). The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review, 108: 814-834.
5. Hájek, A. (2018). "Pascal's Wager", The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2018/entries/pascal-wager/>.

All of the above sources can be found online.

10 comments


comment by Slider · 2020-08-05T14:23:23.479Z · LW(p) · GW(p)

Making the choice infinitely often doesn't really cure anything. Sure, you can say that you are risk averse and that risk aversion is more warranted in runs that are known to be short. But forgoing a finite chance of an infinite payoff for a finite chance of a finite payoff needs/expresses infinite risk aversion.

You totally want to be aware of what makes you okay with the exception. One possibility is that you have a hidden assumption that the Pascal chance plays by different rules than ordinary small finite chances. You could formalise this as the chance being transfinite, i.e. infinitely small, and then a transfinite chance times an infinite payoff would be comparable in expectation to a finite chance of a finite payoff. This means looking hard at an assumption like "negligible chances can always be adequately expressed by a (finite-precision) real number". "Infinitesimals play by different rules" is less arbitrary, but then you have got a new field of interest.

Also, it can be doubted whether infinite payoffs make sense or not. You could imagine that if you are a book character, your actions might ordinarily have appreciable chances to affect your fictional story world. But if there were some actions that could affect the reader of the story and shape the "real world", it could make sense to say that any positive or negative impact on the real world cannot be overcome by fictional outcomes. This would then be effectively relatively infinite in value. Then the choice of a single infinite-value option would be a 99% chance of defeating the big bad vs. a 0.0001% chance of saving the reader from alcoholism (or whatever real-people impact). It is less clear here that certainty of the victory is the morally attractive good.

comment by filozof3377@gmial.com · 2020-08-05T17:58:07.402Z · LW(p) · GW(p)

Thanks for your comment.
In some sense I would agree that forgoing a finite chance of an infinite payoff for a finite chance of a finite payoff requires infinite risk aversion. Nevertheless, I think that even such extreme risk aversion could be justified in some specific cases. When I consider this issue, I usually do it in terms of a thought experiment similar to the one with Sue that I presented in my post.
Imagine you are the only being in the entire universe. You know with certainty that you have just one decision to make, and after that you will magically disappear. You are faced with a choice between option A, which gives you a 0.001 probability of creating a really bad outcome and a 0.999 probability of creating a moderately good outcome, and option B, on which nothing happens and the universe just remains empty. I think that A is the right choice in this case. In my view, when faced with such a single decision, it makes sense to go with whichever option gives you an above-0.5 probability of the best possible outcome.
However, this has very counterintuitive implications of its own. On this account, when faced with an only-one-case scenario like the one described above, it would be rational to choose an option A which gives you a 0.49 probability of infinite negative utility and a 0.51 probability of some tiny positive outcome, over an option B on which just nothing happens.
This is an extremely counterintuitive result, but "pure" expected utility theory (EU) can also generate extremely counterintuitive results, such as choosing option B (nothing happens) over an option A with a 0.0000000000000000000000000001 probability of creating infinite negative value and a 0.999999999999999999999999999 probability of creating an enormously good, but finite, outcome. In my reply to the comment by wolajacy below I described how I think about this issue, so I won't repeat it here. Also, in my view we should not trust our intuitions in such cases, since they evolved to help us spread our genes in a familiar environment, not to tackle infinity paradoxes. Therefore I'm ready to accept even counterintuitive results based on explicit reasoning.

You mentioned that maybe it would be worth revising the assumption that "negligible chances can always be adequately expressed by a (finite-precision) real number". That would be a way out of the paradox. However, I don't think it is a very promising approach. Surely, in some (maybe most) cases it is hard to speak of the precise probabilities that we attach to different beliefs. Nevertheless, I doubt that infinitesimals would be an adequate representation of such probabilities, especially in the case of belief in God, where I think we can do better.

I agree that whether infinite payoffs make sense may be problematic. On the most basic, standard formulation of EU we seem to face a problem: if there are multiple options which can lead to infinite payoffs, then we have no standard by which to choose between them. However, I think this could be fixed by a relatively uncontroversial addition, stating that when we are faced with multiple options of infinite value, we just go for the one with the highest probability. There may be other issues connected with the comparability of different outcomes, as you mentioned with your example of being a fictional character, but discussing those issues would lead us even further away from the original topic.
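To make the amendment concrete, here is a minimal sketch of the rule I have in mind; the way of tagging infinite payoffs and all the numbers are hypothetical illustrations of mine.

```python
import math

# Each option: (probability of the payoff, payoff); math.inf marks an
# infinite payoff. Purely hypothetical numbers for illustration.
options = {
    "god A":   (0.30, math.inf),
    "god B":   (0.05, math.inf),
    "mundane": (0.99, 1_000.0),
}

def choose(options):
    """If any option offers an infinite payoff, pick the infinite-payoff
    option with the highest probability; otherwise fall back to ordinary
    expected utility."""
    infinite = {k: p for k, (p, v) in options.items() if math.isinf(v)}
    if infinite:
        return max(infinite, key=infinite.get)
    return max(options, key=lambda k: options[k][0] * options[k][1])

print(choose(options))  # -> "god A"
```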

If you haven't yet, you can also check my reply to the other comment under this post, where I've tried to express myself more clearly. Of course, if you have any objections to my reasoning outlined here, feel free to criticize it; I really appreciate well-thought-out feedback.

comment by Slider · 2020-08-05T20:12:13.018Z · LW(p) · GW(p)

In standard utility theory you really need the numbers to answer which one is better between a "really bad outcome" and a "moderately good outcome". The scheme you are proposing is more of a "maximise the value of the expected outcome" rule than a "maximise the expected utility" one. This is a significant difference and not a mere technicality. For example, under that scheme buying a lottery ticket could never be worth it if the odds are fixed, no matter how much (finitely) the payout increases or how much the ticket price lowers. The torture vs. dust specks content is probably relevant for that stuff.

The Pascal stuff makes material use of the determination that there is a non-zero positive chance. If you can imagine only real (as in non-imaginary, non-infinitesimal) odds, that leaves very few options. Can you describe how or why infinitesimals describe the chance badly?

Just because two values that might represent payoffs are infinite doesn't mean they are equal. Transfinite quantities can have different magnitudes while being relatively infinite to finite values.

comment by filozof3377@gmial.com · 2020-08-06T13:41:10.689Z · LW(p) · GW(p)

You've written that "In standard utility theory you really need the numbers to answer which one is better between a 'really bad outcome' and a 'moderately good outcome'". I agree; probably I should have put some numbers there to be more precise.
"The scheme you are proposing is more of a 'maximise the value of the expected outcome' rule than a 'maximise the expected utility' one."
If I understood correctly what you mean by that, then I agree with that too, but with one important remark. I regard a decision as rational if, from the set of all possible acts, it selects first those acts which have a probability equal to or higher than 0.5 of achieving a net positive result, and then, from those acts, the act which has the highest upside.

I call it the "first order" approach. However, using such an approach in every single decision would lead to a disaster. I think the correct way is to look holistically and adopt a strategy which, based on the approach outlined here, will lead to the desirable results. I think that EU is such a strategy, since adopting EU over the long run will, with probability higher than 0.5, lead to achieving a net positive result with the highest upside. At least for me, this is the rationale behind EU which justifies it, although probably some people would disagree and claim that EU is just self-evident in itself (e.g. https://reducing-suffering.org/why-maximize-expected-value/). Accepting this first order approach as the rationale behind EU also suggests that adopting EU with one exception, for a very influential, low-probability (below 0.5) case, may be an even better strategy overall.

As for infinitesimals, maybe it depends on the interpretation of what the nature of probability is. I used to think about probability in terms of a subjective degree of belief, or level of confidence, maybe with some frequentist element attached to it. I'm sceptical about the usage of infinitesimals, since it seems problematic to believe something with an infinitely small level of confidence. Although I have to admit that maybe it would make sense in some cases, e.g. when you are confronted with a lottery with infinitely many options, and you know that one of those infinitely many options will be randomly selected, but there is no way to determine which one. Then it may seem plausible that you should assign each option an infinitely small probability of being selected. But at least in the case I'm interested in here (belief in God), I don't think that assigning an infinitely small probability would be right. My own very rough estimate is that the probability of the existence of some kind of God is about 0.3 (this shouldn't be treated literally; I use the number only to roughly express my level of confidence).

At the end of your comment you also mentioned the issue of different magnitudes of infinities. That deserves a discussion of its own. I'm not sure, for example, how to make a decision if we have to choose between two possible Gods, when one God is more probable to exist but offers you "only" an infinite amount of value, while the second God is less probable to exist but offers you an infinity of a bigger size. This is an interesting topic, but I'm not sure what to say about it at this moment.

comment by Slider · 2020-08-07T14:23:39.705Z · LW(p) · GW(p)

It is fine to use many levels of accuracy, but one needs to be consistent about which accuracy level gets applied. If the case is that you "need to want to believe" in the proposition to proceed to step 2, then it is a form of motivated reasoning. And in the case of counterexamples, it means providing reasons why a step-1-level analysis is sufficient to prove it absurd without taking into account a step-2 analysis.

Standard EU has the property that if some option is worth taking, then when tasked with making multiple such choices, the same option is chosen. With the 0.5 requirement, or indeed any central-outcome requirement, there is the weird property that what you should choose depends on how many choices you are expecting to make / how long you think you are going to live.

Say you have 3 scenarios of possibly participating in a lottery: A) 1 time, B) 10 times, C) 100 times. Say you have a 1/10 chance to win $1, a 1/50 chance to win $100, and the ticket costs $10. In scenario A you have under a 1/5 chance of any positive outcome, and even in scenario C, without the big-win chance, you would expect to break even. Isn't it weird to say that you should participate in C but not in A? The chances don't need to be that extreme for things to start getting weird. And since in the real world the lottery offices are constantly open, it would be weird to recommend against playing if you are going to play fewer than 100 times but to recommend it if you play over 100 times, with the odds staying the same. If the lottery is worth it, it is already worth it at the first ticket.

For example, when thinking of a coin as a frequentist, you ask "how many times would it come up heads if thrown infinitely often?" Then you would be comparing heads counts to tails counts, and typically both will be infinite (and, sneakily, the amounts representing 25/75 odds are different from those representing 50/50, despite both being infinite amounts). A frequentist could understand an infinitesimal probability as the case where the number of outcomes, given infinite trials, would be a finite number. For example, a coin that would come up infinitely many times on its side but 3 times on heads and 7 times on tails. Note that we talk as if heads and tails encompass all the relevant alternatives while saying that it is possible for a coin to land on its side. Making this exact revolves around probability 0, or the distinction between "impossible" and "possible but happens only a finite slice of the time". And because people are allergic to infinities, if they can express their ideas otherwise they often do so. But when the topic is infinities, they become relevant again.

If you apply the "rare event, big impact" correction, you start to approach EU without any probability thresholds to meet. Addressing the idea of EU seriously requires taking this extremization seriously. Otherwise you will end up with a stance like "you should do absolutely nothing about asteroids, as they are part of the negligibly rare noise whose magnitude doesn't need to be taken into account".

Couldn't you, for example, think that there could be variations of god that don't differ in anything other than the height of the human avatar they choose when appearing to people? And don't you express height in real numbers, and aren't real numbers innumerably infinite? And there are multiple different attributes, such as the severity of jail or hell sentences levied, etc. If one could argue that the set of relevant gods is a finite-sized set, then it could easily be argued that if one of them were true, any particular vision of it would have a finite chance. But the relevant options are mostly gathered by limits of imagination and not constrained by any empirical evidence. And therefore, against a better imagination showing a palette of innumerably many options, you would at least have to argue why my way of imagining fails to capture the options or captures the wrong options.

comment by filozof3377@gmial.com · 2020-08-07T22:03:01.364Z · LW(p) · GW(p)

I agree that it would be weird to accept a lottery with a positive EU only if you take it some specific number of times. In normal, everyday decision making I wouldn't argue for this. Indeed, I think EU is the right approach to making decisions under uncertainty. I'm willing to follow it even when the chances of success are low, provided the stakes are high enough to make the EU positive. What I argue for is the justification of why I'm willing to follow EU. I don't think it is self-evident that I should choose the option with the highest EU. I think that what makes following EU the right choice is the law of large numbers. If I have to pay $10 for a lottery where I have a 99.9% chance of winning nothing and a 0.1% chance of winning $100,000,000,000, then I think I should pay. But the rationale for why I should play, at least for me, is not that it is intrinsically worth it or intrinsically rational. For me, the reason this is a good option is that it will eventually pay off, and in the end I will have a lot more money than I had at the start. I think this holds even if I am able to buy only one ticket for this particular lottery, because even if I gain nothing from this choice, I will still encounter low-probability, high-stakes choices many times in my life, so it will eventually be worth it.
My point is just that EU needs the law of large numbers, or repeated decision making, or a series of choices, however we call it, to be the rational strategy. And I don't necessarily mean choices concerning playing this particular lottery; I mean making any choices under uncertainty. In other words, in the end EU is more probable than not to produce the outcome with the highest upside. This is why I think the first order approach which I described earlier in fact implies EU when we look at our situation holistically. For this reason, I regard EU as rational.
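As a concrete check on this law-of-large-numbers rationale, here is a minimal Python sketch using the lottery from the previous paragraph; the random seed and the number of tickets are arbitrary choices of mine, not part of the argument.

```python
import random

TICKET, P_WIN, PRIZE = 10.0, 0.001, 100_000_000_000.0
ev_per_ticket = P_WIN * PRIZE - TICKET   # = 99,999,990: hugely positive

def profit(n_tickets):
    """Net profit from buying n_tickets independent tickets."""
    wins = sum(random.random() < P_WIN for _ in range(n_tickets))
    return wins * PRIZE - n_tickets * TICKET

random.seed(0)
print(profit(1))        # a single ticket almost always loses $10...
n = 100_000
print(profit(n) / n)    # ...but the per-ticket average approaches
print(ev_per_ticket)    # the (huge) expected value per ticket
```

A single play usually just loses the ticket price; over many plays the average profit per play converges to the expected value, which is the sense in which I say EU "needs" the long run.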

My whole point is basically that it won't harm you if you make one exception to following EU in the case of one particular low-probability, high-stakes decision, namely Pascal's wager. The condition is that you need to be able to reliably restrict yourself to making just one exception (or at least a limited number of them), since eventually you will encounter a case where, despite the low probability, the consequences turn out to be real.

Sure, this addition may seem ad hoc and a bit theoretically inelegant. It's also true that it requires an assumption about how many choices you will have occasion to make, but that doesn't look very problematic to me. All things considered, it seems to me that the rationale standing behind it (derived from the way in which I think EU is justified) is enough to justify it.

You've also mentioned those infinite possibilities of different Gods only slightly different from each other. Fair enough; in that case maybe infinitesimals are the right representation of the credence one should have in such options. Nevertheless, if we are to follow EU in literally every case, then it still seems to make sense to determine which God (or set of possible Gods similar to each other) has the highest probability, and then to accept the wager. Maybe accepting the wager would not look the way its proponents usually imagine (i.e. accepting a particular religion), but rather like devoting yourself to research to find out which God is the most probable one (because of the information value). Nevertheless, it would still have a significant influence on the way we live. And maybe this is fine and we should just accept this conclusion; I'm not sure about it. However, the approach that I've proposed here seems to me rather more rational.

comment by Slider · 2020-08-09T14:18:19.280Z · LW(p) · GW(p)

There is the issue of whether one believes the stated chances are real or whether one is in error about it. If you believed that there was a 1/4 chance of heads when in fact the coin was fair, then your betting would be led astray. However, if the odds are correct and the math says you end up with more money, there is no way to argue that you can forgo the option and still claim to be a money-grabbing agent.

We could think of some agent who doesn't want to buy a payout-biased lottery ticket, thinking they will save the cost of the ticket and get to keep calling themselves a good decision maker. If the odds are a 10% chance of $1000 for a $1 ticket and the agent thinks they expect to lose money, they have made a math error. You don't get to call yourself able to calculate odds correctly just because you only make a limited number of mistakes. And certainly you don't go "over the limit" of "all accruable winnings" by the price of a ticket. Either the ticket price is part of the accruable winnings, or the total is some subtotal that doesn't actually represent everything achievable.

The usual worry about the policy implications of accepting the Pascal wager is that you would be prone to being Pascal-mugged. Anyone can fabricate a very remote, very low-probability threat and ask for finite compensation not to carry it out. But a website saying you are the 1000000000th visitor to the website is not very good evidence of those chances being real. And in a way, very finely tuned chances need very much data to be well founded. Thus almost anyone can plausibly make 50:50 claims, but very few people can plausibly state any 0.00000001% odds. So in a finite-aged universe, no one can have the inductive support for any infinitesimal chance.

There can be many dimensions along which to ask about undecidably low odds of what could happen. An agent that systematically excused each of these questions as a one-off exception could fall totally prey to rare events. But one has to distinguish doing well in the model from doing well in fact. You don't get to avoid being victimised by supernovas just because you lack the capacity to model supernovas. It can make sense to focus on what you can model and stay silent on what you can't model, but pushing the edge of what you can model can be critical.

comment by filozof3377@gmial.com · 2020-08-16T20:25:25.735Z · LW(p) · GW(p)

Thanks for your comment. I've thought through the issue carefully and I'm no longer so confident about this topic. Now I'm planning to read more about Pascal's wager and about decision theory in general. I want to think about it more and do my best to come up with a good enough solution to this problem.

Thank you for the whole discussion and the time devoted to responding to my arguments.

comment by wolajacy · 2020-08-04T19:57:46.839Z · LW(p) · GW(p)
Here is why I think so:
[...]
I think that in this case, option B is the right choice.
[...]
But what if someone decides to make just one decision which is worse in expectation but very improbable to have any negative consequences? Of course, if this person started to make such decisions repeatedly, she would predictably end up worse off, but if she is able to reliably restrict herself to making just this single decision, solely on the basis of its small probability, and follows expected utility otherwise, then it seems rational to me.
[...]
It seems to me that Sam's strategy achieves the best result in the end.

So, I'm not really seeing any argument in your post. You claim to answer the question "why", but then just present the cases/stories and go on to say "for me, this is the right choice". Therefore, it's difficult to provide any comment on the reasoning.

So, my question would be: in what sense is it the right choice / rational / achieving the best result? The only passage that seems to start addressing that is the one with "it's worse in expectation, but very improbable".

(One way to judge a decision rigorously, as you seem to be doing in the "ordinary" case, would be to create a model and a utility function; in your text: a long sequence of decisions, a payoff at each one, and aggregated utility measured by a sum or mean.)

comment by filozof3377@gmial.com · 2020-08-05T11:50:16.092Z · LW(p) · GW(p)

Thanks for your comment. I’ll try to express myself more clearly.
You've asked: "In what sense is it the right choice / rational / achieving the best result?"
This is what I had in mind.
I regard a decision as rational if, from the set of all possible acts, it selects first those acts which have a probability equal to or higher than 0.5 of achieving a net positive result, and then, from those acts, the act which has the highest upside. Let's call this approach the "first order" approach (I'm still uncertain about this exact formulation and may revise it in the future, but let's stick with it for the moment).
For example: I have to choose between options A, B and C.

Option A: probability 0.9 of gaining 100 utility points and probability 0.1 of losing 1000 utility points

Option B: probability 0.01 of gaining 100000 utility points and probability 0.99 of losing 10 utility points

Option C: probability 0.75 of gaining 1000 utility points and probability 0.25 of losing 10000 utility points

From this set of options, first A and C would be selected, and finally option C would be chosen. By this choice I will most probably gain 1000 utility points and lose nothing.
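For concreteness, here is a minimal sketch of this selection procedure applied to the three options above; the encoding (losses as negative utility points) is mine, but the numbers are exactly those given.

```python
options = {
    "A": [(0.9, 100), (0.1, -1000)],
    "B": [(0.01, 100000), (0.99, -10)],
    "C": [(0.75, 1000), (0.25, -10000)],
}

def p_net_positive(outcomes):
    """Probability of ending with a net positive result."""
    return sum(p for p, u in outcomes if u > 0)

def upside(outcomes):
    """The best outcome the act can deliver."""
    return max(u for _, u in outcomes)

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# First order rule: keep acts with >= 0.5 chance of a net positive
# result, then take the one with the highest upside.
candidates = {k: v for k, v in options.items() if p_net_positive(v) >= 0.5}
print(max(candidates, key=lambda k: upside(candidates[k])))      # -> C

# Ordinary EU instead picks B (EUs: A = -10, B = 990.1, C = -1750).
print(max(options, key=lambda k: expected_utility(options[k])))  # -> B
```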
However, now is the moment when expected utility theory (EU) comes into play. Let's imagine that I know that during my life (let's say 80 years) I will be confronted with this set of options many times. If each time I followed the procedure outlined above, then I would predictably end up worse off, since option C has a negative EU (indeed, the most negative EU of all the options).
I don't think the approach I've defined above contradicts EU if we look at our life holistically. I think of it as choosing the best decision framework, the one which in the end will lead to the best possible outcome. So my rationale for adopting EU is ultimately based on the first order approach defined earlier. I'm not sure exactly what the probabilities and utility points should look like here, but the situation looks roughly like this:

Option A: Adopt EU (with probability above 0.5 this will lead to the best possible result overall)

Option B: Use the first order approach in every single decision (with probability above 0.5 this will not lead to the best possible result overall)

That shows that the first order approach leads to the acceptance of EU if we look at the situation holistically. Of course, now the question may arise: "So what was all that fancy theorizing about the first order approach for? Isn't it better to just adopt EU from the start?"
Well, at least for me, EU is not self-evident; it needs some further rationale to be justified. The first order approach tries to capture a fundamental intuition which, I think, stands behind EU.

So what about Pascal's wager? In this case, accepting the wager is the best option according to EU. However, as I've tried to show above, EU works only because it pays off to follow it over a long series of choices under uncertainty. If some agent is able to reliably restrict herself to making just one exception to following EU, in a case where it is improbable that this would have any negative consequences, then it seems to me that such an exception could be justified.
Let me illustrate this with the same example I gave above. Suppose that during the 80 years of my life I was indeed confronted many times with a choice between options A, B and C. I followed EU, so overall I gained a lot of utility points. Now I'm on my deathbed and this is the last hour of my life. Someone approaches me and offers me the choice between options A, B and C one more time. I know that option B is the best in expectation. However, I don't expect to live longer than an hour, so there is no more time for the EU reasoning to work. So I decide to choose option C this time. Most probably, I will gain more utility points than if I had chosen option B one more time.
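As a quick sanity check on this deathbed reasoning (treating the two gambles as independent draws, which is my own modelling assumption), one can compute the probability that taking option C one last time ends better than taking option B:

```python
from itertools import product

# One final, unrepeatable draw: compare options B and C outcome by outcome.
B = [(0.01, 100000), (0.99, -10)]
C = [(0.75, 1000), (0.25, -10000)]

p_c_better = sum(pb * pc
                 for (pb, ub), (pc, uc) in product(B, C)
                 if uc > ub)
print(p_c_better)  # 0.7425: C most probably beats B, despite B's higher EU
```

So in the single final round, choosing C ends better than choosing B about 74% of the time, even though B's expected utility is far higher.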

Sorry for making this response so long, but I tried to be clear in explaining my reasoning. However, I'm not an expert on probability theory or decision theory. If you think I messed something up in the argument outlined above, feel free to press me on that point. It is really important for me to get things right here, so I appreciate constructive critique.