Knightian Uncertainty: Bayesian Agents and the MMEU rule
post by So8res · 2014-08-04T14:05:47.758Z · 8 comments
Recently, I found myself in a conversation with someone advocating the use of Knightian uncertainty. We both agreed that there's no point in singling out some of your uncertainty as "special" unless you treat it differently.
I am under the impression that most advice from advocates of Knightian uncertainty can be taken to heart in a Bayesian framework, and so I find the concept of "Knightian uncertainty" uncompelling. My friend, who I'm anonymizing as "Sir Percy", claims that he does treat Knightian uncertainty differently from normal uncertainty, and so he needs to make the distinction. Unlike an aspiring Bayesian reasoner, who attempts to maximize expected utility, he maximizes the minimum expected utility given his Knightian uncertainty. This is the MMEU rule motivated in the previous post.
This surprised me: is it possible for a rational agent to refuse to maximize expected utility? My reflexive reaction was simple:
If you're a rational agent and you don't think you're maximizing expected utility, then you've misplaced your "utility" label.
Sir Percy had a response ready:
That can't be. Remember Sir Percy's coin toss. A coin has been tossed, and the event "H" is "the coin came up heads". Consider the following two bets:
- Pay 50¢ to be paid $1.10 if H
- Pay 50¢ to be paid $1.10 if ¬H
I don't know whether the coin was biased; I know only that my credence is in the interval [0.4, 0.6]. When considering the first bet, I notice that the probability of H may be 0.4, in which case the bet is expected to lose me 6¢. When considering the second bet, I notice that the probability of H may be 0.6, in which case the bet is expected to lose me 6¢. But when considering both together, I see that I will win 10¢ with certainty. So I reject each bet if presented individually, but I will pay up to 10¢ to play the pair.

As you can see, my preferences change under agglomeration of bets. It's not possible to view me as a Bayesian reasoner maximizing expected utility, because there is no credence you can assign to H such that a Bayesian reasoner shares my preferences. I can't be maximizing expected utility, no matter where you put your labels.
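To make Sir Percy's arithmetic explicit, here is a minimal sketch of the MMEU evaluation (the payoffs come from the bets above; the code itself is just an illustration):

```python
# Sir Percy's three options, written as (net payoff if H, net payoff if not-H).
bet_on_h     = (0.60, -0.50)   # pay 50 cents; win $1.10 if H, so net +60 cents
bet_on_not_h = (-0.50, 0.60)   # pay 50 cents; win $1.10 if not-H
both_bets    = (0.10, 0.10)    # payoffs simply add: +10 cents either way

def expected_value(p_heads, bet):
    """Expected dollars of a bet at a precise credence in H."""
    if_h, if_not_h = bet
    return p_heads * if_h + (1 - p_heads) * if_not_h

def min_expected_value(bet, interval=(0.4, 0.6)):
    """The MMEU rule: evaluate a bet at its least favorable credence.
    Expected value is linear in p, so the minimum sits at an endpoint."""
    return min(expected_value(p, bet) for p in interval)

for name, bet in [("H", bet_on_h), ("not-H", bet_on_not_h), ("both", both_bets)]:
    print(f"bet on {name}: min EV = {min_expected_value(bet):+.2f}")
# bet on H:     min EV = -0.06  (reject)
# bet on not-H: min EV = -0.06  (reject)
# bet on both:  min EV = +0.10  (accept)
```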
My rejection was vague and unformed at the time. I've since fleshed it out, and it will be presented below. But before continuing, see if you can spot my argument on your own:
My friend believes that H occurred with probability in the interval [0.4, 0.6]. He is unwilling to pay 50¢ to be paid $1.10 if H, and he is unwilling to pay 50¢ to be paid $1.10 if ¬H, but he is willing to pay up to 10¢ for the pair. There is no way to assign a credence to the event H such that these actions are traditionally consistent. Yet, if we allow that rational agents can in principle have preferences about ambiguity, then he is acting 'rationally' in some sense. Is the Bayesian framework capable of capturing agents with these preferences?
Short answer: Yes.
A Bayesian agent can act like this, and Sir Percy did put his utility label in the wrong place. In fact, there are two ways that a Bayesian idealization of Sir Percy can exhibit Sir Percy's preferences. In the coin toss game, at least one of two things is happening:
- Sir Percy isn't treating H like an event.
- Sir Percy isn't treating money like utility.
I'll explore both possibilities in turn, but first, let's build a little intuition:
The preferences of a rational agent are not supposed to be invariant under agglomeration of bets. Imagine that there are two potions, a blue potion and a green potion. Each potion, taken alone, makes you sick. Both potions, taken together, give you superpowers. It may well be that you reject both "bet 1: 100% you drink the blue potion" and "bet 2: 100% you drink the green potion", but that you happily pay for the pair.
This is not irrational. You could say that there is no assignment of utility to the action "drink the blue potion" that makes these preferences consistent, but that would be an error: preferences are not over actions, they are over outcomes. If you take bet 1, you get sick. If you take bet 2, you get sick. If you take them both together, you get superpowers.
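A minimal sketch, with made-up utility numbers, of why preferences over outcomes need not respect agglomeration:

```python
# Made-up utility numbers for the potion story; preferences attach to
# outcomes, so the value of a "bet" depends on which bets accompany it.
UTILITY = {"nothing": 0, "sick": -10, "superpowers": 100}

def outcome(potions):
    """The outcome depends on the combination taken, not on each potion alone."""
    if potions == {"blue", "green"}:
        return "superpowers"
    return "sick" if potions else "nothing"

for choice in [set(), {"blue"}, {"green"}, {"blue", "green"}]:
    print(sorted(choice), "->", outcome(choice), UTILITY[outcome(choice)])
# []               -> nothing      0
# ['blue']         -> sick        -10
# ['green']        -> sick        -10
# ['blue', 'green'] -> superpowers 100
```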
The fact that you do not honor agglomeration of bets does not mean you are irrational. Sir Percy looks irrational not because his preferences disobey agglomeration of bets, but only because there's no credence he can assign to H that makes his actions consistent with expected money maximization.
This means that, if Sir Percy is traditionally rational, then he's either not treating H like an event, or he's not treating money like utility. And as it turns out, idealized Bayesian reasoners can display Sir Percy's preferences. I'll illustrate two of them below.
Antagonistic ambiguity
Remember Sir Percy's original motivation for rejecting both bets individually in Sir Percy's coin toss:
I'm maximizing the minimum expected utility. Given bet (1), I notice that perhaps the probability of H is only 40%, in which case the expected utility of bet (1) is -6¢, so I reject it. Given bet (2), I notice that perhaps the probability of H is 60%, in which case the expected utility of bet (2) is -6¢, so I reject that too.
Notice how Sir Percy is acting here: given each bet, he assumes that the coin is weighted in whatever manner is worst for him. Sir Percy's [0.4, 0.6] credence interval for H implies that he has seen evidence that convinces him the coin isn't weighted to land H less than 40% of the time, and isn't weighted to land H more than 60% of the time, but he can't narrow it down any further than that. And when he's considering bets, he acts assuming that the coin is actually weighted in the least convenient way.
We can design a Bayesian agent that acts like this, but this Bayesian agent (who I'll call Paranoid Perry) doesn't treat H like an event. Rather, Perry acts as if the coin's weighting isn't chosen until after Perry selects a bet. Then, nature gets to choose the weighting of the coin (within the constraints set by Perry's confidence interval), and nature always chooses what's worst for Perry.
Let's say that Perry is a perfectly rational Bayesian agent. If nature has a selection of coins weighted from 40% heads to 60% heads, and nature gets to select one of the coins after Perry selects one of the bets, then rejecting each of the bets individually but accepting both together is precisely how Perry maximizes expected utility. The game seems a bit odd until you realize that Perry isn't treating H as an event: Perry is acting as if its choice of bets affects the weighting of the coin.
Notice that Perry isn't treating its [0.4, 0.6] credence interval for H as if it's normal uncertainty. Perry is treating this uncertainty as if it is in the world: the coin could actually be weighted anywhere from 40% H to 60% H, and nature gets to choose. It's unclear how Perry decides which uncertainty is internal (subject to normal Bayesian reasoning) and which uncertainty is external (resolved adversarially by nature), but this distinction gives rise to the difference between "normal" uncertainty and "Knightian" uncertainty. Given any mechanism for differentiating between uncertainty about what world Perry is in and variability that gets to be resolved by nature, we can construct a Bayesian reasoner that acts like Perry.
Take, for example, the Ellsberg urn game. Perry has somehow been convinced that the process of selecting balls from the urn is fair: balls are selected by some stochastic process which Nature cannot affect. However, Perry also thinks that the urn in use gets to be chosen by nature after Perry has selected a bet.
There are many mechanisms by which Perry could come to this state of knowledge. Perhaps the ball-selection mechanism was designed and verified by Perry, but the urn is selected by an unverified process that is supposedly random but which Perry has reason to believe has actually been infected by the nefarious spirit of the Adversary. Given this knowledge, Perry knows that if it chooses bet 1b then nature will select the urn with no black balls, and if it chooses bet 2a then nature will select the urn without yellow balls, so Perry rationally prefers bets 1a and 2b.
Perry is averse to ambiguity because Perry believes that nature gets to resolve ambiguity, and nature is working against Perry. This paranoia captures some of the original motivation for ambiguity aversion, but it starts to seem strange under close examination. For example, in the coin toss game, Perry acts as if nature gets to pick the weighting of the coin even after the coin has been tossed. Perry believes that nature gets to resolve ambiguity, even if the ambiguity lies in the past.
Furthermore, Perry acts as if it is certain that nature gets to resolve ambiguity disfavorably, and this can lead to pathological behavior in some edge cases. For example, consider Perry reasoning about the unbalanced tennis game: Perry believes that one of the players is far better than the other, capable of winning 99 games out of 100. But Perry doesn't know whether Anabel or Zara is the better player. How might the conversation go when a bookie approaches Perry offering a bet of 2:1 odds on the player of Perry's choice?
Hello, Perry, I'm a rich eccentric bookie who likes giving people money in the form of strange bets. I'd like to offer you a bet on the unbalanced tennis game. I know that you know that one of the players is far better than the other, and I know you have "Knightian uncertainty" about which player is better. To allay your concerns, I'm going to make you a very good deal: I'm willing to offer you a bet with 2:1 odds on the player of your choice.
"I'm sorry", Perry responds, "I cannot take that bet."
But why not? I understand that if I came to you offering a bet with 2:1 odds on Anabel, you should reject it and then update in favor of Anabel being the better player, under the assumption that I was trying to take advantage of your uncertainty. But I'm offering you a bet with 2:1 odds on the player of your choice! How can you reject this?
"Well, you see", Perry responds, "no matter what choice I make, nature will resolve my ambiguity against me. If I place 2:1 odds on Zara winning, then nature will decide that Zara is the worse player, and Zara will loose 99% of the time. But if I instead place 2:1 odds on Zara losing, then nature will decide that Zara is the better player, and Zara will win 99% of the time. Either way, I lose money.
How can that be? Surely, you believe that one player is already better than the other. Your choice can't change who is better; that was decided in the past! You're acting like nature will retroactively make whichever player you bet on the worse one!
"Yes, precisely", Perry responds. "See, I have Knightian uncertainty about which is better, and nature gets to resolve Knightian uncertainty, regardless of causality. No matter which player I pick, she will turn out to be the worse player."
Realizing that this may sound crazy, Perry pauses.
"Now, I'm not sure how this happens. Perhaps nature has acausal superpowers, actually can alter the past. Or perhaps you simulated me to see who I would pick, and you're only offering me this bet because you found that I would pick the wrong player. I don't know how nature does it, I do know for a fact that nature will resolve my ambiguity antagonistically. Of this I am certain. And so I'm sorry, but I can't take your bet."
And if I offer you 10:1 odds on the player of your choice?
Perry shakes its head. "Sorry."
The bookie dejectedly departs.
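Perry's refusals are easy to reproduce numerically. A minimal sketch, assuming a $1 stake (the stake size is my assumption; the odds and the 99-out-of-100 win rate are from the dialogue):

```python
# The unbalanced tennis game from Perry's point of view: whichever player
# Perry backs, nature makes her the weaker one (winning 1 game in 100).
P_WORSE_WINS = 0.01

def worst_case_ev(odds, stake=1.0):
    """Perry's expected dollars: his player always turns out to be the worse one."""
    return P_WORSE_WINS * odds * stake - (1 - P_WORSE_WINS) * stake

for odds in (2, 10):
    print(f"{odds}:1 on the player of Perry's choice -> EV {worst_case_ev(odds):+.2f}")
# 2:1  -> EV -0.97  (reject)
# 10:1 -> EV -0.89  (reject)
```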
Perry is a Bayesian agent that shares Sir Percy's preferences, thereby demonstrating that the Bayesian framework can capture agents with preferences over ambiguity (given some method for distinguishing ambiguity from uncertainty). Any Bayesian agent that believes there is actual variability in the environment, which nature gets to resolve acausally and adversarially, acts precisely as if it is using the MMEU rule.
In fact, though Perry may seem to be reasoning in an odd fashion, Perry captures much of the original motivation for the MMEU rule. "I can't get a precise credence", Sir Percy would say, "I can only get a credence interval, because reality actually still has a little wiggle room about whether the coin was weighted. So assuming the worst case, how do I maximize expected utility?"
The paranoia of Perry is a little disconcerting, though. Is there perhaps another idealized Bayesian agent that captures Sir Percy's preferences in a less abrasive manner?
Preferring the least convenient world
Allow me to introduce another Bayesian agent, Cautious Caul. Like Perry, Caul distinguishes between different types of uncertainty. The method of distinction is left unspecified, but given any means of differentiating between uncertainty and ambiguity ("Knightian uncertainty"), we can specify an agent that acts like Caul.
Like Perry, Caul treats ambiguity differently from normal uncertainty because Caul believes that ambiguity is not internal lack of knowledge, but rather a fact about the external world. But whereas Perry thinks that Knightian uncertainty denotes variability in the world that is resolved by nature, Caul instead thinks that ambiguity denotes worldparts that actually exist.
And Caul is an expected utility maximizer (and a perfect Bayesian), but Caul's preferences are defined according to the least convenient world.
When faced with Sir Percy's coin toss, Caul honestly believes that a continuous set of worlds exists, each with a coin weighted somewhere between 40% H and 60% H, and that in each of these worlds there is a sliver of Caul. But Caul's preferences are defined such that Caul only cares about the sliver in the least convenient world, the world with the worst odds.
In other words, every Caul-sliver is willing to trade unlimited amounts of expected dollars in their own worldpart in order to increase the expected dollars of the Caul in the least convenient worldpart. Conditioned upon taking the first bet in Sir Percy's coin toss, Caul only cares about the worldpart where the coin is weighted 40% H (and so Caul refuses the bet). Conditioned upon taking the second bet, Caul only cares about the worldpart where the coin is weighted 60% H (and so Caul refuses that bet too). But when offered both bets simultaneously, all the Caul-slivers in all the worldparts will gain 10¢, so Caul takes the combination.
In this case, we see why there is no Bayesian with a single credence for H that can exhibit Caul's preferences. From Caul's point of view, H is not a single event, it is a set of events with varying credence. Any Bayesian attempting to assign a single credence to H is neglecting a crucial part of (what Caul sees as) the world's structure.
Furthermore, Caul also fails to treat the outcomes of the bet (in dollars) as utility: Caul only cares about the least convenient world given Caul's action, so the utility that Caul gets from a bet is dependent upon Caul's action. We can see this when imagining Caul facing the Ellsberg urn game. In this scenario, Caul has actually come to believe that there is a Caul-sliver facing each of the sixty urns. Conditioned upon taking bet 1b, Caul only cares about the Caul-sliver facing the urn without any black balls. But conditioned upon taking bet 2a, Caul only cares about the Caul-sliver facing the urn without any yellow balls. Caul's preferences depend upon the action which Caul chooses.
Caul is an expected utility maximizer with preferences only for the worldpart with the worst odds (where worldparts are determined by ambiguity and odds are determined by uncertainty). Caul may appear to be using the MMEU rule, but Caul is actually maximizing minimum expected dollars while maximizing actual expected utility. This only looks like MMEU if we confuse the inner utility in each worldpart ("dollars") with the outer utility of Caul itself.
To illustrate, consider what happens when Caul encounters our enthusiastic bookie. As with Perry, Caul believes that one of the tennis players in the unbalanced game could beat the other 99 times out of 100. However, Caul has Knightian uncertainty about whether Anabel or Zara is the superior player. This means that Caul actually believes there are two worldparts, each with a Caul-sliver: one in which Anabel beats Zara 99 times out of 100, and the other in which Anabel loses to Zara 99 times out of 100.
How might Caul's conversation with the bookie go?
Ah, Caul! I have an offer for you. You know the unbalanced tennis game that is about to be played between Anabel and Zara? I'd like to offer you a bet that's heavily in your favor: I'll give you a bet with 2:1 odds on the player of your choice!
"Sorry", Caul says, "can't do that."
Why not? Look, I'm not trying to screw you over here. I just really like giving people free money, and if I can't give enough away, I'll have to answer to my board of trustees. I seriously don't want to mess with those folks. This really is a bet in your favor! It pays out 2:1. I haven't simulated you, I promise!
"I'm sorry", Caul replies, "but I can't. See, if I place the bet on Anabel, then in the world where Zara is the better player, I expect that version of Caul to lose money."
Yes, but in the world where Anabel is the better player, that version of Caul wins more than the other version loses!
"Perhaps, but I only care about the sliver of me with the worst odds. If I place the bet on Anabel, then I care only about the Caul in the world where she's worse. But if I place the bet on Zara, then I only care about the Caul in the world where she's better." Caul shrugs. "Sorry, there's just no way I can take the bet without making the least convenient world worse off, and I only care about what happens in the least convenient world."
And so our hapless bookie departs, frantically searching for someone to take free money, muttering about irrationality.
Caul captures another portion of the motivation for the MMEU rule: Caul only cares about the least convenient world. Of course, Caul only cares about the least convenient world given Caul's normal uncertainty: Caul cares about the Caul-sliver with the worst odds, not with the worst outcome. To see the difference, imagine that our bookie offered Caul a bet with 100:1 odds on Anabel. Caul would take this bet, because even in the world with the worst odds the expected Caul-value of this bet is 1¢ (99% of the time Zara wins and Caul loses $1, but 1% of the time Anabel wins and Caul wins $100).
An agent that only cared about the actual worst case (rather than only the worldpart with the worst odds) would refuse even this bet, worrying about scenarios where Anabel is a far superior player but loses anyway. This would be an agent in which all uncertainty is "Knightian", and this would turn Caul into a maximizer of minimum actual utility (instead of minimum expected utility), which is wildly impractical (and of dubious usefulness). This raises the uncomfortable question of how Caul decides which uncertainties denote actual worldparts, and which uncertainties denote "normal" uncertainty with which Caul may gamble in the traditional manner.
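A minimal sketch of the contrast, again assuming a $1 stake (the stake size is my assumption; the 100:1 odds and the 99-in-100 win rate are from the example above):

```python
# Caul versus a pure worst-case agent on the bookie's 100:1 bet on Anabel.
STAKE, ODDS = 1.0, 100
P_WORSE_WINS = 0.01

# Caul: expected dollars in the worldpart with the worst *odds*,
# i.e. the one where Anabel is the weaker player.
caul_value = P_WORSE_WINS * ODDS * STAKE - (1 - P_WORSE_WINS) * STAKE
print(f"Caul's value: {caul_value:+.2f}")   # +0.01 -> accept

# A pure worst-case agent: the worst *outcome* anywhere is Anabel simply losing.
print(f"Worst outcome: {-STAKE:+.2f}")      # -1.00 -> reject
```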
Perry and Caul show that it's possible for a Bayesian to act according to Sir Percy's preferences, but they are only able to do this because they believe that some of their uncertainty is not internal (caused by incomplete information) but rather external (actually represented in the environment). Perry believes that some of its uncertainty denotes places where Nature gets to choose how the world works after Perry chooses an action. Caul believes that some of its uncertainty actually denotes real worldparts. In both cases, these agents believe that their ambiguity is part of the world, and this is the mechanism by which they can display ambiguity aversion.
Bayesian ambiguity aversion
Perry and Caul help us answer our first question.
Can an agent with preferences about ambiguity be 'rational' in the Bayesian sense?
To this, we can now answer "yes". Ideal Bayesian agents can exhibit the preferences advocated by Sir Percy. The next question is, is this a sane way to act? A rational agent can exhibit ambiguity aversion, and humans seem to exhibit ambiguity aversion… so should we act like this?
Sir Percy advocates this decision rule. He's put forth some interesting arguments and this post has shown that such a viewpoint can be rational in a Bayesian sense. But a Bayesian can also have an anti-Laplacian prior, so that doesn't speak to this viewpoint's sanity. Is the MMEU rule a sane way for humans to act?
In short: No, for the same reason that Caul and Perry are mad. Their views are consistent, and their actions are rational (given their priors), but I am really glad that I don't have their priors. But that's a point I'll explore in more depth in the next (and final) post.
8 comments
comment by AShepard · 2014-08-04T15:50:08.287Z
I think the analysis in this post (and the others in the sequence) has all been spot on, but I don't know that it is actually all that useful. I'll try to explain why.
This is how I would steel man Sir Percy's decision process (stipulating that Sir Percy himself might not agree):
Most bets are offered because the person offering expects to make a profit. And frequently, they are willing to exploit information that only they have, so they can offer bets that will seem reasonable to me but which are actually unfavorable.
When I am offered a bet where there is some important unknown factor (e.g. which way the coin is weighted, or which urn I am drawing from), I am highly suspicious that the person offering the bet knows something that I don't, even if I don't know where they got their information. Therefore, I will be very reluctant to take such bets.
When faced with this kind of bet, a perfect Bayesian would calculate p(bet is secretly unfair | ambiguous bet is offered) and use that as an input into their expected utility calculations. In almost every situation one might come across, that probability is going to be quite high. Therefore, the general intuition of "don't mess with ambiguous bets - the other guy probably knows something you don't" is a pretty good one.
Of course you can construct thought experiments where p(bet is secretly unfair) is actually 0 and the intuition breaks down. But those situations are very unlikely to come up in reality (unless there are actually a lot of bizarrely generous bookies out there, in which case I should stop typing this and go find them before they run out of money). So while it is technically true that a perfect Bayesian would actually calculate p(bet is secretly unfair | ambiguous bet was offered) in every situation with an ambiguous bet, it seems like a very reasonable shortcut to just assume that probability is high in every situation and save one's cognitive resources for higher impact calculations.
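In code, with every number made up for illustration:

```python
# All numbers here are assumptions, for illustration only.
def adjusted_ev(p_rigged, ev_if_fair, ev_if_rigged):
    """Expected value of an ambiguous bet, conditioned on why it was offered."""
    return p_rigged * ev_if_rigged + (1 - p_rigged) * ev_if_fair

EV_IF_FAIR = 0.05     # the bet looks mildly favorable at face value
EV_IF_RIGGED = -0.50  # the offerer knows something you don't

for p_rigged in (0.0, 0.5, 0.9):
    print(p_rigged, round(adjusted_ev(p_rigged, EV_IF_FAIR, EV_IF_RIGGED), 3))
# 0.0 -> 0.05 (take it); 0.5 -> -0.225; 0.9 -> -0.445 (decline)
```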
↑ comment by So8res · 2014-08-04T16:47:10.962Z
Thanks! I completely agree that "reject bets offered to you by humans" is a decent heuristic that humans seem to use. I also agree that bet-stigma is a large part of the reason people feel they need something other than Bayesianism (which treats every choice as a bet about which available action is best). These points (and others) are covered in the next post.
In this post, I'm addressing the argument that there are rational preferences that the Bayesian framework cannot, in principle, capture. This addresses a more general concern as to whether Bayesianism captures the intuitive ideal of 'rationality'. Here I'm claiming that, at least, the MMEU rule is no counter-example. The next post will contain my true rejection of the MMEU rule in particular.
comment by VAuroch · 2014-08-05T02:18:02.258Z
It's fairly common in programming, particularly, to care not just about the average case behavior, but the worst case as well. Taken to an extreme, this looks a lot like Caul, but treated as a partial but not overwhelming factor, it seems reasonable and proper.
For example, imagine some algorithm which will be used very frequently, and for which the distribution of inputs is uncertain. The best average-case response time achievable is 125 ms, but this algorithm has high variance such that most of the time it will respond in 120 ms, but a very small proportion of the time it will take 20 full seconds. Another algorithm has average response time 150 ms, and will never take longer than 200 ms. Generally, the second algorithm is a better choice; average-case performance is important, but sacrificing some performance to reduce the variance is worthwhile.
Taking this example to extremes seems to produce Caul-like decisionmaking. I agree that Caul appears insane, but I don't see either how this example is wrong or where the logic breaks down as it is taken to extremes.
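To make the trade-off concrete (the stall probability here is an assumption, chosen so the first algorithm's mean lands near the stated 125 ms):

```python
# The two algorithms above, compared on average-case and worst-case latency.
P_STALL = 0.00025  # assumed probability of the rare 20-second stall

mean_a  = (1 - P_STALL) * 120 + P_STALL * 20_000   # ~125 ms
worst_a = 20_000
mean_b, worst_b = 150, 200

print(f"A: mean {mean_a:.0f} ms, worst case {worst_a} ms")
print(f"B: mean {mean_b} ms, worst case {worst_b} ms")
# A wins on the average case; B wins enormously on the worst case.
```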
↑ comment by CalmCanary · 2014-08-05T05:14:58.997Z
The most obvious explanation for this is that utility is not a linear function of response time: the algorithm taking 20 s is very, very bad, and losing 25 ms on average is worthwhile to ensure that this never happens. Consider that if the algorithm is just doing something immediately profitable with no interactions with anything else (e.g. producing some cryptocurrency), the first algorithm is clearly better (assuming you are just trying to maximize expected profit), since on the rare occasions when it takes 20 s, you just have to wait almost 200 times as long for your unit of profit. This suggests that the only reason the second algorithm is typically preferred is that most programs do have to interact with other things, and an extremely long response time will break everything. I don't think any more convoluted decision-theoretic reasoning is necessary to justify this.
↑ comment by VAuroch · 2014-08-05T19:33:12.106Z
True, but even in cases where it won't break everything, this is still valued. Consistency is a virtue even if inconsistency won't break anything. And it clearly breaks down in the extreme case where it becomes Caul, but I can't come up with a compelling reason why it should break down.
My best guess: The factor that is being valued here is the variance. Low variance increases utility generally, because predictability is valuable in enabling better expected utility calculations for other connected decisions. There is no hard limit on how much this can matter relative to the average case, but as the average cases diverge, so that the low-variance version becomes worse than a greater and greater fraction of the high-variance outcomes, the agent remains technically rational while its implicit prior approaches an insane prior such as that of Caul or Perry.
I think this would imply that for an unbounded perfect Bayesian, there is no value to low variance outside of nonlinear utility dependence, but that for bounded reasoners, there is some cutoff where making concessions to predictability despite loss of average-case utility is useful on balance.
comment by Optimization Process · 2021-11-27T20:40:13.027Z
My attempted condensation, in case it helps future generations (or in case somebody wants to set me straight): here's my understanding of the "pay $0.50 to win $1.10 if you correctly guess the next flip of a coin that's weighted either 40% or 60% Heads" game:
You, a traditional Bayesian, say, "My priors are 50/50 on which bias the coin has. So, I'm playing this single-player 'game':"

[decision-tree diagram]

"I see that my highest-EV option is to play, betting on either H or T, doesn't matter."
Perry says, "I'm playing this zero-sum multi-player game, where my 'Knightian uncertainty' represents a layer in the decision tree where the Devil makes a decision:"

[decision-tree diagram]

"By minimax, I see that my highest-EV option is to not play."
...and the difference between Perry and Caul seems purely philosophical: I think they always make the same decisions.
comment by halcyon · 2014-08-08T15:59:53.625Z
No matter how obvious your reasoning may appear to you, there is someone out there stupid enough to have thought the contrary. Believe it or not, this series goes a long way towards dissipating my pessimism about the world. My subconscious really believed it was a fact that, on average, nature tends to destroy our mortal ambitions, and that's why it is dangerous to Tempt Fate.
I have always known this is a theological outlook, but I tried to deal with it by avoiding thoughts like that rather than marshaling positive arguments against it. After reading this, I consciously understand, to a significantly greater degree, why it doesn't actually make sense to generalize those thought processes for use in reasoning. I like this much better than just intuitively labeling them as low status. Thank you.
comment by AlexMennen · 2014-08-05T01:26:48.731Z
Cautious Caul is interesting because I actually do expect that my utility is nonlinear with respect to measure. For instance, if I got to choose between either the entire universe getting destroyed with probability 1/2 or half of Everett branches getting destroyed with probability 1, I would much prefer the second one. That said, in practice, I don't expect to be able to make any use of the distinction between measure in a multiverse and probability.