If there is nothing wrong with having a state variable, then sure, I can give a rule for initialising it, and call it "objective". It is "objective" in that it looks like the sort of thing that Bayesians call "objective" priors.
E.g. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy. What if instead there had been one draw (with replacement) from the urn, and it had been green? You can't apply max entropy now. That's ok: apply max entropy "retroactively" and run the usual update process to get your initial probabilities.
So we could normally start the state variable at the "natural value" (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case). But if there is information to consider then we set it retroactively and run the decision method forward to get its starting value.
This has a similar claim to objectivity as the Bayesian process, so I still think the point of contention has to be in using stateful behaviour to resolve ambiguity.
No, (un)fortunately it is not so.
I say this has nothing to do with ambiguity aversion, because we can replace (1/2, 1/2+-1/4, 1/10) with all sorts of things which don't involve uncertainty. We can make anyone "leave money on the table". In my previous message, using ($100, a rock, $10), I "proved" that a rock ought to be worth at least $90.
If this is still unclear, then I offer your example back to you with one minor change: the trading incentive is still 1/10, and one agent still has 1/2+-1/4, but instead the other agent has 1/4. The Bayesian agent holding 1/2+-1/4 thinks it's worth more than 1/4 plus 1/10, so it refuses to trade. Whereas the ambiguity averse agents are under no such illusion.
So, the boot's on the other foot: we trade, and you don't. If your example was correct, then mine would be too. But presumably you don't agree that you are "leaving money on the table".
I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one's knowledge about the possible outcomes.
To quote the article you linked: "Jaynes certainly believed very firmly that probability was in the mind ... there was only one correct prior distribution to use, given your state of partial information at the start of the problem."
I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.
At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).
Aren't you violating the axiom of independence but not the axiom of transitivity?
My decisions violate rule 2 but not rule 1. Unambiguous interval comparison violates rule 1 and not rule 2. My decisions are not totally determined by unambiguous interval comparisons.
Perhaps an example: there is an urn with 29 red balls, 2 orange balls, and 60 balls that are either green or blue. The choice between a bet on red and on green is ambiguous. The choice between a bet on (red or orange) and on green is ambiguous. But the choice between a bet on (red or orange) and on red is perfectly clear. Ambiguity is intransitive.
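(To put illustrative numbers on that, using the (1/2 +- 1/6) prior interval on the green share from footnote 7 -- one possible assignment, not the only one: P(red) = 29/91 and P(red or orange) = 31/91, both exact, while P(green) = (1/2 +- 1/6)(60/91), i.e. somewhere between 20/91 and 40/91. Both exact values fall inside green's interval, so both comparisons with green are ambiguous; but 31/91 versus 29/91 is clear-cut.)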
Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a "state variable". In unambiguous situations the choice is "stateless".
I'm not really sure what a lot of this means
Sorry about that. Maybe I've been clearer this time around?
Right, except this doesn't seem to have anything to do with ambiguity aversion.
Imagine that one agent owns $100 and the other owns a rock. A government agency wishes to promote trade, and so will offer $10 to any agents that do trade (a one-off gift). If the two agents believe that a rock is worth more than $90, they will trade; if they don't, they won't, etc etc
But it still remains that in many circumstances (such as single draws in this setup), there exists information that a Bayesian will find useless and an ambiguity-averter will find valuable. If agents have the opportunity to sell this information, the Bayesian will get a free bonus.
How does this work, then? Can you justify that the bonus is free without circularity?
From a more financial perspective, the ambiguity-averter gives up the opportunity to be a market-maker: a Bayesian can quote a price and be willing to either buy or sell at that price (plus a small fee), whereas the ambiguity-averter's required spread is pushed up by the ambiguity (so all other agents will shop with the Bayesian).
Sure. There may be circularity concerns here as well though. Also, if one expects there to be a market for something, that should be accounted for. In the extreme case, I have no inherent use for cash; my utility consists entirely in the expected market.
Also, the ambiguity-averter has to keep track of more connected trades than a Bayesian does. Yes, for shoes, whether other deals are offered becomes relevant; but trades that are truly independent of each other (in utility terms) can be treated so by a Bayesian but not by an ambiguity-averter.
I also gave the example of risk-aversion though. If trades pay in cash, risk-averse Bayesians can't totally separate them either. But generally I won't dispute that the ideal use of this method is more complex than that of the ideal Bayesian reasoner.
Would a single ball that is either green or blue work?
That still seems like a structureless event. No abstract example comes to mind, but there must be concrete cases where Bayesians disagree wildly about the prior probability of an event (95%). Some of these cases should be candidates for very high (but not complete) ambiguity.
I think that a bet that is about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.
I think you're really saying two things: the correct decision is a function of (present, relevant) probability, and probability is in the mind. I'd say the former is your key proposition. It would be sufficient to rule out that an agent's internal variables, like the virtual interval, could have any effect. I'd say the metaphysical status of probability is a red herring (but I would also accept a 50:50 green-blue herring).
Of course even for Bayesians there are equiprobable options, so decisions can't be entirely a function of probability. More precisely, your key proposition is that probabilistic indecision has to be transitive. Ambiguity would be an example of intransitive indecision.
What if I told you that the balls were either all green or all blue?
Hmm. Well, with the interval prior I had in mind (footnote 7), this would result in very high (but not complete) ambiguity. My guess is that's a limitation of two dimensions -- it'll handle updating on draws from the urn but not "internals" like that. But I'm guessing. (1/2 +- 1/6) seems like a reasonable prior interval for a structureless event.
So in the standard Ellsberg paradox, you wouldn't act non-Bayesianly if you were told "The reason I'm asking you to choose between red and green rather than red and blue is because of a coin flip."
If I take the statement at face value, sure.
but you'd still prefer red if all three options were allowed?
Yes, but again I could flip a coin to decide between green and blue then.
This seems to be going against the whole idea of probability being about mental states;
Well, okay. I don't think this method has any metaphysical consequences, so I should be able to adopt your stance on probability. I'd say (for the sake of argument) that the probability intervals are still about the mental states that I think you mean. However these mental states still leave the correct course of action underdetermined, and the virtual interval represents one degree of freedom. There is no rule for selecting the prior virtual interval. 0 is the obvious value, but any initial value is still dynamically consistent.
Suppose in the Ellsberg paradox that the proportion of blue and green balls was determined, initially, by a coin flip (or series of coin flips). In this view, there is no ambiguity at all, just classical probabilities
Correct.
Where do you draw the line
1) I have no reason to think A is more likely than B and I have no reason to think B is more likely than A
2) I have good reason to think A is as likely as B.
These are different of course. I argue the difference matters.
The boots and the mother examples can all be dealt with using standard Bayesian techniques
Correct. See last paragraph of the post.
You would pay to remove ambiguity. And ambiguity removal doesn't increase expected utility, so Bayesian agents would outperform you in situations where some agents had ambiguity-reducing knowledge.
If you mean something like: red has probability 1/3, and green has probability 1/3 "on average", then I dispute "on average" -- that is circular.
The advantage of a money pump or "Dutch book" argument is that you don't need such assumptions to show that the behaviour in question is suboptimal. (Un)fortunately there is a gap between Bayesian reasoning and what money pump arguments can justify.
(Incidentally, if you determine the contents of the urn by flipping a coin for each of the 60 balls to determine whether it is green or blue, then this matters to the Bayesian too -- this gives you the binomial prior, whereas I think most Bayesians would want to use the uniform prior by default. Doesn't affect the first draw, but it would affect multiple draws.)
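If it helps, here's a quick sketch of that difference (my own throwaway code, with made-up names): put either a uniform prior or a coin-flip (binomial) prior on the number of green balls, and compute the predictive probability that the next draw (with replacement) is green.

```python
from math import comb

N = 60
uniform  = [1 / (N + 1)] * (N + 1)                      # P(G = g) = 1/61 for each count
binomial = [comb(N, g) * 0.5**N for g in range(N + 1)]  # one coin flip per ball

def predictive_green(prior, greens_seen, draws_seen):
    """P(next draw is green | draws so far), drawing with replacement."""
    blues_seen = draws_seen - greens_seen
    post = [p * (g / N)**greens_seen * (1 - g / N)**blues_seen
            for g, p in enumerate(prior)]
    return sum(p * (g / N) for g, p in enumerate(post)) / sum(post)

print(predictive_green(uniform, 0, 0), predictive_green(binomial, 0, 0))  # 0.5 and 0.5
print(predictive_green(uniform, 1, 1), predictive_green(binomial, 1, 1))  # ~0.672 vs ~0.508
```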
If you mean repeated draws from the same urn, then they'd all have the same orientation. If you mean draws from different unrelated urns, then you'd need to add dimensions. It wouldn't converge the way I think you're suggesting.
Here's an alternate interpretation of this method:
If two events have probability intervals that don't overlap, or they overlap but they have the same orientation and neither contains the other, then I'll say that one event is unambiguously more likely than the other. If two events have the exact same probability intervals (including orientation), then I'll say that they are equally likely. Otherwise they are incomparable.
Under this interpretation, I claim that I do obey rule 2 (see prev post): if A is unambiguously more likely than B, then (A but not B) is unambiguously more likely than (B but not A), and conversely. I still obey rule 3: (A or B) is either unambiguously more likely than A, or they are equally likely. I also claim I still obey rule 4. Finally, I claim "unambiguously more likely" is transitive, but it is not total: there are incomparable events. So I break that part of rule 1.
Passing to utility, I'll also have "unambiguously better", "equally good", and "incomparable".
Of course if an agent is forced to choose between incomparable options, it will choose, but that doesn't mean it considers one of the options "better" in a straightforward way,
Exactly. But there's a major catch: unlike with equal choices, an agent cannot choose arbitrarily between incomparable choices. This is because incomparability is intransitive. If the agent doesn't resolve ambiguities coherently, it can get money pumped. For instance, an 18U bet on green and a 15U bet on red are incomparable. Say it picks red. A 15U bet on green and a 15U bet on red are also incomparable. Say it picks green. But then, an 18U bet on green is unambiguously better than a 15U bet on green.
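A rough sketch of that cycle in code, if it helps -- my own code, not the exact rules of the post, valuing each bet by an interval of expected utility and taking P(red) = 1/3 exactly and P(green) = 1/3 +- 1/9 as elsewhere in this thread:

```python
def value_interval(payoff, p_low, p_high):
    """Interval of expected utilities for a bet paying `payoff` on the event."""
    return (payoff * p_low, payoff * p_high)

def contains(a, b):
    """True if interval a strictly contains interval b."""
    return a[0] <= b[0] and a[1] >= b[1] and a != b

def unambiguously_better(a, b, same_orientation):
    """The comparison rule above: disjoint intervals, or same-orientation
    intervals where a is at least as high at both ends and neither contains the other."""
    if a[0] >= b[1] and a != b:     # a lies entirely at or above b
        return True
    if same_orientation and not contains(a, b) and not contains(b, a):
        return a[0] >= b[0] and a[1] >= b[1] and a != b
    return False

green_18 = value_interval(18, 2/9, 4/9)   # (4.0, 8.0)
green_15 = value_interval(15, 2/9, 4/9)   # (~3.33, ~6.67)
red_15   = value_interval(15, 1/3, 1/3)   # (5.0, 5.0)

# Neither of these pairs is comparable in either direction:
print(unambiguously_better(green_18, red_15, False),
      unambiguously_better(red_15, green_18, False))   # False False
print(unambiguously_better(green_15, red_15, False),
      unambiguously_better(red_15, green_15, False))   # False False
# ... but one of them is strictly dominated:
print(unambiguously_better(green_18, green_15, True))  # True
# So resolving the first pair as "red" and the second as "green" walks the agent
# from the 18U green bet down to the 15U green bet: a money pump.
```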
The rest of the post is then about one method to resolve incomparability coherently.
I personally think this interpretation is more natural. I also think it will be even less palatable to most LW readers.
I don't think you can just uncritically say "surely the world is thus and so".
But it was a conditional statement. If the universe is discrete and finite, then obviously there are no immortal agents either.
Basically I don't see that aspect of P6 as more problematic than the unbounded resource assumption. And when we question that assumption, we'll be questioning a lot more than P6.
No, this doesn't sound like the Allais paradox. The Allais paradox has all probabilities given. The Ellsberg paradox is the one with the "undetermined balls". Or maybe you have something else entirely in mind.
I do not run into the Allais paradox -- and in general, when all probabilities are given, I satisfy the expected utility hypothesis.
How do you choose the interval? I have not been able to see any method other than choosing something that sounds good
Heh. I'm the one being accused of huffing priors? :-)
Okay, granted, there are methods like maximum entropy for Bayesian priors that can be applied in some situations, and the Ellsberg urn is such a situation.
Yes, you are correct about the discontinuity in the derivative.
You mean, I will be offered a bet on green, but I may or may not be offered a bet on blue? Then that's not a Dutch book -- what if I'm not offered the bet on blue?
For example: suppose you think a pair of boots is worth $30. Someone offers you a left boot for $14.50. You probably won't find a right boot, so you refuse. The next day someone offers you a right boot for $14.50, but it's too late to go back and buy the left. So you refuse. Did you just leave $1 on the table?
I wouldn't take any of them individually, but I would take green and blue together. Why would you take the red bet in this case?
I wouldn't take any of them individually (except red), but I'd take all of them together. Why is that not allowed?
I don't understand what you mean in the first paragraph. I've given an exact procedure for my decisions.
What kind of discontinuities do you have in mind?
I guess you mean: you offer me a bet on green for $2.50 and a bet on blue for $2.50, and I'd refuse either. But I'd take both, which would be a bet on green-or-blue for $5. So no, no Dutch book here either.
Or do you have something else in mind?
If the bet pays $273 if I drew a red ball, I'd buy or sell that bet for $93. For green, I'd buy that bet for $60 and sell it for $120. For red-or-green, I would buy that for $153 and sell it for $213. Same for blue and red-or-blue. For green-or-blue, I'd buy or sell that for $180.
(Appendix A has an exact specification, and you may wish to (re-)read the boot dialogue.)
[ADDED: sorry, I missed "let's drop the asymmetry" .. then, if the bet pays $9 on red, buy or sell for $3; green, buy $2 sell $4; red-or-green, buy $5 sell $7; blue, red-or-blue same, green-or-blue, buy or sell $6. Assuming risk neutrality for $, etc etc no purchase necessary must be over 18 void in Quebec.]
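(In other words, the pricing rule behind both lists: for a bet paying X on an event with probability interval running from p1 to p2, buy at X x p1 and sell at X x p2. The $9 green bet, say, is 9 x (2/9) = $2 to buy and 9 x (4/9) = $4 to sell.)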
I replied to Manfred with the Ellsberg example having 31 instead of 30 red balls. Does that count as different? If so, do I lose utility?
Well, in terms of decisions, P(green) = 1/3 +- 1/9 means that I'd buy a bet on green for the price of a true randomised bet with probability 2/9, and sell for 4/9, with the caveats mentioned.
We might say that the price of a left boot is $15 +- $5 and the price of a right boot is $15 -+ $5.
Showing that it can't be pumped just means that it's consistent. It doesn't mean it's correct. Consistently wrong choices cost utility, and are not rational.
To be clear: you mean that my choices somehow cost utility, even if they're consistent?
I would greatly love an example that compares a plain Bayesian analysis with an interval analysis.
It's a good idea. But at the moment I think more basic questions are in dispute.
(This argument seems to suggest a "common-sense human" position between high ambiguity aversion and no ambiguity aversion, but most of us would find that untenable.)
Well then, P(green) = 1/3 +- 1/3 would be extreme ambiguity aversion (such as would match the adversary I think you are proposing), and P(green) = 1/3 exactly would be no ambiguity aversion, so something like P(green) = 1/3 +- 1/9 would be such a compromise, no? And why is that untenable?
To clarify: the adversary you have in mind, what powers does it have, exactly?
Generally speaking, an adversary would affect my behaviour, unless the loss of ambiguity aversion from the fact that all probabilities are known were exactly balanced by the gain in ambiguity aversion from the fact that said probabilities are under the control of a (limited) adversary.
(Which is similar to saying that finding out the true distribution from which the urn was drawn would indeed affect your behaviour, unless you happened to find that the distribution was the prior you had in mind anyway.)
Once they start paying for equivalent options, then they get money-pumped.
Okay. Suppose there is an urn with 31 red balls, and 60 balls that are either green or blue. I choose to bet on red over green, and green-or-blue over red-or-blue. These are no longer equivalent options, and this is definitely not consistent with the laws of probability. Agreed?
(My prior probability interval is P(red) = 31/91 exactly, P(green) = (1/2 +- 1/6)(60/91), P(blue) = (1/2 -+ 1/6)(60/91).)
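(Two quick checks, in case it helps. First, the inconsistency: any single assignment with p(green) + p(blue) = 60/91 that prefers red to green needs p(green) < 31/91, while preferring green-or-blue to red-or-blue needs 60/91 > 31/91 + p(blue), i.e. p(green) > 31/91 -- no single p does both. Second, the choices above do come out as stated if you compare bets by the pessimistic ends of their intervals (a simplification of the actual method): green is only guaranteed 20/91 < 31/91, while green-or-blue is a known 60/91, beating even the low end 51/91 of red-or-blue.)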
It sounds like you expected (and continue to expect!) to be able to money-pump me.
you would think it's excessive to trade (20U,0U) for just 1U.
What bet did you have in mind that was worth (20U, 0U)? One of the simplest examples, if P(green) = 1/3 +- 1/9, would be 70U if green, -20U if not green. Does it still seem excessive to be neutral to that bet, and to trade it for a certain 1U (with the caveats mentioned)?
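(Worked out: with P(green) between 2/9 and 4/9, that bet's expected utility is 70p - 20(1-p) = 90p - 20, which runs from 0 at p = 2/9 up to 20 at p = 4/9 -- hence "(20U, 0U)", and hence neutrality at the pessimistic end.)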
What if they were in the care of her future self who already flipped the coin? Why is this different?
This I don't understand. She is her future self isn't she?
Bonus scenario:
Oh boy!
There are two standard Ellsberg-paradox urns, each paired with a coin. You are asked to pick one, and you get a reward iff ((green and heads) or (blue and tails)). At first you are indifferent, as both are identical. However, before you make your selection, one of the coins is flipped. Are you still indifferent?
So there are two urns, one coin is going to be flipped. No matter what I'm offered a randomised bet on the second urn. If the coin comes up heads I'll be offered a bet on green on the first urn, if the coin comes up tails I'll be offered a bet on blue on the first urn. So looks like my options are:
A) choose urn 1 either way
B) choose urn 1 (i.e. green) if the coin comes up heads, choose urn 2 if the coin comes up tails
C) choose urn 2 if the coin comes up heads, choose urn 1 (i.e. blue) if the coin comes up tails
D) choose urn 2 either way
And to be pedantic: E) flip my own coin to randomise between options B and C.
I am indifferent between A, D, and E, which I prefer to B or C.
Generally, we seem to be really overanalysing the phrase "ought to flip a coin".
I'm not sure what you mean. If it's because the situation was too symmetrical, I think I addressed that.
For example, you could add or remove a couple of red balls. I still choose red over green, and green-or-blue over red-or-blue. I think the fact that it still can't lead to being dutch booked is going to be a surprise to many LW readers.
Well, it would push me away from ambiguity aversion, I would become indifferent between a bet on red and a bet on green, etc.
Put it another way: a frequentist could say to you: "Your Bayesian behaviour is a perfect frequentist model of a situation where:
You choose a bet
An urn is selected uniformly at random from the fictional population
An outcome occurs.
It seems totally unreasonable to apply it in the Ellsberg situation or similar ones. For instance, you would then not react if you were in fact told the distribution."
And actually, as it happens, this isn't too far from the sort of things you do hear in frequentist complaints about Bayesianism. You presumably reject this frequentist argument against you.
And I reject your Bayesian argument against me.
If money has logarithmic value to you, you are not risk neutral, the way I understand the term. How are you using the term?
For example, you would choose 1U with certainty over something like 10U ± 10U. You said that you would still make the ambiguity-averse choice if a few red balls were taken out, but what if almost all of them were removed?
If I had set P(green) = 1/3 +- 1/3, then yes. But in this case I'm not ambiguity averse to the extreme, like I mentioned. P(green) = 1/3 +- 1/9 was what I had, i.e. (1/2 +- 1/6)(2/3). The tie point would be 20 red balls, i.e. 1/4 exactly versus (1/2 +- 1/6)(3/4).
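(The tie-point arithmetic: with 20 red balls and 60 unknown, P(red) = 20/80 = 1/4 exactly, while P(green) = (1/2 +- 1/6)(60/80) = (1/2 +- 1/6)(3/4), whose lower end is also 1/4. With fewer than 20 red balls, the green bet wins even at its pessimistic end.)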
On a more abstract note, your stated reasons for your decision seem to be that you actually care about what might have happened for reasons other than the possibility of it actually happening (does this make sense and accurately describe your position?).
It makes sense, but I don't feel this really describes me. I'm not sure how to clarify. Maybe an analogy:
What Irina and Joey's mother wants is to not intend to favour either of her children.
Maybe. Though I put it to you that the mother wants nothing more than what is "best for her children". Even if we did agree with her about what is best for each child separately, we might still disagree with her about what is "best for her children".
Perhaps I just want the "best chance of winning".
(ADDED:) If it helps, I don't think the fact that it is she making the decision is the issue - she would wish the same thing to happen if her children were in someone else's care.
I see. My cunning reply is thus:
Suppose you were told that, rather than being from an unknown source, the urn was in fact selected uniformly at random from 61 urns. In the first urn, there are 30 red balls, and 60 green balls. In the second urn, there are 30 red balls, 1 blue ball, and 59 green balls, etc, and in the sixty-first urn, there are 30 red balls and 60 blue balls.
This seems like pretty significant information. The kind of information that should change your behavior.
Would it change your behavior?
It seems totally unreasonable to apply it in that situation or similar ones.
You mean:
My behaviour could be explained if I were actually Bayesian, and I believed X
But I have agreed that X is false
Therefore my behaviour is unreasonable.
(Where X is the existence of an opponent with certain properties.)
Am I fairly representing what you are saying?
For instance, you would then not react to the presence of an actual adversary.
Why's that then? If there was an adversary, I could apply game theory just like anyone else, no?
Also, I think that to formalize potential connections between different events better, you should replace intervals of probabilities with functions from a parameter space to probabilities.
You may be interested in the general case of variational preferences then. But you could also just go to a finite tuple, rather than a pair, like I briefly mentioned. That would cover a lot of cases.
But for the purposes of this post I think 2 is sufficient.
Yes, replacing the new one. I.e. given a choice between trading the bet on green for a new randomised bet, we prefer to keep the bet on green. And no, the virtual interval is not part of any bet, it is persistent.
Agreed, the structural component is not normative. But to me, it is the structural part that seems benign.
If we assume the agent lives forever, and there's always some uncertainty, then surely the world is thus and so. If the agent doesn't live forever, then we're into bounded rationality questions, and even transitivity is up in the air.
P6 is really both. Structurally, it forces there to be something like a coin that we can flip as many times as we want. But normatively, we can say that if the agent has blah blah blah preference, it shall be able to name a partition such that blah blah blah. See e.g. [rule 4]. This of course doesn't address why we think such a thing is normative, but that's another issue.
Your definition of total pre-order:
A total preorder % satisfies the following properties: For all x, y, and z, if x % y and y % z then x % z (transitivity). For all x and y, x % y or y % x (totality). (I substituted "%" for their symbol, since markdown doesn't translate their symbol.) Let "A % B" represent "I am indifferent between, or prefer, A to B".
Looks to me like it's equivalent to what I wrote for rule 1. In particular, you say:
To wit, I am indifferent between A and B, and between B and C, but I prefer A to C. This satisfies the total preorder, but violates Rule 1.
No, this violates total pre-order, as you've written it.
Since you are indifferent between A and B, and between B and C: A%B, B%A, B%C, C%B. By transitivity, A%C and C%A. Therefore, you are indifferent between A and C.
The "other" type of indifference, you have neither A%B nor B%A (I called this incomparability). But it violates totality.
I don't think Rule 1 is a requirement of rationality
Hope you'll forgive me if I set this aside. I want to grant absolutely every hypothesis to the Bayesian, except the specific thing I intend to challenge.
Hmm! I don't know if that's been tried. Speaking for myself, 31 red balls wouldn't reverse my preferences.
But you could also have said, "On the other hand, if people were willing to pay a premium to choose red over green and green-or-blue over red-or-blue..." I'm quite sure things along those lines have been tried.
Heh :-) I'm okay with people being more interested in the Ellsberg paradox than the Savage theorem. Section headers are there for skipping ahead. There's even colour :-)
I think it would be unfair to ask me to make the Savage theorem as readable as the Ellsberg paradox. For starters, the Ellsberg paradox can be described really quickly. The Savage theorem, even redux, can't. Second, just about everyone here agrees with the conclusion of the Savage theorem, and disagrees with the Ellsberg-paradoxical behaviour.
My goal was just to make it clearer than the previous post -- and this is not an insult against the previous author: he presented the full theorem, while I presented a redux version covering only the relevant part, as I explained in the boring rationality section before the boring representation theorem. I'd be happy if some people who did not understand the previous set of axioms understood the four rules here.
As for the rest, yes, consensus here so far (only a few hours in, of course, but still impressively unanimous) seems to be that it's a bias. Of course, in that case, it's a very famous bias, and it hasn't been covered on LW before. I can still claim to have accomplished something I think, no? And if it turns out it's not so irrational after all, well!
Meh. It should not really affect what I've said or what I intend to say later if you substitute "Violation of the rules of probability" or "of utility" for "paradox" (Ellsberg and Allais resp.) However paradox is what they're generally called. And it's shorter.
Thanks... Where do you see it? I can't see any. I tried logging in and out and all that, it doesn't seem to change anything (except the vote count is hidden when I logout?)
FWIW, agreed, "not given in the problem". My bad.
Very well done! I concede. Now that I see it, this is actually quite general.
My point wasn't just that I had a decision procedure, but an explanation for it. And it seems that, no matter what, I would have to explain
A) Why ((Green and Heads) or (Blue and Tails)) is not a known bet, equiprobable with Red, or
B) Why I change my mind about the urn after a coin flip.
Earlier, some others suggested non-causal/magical explanations. These are still intact. If the coin is subject to the Force, then (A), and if not, then (B). I rejected that sort of thing. I thought I had an intuitive non-magical explanation. But, it doesn't explain (B). So, FAIL.
How about:
Consider $6 iff ((Green and Heads) or (Blue and Tails)). This is a known bet (1/3) so worth $2. But if the coin is flipped first, and comes up Heads, it becomes $6 iff Green, and if it comes up tails, it becomes $6 iff Blue, in either case worth $1. And that's silly.
Is that the same as your objection?
Indifferent. This is a known bet.
Earlier I said $-6 iff Green is identical to $-6 + $6 iff (not Green), then I decomposed (not Green) into (Red or Blue).
Similarly, I say this example is identical to $-1 + $2 iff (Green and Heads) + $1 iff (not Green), then I decompose (not Green) into (Red or (Blue and Heads) or (Blue and Tails)).
$1 iff ((Green and Heads) or (Blue and Heads)) is a known bet. So is $1 iff ((Green and Heads) or (Blue and Tails)). There are no leftover unknowns.
I pay you $1 for the waiver, not $3, so I am down $0.
In state A, I have $6 iff Green, that is worth $1.
In state B, I have no bet, that is worth $0.
In state C, I have $-6 iff Green, that is worth $-3.
To go from A to B I would want $1. I will go from B to B for free. To go from B to A I would pay $1. State C does not occur in this example.
Ohh, I see. Well done! Yes, I lose.
If I had a do-over on my last answer, I would not agree that $-6 iff Green is worth $-1. It's $-3.
But, given that I can't seem to get it straight, I have to admit I haven't given LW readers much reason to believe that I do know what I'm talking about here, and at least one good reason to believe that I don't.
In case anyone's still humouring me, if an event has unknown probability, so does its negation; I prefer a bet on Red to a bet on Green, but I also prefer a bet against Red to a bet against Green. This is actually the same thing as combining two unknown probabilities to produce a known one: both Green and (not Green) are unknown, but (Green or not Green) is known to be 100%.
$-6 iff Green is actually identical to $-6 + $6 iff (not Green). (not Green) is identical to (Red or Blue), and Red is a known probability of 1/3. $6 iff Blue is as good as $6 iff Green, which, for N=2, is worth $1. $-6 iff Green is actually worth $-3, rather than $-1.
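(In one line: $-6 iff Green = -6 + [$6 iff Red] + [$6 iff Blue] = -6 + 6(1/3) + 1 = -3, taking the blue part at its N=2 value of $1.)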
That's right.
I take it what is strange is that I could be indifferent between A and B, but not indifferent between A+C and B+C.
For a simpler example let's add a fair coin (and again let N=2). I think $1 iff Green is as good as $1 iff (Heads and Red), but $1 iff (Green or Blue) is better than $1 iff ((Heads and Red) or Blue). (All payoffs are the same, so we can actually forget the utility function.) So again: A is as good as B, but A+C is better than B+C. Is this the same strangeness?
I'm not really clear on the first question. But since the second question asks how much something is worth, I take it the first question is asking about a utility function. Do I behave as if I were maximising expected utility, ie. obey the VNM postulates as far as known probabilities go? A yes answer then makes the second question go something like this: given a bet on red whose payoff has utility 1, and a bet on green whose payoff has utility N, what is the critical N where I am indifferent between the two?
For every N>1, there are decision procedures for which the answer to the first is yes, the answer to the second is N, and which displays the Ellsberg-paradoxical behaviour. Ellsberg himself had proposed one. I did have a thought on how one of these could be well illustrated in not too technical terms, and maybe it would be appropriate to post it here, but I'd have to get around to writing it up. In the meantime I can also illustrate interactively: 1) yes, 2) you can give me an N>1 and I'll go with it.
Good-Turing estimation, which was part of the Enigma project, should also go under the empirical heading.
I was looking a little bit into this claim that Poincaré used subjective priors to help acquit Dreyfus. In a word, FAIL.
Poincaré's use of subjective priors was not a betrayal of his own principles because he needed to win, as someone above put it. He was granting his opponent's own hypothesis in order to criticise him. Strange that this point was not clear to whoever was researching it, given that the granting of the hypothesis was prefaced with a strong protest.
The court intervention in question was a report on Bertillon's calculations, by Poincaré with Appel and Darboux, « Examen critique des divers systèmes ou études graphologiques auxquels a donné lieu le bordereau » (discussed and quoted [here]). It speaks for itself.
« Or cette probabilité a priori, dans des questions comme celle qui nous occupe, est uniquement formée d'éléments moraux qui échappent absolument au calcul, et si, comme nous ne pouvons rien calculer sans la connaître, tout calcul devient impossible. Aussi Auguste Comte a-t-il dit avec juste raison que l'application du calcul des probabilités aux sciences morales était le scandale des mathématiques. Vouloir éliminer les éléments moraux et y substituer des chiffres, cela est aussi dangereux que vain. En un mot, le calcul des probabilités n'est pas, comme on paraît le croire, une science merveilleuse qui dispense le savant d'avoir du bon sens. C'est pourquoi il faudrait s'abstenir absolument d'appliquer le calcul aux choses morales ; si nous le faisons ici, c'est que nous y sommes contraints ... S'il s'agissait d'un travail scientifique, nous nous arrêterions là ; nous jugerions inutile d'examiner les détails d'un système dont le principe même ne peut soutenir l'examen ; mais la Cour nous a confié une mission que nous devons accomplir jusqu'au bout ... Nous admettrons toujours, dans les calculs qui suivront, l'hypothèse la plus favorable au système de Bertillon. »
My translation: « Now this a priori probability, in questions such as the one before us, consists entirely of moral elements which absolutely escape calculation, and since we cannot calculate anything without knowing it, all calculations become impossible. Quite rightly did Auguste Comte also say that the application of probability calculations to the moral sciences was the scandal of mathematics. To want to eliminate moral elements and substitute numbers is as dangerous as it is vain. In a word, probability calculations are not, as seems to be thought, a marvellous science which dispenses with the need for the scientist to have good sense. This is why one must absolutely abstain from applying these calculations to moral objects; if we do so here, it's because we are forced to ... If it were a scientific work in question, we would stop there; we would find it useless to examine the details of a system whose principle itself does not stand up to examination; but the Court has entrusted us with a mission which we must accomplish to the uttermost ... We will always grant, in the following calculations, the most favourable hypothesis to Bertillon's system. »
Then it is shown that Bertillon nevertheless made other serious errors, even granting this hypothesis.
I find Poincaré not guilty of the charge of Bayesianism, and what's more, if Bertillon and Poincaré were relevant at all, they would be a counterexample: the Bayesian makes a right mess of things and the frequentist saves the world. I can sympathise with Person A above who gets the sudden urge to throw their laptop out the window.