Comments
Yes, although with some subtlety.
Alice is just an expert on rain, not necessarily on the quality of her own epistemic state. (One easier example: suppose your initial credence in rain is .5, and Alice's is either .6 or .4. Conditional on it being .6, you become certain it will rain; conditional on it being .4, you become certain it won't. You'd obviously bet using her credences rather than your own, but you also take her to be massively underconfident.)
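Here's a minimal sketch of that example in code, in case it helps (the joint prior below is an assumption of mine, chosen only to match the credences in the parenthetical):

```python
# Worlds are (rain, Alice's credence) pairs, with your prior over them.
# This joint prior is an illustrative assumption consistent with the example.
prior = {
    (True,  0.6): 0.5,   # if Alice says .6, it rains for sure
    (False, 0.4): 0.5,   # if Alice says .4, it stays dry for sure
}

def p(event):
    """Your probability of an event (a predicate on worlds)."""
    return sum(pr for world, pr in prior.items() if event(world))

def p_given(event, evidence):
    """Your conditional probability of `event` given `evidence`."""
    return p(lambda w: event(w) and evidence(w)) / p(evidence)

rain = lambda w: w[0]

print(p(rain))                               # 0.5 -- your prior credence in rain
print(p_given(rain, lambda w: w[1] == 0.6))  # 1.0 -- you jump past Alice's .6
print(p_given(rain, lambda w: w[1] == 0.4))  # 0.0 -- and past her .4 the other way
# You'd bet using Alice's announcement rather than your prior .5, yet
# conditional on each announcement you treat her as underconfident.
```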
Now, the slight wrinkle here is that the language of calibration we used also makes this seem more "objective" or long-run frequentist than we really intend. All that really matters is your own subjective reaction to Alice's credences, so whether she's actually calibrated or not doesn't ultimately determine whether the conditions on local trust can be met.
There are six total worlds, and so 64 events in all. All we get are Alice's credences in rain (given by an inequality), so the only non-trivial propositions we might learn are the handful those inequalities pick out. Local trust only constrains your reaction to these propositions directly, so it won't require deference on the other 58 events. (Well, 56.)
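If it helps to see the counting spelled out, here's a rough sketch under one assumed world structure (rain or not, crossed with three possible credence levels for Alice; the original setup isn't fully specified above, so treat the details as illustrative):

```python
# Counting events for a six-world example: rain / no rain crossed with
# three assumed possible values for Alice's credence.
from itertools import chain, combinations

alice_levels = [0.2, 0.5, 0.8]   # assumed possible credences for Alice
worlds = [(rain, a) for rain in (True, False) for a in alice_levels]

def all_events(ws):
    """Every subset of the world set."""
    return chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))

print(sum(1 for _ in all_events(worlds)))   # 2**6 = 64 events in total

# The propositions you can actually learn are inequality-shaped ones,
# e.g. "Alice's credence in rain is at least t".
learnable = {frozenset(w for w in worlds if w[1] >= t) for t in alice_levels}
print(len(learnable))  # a handful; even adding complements and trivial events,
                       # most of the 64 are left unconstrained by local trust
```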
I don't think there really needs to be anything metaphysically like an expert, or a proposition that someone is an expert. It's really about capturing the relationship between a principal's credence and some unknown probability function. Certain possible relationships between the two are interesting, and total trust seems to carve things at a particular joint: thinking the unknown function is more accurate, being willing to outsource decision-making, and deferring in an easy-to-capture way that's weaker than reflection.
Interesting post! As a technical matter, I think the notion you want is not reflection (or endorsement) but some version of Total Trust, where (leaving off some nuance) Agent 1 totally trusts Agent 2 if E_1[X | E_2[X] ≥ t] ≥ t for all random variables X and thresholds t. In general, that's going to be equivalent to Alice being willing to outsource all decision-making to Bob if she's certain Bob has the same basic preferences she does. (It's also equivalent to expecting Bob to be better on all absolutely continuous strictly proper scoring rules, and a few other things.)
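For concreteness, here's a toy spot-check of that condition (the world set and Agent 2's credence functions are illustrative assumptions of mine, and it only samples finitely many X and t rather than quantifying over all of them):

```python
# Spot-check the Total Trust condition E1[X | E2[X] >= t] >= t on a toy model.
import random

worlds = ["w1", "w2", "w3", "w4"]

# Agent 1's prior over worlds (assumed uniform for illustration).
p1 = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}

# Agent 2's credences depend on the world: here Agent 2 just learns which
# half of the world set is actual and conditions Agent 1's prior on it.
p2 = {
    "w1": {"w1": 0.5, "w2": 0.5, "w3": 0.0, "w4": 0.0},
    "w2": {"w1": 0.5, "w2": 0.5, "w3": 0.0, "w4": 0.0},
    "w3": {"w1": 0.0, "w2": 0.0, "w3": 0.5, "w4": 0.5},
    "w4": {"w1": 0.0, "w2": 0.0, "w3": 0.5, "w4": 0.5},
}

def expectation(prob, X):
    return sum(prob[w] * X[w] for w in prob)

def cond_expectation(prob, X, event):
    mass = sum(prob[w] for w in event)
    if mass == 0:
        return None
    return sum(prob[w] * X[w] for w in event) / mass

def totally_trusts(p1, p2, n_samples=2000, seed=0):
    """Sample random variables X and thresholds t; look for violations."""
    rng = random.Random(seed)
    for _ in range(n_samples):
        X = {w: rng.uniform(0, 1) for w in worlds}
        t = rng.uniform(0, 1)
        event = [w for w in worlds if expectation(p2[w], X) >= t]
        ce = cond_expectation(p1, X, event)
        if ce is not None and ce < t - 1e-9:
            return False
    return True

print(totally_trusts(p1, p2))  # True for this toy setup
```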
I think the basic approach to commitment for the open-minded agent is right. Roughly, you don't actually get to commit your future-self to things. Instead, you just do what you (in expectation) would have committed yourself to given some reconstructed prior.
Just as a literature pointer: If I recall correctly, Chris Meacham's approach in "Binding and Its Consequences" is ultimately to estimate your initial credence function and perform the action from the plan with the highest EU according to that function. He doesn't talk about awareness growth, but open-mindedness seems to fit nicely within his framework (or at least the framework I recall him having).
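A minimal sketch of that decision rule as I've described it, with purely illustrative states, plans, and utilities:

```python
# Pick the plan with the highest expected utility by the lights of an
# estimated initial credence function. All numbers here are illustrative.

# Estimated initial credences over states of the world (an assumption).
estimated_prior = {"state_a": 0.7, "state_b": 0.3}

# Each plan assigns a utility to every state.
plans = {
    "plan_1": {"state_a": 10, "state_b": -5},
    "plan_2": {"state_a": 4,  "state_b": 4},
}

def expected_utility(plan_utilities, credences):
    return sum(credences[s] * u for s, u in plan_utilities.items())

best_plan = max(plans, key=lambda name: expected_utility(plans[name], estimated_prior))
print(best_plan)  # plan_1: EU 5.5 vs plan_2's EU 4.0
```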