Comment by maxflowve2 on Comment on "Endogenous Epistemic Factionalization" · 2020-05-29T02:29:27.156Z · LW · GW

Thanks for the post! I think the ideas here are pretty cool.

However, there are some problems with the experiment.

1. The update rule is not Bayesian, in the sense that if you permute the order of the observations from the experiments, each agent arrives at a different final belief (this is easy to verify experimentally); that shouldn't happen with a Bayesian update. I didn't read the original paper, but maybe this is what they meant by the prior of the observations: you probably need a conjugate prior on the belief, which in this case is a Beta distribution (see the first sketch after this list), but then I'm not sure what the proper way to model the mistrust factor is.

2. The mistrust factor is allowed to be exactly 1, which means the agent ignores evidence from that reporter completely. This may or may not be appropriate; I can see both sides of the argument. But if we constrain the mistrust factor to be at most 1 - eps, then with enough observations everyone will eventually converge to the correct beliefs (see the second sketch below).
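
To illustrate point 1, here's a minimal sketch of the conjugate-prior alternative I have in mind (my own illustration, not the post's update rule): with a Beta posterior over an action's success probability, the update only increments success/failure counts, so any permutation of the observations gives exactly the same posterior.

```python
import random

# Beta(alpha, beta) posterior over an action's success probability.
# Each Bernoulli observation just increments one count, so the final
# posterior depends only on the totals, not on the order of observations.
def beta_update(alpha, beta, observations):
    for success in observations:
        if success:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

random.seed(0)
obs = [random.random() < 0.55 for _ in range(1000)]  # action succeeds ~55% of the time

shuffled = obs[:]
random.shuffle(shuffled)

print(beta_update(1, 1, obs))
print(beta_update(1, 1, shuffled))  # identical posterior: order doesn't matter
```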
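
And for point 2, a rough sketch of why the cap matters. This is not the post's exact rule; I'm assuming a simple log-odds update in which each report's evidence is scaled by a trust weight, clamped to at least eps so no reporter is ever ignored completely. With that floor, the (correct) evidence keeps accumulating and every agent eventually converges:

```python
import math
import random

EPS = 0.05     # trust floor: mistrust is capped at 1 - EPS
P_B = 0.55     # true success rate of action B (action A is 0.5, so B really is better)
LLR_SUCCESS = math.log(P_B / 0.5)        # evidence from a successful B trial
LLR_FAILURE = math.log((1 - P_B) / 0.5)  # evidence from a failed B trial

def belief(log_odds):
    """Numerically safe logistic: probability that 'B is better'."""
    if log_odds >= 0:
        return 1.0 / (1.0 + math.exp(-log_odds))
    z = math.exp(log_odds)
    return z / (1.0 + z)

def trust(my_belief, their_belief, mistrust=1.5):
    """Down-weight reporters whose beliefs differ from mine, but never to zero.
    Without the EPS floor, any reporter far enough away would be ignored
    completely and a faction could stay stuck forever."""
    distance = abs(my_belief - their_belief)
    return max(EPS, 1.0 - mistrust * distance)

random.seed(0)
log_odds = [random.uniform(-2, 2) for _ in range(20)]  # mixed starting beliefs

for _ in range(3000):
    # Agents who currently favour B try it once and report the outcome.
    reports = [(lo, LLR_SUCCESS if random.random() < P_B else LLR_FAILURE)
               for lo in log_odds if belief(lo) > 0.5]
    # Everyone updates on every report, scaled by how much they trust the reporter.
    new_log_odds = []
    for lo in log_odds:
        update = sum(trust(belief(lo), belief(r_lo)) * llr for r_lo, llr in reports)
        new_log_odds.append(lo + update)
    log_odds = new_log_odds

print([round(belief(lo), 3) for lo in log_odds])  # everyone ends up near 1.0
```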

Note that another way to correct your own mistaken belief, even if you distrust everyone, is to do the experiments yourself (you have to trust yourself, at least?). It's okay not to directly flip your belief (i.e., "action A is better") based on evidence from people you don't like, but it still makes sense to let that evidence lower your degree of certainty (i.e., "I'm 60% certain that action A is better"). Decision making should take this degree of uncertainty into account. If all the agents who currently believe action A is better gave action B a try once in a while, the whole community would probably converge a lot faster too (see the sketch below).
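
To make that last point concrete, here's a minimal sketch of one standard way to do it, Thompson sampling from Beta posteriors (again my own illustration, not something from the post): the agent acts on samples from its posterior rather than on a point estimate, so it keeps trying action B occasionally while it's still uncertain, and its own experiments correct a mistaken belief.

```python
import random

# Thompson sampling: keep a Beta(alpha, beta) posterior over each action's
# success rate, sample one plausible rate per action, and play the action
# whose sample is higher.  Even while the agent believes A is better on
# average, it still tries B in proportion to its remaining uncertainty.

TRUE_P = {"A": 0.50, "B": 0.55}          # B really is the better action
posterior = {"A": [1, 1], "B": [1, 1]}   # Beta(alpha, beta) parameters per action

random.seed(0)
pulls = {"A": 0, "B": 0}
for _ in range(10_000):
    # Draw one plausible success rate per action from its current posterior.
    samples = {a: random.betavariate(alpha, beta) for a, (alpha, beta) in posterior.items()}
    action = max(samples, key=samples.get)
    pulls[action] += 1
    if random.random() < TRUE_P[action]:
        posterior[action][0] += 1        # success -> alpha += 1
    else:
        posterior[action][1] += 1        # failure -> beta += 1

means = {a: alpha / (alpha + beta) for a, (alpha, beta) in posterior.items()}
print(pulls)   # most pulls end up on B, but A still gets explored
print(means)   # posterior mean for B should end up close to its true rate of 0.55
```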