A Bayesian Aggregation Paradox

post by Jsevillamol · 2021-11-22T10:39:59.935Z · LW · GW · 23 comments

Contents

  Why this is weird
  Examples in the wild
    Mennen’s ABC example
    Interpreting Bayesian Networks
  Consequences
  In summary
    Acknowledgements

In short: There is no objective way of summarizing a Bayesian update over an event with three outcomes $\{A, B, C\}$ as an update over two outcomes $\{A, \neg A\}$.
 

Suppose there is an event with possible outcomes $A$, $B$ and $C$.
We have prior beliefs about the outcomes, $P(A) : P(B) : P(C)$.
An expert reports a likelihood factor of $L_A : L_B : L_C$.
Our posterior beliefs about the outcomes are then $P(A)\, L_A : P(B)\, L_B : P(C)\, L_C$.

But suppose we only care about whether $A$ happens.
Our prior beliefs about $A$ vs $\neg A$ are $P(A) : P(B) + P(C)$.
Our posterior beliefs are $P(A)\, L_A : P(B)\, L_B + P(C)\, L_C$.
This implies that the likelihood factor of the expert regarding $A$ vs $\neg A$ is $L_A : \frac{P(B)\, L_B + P(C)\, L_C}{P(B) + P(C)}$.

This likelihood factor depends on the ratio of prior beliefs $P(B) : P(C)$.

Concretely, the lower factor in the update is the weighted mean of the evidence $L_B$ and $L_C$ according to the weights $P(B)$ and $P(C)$.

This has a relatively straightforward interpretation. The update is supposed to be the ratio of the likelihoods of the evidence $E$ the expert observed under each hypothesis. The upper factor in the update is $P(E \mid A)$. The lower factor is $P(E \mid \neg A) = \frac{P(E \mid B)\, P(B) + P(E \mid C)\, P(C)}{P(B) + P(C)}$.
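As a sanity check, here is a minimal Python sketch of this computation (the numbers and names are purely illustrative): two observers who agree on $P(A)$ but disagree on the ratio $P(B) : P(C)$ extract different two-outcome summaries from the same expert report.

```python
def implied_factor(prior, likelihood):
    """Collapse a 3-outcome update over (A, B, C) into an update over (A, not-A).

    Returns the pair (L_A, L_notA), where L_notA is the prior-weighted
    average of L_B and L_C.
    """
    pB, pC = prior['B'], prior['C']
    L_notA = (pB * likelihood['B'] + pC * likelihood['C']) / (pB + pC)
    return likelihood['A'], L_notA

likelihood = {'A': 1.0, 'B': 4.0, 'C': 0.25}  # expert's report: L_X = P(E | X)

# Same P(A), different P(B):P(C) ratios -> different two-outcome summaries.
print(implied_factor({'A': 0.5, 'B': 0.4, 'C': 0.1}, likelihood))  # (1.0, 3.25)
print(implied_factor({'A': 0.5, 'B': 0.1, 'C': 0.4}, likelihood))  # (1.0, 1.0)
```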
 

I found this very surprising - the summary of the expert report depends on my prior beliefs!

I claim that this phenomenon is unintuitive, and that being unaware of it can lead to errors.
 

Why this is weird

Bayes' rule describes how to update our prior beliefs using data.

In my mind, one very nice property of Bayes' rule is that it cleanly separates the process into a subjective part (eliciting your priors) and an ~objective part (computing the update).

For example, we may disagree in our prior beliefs about whether COVID-19 originated in a lab. But we cannot disagree on the direction and magnitude of the update caused by learning that it originated in one of the few cities in the world with a gain-of-function lab working on coronaviruses.

Because of this, researchers are encouraged to report their update factors together with their all-things-considered beliefs. This way, readers can reach their own conclusions by multiplying their priors with the reported update. And meta-studies can just take the product of the likelihoods of all studies to estimate the combined effect of the evidence.
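As a toy illustration (with made-up factors): if two independent studies report likelihood factors of $3:1$ and $2:1$ in favor of a hypothesis, a reader with prior odds of $1:4$ ends up with posterior odds of $\frac{1}{4} \times 3 \times 2 = \frac{3}{2}$, and a meta-analysis can report the combined factor of $6:1$ without knowing anyone's prior.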

In the above example, we lose this nice property - the update factor depends on the prior beliefs of the user. Researchers would not be able to objectively summarize their likelihoods over the possibilities that COVID-19 originated in a lab accident, zoonotically, or as a designed bioweapon as a single number for people who only care about whether it originated in a lab versus any other possibility.
 

Examples in the wild

I ran into this problem twice recently:

  1. When analyzing Mennen’s ABC example [LW · GW] of a case where averaging the logarithmic odds of experts seems to result in nonsense.
  2. In my own research on interpreting Bayesian Networks [LW · GW] as I was trying to come up with a way of decomposing a Bayesian update into a combination of several updates.

In both cases, being unaware of this phenomenon led me to a conceptual mistake.
 

Mennen’s ABC example

Mennen’s example involves three experts debating an event with three possible outcomes, $A$, $B$ and $C$.

Expert #1 assigns relative odds of .
Expert #2 assigns relative odds of .
Expert #3 assigns relative odds of .

The logodds-averaging pooled opinion of the experts is $1:1:1$, i.e. equal odds, which corresponds to a probability of $A$ equal to $1/3$.

But suppose we only care about whether $A$ happens.

Expert #1’s implicit odds are .
Expert #2’s implicit odds are .
Expert #3’s implicit odds are .

The pooled odds in this case are , which correspond to a probability of  equal to .

We get different results depending on whether we take the implicit odds after or before pooling expert opinion. What is going on?

Mennen claims that this is a strike against logarithmic pooling. The issue according to him is in the step where we take the opinion of the three experts and aggregate it using average logodds.

I think that this is related to the phenomenon I described at the beginning of the article. The problem is with the step where we take each expert's relative odds over $\{A, B, C\}$ and summarize them as odds over $\{A, \neg A\}$.

It’s no wonder that log-odds pooling gives inconsistent results when we aggregate outcomes. Bayesian updating is not well defined in that case!
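To make the order-dependence concrete, here is a small Python sketch with made-up odds (not the numbers from Mennen's example): averaging log odds gives a different $P(A)$ depending on whether $B$ and $C$ are merged before or after pooling.

```python
from math import prod

experts = [        # relative odds over (A, B, C), one tuple per expert
    (1.0, 8.0, 1.0),
    (1.0, 1.0, 8.0),
]

def geo_mean_pool(odds_list):
    """Log-odds (geometric-mean) pooling of relative-odds vectors."""
    n = len(odds_list)
    return tuple(prod(o[i] for o in odds_list) ** (1 / n)
                 for i in range(len(odds_list[0])))

def prob_A(odds):
    """Probability of the first outcome implied by a relative-odds vector."""
    return odds[0] / sum(odds)

# Pool over the three outcomes, then merge B and C:
print(prob_A(geo_mean_pool(experts)))              # ~0.15

# Merge B and C for each expert first, then pool:
merged = [(a, b + c) for a, b, c in experts]
print(prob_A(geo_mean_pool(merged)))               # 0.10
```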
 

Interpreting Bayesian Networks

I will not go into too much detail, because my theory of interpretability of Bayesian Networks [LW · GW] is rather involved. Suffice it to say that I was getting inconsistent results because of this issue.

In essence, I came up with a way of decomposing a Bayesian update into a series of independent steps, corresponding to different subgraphs of a Bayesian Network.

For example, I would decompose the update over a node with three outcomes $A$, $B$ and $C$ as the product of the baseline odds of the event and a number of updates.

In my system, I only cared about whether $A$ happened. So I naively summarized each update before aggregating them.

This was giving me very poor results - my resulting updates would be very off compared to traditional inference algorithms like message passing. 

It is no wonder this was giving me bad results - it is the wrong way of going about it! Our analysis at the beginning implies that the update should be the average of $L_B$ and $L_C$, instead of the sum.

After realizing the paradox, I changed my system to not summarize the odds of $\neg A$ until after aggregating all the updates.

Performance improved.
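To make the difference concrete, here is a minimal sketch (with made-up numbers, far simpler than the actual Bayesian-network setting). Even if each partial factor is collapsed to $A$ vs $\neg A$ using the correct prior-weighted average, collapsing before multiplying is not the same as multiplying the full factors and collapsing once at the end:

```python
prior = {'A': 0.2, 'B': 0.5, 'C': 0.3}
updates = [                       # two partial likelihood factors over (A, B, C)
    {'A': 1.0, 'B': 3.0, 'C': 0.5},
    {'A': 2.0, 'B': 0.5, 'C': 4.0},
]

def collapse(prior, likelihood):
    """Two-outcome factor (L_A, L_notA) implied by a 3-outcome likelihood."""
    pB, pC = prior['B'], prior['C']
    return likelihood['A'], (pB * likelihood['B'] + pC * likelihood['C']) / (pB + pC)

# Aggregate first: multiply the factors outcome-wise, collapse once at the end.
combined = {k: updates[0][k] * updates[1][k] for k in prior}
print(collapse(prior, combined))                      # (2.0, 1.6875)

# Collapse first: summarize each factor separately, then multiply the summaries.
first, second = (collapse(prior, u) for u in updates)
print((first[0] * second[0], first[1] * second[1]))   # (2.0, ~3.74)
```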
 

Consequences

I am quite confused about what to think about this.

It clearly has consequences, as illustrated by the examples in the previous section. But I am not sure what to recommend doing in response.

My most immediate takeaway is to be very careful when aggregating outcomes - there is a real risk of introducing an error along the way.
 

Beyond that, the aggregation paradox seems to imply that we need to work at the correct level of aggregation. We cannot naively deduce implied binary odds from the distribution of a multiple outcome event.

But what is the right level of aggregation?

When aggregating, the lower factor of the update is a weighted mean of the evidence likelihoods $L_B$ and $L_C$. This suggests that the problem disappears when we impose $L_B = L_C$ for any disaggregation of the joint event $\neg A$ into subevents $B$ and $C$.

But this condition is too strong. For example, we could base our disaggregation on the observed evidence: if the evidence $E$ can either be $E_1$ or $E_2$, we could disaggregate $\neg A$ into the cases where $E_1$ holds and the cases where $E_2$ holds. In that case, the condition cannot ever be satisfied, by definition.

We can say that this disaggregation is not a sensible one, and ought to be excluded for the purposes of the condition. But in that case we have just passed the buck to defining what counts as a sensible disaggregation.
 

Another approach is to assume that the prior relative likelihood of any aggregated outcomes is uniform, i.e. $P(B) = P(C)$. In that case, we have that $L_{\neg A} = \frac{L_B + L_C}{2}$.

But then we can no longer chain updates - after applying any likelihood where $L_B \neq L_C$, the resulting posterior has $P(B \mid E) \neq P(C \mid E)$, so it no longer meets this condition.

 

Pragmatically, it seems like the best we can do if we want to rescue objectivity is to resign ourselves to summarizing the updates assuming a uniform prior. That is, by averaging the evidence associated with each aggregated outcome.

This is not enough to correctly approximate Bayesian updating, as we can see in the example below: 
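For instance, with illustrative numbers: suppose $L_B = 9$, $L_C = 1$, and the true prior ratio is $P(B) : P(C) = 1 : 9$. The uniform-prior summary gives $L_{\neg A} \approx \frac{9 + 1}{2} = 5$, while the correct prior-weighted value is $\frac{1 \cdot 9 + 9 \cdot 1}{1 + 9} = 1.8$, so the summarized update overstates the evidence for $\neg A$ by almost a factor of 3.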

But I can't see how to do better in the absence of more information.

One key takeaway here is that beliefs and updates are summarized in different ways: aggregated beliefs are obtained by summing probabilities, while aggregated updates are obtained by averaging likelihoods.


 

In summary

I have explained one counterintuitive consequence of Bayesian updating on variables with more than two outcomes. This paradox implies that we should be careful when grouping together outcomes of a variable. And I have shown two situations where this unintuitive consequence is relevant.

This is a post meant to explore and start a discussion more than provide definite answers. Some things I’d be keen on discussing include:

  1. Is this a documented phenomenon? Where can I find more discussion?
  2. What does this imply for formulating forecasting questions? Will this result in problems when asking binary questions about events that are multifaceted?
  3. What is “the right level” of outcome aggregation for a given problem?
  4. Are there other examples where similar issues come up?

I’d be really interested in your thoughts - please leave a comment if you have any!


Acknowledgements

Thanks to rossry, Nuño Sempere, Eric Neyman, Ehud Reiter and ForgedInvariant for discussing this topic with me and helping me clarify some ideas.

Thanks to Alex Mennen for coming up with the example I referenced in the post.

23 comments

Comments sorted by top scores.

comment by rossry · 2021-11-22T13:58:39.883Z · LW(p) · GW(p)

The framing of this issue that makes the most sense to me is "$P(E \mid \neg A)$ is a function of $P(B) : P(C)$".

When I look at it this way, I disagree with the claim (in "Mennen's ABC example") that "[Bayesian updating] is not invariant when we aggregate outcomes" -- I think it's clearer to say the Bayesian updating is not well-defined when we aggregate outcomes.

Additionally, in "Interpreting Bayesian Networks", the framing seems to make it clearer that the problem is that you used $P(E \mid B) + P(E \mid C)$ for $P(E \mid \neg A)$ -- but they're not the same thing! In essence, you're taking the sum where you should be taking the average...

With this focus on (mis)calculating $P(E \mid \neg A)$, the issue seems to me more like "a common error in applying Bayesian updates", rather than a fundamental paradox in Bayesian updating itself. I agree with the takeaway "be careful when grouping together outcomes of a variable" -- because grouping exposes one to committing this error -- but I'm not sure I'm seeing the thing that makes you describe it as unintuitive?

Replies from: Jsevillamol
comment by Jsevillamol · 2021-11-22T14:40:39.982Z · LW(p) · GW(p)

I like this framing.

This seems to imply that summarizing beliefs and summarizing updates are two distinct operations.

For summarizing beliefs we can still resort to summing:

$$P(\neg A) = P(B) + P(C)$$

But for summarizing updates we need to use an average - which in the absence of prior information will be a simple average:

$$L_{\neg A} \approx \frac{L_B + L_C}{2}$$

Annoyingly, and as you point out, this is not a perfect summary - we are definitely losing information here, and subsequent updates will not be as exact as if we were working with the disaggregated odds.

I still find it quite disturbing that the update after summarizing depends on prior information - but I can't see how to do better than this, pragmatically speaking.

Replies from: rossry
comment by rossry · 2021-11-22T17:46:00.951Z · LW(p) · GW(p)

Right, I agree that for the update the aggregation $\frac{L_B + L_C}{2}$ is better than $L_B + L_C$ (but still lossy). And the thing that the prior affects is the weighting in the average -- so if $L_B = L_C$ then the $P$s don't matter! (which is a possible answer to your question of "how much aggregation/disaggregation can you do?")

But yeah, if $L_B$ is very different from $L_C$ then I don't think there's any way around it, because the effective $L_{\neg A}$ could be one or the other depending on what the $P$s are.

comment by AlexMennen · 2021-11-24T00:44:11.434Z · LW(p) · GW(p)

(Possibly a bit of a tangent) It occurred to me while reading this that perhaps average log odds could make sense in the context in which there is a uniform prior, and the probabilities provided by experts differ because the experts disagree on how to interpret evidence that brings them away from the uniform prior. This has some intuitive appeal:

1) Perhaps, when picking questions to ask forecasters, people have a tendency to pick questions for which they believe the probability that the answer is yes is approximately 50%, because that offers the most opportunity to update in response to the beliefs of the forecasters. If average log odds is an appropriate pooling method to use if you have a uniform prior, then this would explain its good empirical performance. I think I mentioned in our discussion on your EA forum post that if there is a tendency for more knowledgeable forecasters to give more extreme probabilities, then this would explain good performance by average log odds, which weights extreme predictions heavily. A tendency for the questions asked to have priors of near 50% according to the typical unknowledgeable person would explain why more knowledgeable forecasters would assign more extreme probabilities on average: it takes more expertise to justifiably bring their probabilities further from 50%.

2) It excuses the incoherent behavior of average log odds on my ABC example as well. If A, B, and C are mutually exclusive, then they can't all have 50% prior probability, so a pooling method that implicitly assumes that they do will not give coherent results.

Ultimately, though, I don't think this is actually true. Consider the example of forecasting a continuous variable x by soliciting probability density functions $p_1$ and $p_2$ from two experts, and pooling them to get the pdf proportional to $\sqrt{p_1(x)\, p_2(x)}$ (renormalized so it integrates to 1). You could also consider forecasting the variable $y = f(x)$ for some differentiable, strictly increasing function f. Then your experts give you pdfs $q_1$ and $q_2$ satisfying $q_i(y) = p_i(x)/f'(x)$, and you pool them to get the pdf proportional to $\sqrt{q_1(y)\, q_2(y)}$. I claim that, if what we're doing implicitly depends on a uniform prior in a sneaky way, that the first thing should be the appropriate thing to do if x has a uniform prior, and the second thing should be appropriate if y has a uniform prior. If f is nonlinear, then a uniform prior on x induces a non-uniform prior on y, and vice-versa, so we should get incompatible results from each way of doing this, as we were implicitly using different priors each time. But let's try it: $\sqrt{q_1(y)\, q_2(y)} = \sqrt{p_1(x)\, p_2(x)}/f'(x)$. Thus, given that both experts provided pdfs satisfying the formula $q_i(y) = p_i(x)/f'(x)$, making their probability distributions on x and y compatible with $y = f(x)$, our pooled pdf also satisfies that formula, and is also compatible with $y = f(x)$. That is, if we pool using beliefs about x, and then find the implied beliefs about y, we get the same thing as if we directly pooled using beliefs about y. Different implicit priors don't appear to be ruining anything.

I conclude that the incoherent results in my ABC example cannot be blamed on switching between the uniform prior on $\{A,B,C\}$ and the uniform prior on $\{A,\neg A\}$, and, instead, should be blamed entirely on the experts having different beliefs conditional on $\neg A$, which is taken into account in the calculation using $\{A,B,C\}$, but not in the calculation using $\{A,\neg A\}$.

Replies from: Jsevillamol
comment by Jsevillamol · 2021-11-24T13:24:16.029Z · LW(p) · GW(p)

average log odds could make sense in the context in which there is a uniform prior

 

This is something I have heard from other people too, and I still cannot make sense of it. Why would questions where uninformed forecasters produce uniform priors make logodds averaging work better?
 

A tendency for the questions asked to have priors of near 50% according to the typical unknowledgeable person would explain why more knowledgeable forecasters would assign more extreme probabilities on average: it takes more expertise to justifiably bring their probabilities further from 50%.

I don't understand your point. Why would forecasters care about what other people would do? They only want to maximize their own score.
 

If A, B, and C are mutually exclusive, then they can't all have 50% prior probability, so a pooling method that implicitly assumes that they do will not give coherent results.

This also doesn't make much sense to me, though it might be because I still don't understand the point about needing uniform priors for logodd pooling. 

 

Different implicit priors don't appear to be ruining anything.

Neat!

 

I conclude that the incoherent results in my ABC example cannot be blamed on switching between the uniform prior on $\{A,B,C\}$ and the uniform prior on $\{A,\neg A\}$, and, instead, should be blamed entirely on the experts having different beliefs conditional on $\neg A$, which is taken into account in the calculation using $\{A,B,C\}$, but not in the calculation using $\{A,\neg A\}$.

I agree with this.

Replies from: AlexMennen
comment by AlexMennen · 2021-11-24T16:28:32.311Z · LW(p) · GW(p)

Why would questions where uninformed forecasters produce uniform priors make logodds averaging work better?

Because it produces situations where more extreme probability estimates correlate with more expertise (assuming all forecasters are well-calibrated).

I don't understand your point. Why would forecasters care about what other people would do? They only want to maximize their own score.

They wouldn't. But if both would have started with priors around 50% before they acquired any of their expertise, and it's their expertise that updates them away from 50%, then more expertise is required to get more extreme odds. If the probability is a martingale that starts at 50%, and the time axis is taken to be expertise, then more extreme probabilities will on average be sampled from later in the martingale; i.e. with more expertise.

This also doesn't make much sense to me, though it might be because I still don't understand the point about needing uniform priors for logodd pooling.

If logodd pooling implicitly assumes a uniform prior, then logodd pooling on $A$ vs $\neg A$ assumes $A$ has prior probability 1/2, and logodd pooling on $A$ vs $B$ vs $C$ assumes $A$ has a prior of 1/3, which, if the implicit prior actually was important, could explain the different results.

comment by Unnamed · 2021-11-23T07:03:09.090Z · LW(p) · GW(p)

I think I've followed the basic argument here? Let me try a couple examples, first a toy problem and then a more realistic one.

Example 1: Dice. A person rolls some fair 20-sided dice and then tells you the highest number that appeared on any of the dice. They either rolled 1 die (and told you the number on it), or 5 dice (and told you the highest of the 5 numbers), or 6 dice (and told you the highest of the 6 numbers).

For some reason you care a lot about whether there were exactly 5 dice, so you could break this down into two hypotheses:

H1: They rolled 5 dice
H2: They rolled 1 or 6 dice

Let's say they roll and tell you that the highest number rolled was 20. This favors 5 dice over 1 die, and to a lesser degree it favors 6 dice over 5 dice. So if you started with equal (1/3) probabilities on the 3 possibilities, you'll update in favor of H1. Someone who also started with a 1/3 chance on H1, but who thought that 1 die was more likely than 6 dice, would update even more in favor of H1. And someone whose prior was that 6 dice was more likely than 1 die would update less in favor of H1, or even in the other direction if it was lopsided enough.

Relatedly, if you repeated this experiment many times and got lots of 20s, that would eventually become evidence against H1. If the 100th roll is 20, then that favors 6 dice over 5, and by that point the possibility of there being only 1 die is negligible (if the first 99 rolls were large enough) so it basically doesn't matter that the 20 also favors 5 dice over 1. This seems like another angle on the same phenomenon, since your posterior after 99 rolls is your prior for the 100th roll (and the evidence from the first 99 rolls has made it lopsided enough so that the 20 counts as evidence against H1).
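A quick sketch of the arithmetic (the code and the lopsided prior at the end are just one illustration):

```python
def p_max_is_top(n_dice, sides=20):
    """Probability that the highest of n fair d20s shows the top face (20)."""
    return 1 - ((sides - 1) / sides) ** n_dice

L1, L5, L6 = p_max_is_top(1), p_max_is_top(5), p_max_is_top(6)  # 0.050, 0.226, 0.265

def posterior_odds_H1(p1, p5, p6):
    """Posterior odds of H1 (5 dice) vs H2 (1 or 6 dice) after hearing '20'."""
    return (p5 * L5) / (p1 * L1 + p6 * L6)

# Equal priors (1/3 each): prior odds of H1 are 0.5, posterior ~0.72 -> favors H1.
print(posterior_odds_H1(1/3, 1/3, 1/3))
# Same 1/3 on H1 but lopsided towards 6 dice: posterior ~0.43 -> counts against H1.
print(posterior_odds_H1(0.01, 1/3, 1 - 1/3 - 0.01))
```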

Example 2: College choice. A high school freshman hopes & expects to attend Harvard for college in a few years. One observer thinks that's unlikely, because Harvard admissions is very selective even for very good students. Another observer thinks that's unlikely because the student is into STEM and will probably wind up going to a more technical university like MIT; they haven't thought much yet about choosing a college and Harvard is probably just serving as a default stand-in for a really good school.

The two observers might give the same p(Harvard), but for very different reasons. And because their models are so different, they could even update in opposite directions on the same new data. For instance, perhaps the student does really well on a math contest, and the first observer updates in favor of the student attending Harvard (that's an impressive accomplishment, maybe they will make it past the admissions filter) while the second observer updates a bit against the student attending Harvard (yep, they're a STEM person).

You could fit this into the "three outcomes" framing of this post, if you split "not attending Harvard" into "being rejected by Harvard" and "choosing not to attend Harvard".

Replies from: cousin_it, Unnamed
comment by cousin_it · 2021-11-23T12:16:06.577Z · LW(p) · GW(p)

I think your first example could be even simpler. Imagine you have a coin that's either fair, all-heads, or all-tails. If your prior is "fair or all-heads with probability 1/2 each", then seeing heads is evidence against "fair". But if your prior is "fair or all-tails with probability 1/2 each", then seeing heads is evidence for "fair". Even though "fair" started as 1/2 in both cases. So the moral of the story is that there's no such thing as evidence for or against a hypothesis, only evidence that favors one hypothesis over another.
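A two-line check of this (a sketch):

```python
def p_fair_given_heads(prior_fair, p_heads_given_alt):
    """P(fair | heads) when the only alternative hypothesis has the given P(heads)."""
    num = prior_fair * 0.5                                  # P(heads | fair) = 1/2
    return num / (num + (1 - prior_fair) * p_heads_given_alt)

print(p_fair_given_heads(0.5, 1.0))  # vs all-heads: 1/3, down from 1/2
print(p_fair_given_heads(0.5, 0.0))  # vs all-tails: 1.0, up from 1/2
```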

Replies from: Pattern
comment by Pattern · 2021-11-24T17:02:05.994Z · LW(p) · GW(p)

That's a great explanation. Evidence may also be compatible or incompatible with a hypothesis. For instance, if I get a die (without the dots on the sides that indicate 1-6), and I instead label* it:

Red, 4, Life, X-Wing, Int, path through a tree

Then finding out I rolled a 4, without knowing what die I used, is compatible with the regular dice hypothesis, but any of the other rolls is not.

*(likely using symbols, for space reasons)

comment by Unnamed · 2021-11-23T07:08:32.083Z · LW(p) · GW(p)

This seems related to philosophy of science stuff, where updating is about pitting hypotheses against each other. In order to do that you have to locate the leading alternative hypotheses. It doesn't work well to just pit a hypothesis against "everything else" (it's hard to say what p(E|not-H) is, and it can change as you collect more data). You need to find data that distinguishes your hypothesis from leading alternatives. An experiment that favors Newtonian mechanics over Aristotelian mechanics won't favor Newtonian mechanics over general relativity.

comment by Forged Invariant · 2021-11-23T04:28:53.479Z · LW(p) · GW(p)

Seeing the equations, it was hard to intuitively grasp why updates work this way. This example made things more intuitive for me:

If an event can have 3 outcomes, and we encounter strong evidence against outcomes B and C, then the update looks like this:

The information about what hypotheses are in the running is important, and pooling the updates can make the evidence look much weaker than it is.

Replies from: Jsevillamol
comment by Jsevillamol · 2021-11-23T08:53:53.973Z · LW(p) · GW(p)

Note that you are making the same mistake as me! Updates are not summarized in the same way as beliefs - for the update the "correct" way is to take an average of the $\neg A$ likelihoods:

$$L_{\neg A} \approx \frac{L_B + L_C}{2}$$

This does not invalidate the example though!

Thanks for suggesting, I think it helps clarify the conundrum.

Replies from: Forged Invariant
comment by Forged Invariant · 2021-11-25T04:23:30.979Z · LW(p) · GW(p)

The left hand side of the example is deliberately making the mistake described in your article, as a way to build intuition on why it is a mistake. 

(Adding instead of averaging in the update summaries was an unintended mistake)

Thanks for explaining how to summarize updates, it took me a bit to see why averaging works.

comment by Jsevillamol · 2022-06-10T20:06:10.268Z · LW(p) · GW(p)

As is often the case, I just found out that Jaynes was already discussing an issue similar to this paradox in his seminal book.

This wikipedia article summarizes the gist of it.

comment by a gently pricked vein (strangepoop) · 2022-03-06T13:34:57.775Z · LW(p) · GW(p)

There's probably a radical constructivist argument for not really believing in open/noncompact categories [LW · GW] like $\neg A$. I don't know how to make that argument, but this post too updates me slightly towards such a Tao [LW · GW] of conceptualization.

(To not commit this same error at the meta level: Specifically, I update away from thinking of general negations as "real" concepts, disallowing statements like "Consider a non-chair [LW · GW], ...").

But this is maybe a tangent, since just adopting this rule doesn't resolve the care required in aggregation with even compact categories.

Replies from: MrMind
comment by MrMind · 2022-06-15T13:35:27.337Z · LW(p) · GW(p)

There is, at least at a mathematical / type theoretic level.

In intuitionistic logic, $\neg A$ is translated to $A \to \bot$, which is the type of processes that turn an element of $A$ into an element of $\bot$; but since $\bot$ is empty, the whole $A \to \bot$ is absurd as long as $A$ is instantiated (if not, then the only member is the empty identity). This is also why constructively $A \to \neg\neg A$ holds, but not $\neg\neg A \to A$.

Closely related to constructive logic is topology, and indeed if concepts are open sets, the logical complement of a concept is not a concept. Topology is also nice because it formalizes the concept of an edge case.

Replies from: strangepoop
comment by a gently pricked vein (strangepoop) · 2022-06-17T10:11:23.129Z · LW(p) · GW(p)

I'm unsure if open sets (or whatever generalization) are a good formal underpinning of what we call concepts, but I'm in agreement that there seems needed at least a careful reconsideration of intuitions one takes for granted [LW · GW] when working with a concept, when you're actually working with a negation-of-concept. And "believing in" might be one of those things that you can't really do with negation-of-concepts.

Also, I think a typo: you said "logical complement", I'm imagining you meant "set-theoretic complement". (This seems important to point out since in topological semantics for intuitionistic logic, the "logical complement" is in fact defined to be the interior of the set-theoretic complement, which guarantees an open.)

Replies from: MrMind
comment by MrMind · 2022-07-13T14:03:42.881Z · LW(p) · GW(p)

I should have written "algebraic complement", which becomes logical negation or set-theoretic complement depending on the model of the theory.

Anyway, my intuition on why open sets are an interesting model for concepts is this: "I know it when I see it" seems to describe a lot of the way we think about concepts. Often we don't have a precise definition that could settle all the edge cases, but we pretty much have a strong intuition about when a concept does apply. This is what happens with recursively enumerable sets: if a number belongs to a r.e. set, you will find out, but if it doesn't, you need to wait an infinite amount of time. Systems that take seriously the idea that confirmation of truth is easy fall under the banner of "geometric logic", whose algebraic models are frames, and topologies are just frames of subsets. So I see the relation between "facts" and "concepts" a little bit like the relation between "points" and "open sets", but more in an "internal language of a topos" or "pointless topology" fashion: we don't have access to points per se, only to open sets, and we imagine that points are infinite chains of ever more precise open sets.

comment by DanielFilan · 2024-04-25T16:48:32.188Z · LW(p) · GW(p)

Is this just the thing where evidence is theory-laden? Like, for example, how the evidentiary value of the WHO report on the question of COVID origins depends on how likely one thinks it is that people would effectively cover up a lab leak?

comment by Adele Lopez (adele-lopez-1) · 2022-03-06T20:45:56.375Z · LW(p) · GW(p)

I think entropy is a key to understanding this more deeply. I believe you could consider the unaggregated distribution as the "microstates" and the aggregated one as the "macrostates". The entropy would then tell you how much information you lose by aggregating in this way.

Minor quibble: The likelihood part of probability is also subjective in the sense that it depends on the evidence the agent is aware of.

comment by JonasMoss · 2022-03-07T20:30:12.841Z · LW(p) · GW(p)

I find the beginning of this post somewhat strange, and I'm not sure your post proves what you claim it does. You start out discussing what appears to be a combination of two forecasts, but present it as Bayesian updating. Recall that Bayes' theorem says $p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta)$. To use this theorem, you need both an $x$ (your data / evidence), and a $\theta$ (your parameter). Using "posterior $\propto$ prior $\times$ likelihood" (with priors $p(\theta)$ and likelihoods $p(x \mid \theta)$), you're talking as if your expert's likelihood equals $p(x \mid \theta)$ – but is that true in any sense? A likelihood isn't just something you multiply with your prior, it is a conditional pmf or pdf with a different outcome than your prior.

I can see two interpretations of what you're doing at the beginning of your post:

  1. You're combining two forecasts. That is, with $x$ being the outcome, you have your own pmf $p(x)$ and the expert's $q(x)$, then combine them using $p(x)\, q(x)$ (renormalized). That's fair enough, but I suppose $\sqrt{p(x)\, q(x)}$ or maybe $p(x)^{1-w} q(x)^{w}$ for some $w$ would be a better way to do it.
  2. It might be possible to interpret your calculations as a proper application of Bayes' rule, but that requires stretching it. Suppose $p$ is your subjective probability vector for the outcomes and $q$ is the subjective probability vector for the event supplied by an expert (the value of $q$ is unknown to us). To use Bayes' rule, we will have to say that the evidence vector is $P(q \mid A) : P(q \mid B) : P(q \mid C)$, the probability of observing an expert judgment of $q$ given that each outcome is true. I'm not sure we ever observe such quantities directly, and it is pretty clear from your post that you're talking about $q$ in the sense used above, not $P(q \mid \cdot)$.

Assuming interpretation 1, the rest of your calculations are not that interesting, as you're using a method of knowledge pooling no one advocates.

Assuming interpretation 2, the rest of your calculations are probably incorrect. I don't think there is a unique way to go from $(p, q)$ to, let's say, $(\tilde{p}, \tilde{q})$, where $\tilde{q}$ is the expert's probability vector over $\{A, \neg A\}$ and $\tilde{p}$ your probability vector over $\{A, \neg A\}$.

Replies from: Jsevillamol
comment by Jsevillamol · 2022-03-07T23:36:25.251Z · LW(p) · GW(p)

Thanks for engaging!

 

To use this theorem, you need both an $x$ (your data / evidence), and a $\theta$ (your parameter).

Parameters are abstractions we use to simplify modelling. What we actually care about is the probability of unknown events given past observations.

 

You start out discussing what appears to be a combination of two forecasts

To clarify: this is not what I wanted to discuss. The expert is reporting how you should update your priors given the evidence, and remaining agnostic on what the priors should be.

 

A likelihood isn't just something you multiply with your prior, it is a conditional pmf or pdf with a different outcome than your prior.

The whole point of Bayesianism is that it offers a precise, quantitative answer to how you should update your priors given some evidence - and that is multiplying by the likelihoods.

This is why it is often recommended in social sciences and elsewhere to report your likelihoods.

 

I'm not sure we ever observe [the evidence vector] directly

I agree this is not common in judgemental forecasting, where the whole updating process is very illegible. I think it holds for most Bayesian-leaning scientific reporting.

 

it is pretty clear from your post that you're talking about $q$ in the sense used above, not $P(q \mid \cdot)$.

I am not, I am talking about evidence = likelihood vectors.

One way to think about this is that the expert is just informing us about how we should update our beliefs. "Given that the pandemic broke out in Wuhan, your subjective probability of a lab leak should increase, and it should increase by this amount". But the final probability depends on your prior beliefs, which the expert cannot possibly know.

 

I don't think there is a unique way to go from $(p, q)$ to, let's say, $(\tilde{p}, \tilde{q})$, where $\tilde{q}$ is the expert's probability vector over $\{A, \neg A\}$ and $\tilde{p}$ your probability vector over $\{A, \neg A\}$.

Yes! If I am understanding this right, I think this gets to the crux of the post. The compression is lossy, and neccessarily loses some information.

Replies from: JonasMoss
comment by JonasMoss · 2022-03-08T09:38:21.890Z · LW(p) · GW(p)

Okay, thanks for the clarification! Let's see if I understand your setup correctly. Suppose we have the probability measures $P$ and $P_E$, where $P_E$ is the probability measure of the expert. Moreover, we have an outcome $A$.

In your post, you use the expert's likelihoods $P_E(E \mid A), P_E(E \mid B), P_E(E \mid C)$, where $E$ is an unknown outcome known only to the expert. To use Bayes' rule, we must make the assumption that $P(E \mid X) = P_E(E \mid X)$ for each outcome $X$. This assumption doesn't sound right to me, but I suppose some strange assumption is necessary for this simple framework. In this model, I agree with your calculations.

Yes! If I am understanding this right, I think this gets to the crux of the post. The compression is lossy, and necessarily loses some information.

I'm not sure. When we're looking directly at the probability of an event (instead of the probability of the probability of an event), things get much simpler than I thought.

Let's see what happens to the likelihood when you aggregate from the expert's point of view. Letting $\neg A = B \cup C$, we need to calculate the expert's likelihoods $P_E(E \mid A)$ and $P_E(E \mid \neg A)$. In this case,

$$P_E(E \mid \neg A) = \frac{P_E(E \mid B)\, P_E(B) + P_E(E \mid C)\, P_E(C)}{P_E(B) + P_E(C)},$$

which is essentially your calculation, but from the expert's point of view. The likelihood depends on $P_E(B) : P_E(C)$, the prior of the expert, which is unknown to you. That shouldn't come as a surprise, as he needs to use the prior of $P_E$ in order to combine the probability of the events $B$ and $C$.

But the calculations are exactly the same from your point of view, leading to

$$P(E \mid \neg A) = \frac{P(E \mid B)\, P(B) + P(E \mid C)\, P(C)}{P(B) + P(C)}.$$

Now, suppose we want to generally ensure that $P(E \mid \neg A) = P_E(E \mid \neg A)$, which is what I believe you want to do, and which seems pretty natural to do, at least since we're allowed to assume that $P(E \mid X) = P_E(E \mid X)$ for all simple events $X$. To ensure this, we will probably have to require that your priors are the same as the expert's. In other words, your joint distributions are equal, or $P = P_E$.

Do you agree with this summary?