What's so special about likelihoods?

post by mfatt · 2024-08-10T01:07:58.592Z


This is a coin.

It might be biased.

This is Bayes' theorem:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

Bayes’ theorem tells us how we ought to update our beliefs given evidence.

It involves the following components:

- the prior, $P(A)$: how plausible the hypothesis was before seeing the evidence
- the likelihood, $P(B \mid A)$: how probable the evidence is if the hypothesis is true
- the marginal probability of the evidence, $P(B)$
- the posterior, $P(A \mid B)$: how plausible the hypothesis is after seeing the evidence

The overall shape of the theorem is this:

$$\text{posterior} \propto \text{likelihood} \times \text{prior}$$
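For concreteness, here is a minimal sketch (mine, not from the post) of what that shape looks like for the coin above: a discretized uniform prior over the coin's bias, a likelihood computed from an assumed string of 7 heads and 3 tails, and a posterior obtained by multiplying and normalizing. The grid size and flip counts are illustrative assumptions.

```python
import numpy as np

# Hypotheses: a grid of candidate biases theta = P(heads) for the coin.
thetas = np.linspace(0, 1, 101)

# Prior: uniform over the grid (every bias equally plausible to start).
prior = np.ones_like(thetas) / len(thetas)

# Evidence: an assumed string of flips, here 7 heads and 3 tails.
heads, tails = 7, 3

# Likelihood: P(flips | theta) for each candidate bias.
likelihood = thetas**heads * (1 - thetas)**tails

# Posterior ∝ likelihood × prior; normalizing turns ∝ into =.
posterior = likelihood * prior
posterior /= posterior.sum()

print(thetas[np.argmax(posterior)])  # bias with the highest posterior probability, ~0.7
```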


If you were to explain this to a high-school student, they might ask this naïve question:

Why should we bother to go through the process of calculating the likelihood and prior at all? Why can't we just try to calculate the posterior directly? We have a formula for $P(A \mid B)$, namely $\frac{P(A \cap B)}{P(B)}$.

Maybe you'll say, "That formula is fine, but it's not useful in real life. It's usually more tractable to go via conditional updates than via the high-school definition."
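To see the difference between the two routes, here is a toy sketch with made-up numbers: if we somehow had the full joint distribution over hypothesis and evidence, the high-school definition and Bayes' theorem would give identical posteriors. The suggestion above is that in real problems we rarely have that joint table, while priors and likelihoods are things we can often write down.

```python
# Toy example (hypothetical numbers): A = "coin is biased", B = "flip lands heads".
# Suppose we somehow knew the full joint distribution over (A, B):
p_joint = {
    ("biased", "heads"): 0.35,  # P(A and B)
    ("biased", "tails"): 0.15,
    ("fair",   "heads"): 0.25,
    ("fair",   "tails"): 0.25,
}

# High-school definition: P(A|B) = P(A and B) / P(B).
p_B = p_joint[("biased", "heads")] + p_joint[("fair", "heads")]
posterior_direct = p_joint[("biased", "heads")] / p_B

# Bayes' route: P(A|B) = P(B|A) P(A) / P(B), built from prior and likelihood.
p_A = p_joint[("biased", "heads")] + p_joint[("biased", "tails")]  # prior P(A)
p_B_given_A = p_joint[("biased", "heads")] / p_A                   # likelihood P(B|A)
posterior_bayes = p_B_given_A * p_A / p_B

print(posterior_direct, posterior_bayes)  # identical: 0.5833...
```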

But if conditionals are easy to get, why not just go directly to the posterior? What's even the difference between A and B? Aren't they just symbols? We could easily rearrange the theorem to calculate $P(B \mid A)$ as a function of $P(A \mid B)$.

What is it that makes using strings of coin flips to calculate biases more natural or scientific?

Perhaps it is ease. If calculating $P(B \mid A)$ really is easier for some reason, what makes it easier?
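One candidate answer, sketched here under my own assumptions about the coin model: the likelihood of a flip string is read directly off the data-generating process (one factor per flip), whereas there is no comparably direct formula for the posterior that doesn't smuggle in a prior.

```python
def likelihood(flips: str, theta: float) -> float:
    """P(observed flip string | coin bias theta): read straight off the model,
    one factor of theta per 'H' and one factor of (1 - theta) per 'T'."""
    p = 1.0
    for flip in flips:
        p *= theta if flip == "H" else 1 - theta
    return p

print(likelihood("HHTHH", 0.5))  # 0.03125
print(likelihood("HHTHH", 0.8))  # 0.8**4 * 0.2 ≈ 0.08192
```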

Perhaps it is usefulness. If likelihoods are what's worth publishing, not posteriors, why are they worthier?

How do you spot a likelihood in the wild?
