A pragmatic story about where we get our priors

post by Fiora from Rosebloom · 2025-01-02T10:16:54.019Z · LW · GW · 6 comments


expectation calibrator: stimulant-fueled vomiting of long-considered thoughts

In a 2004 essay, "An Intuitive Explanation of Bayes' Theorem [LW · GW]", Yudkowsky points out that it's not clear where Bayesian priors originally come from. Here's a dialogue from the post poking fun at that difficulty.

Q. How can I find the priors for a problem?

A. Many commonly used priors are listed in the Handbook of Chemistry and Physics.

Q. Where do priors originally come from?

A. Never ask that question.

Q. Uh huh. Then where do scientists get their priors?

A. Priors for scientific problems are established by annual vote of the AAAS. In recent years the vote has become fractious and controversial, with widespread acrimony, factional polarization, and several outright assassinations. This may be a front for infighting within the Bayes Council, or it may be that the disputants have too much spare time. No one is really sure.

Q. I see. And where does everyone else get their priors?

A. They download their priors from Kazaa.

Q. What if the priors I want aren’t available on Kazaa?

A. There’s a small, cluttered antique shop in a back alley of San Francisco’s Chinatown. Don’t ask about the bronze rat.

The problem of where priors originally come from has caused significant philosophical confusion on LessWrong, but I think it actually has a pretty clear naturalistic solution. Our brains supply the answers to questions of probability (e.g. "How likely is Donald Trump to win the 2024 presidential election?"), and our brains were shaped by natural selection. That is to say, they were shaped by a process that generates cognitive algorithms and reproduces them if they work in practice. We weren't being evaluated on how well our brains performed in every possible universe. They just had to produce well-calibrated expectations in our universe, or more accurately, in just the parts of the universe they actually had to deal with.

You can build some intuition for this topic by considering large language models. Prior to undergoing reinforcement learning and becoming chatbots, LLMs are pure next-word predictors. If such an LLM has good enough training data and a good enough architecture, its outputs will tend to be pretty good predictions of the kinds of things humans actually say in real life.[1] However, it's not hard to contrive situations where an LLM base model's predictions fail dramatically.

For instance, if you type

Once upon a

into a typical base model, the vast majority of its probability mass falls on the token "time". However, you could easily continue training the model on documents that follow up "once upon a" with random noise tokens, and its predictions would completely fail on this new statistical distribution.
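
As a rough sketch of the first claim, here's how you could inspect a base model's next-token distribution. This assumes the Hugging Face transformers library and GPT-2 as a stand-in base model; the checkpoint is my choice for illustration, and the exact probabilities will vary by model.

    # Minimal sketch: inspect a base model's next-token distribution.
    # Assumes the Hugging Face `transformers` library and the GPT-2 checkpoint;
    # the exact probability mass on " time" depends on the model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("Once upon a", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

    # Distribution over the token that follows the final input token.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top_probs, top_ids = next_token_probs.topk(5)
    for p, i in zip(top_probs, top_ids):
        print(f"{tokenizer.decode(i)!r}: {p.item():.3f}")
    # For GPT-2, ' time' typically dominates this distribution.

If you then continued training the checkpoint on documents that follow "Once upon a" with noise tokens, rerunning the same check would show the mass on " time" collapsing, which is the failure described above.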

In this analogy, humans are like the language models: both are built to predict that the future will broadly behave the same way as the past (in a well-defined statistical learning sense). In practice, this has worked out well for both. Humans and LLMs have historically proven well-calibrated as models of the world, giving them advantages that resulted in their architectures being selected for replication and further improvement, by natural selection and by deep learning engineers respectively.

However, if an LLM were exposed to a malicious distributional shift, or if the laws of physics a human lived under suddenly and completely changed, both systems would more or less stop working. It's impossible to rule this possibility out; indeed, it's impossible to even prove that it's unlikely, except by deferring to the very probabilistic systems whose calibrations would be thrown off by such a cataclysm. The best each system can do is keep working with its current learning algorithm and hope for the best.
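
To make the "stop working" claim a bit more concrete, here's a toy sketch of my own construction (not something from the post): a predictor that learns the frequency of an event in one "universe" scores well there and badly once the statistics flip.

    # Toy sketch (my construction): a frequency-learner is well calibrated on
    # the distribution it learned from, and falls apart after a shift.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Old universe": an event occurs 90% of the time; the agent learns that rate.
    train = rng.random(10_000) < 0.9
    learned_p = train.mean()

    def avg_log_loss(p, outcomes):
        # Average negative log-likelihood of the outcomes under belief p.
        return -np.mean(np.where(outcomes, np.log(p), np.log(1 - p)))

    same_universe = rng.random(10_000) < 0.9      # statistics unchanged
    shifted_universe = rng.random(10_000) < 0.1   # statistics flipped

    print(f"loss before the shift: {avg_log_loss(learned_p, same_universe):.3f}")
    print(f"loss after the shift:  {avg_log_loss(learned_p, shifted_universe):.3f}")

Nothing inside the predictor could have warned it about the shift in advance; all it can do afterwards is keep learning from the new data.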

Anyway, all of that's to say that there's a good chance human priors don't come from any bespoke mathematical process which provably achieves relatively good results in all possible universes, however we'd want to define that. The brain just runs some learning algorithm that leaves us able to make natural-language statements about the probabilities of future events, and those statements have turned out to be reasonably well-calibrated in practice.

That this works at all comes down to two things: the probably-inexplicable fact that we seem to live in a universe amenable enough to induction that natural selection could find an algorithm which has worked well enough historically, and whatever the implementation details of the brain's learning algorithm actually are. I doubt it reflects some deep insight supporting an airtight strategy for universal learning.

(I have a more negative critique of popular theories like "our brains approximate Solomonoff induction", which I don't think are very enlightening or even coherent as explanations of where priors come from or ought to come from, but that seems like a topic for another post.)

  1. ^

    This can be rigorously quantified by using the LLM's loss function; see this video series if you're ignorant and curious about what that means.
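
    As a minimal sketch of that quantity: the loss is the average negative log-probability the model assigns to each actual next token, and perplexity is its exponential. The snippet below assumes the Hugging Face transformers library, with GPT-2 purely as an illustrative checkpoint.

        # Minimal sketch: an LLM's loss is the average negative log-likelihood
        # it assigns to each actual next token. Lower loss = better predictions.
        # GPT-2 is used purely as an illustrative checkpoint.
        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        text = "Once upon a time, there was a princess who lived in a castle."
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing labels makes the model return its own cross-entropy loss.
            loss = model(**inputs, labels=inputs["input_ids"]).loss

        print(f"average loss per token: {loss.item():.3f}")
        print(f"perplexity: {torch.exp(loss).item():.1f}")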

6 comments


comment by TAG · 2025-01-03T04:23:16.195Z · LW(p) · GW(p)

A) If priors are formed by an evolutionary process common to all humans, why do they differ so much? Why are there deep ethical, political and religious divides?

B) How can a process tuned to achieving directly observable practical results allow different agents to converge on non-obvious theoretical truth?

These questions answer each other, to a large extent. B -- they can't; A -- that's where the divides come from. Values aren't dictated by facts, and neither are interpretations-of-facts.

@quila

The already-in-motion argument is even weaker than the evolutionary argument, because it says nothing about the validity of the episteme you already have...and nothing about the uniformity/divergence between individuals, either.

@Carl Feynman

Observations overwhelming priors needs to account for the divergence as well. But, of course, real agents aren't ideal Bayesians...in particular, they don't have access to every possible hypothesis, and if you've never even thought of a hypothesis, the evidence can't support it in practice. It's as if the unimagined hypotheses -- the overwhelming majority -- have 0 credence.

Replies from: quila
comment by quila · 2025-01-04T22:06:00.164Z · LW(p) · GW(p)

A) If priors are formed by an evolutionary process common to all humans, why do they differ so much? Why are there deep ethical, political and religious divides?

ethical, political and religious differences (which i'd mostly not place in the category of 'priors', e.g. at least 'ethics' is totally separate from priors aka beliefs about what is) are explained by different reasons (some also evolutionary, e.g. i guess it increased survival for not all humans to be the same), so this question is mostly orthogonal / not contradicting that human starting beliefs came from evolution.

i don't understand the next three lines in your comment.

Replies from: TAG
comment by TAG · 2025-01-04T22:59:49.405Z · LW(p) · GW(p)

ethical, political and religious differences (which i’d mostly not place in the category of ‘priors’, e.g. at least ‘ethics’ is totally separate from priors aka beliefs about what is)

That's rather what I am saying. Although I would include "what is" as opposed to "what appears to be". There may well be a fact/value gap, but there's also an appearance/reality gap. The epistemology you get from the evolutionary argument only goes as far as the apparent. You are not going to die if you have interpreted the underlying nature or reality of a dangerous thing incorrectly -- you should drink water even if you think it's a fundamental element, and you should avoid marshes even if you think fever is caused by bad smells.

are explained by different reasons (some also evolutionary, e.g. i guess it increased survival for not all humans to be the same), so this question is mostly orthogonal / not contradicting that human starting beliefs came from evolution.

But that isn't the point of the OP. The point of the OP is to address an epistemological problem: to show that our priors have some validity, because the evolutionary process that produced them would tend to produce truth-seeking ones. It's epistemically pointless to say that we have some arbitrary starting point of no known validity -- as the already-in-motion argument in fact does.

I don’t understand the next three lines in your comment.

The point is that an evolutionary process depends on feedback from what is directly observable and workable ("a process tuned to achieving directly observable practical results")...and that has limitations. It's not useless, but it doesn't solve every epistemological problem (i.e. "non-obvious theoretical truth").

Truth and usefulness, reality and appearance, are different.

The usefulness cluster of concepts includes the ability to make predictions, as well as create technology. The truth cluster of concepts involves identification of the causes of perceptions, and offering explanations, not just predictions. The usefulness cluster corresponds to scientific instrumentalism, the truth cluster to scientific realism. The truth cluster corresponds to epistemological rationalism, the usefulness cluster to instrumental rationalism. Truth is correspondence to reality, which is not identical to the ability to make predictions. One can predict that the sun will rise without knowing what the Sun really is. "Curve fitting" science is adequate to make predictions. Trial and error is adequate to come up with useful technologies. But other means are needed to find the underlying reality. One can't achieve convergence by "just using evidence", because the questions of what evidence is, and how to interpret it, depend on one's episteme.

comment by Carl Feynman (carl-feynman) · 2025-01-03T15:25:21.535Z · LW(p) · GW(p)

If we're being pragmatic, the priors we had at birth almost don't matter.  A few observations will overwhelm any reasonable prior.  As long as we don't assign zero probability to anything that can actually happen, the shape of the prior makes no practical difference.
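
A toy sketch of this washout effect, under a Beta-Bernoulli setup of my own choosing (the commenter doesn't specify a model): two agents start with strongly opposed priors about a coin's bias, and after a few hundred flips their posterior means nearly coincide, since neither prior assigned zero probability to the true bias.

    # Toy sketch (my construction): observations overwhelming very different priors.
    import numpy as np

    rng = np.random.default_rng(0)
    true_bias = 0.7
    flips = rng.random(300) < true_bias
    heads, tails = flips.sum(), (~flips).sum()

    # Prior pseudo-counts (alpha, beta) encoding very different starting beliefs.
    priors = {"optimist": (20.0, 1.0), "pessimist": (1.0, 20.0)}

    for name, (a, b) in priors.items():
        # Conjugate Beta-Bernoulli update: add observed counts to the pseudo-counts.
        posterior_mean = (a + heads) / (a + b + heads + tails)
        print(f"{name}: prior mean {a / (a + b):.2f} -> posterior mean {posterior_mean:.2f}")
    # Both posterior means land near the true bias of 0.7 despite the opposed priors.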

comment by quila · 2025-01-02T18:21:36.815Z · LW(p) · GW(p)

also see created already in motion [LW · GW]. this applies to more than just priors.

Replies from: quila
comment by quila · 2025-01-03T18:04:35.777Z · LW(p) · GW(p)

(as a legible example, 'created already in motion' applies to the dynamic[1] of having probabilistic expectations at all[2]. i think it also applies to much more, possibly even math itself is such a contingent-dynamic, which would dissolve [? · GW] the question of what breathes fire into the equations (and raise some hard new questions), but i'd probably need to write a careful post about this for it to make sense/be conceivable.)

  1. ^

    ('dynamic' is defined in the linked post)

  2. ^

    i.e. of ~approximating/{doing something at least vaguely similar to} the particular equation we call the one Of Probability, instead of some other equation, which we would similarly give special name to if it were the one which worked instead