What’s this probability you’re reporting?

post by Eric Chen (Equilibrate), Sami Petersen (sami-petersen) · 2023-04-14T15:07:42.844Z · LW · GW · 9 comments

Contents

    Deterministic
    Stochastic
    Ensemble
  Takeaways

It’s unclear what people mean when saying they’re reporting a probability according to their inside view model(s). We’ll look through what this could mean and why most interpretations are problematic. Note that we’re not making claims about which communication norms are socially conducive to nice dialogue. We’re hoping to clarify some object-level claims about what kinds of probability assignments make sense, conceptually. These things might overlap.

Consider the following hypothetical exchange:

Person 1: “I assign 90% probability to X”

Person 2: “That’s such a confident view considering you might be wrong”

Person 1: “I’m reporting my inside view credence according to my model(s)”

This response looks coherent at first glance. But it’s unclear what Person 1 is actually saying. Here are three kinds of model(s) they could be referring to:

  1. Deterministic: There is a model that describes the relevant parts of the world, some deterministic laws of motion, and therefore a description of how it will evolve through time. There are no probabilities involved.
  2. Stochastic: There is a model that describes the relevant parts of the world, and the evolution of the model is stochastic. Model-based probabilities here correspond to precise statements about random variables within the model. 
  3. Ensemble: There is a set of models, deterministic or stochastic, that describe the evolution of the world. You have some way of aggregating over these. E.g., if nine models say “X happens” and one says “X doesn’t happen”, you might assign a 90% probability to X if you have a uniform prior over the ten models (see the sketch after this list).
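
To make interpretation 3 concrete, here is a minimal sketch (our own illustration, with made-up verdicts) of aggregating an ensemble of deterministic models under a uniform prior:

```python
# Minimal sketch of interpretation 3: each model gives a 0/1 verdict on "X happens",
# and we aggregate with a uniform prior over the models.
verdicts = [True] * 9 + [False]        # nine models say X happens, one says it doesn't

p_x = sum(verdicts) / len(verdicts)    # equal weight on each model's verdict
print(p_x)  # 0.9
```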

There are troubles with all of these.

Deterministic

Models are often deterministic. When an engineer says a bridge is “unlikely” to collapse, it’s not necessarily because their model outputs probabilities; it could simply be because they aren’t confident that the model fully captures everything relevant. A deterministic model will not have any probabilities associated with it. So if someone is using such a model to assign a credence to some proposition, the source of uncertainty has to come from outside the model.
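
To make the deterministic case concrete (our own illustration, writing M for "the deterministic model is right about X"), any probability you report has to decompose as

$$P(X) \;=\; \underbrace{P(X \mid M)}_{\in\{0,\,1\}}\,P(M) \;+\; P(X \mid \neg M)\,P(\neg M),$$

and the model only supplies the 0-or-1 factor; everything probabilistic in the report comes from P(M) and P(X | ¬M), which live outside the model.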

Stochastic

In a stochastic model, model-based probabilities correspond to very precise statements about the random variables in the model and their distributions. These random variables tend to correspond to genuine indeterminism in the system, or at least to the best formalisation we have at some level of description for highly complex or chaotic systems. Examples include exogenous shocks in DSGE macroeconomic models or the evolution of entropy in statistical mechanics. The choice of adding a stochastic component to a model is very particular; its justification is usually based on features of the system, not simply one’s uncertainty—that’s what overall credences are for. This is a subtle point in the philosophy of science, but in slogan form: stochastic components in models represent emergent indeterministic macrodynamics rather than credences.[1]
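
As a toy sketch of what a genuinely model-based probability looks like (our own example and numbers, not from any particular DSGE model): with a mean-zero Gaussian shock, "the probability that output exceeds the threshold" is a precise statement about a random variable inside the model.

```python
# Toy stochastic model: y = baseline + epsilon, with epsilon ~ N(0, sigma^2).
# The model-based probability of "y exceeds the threshold" is a precise claim
# about this random variable, not a report of anyone's overall credence.
from statistics import NormalDist

baseline = 0.8   # deterministic part of the model's output
sigma = 0.5      # standard deviation of the exogenous shock
threshold = 1.0

p_exceed = 1 - NormalDist(mu=baseline, sigma=sigma).cdf(threshold)
print(round(p_exceed, 3))  # 0.345
```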

Ensemble

You could claim that “model-based probabilities” are weighted averages of the outputs you get from several models. But this comes scarily close to your all-things-considered view. If not, then which models are you averaging over? What procedure are you using to choose them? How is it justified? If you chose the ten models you find most applicable, why is ten the magic number? Simply stating a probability that is based on several models, but that decidedly isn’t your overall confidence, is quite uninformative. Most of the information plausibly comes from the choice of how many models to include and how to weight them. And if this is not based on your overall view, what is it? Stochastic models and Bayesian agents have clearly defined probabilities; ensembles don’t. It’s unclear what people are using to distinguish an ensemble’s average from their all-things-considered view. If you’re doing this rather peculiar thing when reporting probabilities, it would be useful to know the procedure used.
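
To illustrate how much is doing work in that choice, here is a sketch with hypothetical model outputs: the same four models yield quite different "ensemble probabilities" depending on the weights, and dropping one model changes the number again.

```python
# Hypothetical model outputs: the reported "ensemble probability" is driven by
# which models are included and how they are weighted.
predictions = [0.95, 0.90, 0.85, 0.20]   # each model's probability of X

def ensemble_probability(preds, weights):
    return sum(p * w for p, w in zip(preds, weights))

uniform = [0.25] * 4
skewed = [0.40, 0.30, 0.20, 0.10]        # weights favouring the first models

print(round(ensemble_probability(predictions, uniform), 3))        # 0.725
print(round(ensemble_probability(predictions, skewed), 3))         # 0.84
print(round(ensemble_probability(predictions[:3], [1/3] * 3), 3))  # 0.9 (drop a model, get a new number)
```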


The fundamental issue is this. Credences are unambiguous. For a Bayesian, the statement “I assign 90% probability to X” is perfectly well-defined. And we have a well-established epistemic infrastructure for handling credences. We know what to do with credences in models of decision-making (including betting),[2] opinion aggregation,[3] information transmission,[4] peer disagreement,[5] and more.[6] We have powerful theorems showing how to use credences in such ways. In contrast, “model-based” or “inside view” probabilities do not clearly correspond to well-defined objects, either in the abstract or in people’s heads. (Deterministic? Stochastic? Which models? Which aggregation procedure?) As a result, there is no corresponding infrastructure for handling the various objects that they could refer to.
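
For instance, here is a minimal sketch (our own example, assuming a risk-neutral agent) of how a credence plugs directly into the standard betting machinery:

```python
# With credence p in X, a bet that costs `cost` and pays `payout` if X occurs
# has positive expected value iff p * payout > cost (for a risk-neutral agent).
def expected_value(p, payout, cost):
    return p * payout - cost

credence = 0.9
print(expected_value(credence, payout=100, cost=80))  # 10.0 -> take the bet
print(expected_value(credence, payout=100, cost=95))  # -5.0 -> decline
```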

As an aside, we believe the various possible disambiguations of model-based probabilities can in fact correspond to useful objects for decision-making. But to justify their usefulness, we need to depart pretty drastically from Bayesian orthodoxy and examine which decision-making heuristics are rational for bounded agents. These can be perfectly reasonable choices but require justification or at least some clarification about which are being used.[7]

Takeaways

  1. Credences are well-defined, have well-studied properties, and fit into a wider epistemic and decision-making infrastructure. Things like “the probability on my main models” or “the probabilities generated by my initial inside view impression” don't have these properties. They are ambiguous and may just lack a referent.
  2. If your probability statements refer to something other than a credence, it is worthwhile to clarify precisely what this is to avoid ambiguity or incoherence.
  3. If you are reporting numbers based on a particular heuristic decision-making approach, this is messier, so extra care should be taken to make this clear. Because this leaves the well-trodden path of Bayesian epistemology, you should have some reason for thinking these objects are useful. There is an emerging literature on this and we'd be quite excited to see more engagement with it.[8]
  1. ^

    See Wallace (2012) Chapter 4.1 for a particularly lucid explanation.

  2. ^

    E.g., expected utility theory (Bradley 2017).

  3. ^

    E.g., Dietrich (2010).

  4. ^

    E.g., cheap talk (Crawford and Sobel 1982) and Bayesian persuasion (Kamenica and Gentzkow 2011).

  5. ^

    E.g., Aumann’s (1976) agreement theorem.

  6. ^

    Stochastic models are in a similar position in that we know how to handle them in principle. But we doubt that something like “my model has a mean-zero, variance-σ² exogenous shock component” is what people mean by “model-based” or “inside view” probabilities.

  7. ^

    Very briefly, in the literature on decision-making under deep uncertainty (DMDU), the use of a small collection of models (roughly interpretations 1 and 2 above) corresponds to what is called scenario-based decision-making. And the use of a large ensemble of models (roughly interpretation 3) corresponds to a popular method developed by the RAND Corporation termed robust decision-making. See Thorstad (2022) for some reasons for thinking these are good heuristics for decision-making under severe or Knightian uncertainty. But the key part for now is that this is a very immature field compared to Bayesian epistemology, and so thinking in these terms should be done as clearly as possible.

  8. ^

    E.g., Roussos et al (2022) and Thorstad (2022).

9 comments

Comments sorted by top scores.

comment by Dagon · 2023-04-14T16:12:35.712Z · LW(p) · GW(p)

It's fallen out of favor, but "I'll {take/lay} $100 at those odds, what's our resolution mechanism?" is an excellent clarification mechanism.  The sequence https://www.lesswrong.com/tag/making-beliefs-pay-rent [? · GW] uses a lot more words to say the same thing.  

Basically, you're absolutely right - Bayesian probabilities are about future experiences, things that can be tested and measured, and at some point will collapse to 0 or 1. Other types of probability estimate are often given without an actual definition of what they mean.

Replies from: quanticle
comment by quanticle · 2023-04-14T19:25:04.571Z · LW(p) · GW(p)

I agree that the betting approach is better at clarification, but the problem is that it's often too much better. For example, if I say, "I'll bet $10 at 80% odds that the weather tomorrow will be sunny," the discussion rapidly devolves into the definitional question of what counts as a sunny day, exactly. Do I win if I see the sun at any point in the day? Is there a certain amount of cloud cover at which point the day no longer counts as sunny? Where is the cloud cover measured from? If the sky starts out with < 5% clouds, clouds over to > 50%, but then the clouds clear later in the day, does the day still count as "sunny"? Etc.

Sometimes I want to make a certain probability judgement about an outcome defined by a colloquially understood category (such as "sunny day") without having to precisely specify all of my definitions exactly.

Replies from: Dagon, D0TheMath
comment by Dagon · 2023-04-14T21:54:21.108Z · LW(p) · GW(p)

Well, I'm not sure how you can have both well-defined propositional probabilities AND undefined, "colloquial" inexact meanings.

if I say, I'll bet $10 at 80% odds that the weather tomorrow will be sunny, the discussion rapidly devolves into the definitional question of what is a sunny day, exactly?

I think I'd use the word "progresses" rather than "devolves". This is necessary to clarify what you're actually assigning an 80% chance of happening.

Sometimes I want to make a certain probability judgement about an outcome defined by a colloquially understood category (such as "sunny day") without having to precisely specify all of my definitions exactly.

You can absolutely do so, but you need to recognize that the uncertainty makes your prediction a lot less valuable to others.  "80% chance that it might conceivably be considered sunny" is just less precise than "80% chance that the weather app at noon will report sunny".  

If someone disagrees, and you care about it, you'll need to define what you're disagreeing on.  If nobody cares, then hand-waving is fine.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-04-14T23:53:08.369Z · LW(p) · GW(p)

If someone disagrees, and you care about it, you'll need to define what you're disagreeing on.  If nobody cares, then hand-waving is fine.

That's what I've also thought was the norm this whole time without being consciously aware of it. So I appreciate it being spelled out, but I'm now surprised that the opposite norm could even be taken seriously.

 

(Though perhaps it's more of a pretend-to-care about each other's vague probability numbers to signal desirable things dynamic and not literally believing in them as reliable estimates.)

comment by Garrett Baker (D0TheMath) · 2023-04-14T19:54:38.575Z · LW(p) · GW(p)

For sunny days you can just get a reliable reporter to tell you whether it's sunny.

comment by quanticle · 2023-04-14T21:11:56.351Z · LW(p) · GW(p)

I was thinking more about the inside view/outside view distinction, and while I agree with Dagon's conclusion that probabilities should correspond to expected observations and expected observations only, I do think there is a way to salvage the inside view/outside view distinction. That is to treat someone saying, "My 'inside view' estimate of event E is p," as being equivalent to someone saying that P(E | my model is correct) = p. It's a conditional probability, where they're telling you what their probability of a given outcome is, assuming that their understanding of the situation is correct.

In the case of deterministic models, this might seem like a tautology — they're telling you what the outcome is, assuming the validity of a process that deterministically generates that outcome. However, there is another source of uncertainty: observational uncertainty. The other person might be uncertain whether they have all the facts that feed into their model, or whether their observations are correct. So, in other words, when someone says, "My inside view probability of E is p," that's a statement about the confidence level they have in their observations.

Replies from: sami-petersen
comment by Sami Petersen (sami-petersen) · 2023-04-15T12:27:17.327Z · LW(p) · GW(p)

probabilities should correspond to expected observations and expected observations only

FWIW I think this is wrong. There's a perfectly coherent framework—subjective expected utility theory (Jeffrey, Joyce, etc)—in which probabilities can correspond to many other things. Probabilities as credences can correspond to confidence in propositions unrelated to future observations, e.g., philosophical beliefs or practically-unobservable facts. You can unambiguously assign probabilities to 'cosmopsychism' and 'Everett's many-worlds interpretation' without expecting to ever observe their truth or falsity.

However, there is another source of uncertainty: observational uncertainty. The other person might be uncertain whether they have all the facts that feed into their model, or whether their observations are correct.

This is reasonable. If a deterministic model has three free parameters, two of which you have specified, you could just use your prior over the third parameter to create a distribution of model outcomes. This kind of situation should be pretty easy to clarify though, by saying something like "my model predicts event E iff parameter A is above A*" and "my prior P(A>A*) is 50%, which implies P(E)=50%."
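
For instance, a toy sketch of that situation with made-up numbers: the model is deterministic given A, and P(E) comes entirely from the prior over the unspecified parameter.

```python
# Deterministic model with one unspecified parameter A: it predicts E iff A > A_star.
# The probability of E then comes from the prior over A, not from the model itself.
import numpy as np

rng = np.random.default_rng(0)
A_star = 1.0
A_draws = rng.normal(loc=1.0, scale=0.5, size=100_000)  # prior over A, centred on A_star

p_E = float((A_draws > A_star).mean())
print(round(p_E, 2))  # ~0.5, matching P(A > A_star) = 50%
```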

But generically, the distribution is not coming from a model. It just looks like your all-things-considered credence that A>A*. I'd be hesitant to call a probability based on it your "inside view/model" probability.

Replies from: quanticle
comment by quanticle · 2023-04-16T00:43:30.309Z · LW(p) · GW(p)

Probabilities as credences can correspond to confidence in propositions unrelated to future observations, e.g., philosophical beliefs or practically-unobservable facts. You can unambiguously assign probabilities to ‘cosmopsychism’ and ‘Everett’s many-worlds interpretation’ without expecting to ever observe their truth or falsity.

You can, but why would you? Beliefs should pay rent in anticipated experiences. If two beliefs lead to the same anticipated experiences, then there's no particular reason to choose one belief over the other. Assigning probability to cosmopsychism or Everett's many-worlds interpretation only makes sense insofar as you think there will be some observations, at some point in the future, which will be different if one set of beliefs is true versus if the other set of beliefs is true.

Replies from: Equilibrate
comment by Eric Chen (Equilibrate) · 2023-04-16T13:05:24.584Z · LW(p) · GW(p)

Because the meaning of statements does not, in general, consist entirely in observations/anticipated experiences, and it makes sense for people to have various attitudes (centrally, beliefs and desires) towards propositions that refer to unobservable-in-principle things.

Accepting that beliefs should pay rent in anticipated experience does not mean accepting that the meaning of sentences is determined entirely by observables/anticipated experiences. We can have that the meanings of sentences are the propositions they express, and the truth-conditions of propositions are generally states-of-affairs-in-the-world and not just observations/anticipated experiences. Eliezer himself puts it nicely here [LW(p) · GW(p)]: "The meaning of a statement is not the future experimental predictions that it brings about, nor isomorphic up to those predictions [...] you can have meaningful statements with no experimental consequences, for example:  "Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us.""

As to how to choose one belief over another, if both beliefs are observationally equivalent in some sense, there are many such considerations. One is that our best theories predict it: if our best cosmological theories predict that something does not cease to exist the moment it exits our lightcone, then we should assign higher probability to the statement "objects continue to exist outside our lightcone" than to the statement "objects vanish at the boundary of our lightcone". Another is simplicity-based priors: the many-worlds interpretation of quantum mechanics is strictly simpler/has a shorter description length than the Copenhagen interpretation (Many-Worlds = wave function + Schrödinger evolution; Copenhagen = wave function + Schrödinger evolution + collapse postulate), so we should assign a higher prior to many-worlds than to Copenhagen.

If your concern is instead that attitudes towards such propositions have no behavioural implications and thus cannot in principle be elicited from agents, then the response is to point to the various decision-theoretic representation theorems available in the literature. Take the Jeffrey framework: as long as your preferences over propositions satisfy certain conditions (e.g. Ordering, Averaging), I can derive both a quantitative desirability measure and a probability measure, characterising your desire and belief attitudes (respectively) towards the propositions you are considering. The actual procedure to elicit this preference relation looks like asking people to consider and compare actualising various propositions, which we can think of as gambles. For example, a gamble might look like "If the coin lands Heads, then one person comes into existence outside of our future lightcone and experiences bliss; if the coin lands Tails, then one person comes into existence outside of our future lightcone and experiences suffering". Note, the propositions here can refer to unobservables. Also, it seems reasonable to prefer the above gamble involving a fair coin to the same gamble but with the coin biased towards Tails. Moreover, the procedure to elicit an agent's attitudes to such propositions merely consists in the agent considering what they would do if they were choosing which of various propositions to bring about, and does not cash out in terms of observations/anticipated experiences.

(As an aside, doing acausal reasoning in general requires agents to have beliefs and desires towards unobservable-in-principle stuff in, e.g., distant parts of our universe, or other Everett branches).