What is the best way to talk about probabilities you expect to change with evidence/experiments?

post by Will_Pearson · 2024-04-19T15:35:47.779Z · LW · GW · 1 comment

This is a question post.

I was thinking about my p(doom) in the next 10 years and came up with something around 6%[1]. However, that number depends on a lot of things that are currently unknown to me, like the nature of current human knowledge production (and the bottlenecks involved), which would put my p(doom) at either 3% or 15% depending on what type of bottlenecks are found or not found. Is there a technical way to describe this probability distribution contingent on evidence?

  1. ^

    I'm bearish on LLMs leading to AI directly (10% chance), and I give roughly a 30% chance of LLM-based AI fooming quickly enough to kill us, and wanting to kill us, within 10 years. There is a 3% chance that something comes out of left field and does the same.

Answers

answer by Richard_Ngo · 2024-04-19T20:45:06.670Z · LW(p) · GW(p)

I don't think there's a very good precise way to do so, but one useful concept is bid-ask spreads, which are a way of protecting yourself from adverse selection of bets. E.g. consider the following two credences, both of which are 0.5.

  1. My credence that a fair coin will land heads.
  2. My credence that the wind tomorrow in my neighborhood will be blowing more northwards than southwards (I know very little about meteorology and have no recollection of which direction previous winds have mostly blown).

Intuitively, however, the former is very difficult to change, whereas the latter might swing wildly given even a little bit of evidence (e.g. someone saying "I remember in high school my teacher mentioned that winds often blow towards the equator.")

Suppose I have to decide on a policy under which I'll accept bets for or against each of these propositions at X:1 odds (i.e. my opponent puts up $X for every $1 I put up). For the first proposition, I might set X to be 1.05, because as long as I have a small edge I'm confident I won't be exploited.

By contrast, if I set X=1.05 for the second proposition, then probably what will happen is that people will only decide to bet against me if they have more information than me (e.g. checking weather forecasts), and so they'll end up winning a lot of money off me. And so I'd actually want X to be something more like 2 or maybe higher, depending on who I expect to be betting against, even though my credence right now is 0.5.

In your case, you might formalize this by talking about your bid-ask spread when trading against people who know about these bottlenecks.
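
A minimal simulation sketch of this adverse-selection effect (my own illustration, not part of the original answer): assume informed counterparties know the true probability q, only take bets with positive expected value for them, and that for the wind case their forecasts are spread around 0.5 (a made-up Beta(2,2)), while for the coin everyone agrees on exactly 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)

def my_expected_value(q_informed, X):
    """My EV per offered bet at X:1 odds (opponent stakes $X against my $1),
    facing a counterparty whose (accurate) credence is q_informed.
    The counterparty only takes a side when it has positive EV for them."""
    take_true = q_informed > X / (1 + X)          # they bet the proposition is true
    take_false = (1 - q_informed) > X / (1 + X)   # they bet it is false
    ev = np.zeros_like(q_informed)                # no bet -> EV 0 for me
    ev[take_true] = (1 - q_informed[take_true]) * X - q_informed[take_true]
    ev[take_false] = q_informed[take_false] * X - (1 - q_informed[take_false])
    return ev.mean()

n = 100_000
coin_q = np.full(n, 0.5)            # everyone agrees the coin is exactly 50/50
wind_q = rng.beta(2, 2, size=n)     # informed wind forecasts: mean 0.5, but spread out

for X in [1.05, 1.5, 2.0, 3.0]:
    print(f"X={X:.2f}  coin EV: {my_expected_value(coin_q, X):+.3f}"
          f"  wind EV: {my_expected_value(wind_q, X):+.3f}")
```

At X=1.05 the coin bet is safe (nobody with accurate information wants either side), while the wind bet loses money in expectation; widening the spread shrinks the loss.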

comment by Razied · 2024-04-19T21:17:10.590Z · LW(p) · GW(p)

Surely something like the expected variance of p(doom) would be a much simpler way of formalising this, no? The probability over time is just a stochastic process, and OP is expecting the variance of this process to be very high in the near future.
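
A toy version of that quantity (my own numbers): for a known-fair coin the credence about the next flip cannot move, while for the wind a uniform prior over the long-run fraction of northward days gets pushed to 2/3 or 1/3 by a single observation.

```python
import numpy as np

# Coin: the next-flip credence stays at 0.5 no matter what we observe.
coin_future = np.array([0.5, 0.5])
coin_probs = np.array([0.5, 0.5])

# Wind: uniform prior on q, the long-run chance of a northward day.
# Current credence = E[q] = 0.5; after one observed day, Laplace's rule gives
# E[q | north] = 2/3 or E[q | south] = 1/3, each with probability 0.5.
wind_future = np.array([2/3, 1/3])
wind_probs = np.array([0.5, 0.5])

def expected_variance(future_credences, probs):
    mean = np.dot(probs, future_credences)   # equals today's credence
    return np.dot(probs, (future_credences - mean) ** 2)

print("coin:", expected_variance(coin_future, coin_probs))   # 0.0
print("wind:", expected_variance(wind_future, wind_probs))   # ~0.028
```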

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-04-19T21:27:30.821Z · LW(p) · GW(p)

The variance over time depends on how you gather information in the future, making it less general. For example, I may literally never learn enough about meteorology to update my credence about the winds from 0.5. Nevertheless, there's still an important sense in which this credence is more fragile than my credence about coins, because I could update it.

I guess you could define it as something like "the variance if you investigated it further". But defining what it means to investigate further seems about as complicated as defining the reference class of people you're trading against. Also variance doesn't give you the same directional information—e.g. OP would bet on doom at 2% or bet against it at 16%.

Overall though, as I said above, I don't know a great way to formalize this, and would be very interested in attempts to do so.

Replies from: Razied
comment by Razied · 2024-04-19T22:03:05.745Z · LW(p) · GW(p)

Wait, why doesn't the entropy of your posterior distribution capture this effect? In the basic example where we get to see samples from a Bernoulli process, the posterior is a beta distribution that gets ever sharper around the truth. If you compute the entropy of the posterior, you might say something like "I'm unlikely to change my mind about this, my posterior only has 0.2 bits to go until zero entropy". That's already a quantity which estimates how much future evidence will influence your beliefs.
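
A rough sketch of that calculation with scipy, assuming a Beta(1, 1) prior (one caveat: this is differential entropy, which keeps dropping past zero as the posterior sharpens, so "bits to go until zero entropy" is only loose language).

```python
import numpy as np
from scipy.stats import beta

# Beta(1, 1) prior over a Bernoulli parameter, updated on observed samples.
heads, tails = 40, 10
posterior = beta(1 + heads, 1 + tails)

# Differential entropy of the posterior, converted from nats to bits.
entropy_bits = posterior.entropy() / np.log(2)
print(f"posterior entropy: {entropy_bits:.2f} bits")
```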

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-04-19T22:10:47.672Z · LW(p) · GW(p)

The thing that distinguishes the coin case from the wind case is how hard it is to gather additional information, not how much more information could be gathered in principle. In theory you could run all sorts of simulations that would give you informative data about an individual flip of the coin, it's just that it would be really hard to do so/very few people are able to do so. I don't think the entropy of the posterior captures this dynamic.

comment by Thomas Kwa (thomas-kwa) · 2024-04-20T02:22:01.652Z · LW(p) · GW(p)

Someone asked basically this question before, and someone gave basically the same answer. It's a good idea, but there are some problems with it: it depends on your and your counterparties' risk aversion, wealth, and information levels, which are often extraneous.

answer by Dagon · 2024-04-19T21:06:48.995Z · LW(p) · GW(p)

If you're giving one number, that IS your all-inclusive probability. You can't predict the direction that new evidence will change your probability (per https://www.lesswrong.com/tag/conservation-of-expected-evidence), but you CAN predict that there will be evidence with equal probability of each direction.

An example is flipping a coin twice. Before any flips, you give 0.25 to each of HH, HT, TH, and TT. But you strongly expect to get evidence (observing the flips) that will first change two of them to 0.5 and two to 0, then another update which will change one of the 0.5s to 1 and the other to 0.

Likewise with p(doom) before 2035: you strongly believe your probability will be 1 or 0 in 2036. You currently believe 6%. You may be able to identify intermediate updates, and specify the balance of probability * update that sums to zero now but becomes specific once the evidence is obtained.

I don't know any shorthand for that - it's implied by the probability given.  If you want to specify your distribution of probable future probability assignments, you can certainly do so, as long as the mean remains 6%.  "There's a 25% chance I'll update to 15% and a 75% chance of updating to 3% over the next 5 years" is a consistent prediction.
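
A quick check that the prediction in the last sentence really is consistent with the current 6%:

```python
# Predicted distribution over future credences: credence -> probability of ending up there.
future = {0.15: 0.25, 0.03: 0.75}
mean = sum(credence * prob for credence, prob in future.items())
print(f"expected future credence: {mean:.4f}")   # 0.0600, matching the current 6%
```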

comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-04-20T05:17:19.307Z · LW(p) · GW(p)

you CAN predict that there will be evidence with equal probability of each direction.

More precisely, the expected value of upwards and downwards updates should be the same; it's nonetheless possible to be very confident that you'll update in a particular direction, offset by a much larger and proportionately less likely update in the other direction.

For example, I have some chance of winning the lottery this year, not much lower than if I actually bought a ticket. I'm very confident that each day I'll give somewhat lower odds (as there's less time remaining), but being credibly informed that I've won would radically change the odds such that the expectation balances out.
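
A toy version of the lottery example with made-up numbers (one draw per week, each won with some tiny probability w): the frequent small downward updates and the rare jump to ~1 average out to today's credence.

```python
w = 1e-8            # hypothetical chance of winning any single weekly draw
draws_left = 52

p_today = 1 - (1 - w) ** draws_left              # credence of winning at some point this year

# After this week's draw is announced:
p_if_won = 1.0                                   # rare, large upward update
p_if_lost = 1 - (1 - w) ** (draws_left - 1)      # likely, small downward update

expected_tomorrow = w * p_if_won + (1 - w) * p_if_lost
print(p_today, expected_tomorrow)                # equal: the updates balance in expectation
```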

answer by sloonz · 2024-04-20T07:00:41.208Z · LW(p) · GW(p)

I think you’re trying to point towards multimodal distributions?

If you can decompose P(X) as P(X) = P(X|H1)P(H1) + ... + P(X|Hn)P(Hn), and each P(X|Hi) is a nice unimodal distribution (like a normal distribution), you end up with a multimodal distribution.
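
For instance (purely illustrative: two normal components placed at the 3% and 15% scenarios from the question, with weights chosen so the overall mean comes out at 6%):

```python
from scipy.stats import norm

# P(X) = P(X|H1)P(H1) + P(X|H2)P(H2): a two-component mixture.
p_h1, p_h2 = 0.75, 0.25
low = norm(loc=0.03, scale=0.01)    # p(doom) if the bottlenecks turn out one way
high = norm(loc=0.15, scale=0.03)   # p(doom) if they turn out the other way

def mixture_pdf(x):
    return p_h1 * low.pdf(x) + p_h2 * high.pdf(x)

print("mixture mean:", p_h1 * 0.03 + p_h2 * 0.15)          # 0.06
print("density at 0.03, 0.06, 0.15:",
      [round(float(mixture_pdf(x)), 2) for x in (0.03, 0.06, 0.15)])  # two peaks, a trough at 0.06
```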

answer by harfe · 2024-04-19T20:19:39.918Z · LW(p) · GW(p)

A lot of the probabilities we talk about are probabilities we expect to change with evidence. If we flip a coin, our p(heads) changes after we observe the result of the flipped coin. My p(rain today) changes after I look into the sky and see clouds. In my view, there is nothing special in that regard for your p(doom). Uncertainty is in the mind, not in reality.

However, how you expect your p(doom) to change depending on facts or observation is useful information and it can be useful to convey that information. Some options that come to mind:

  1. describe a model: If your p(doom) estimate is the result of a model consisting of other variables, just describing this model is useful information about your state of knowledge, even if that model is only approximate. This seems to come closest to your actual situation.

  2. describe your probability distribution of your p(doom) in 1 year (or another time frame): You could say that you think there is a 25% chance that your p(doom) in 1 year is between 10% and 30%. Or give other information about that distribution. Note: your current p(doom) should be the mean of your p(doom) in 1 year.

  3. describe your probability distribution of your p(doom) after a hypothetical month of working on a better p(doom) estimate: You could say that if you were to work hard for a month on investigating p(doom), you think there is a 25% chance that your p(doom) after that month is between 10% and 30%. This is similar to 2., but imo a bit more informative. Again, your p(doom) should be the mean of your p(doom) after a hypothetical month of investigation, even if you don't actually do that investigation.
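
A rough sketch of option 3 with made-up numbers: treat the current 6% as the mean of a Beta(0.6, 9.4) belief over some doom-relevant parameter, model the hypothetical month as 20 informative yes/no observations, and look at the distribution of the resulting posterior mean (whose average is guaranteed to stay at 6%).

```python
import numpy as np

rng = np.random.default_rng(0)

# Current belief: Beta(0.6, 9.4), mean 0.06. The month of investigation is modelled
# as 20 hypothetical yes/no observations drawn from that belief.
a, b, n_obs = 0.6, 9.4, 20
theta = rng.beta(a, b, size=200_000)           # draws from the current belief
k = rng.binomial(n_obs, theta)                 # evidence the month might produce
post_mean = (a + k) / (a + b + n_obs)          # p(doom) after the month, per draw

print("current p(doom):          ", a / (a + b))                 # 0.06
print("mean of future p(doom):   ", round(post_mean.mean(), 3))  # ~0.06, as required
print("chance it lands in 10-30%:", np.mean((post_mean >= 0.10) & (post_mean <= 0.30)))
```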

1 comment

comment by MetallicDragon (james-bowden) · 2024-04-19T20:04:23.503Z · LW(p) · GW(p)

Is that not just conditional probability? Your overall P(doom) is 6%, but your P(doom|something) is 3%, and your P(doom|something_else) is 15%. If you need something more complex, you could draw a probability tree.
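
With the numbers from the question, and assuming exactly two scenarios, the law of total probability pins down the implied weight on each branch:

```python
# P(doom) = P(doom|something) * P(something) + P(doom|something_else) * (1 - P(something))
p_doom, p_given_something, p_given_else = 0.06, 0.03, 0.15

# Solve 0.06 = 0.03 * w + 0.15 * (1 - w) for the implied weight w on "something".
w = (p_given_else - p_doom) / (p_given_else - p_given_something)
print(f"implied P(something) = {w:.2f}")   # 0.75
```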