Estimate Stability

post by lukeprog · 2013-04-13T18:33:23.799Z · LW · GW · Legacy · 15 comments

I've been trying to get clear on something you might call "estimate stability." Steven Kaas recently posted my question to StackExchange, but we might as well post it here as well:

I'm trying to reason about something I call "estimate stability," and I'm hoping you can tell me whether there’s some relevant technical language...
What do I mean by "estimate stability"? Consider these three different propositions:
  1. We’re 50% sure that a coin (known to be fair) will land on heads.
  2. We’re 50% sure that Matt will show up at the party.
  3. We’re 50% sure that Strong AI will be invented by 2080.
These estimates feel different. One reason they feel different is that the estimates have different degrees of "stability." In case (1) we don't expect to gain information that will change our probability estimate. But for cases (2) and (3), we may well come upon some information that causes us to adjust the estimate either up or down.
So estimate (1) is more "stable," but I'm not sure how this should be quantified. Should I think of it in terms of running a Monte Carlo simulation of what future evidence might be, and looking at something like the variance of the distribution of the resulting estimates? What happens when it’s a whole probability distribution, e.g. for the time Strong AI is invented? (Do you calculate the stability of the probability density for every year, then average the result?)
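To make that concrete, here is a rough sketch of the kind of simulation I have in mind, with a made-up Beta-Bernoulli model standing in for the real cases (all numbers are only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_spread(a, b, n_future, n_sims=10_000):
    """Spread of the posterior mean of a Beta(a, b) belief after n_future
    further Bernoulli observations simulated from current beliefs."""
    posterior_means = np.empty(n_sims)
    for i in range(n_sims):
        p = rng.beta(a, b)                    # draw a "true" rate from current beliefs
        k = rng.binomial(n_future, p)         # simulate the future evidence
        posterior_means[i] = (a + k) / (a + b + n_future)
    return a / (a + b), posterior_means.std()

# A vague 50% (little data behind it) vs. a data-backed 50% (like the fair coin):
print(estimate_spread(1, 1, n_future=20))      # wide prior  -> large spread: "unstable"
print(estimate_spread(500, 500, n_future=20))  # sharp prior -> tiny spread: "stable"
```

Both calls report the same 50% point estimate, but the spread of plausible future estimates is very different, which seems to be the sort of thing "stability" is tracking.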
Here are some other considerations that would be useful to relate more formally to considerations of estimate stability:
  • If we’re estimating some variable, having a narrow probability distribution (prior to the future evidence against which we’re assessing stability) corresponds to having a lot of data. New data, in that case, would do less to change the mean and reduce the variance.
  • There are differences in model uncertainty between the three cases. I know what model to use when predicting a coin flip. My method of predicting whether Matt will show up at a party is shakier, but I have some idea of what I’m doing. With the Strong AI case, I don’t really have any good idea of what I’m doing. Presumably model uncertainty is related to estimate stability, because the more model uncertainty we have, the more we can change our estimate by reducing our model uncertainty.
  • Another difference between the three cases is the degree to which our actions allow us to improve our estimates, increasing their stability. For example, we can reduce the uncertainty and increase the stability of our estimate about Matt by calling him, but we don’t really have any good ways to get better estimates of Strong AI timelines (other than by waiting).
  • Value-of-information affects how we should deal with delay. Estimates that are unstable in the face of evidence we expect to get in the future seem to imply higher VoI, which creates a reason to accept delays in our actions (a toy sketch of this appears after this list). Or if we can easily gather information that will make our estimates more accurate and stable, we have more reason to pay the cost of gathering that information. If we expect to forget information, or expect our future selves not to take information into account, dynamic inconsistency becomes important; this is another reason why estimates might be unstable. One possible strategy here is to precommit to have our estimates regress to the mean.
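Here is the toy sketch of the VoI point above, with made-up payoffs; the idea is that waiting for evidence is only worth anything once that evidence can move the estimate far enough to change the decision:

```python
# Toy decision: act (payoff +10 if the event happens, -5 if not) or do nothing (0).
# Current estimate: P(event) = 0.5.  A piece of future evidence will shift that
# to 0.5 + d or 0.5 - d with equal chance; larger d means a less stable estimate.
def value_of_information(d, payoff_yes=10.0, payoff_no=-5.0):
    def best_ev(p):
        return max(p * payoff_yes + (1 - p) * payoff_no, 0.0)
    ev_now = best_ev(0.5)                                        # decide before the evidence
    ev_wait = 0.5 * best_ev(0.5 + d) + 0.5 * best_ev(0.5 - d)    # decide after it
    return ev_wait - ev_now

for d in (0.0, 0.1, 0.3, 0.5):
    print(f"shift of +/-{d:.1f}: value of waiting = {value_of_information(d):.2f}")
```

With these payoffs the expected evidence is worthless until the possible shift exceeds about ±0.17 (the point where it can flip the decision), and its value grows with the size of the possible shift after that.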
Thanks for any thoughts!

15 comments


comment by Qiaochu_Yuan · 2013-04-13T18:43:13.789Z · LW(p) · GW(p)

There's a chapter of Jaynes's Probability Theory: The Logic of Science called "The A_p distribution and rule of succession" that answers this question.

Replies from: TsviBT
comment by TsviBT · 2013-04-13T19:39:05.296Z · LW(p) · GW(p)

Seconded.

comment by Squark · 2013-04-13T21:31:33.313Z · LW(p) · GW(p)

Given a random variable X with probability distribution P_0, the probability distribution P_t you will have about X after time t is a random variable in itself. It must satisfy E[P_t] = P_0. The (un)stability parameter you are looking for sounds like E[D_KL(P_t || P_0)], where D_KL stands for Kullback–Leibler divergence. The meaning of this parameter is the expected number of bits of information about X you will receive over the period t.
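As a toy illustration (my own made-up numbers, with the period t delivering a single noisy observation about X):

```python
import numpy as np

def expected_kl_bits(prior, likelihood):
    """E[D_KL(P_t || P_0)] in bits, where P_0 is the prior over a discrete X and
    P_t is the posterior after one observation E with likelihood[e][x] = P(E=e | X=x).
    This equals the mutual information between X and the evidence E."""
    prior = np.asarray(prior, dtype=float)
    likelihood = np.asarray(likelihood, dtype=float)
    p_e = likelihood @ prior                        # marginal probability of each observation
    total = 0.0
    for e, pe in enumerate(p_e):
        posterior = likelihood[e] * prior / pe      # P_t = P(X | E=e)
        total += pe * np.sum(posterior * np.log2(posterior / prior))
    return total

# X = "Matt shows up", P_0 = 0.5; E = what he says when we call him,
# assumed (arbitrarily) to match the truth 90% of the time:
print(expected_kl_bits([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]))  # ~0.53 bits: fairly unstable
# The fair coin: nothing we can observe before the flip is informative:
print(expected_kl_bits([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]]))  # 0.0 bits: perfectly stable
```

In this picture the fair coin is maximally stable because no evidence we expect to receive changes the distribution, while the party and Strong AI estimates are unstable exactly to the extent that we expect informative evidence.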

Replies from: Daniel_Burfoot, gwern
comment by Daniel_Burfoot · 2013-04-14T01:41:58.644Z · LW(p) · GW(p)

The first two sentences of this comment are a very good formal (re-)statement of the idea of Conservation of Expected Evidence.

comment by gwern · 2013-04-13T23:19:55.666Z · LW(p) · GW(p)

That's an interesting approach, but I'm not really sure it's what Luke's after. He seems to be talking about something closer to Knightian uncertainty and out-of-sample error; given a specific model of AI risk over time, I suppose you could figure out how many bits you receive per time period and calculate such a number, but I think Luke is asking a question more like 'how much confidence do I have that this model is capturing anything meaningful about the real dynamics? Are the results being driven by one particular assumption or some small unreliable set of datapoints? Is this set of predictions just overfitting?' One of his points:

There are differences in model uncertainty between the three cases. I know what model to use when predicting a coin flip. My method of predicting whether Matt will show up at a party is shakier, but I have some idea of what I’m doing. With the Strong AI case, I don’t really have any good idea of what I’m doing. Presumably model uncertainty is related to estimate stability, because the more model uncertainty we have, the more we can change our estimate by reducing our model uncertainty.

Replies from: Squark
comment by Squark · 2013-04-14T19:50:02.222Z · LW(p) · GW(p)

It's always a concern in Bayesian reasoning whether you're using a sensible prior. Theoretically you should always start with the Solomonoff prior and update from there, but implementing that in practice is difficult, to say the least. However, if we wish to stay in the realm of mathematical formalism (I think Knightian uncertainty lies outside of it by definition?), then the parameter I suggested is sensible. In particular, the relationship between model uncertainty and estimate stability is well captured by this parameter. For example, suppose you have three possible models of Strong AI development, M1, M2 and M3, and a meta-model which assigns them probabilities p1, p2 and p3. Then your probability distribution is the convex combination of the probability distributions assigned by M1, M2 and M3 with coefficients p1, p2 and p3. Now, if during time period t you expect to learn which of these models is the right one, my parameter will show the resulting "instability".
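With made-up numbers for that example (the probabilities below are purely illustrative):

```python
import numpy as np

def kl_bernoulli_bits(q, q0):
    """D_KL(Bernoulli(q) || Bernoulli(q0)) in bits."""
    return q * np.log2(q / q0) + (1 - q) * np.log2((1 - q) / (1 - q0))

model_probs   = np.array([0.1, 0.5, 0.9])   # P(Strong AI by 2080) under M1, M2, M3
model_weights = np.array([0.3, 0.4, 0.3])   # meta-model: P(M1), P(M2), P(M3)

p0 = model_weights @ model_probs            # current mixture estimate: 0.5
instability = sum(w * kl_bernoulli_bits(q, p0)
                  for w, q in zip(model_weights, model_probs))
print(p0, instability)   # 0.5 now, but ~0.32 expected bits once t reveals the true model
```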

Replies from: gwern
comment by gwern · 2013-04-14T21:06:07.808Z · LW(p) · GW(p)

Theoretically you should always start with the Solomonoff prior and update from there but implementing it in practice is difficult, to say the least.

Yes, but you can check your models in a variety of ways. You can test the results inferred from your dataset by bootstrapping or cross-validation, and see how often they change (coefficients, estimation accuracy, etc.). To step up a level, you can set parameters in your model to differing values based on hyperparameters, and see how each of the variants of the model performs on the data (and then you can bootstrap/cross-validate each of the possible models as well), and then see how sensitive your results are to specific parameters like, yes, whatever priors you were feeding in. You can have families of models, like pitting logistic regression models against random forests, and see how often they differ as another form of sensitivity (and then you can vary the hyperparameters in each model and bootstrap/cross-validate each possible model). You can have ensembles of models from various families and obviously vary which models are picked and what weights are put on them... and there my knowledge peters out.
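As a minimal sketch of just the bootstrap step (plain numpy, fabricated data; the hyperparameter, model-family, and ensemble levels stack on top of this in the same spirit):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated data: a noisy linear relationship.
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=3.0, size=50)

def slope(x, y):
    """Least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

# Bootstrap: refit on resampled datasets and see how much the answer moves around.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(x), size=len(x))   # resample the data with replacement
    boot.append(slope(x[idx], y[idx]))
boot = np.array(boot)

print(f"full-data slope: {slope(x, y):.2f}")
print(f"bootstrap sd: {boot.std():.2f}, "
      f"central 95%: ({np.percentile(boot, 2.5):.2f}, {np.percentile(boot, 97.5):.2f})")
```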

But while you still would not have come close to what a Solomonoff approach might do, you have still learned a great deal about your model's reliability in a way that I can't see as having any connection with your time and KL-related approach.

Replies from: Squark
comment by Squark · 2013-04-17T18:51:02.072Z · LW(p) · GW(p)

But while you still would not have come close to what a Solomonoff approach might do, you have still learned a great deal about your model's reliability in a way that I can't see as having any connection with your time and KL-related approach.

I think there is a connection. Namely, the methods you mentioned are possible mechanisms of a learning process, but E[D_KL(P_t || P_0)] is a quantification of the expected impact of this learning process.

Replies from: gwern
comment by gwern · 2013-04-17T20:54:23.593Z · LW(p) · GW(p)

Yes, I see what you mean - the expectation of how big the divergence between our current probability distribution and the future probability distribution will be - but this seems like a post hoc or purely descriptive approach: how do we estimate how much divergence there may be?

Having gotten estimates of future divergence, quantifying the divergence may then be useful, but it seems like putting the cart before the horse to start with your measure.

comment by NancyLebovitz · 2013-04-13T18:59:16.236Z · LW(p) · GW(p)

Thanks for working this up. I've thought for a long time that there was something fishy about the lack of error bars, and if I remember correctly, I was told that the error bars were somehow subsumed in the percentage. Maybe there should be more than one kind of error bar.

Replies from: gwern
comment by gwern · 2013-04-13T19:41:16.400Z · LW(p) · GW(p)

I was told that the error bars were somehow subsumed in the percentage.

They sort of are. In the long run, if your percentages aren't calibrated, you're doing something wrong; issues of sensitivity and robustness are one form of error among others, all subsumed under that grand ultimate rubric: if you did your arithmetic wrong, you can expect to be uncalibrated; if you overestimated the quality of your data, you can expect to be uncalibrated; if you used rigid models requiring strong assumptions that are often violated in practice, you can expect to be uncalibrated. And so on.
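For concreteness, checking calibration is just bookkeeping along these lines (a crude sketch with fabricated forecasts):

```python
import numpy as np

def calibration_table(stated, happened, edges=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Compare stated probabilities with observed frequencies, bin by bin."""
    stated, happened = np.asarray(stated), np.asarray(happened)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (stated >= lo) & ((stated < hi) | (hi == 1.0))
        if mask.any():
            print(f"said {lo:.0%}-{hi:.0%}: happened {happened[mask].mean():.0%} "
                  f"of the time over {mask.sum()} predictions")

# Fabricated forecasts whose true rate rises more slowly than the stated probability,
# i.e. a forecaster who is overconfident at the extremes:
rng = np.random.default_rng(2)
stated = rng.uniform(size=500)
happened = rng.uniform(size=500) < (0.3 + 0.4 * stated)
calibration_table(stated, happened)
```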

Maybe there should be more than one kind of error bar.

Every summary statistic is going to cause problems somewhere compared to something like a full posterior distribution. One error bar won't cover all the questions one might want to ask, and it's not clear in advance which error bars you want. (Statistics and machine learning seem to be moving towards ensembles of models and hierarchical approaches like models over models and so forth, where one can vary all the knobs in general and see how the final answers perform, but 'perform' is going to be defined differently in different places.)

comment by lukeprog · 2015-08-20T17:30:53.991Z · LW(p) · GW(p)

I guess subjective logic is also trying to handle this kind of thing. From Jøsang's book draft:

Subjective logic is a type of probabilistic logic that allows probability values to be expressed with degrees of uncertainty. The idea of probabilistic logic is to combine the strengths of logic and probability calculus, meaning that it has binary logic’s capacity to express structured argument models, and it has the power of probabilities to express degrees of truth of those arguments. The idea of subjective logic is to extend probabilistic logic by also expressing uncertainty about the probability values themselves, meaning that it is possible to reason with argument models in presence of uncertain or incomplete evidence.

Though maybe this particular formal system has really undesirable properties, I don't know.

comment by private_messaging · 2013-04-14T05:57:05.315Z · LW(p) · GW(p)

In the case of the coin (I am assuming a coin that makes a fairly large number of bounces when it lands, so that it can't easily be biased with trick tossing), you have high confidence in a process (the coin bouncing around) which maps the coin's initial position and orientation into its final orientation in such a way that the final distribution is almost independent of your prior for the coin's orientation and position. I.e. you have a highly objective probability of 0.5, in the sense that it does not depend on a subjective, arbitrary quantification of the unknown.

If you were to assign a probability of 0.7 instead, you'd have to entirely change the way physics works for the coin, or adopt a very ridiculous prior over the coin's initial orientation, one which involves the coin's own motion as part of the prior.

Meanwhile, in the case of "We’re 50% sure that Strong AI will be invented by 2080," the number is entirely pulled out of your ass. You could have pulled 10%, or 90%, depending on what suits you best, without changing anything about your knowledge of the world. Numbers pulled out of your ass have an interesting, empirically verifiable property: maximization of their products does not tend to actually result in a win, irrespective of the labels that you give those numbers.

comment by lukeprog · 2014-01-12T18:16:49.814Z · LW(p) · GW(p)

There's also Skyrms' 1980 paper "Higher order degrees of belief" and Gaifman's 1988 paper "A theory of higher order probabilities."