[LINK] Common fallacies in probability (when numbers aren't used)

post by Stuart_Armstrong · 2016-01-15T08:29:28.743Z · LW · GW · Legacy · 9 comments

Too many people attempt to use logic when they should be using probabilities - in fact, they are using probabilities, they just don't mention it. Here are some of the major fallacies caused by misusing logic and probabilities in this way:

  1. "It's not certain" does not mean "It's impossible" (and vice versa).
  2. "We don't know" absolutely does not imply "It's impossible".
  3. "There is evidence against it" doesn't mean much on its own.
  4. Being impossible *in a certain model* does not mean being impossible: it changes the issue to the probability of the model (see the sketch just below this list).
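
A minimal sketch of point 4 via the law of total probability (all numbers below are invented purely for illustration): even when a model says X is impossible, your overall probability for X is governed by how much credence sits outside the model.

```python
# Point 4 as a total-probability calculation; every number here is hypothetical.
p_model = 0.95             # P(M): credence in the model itself
p_x_given_model = 0.0      # P(X | M): the model says X is impossible
p_x_given_not_model = 0.3  # P(X | not M): a guess for the case the model is wrong

p_x = p_x_given_model * p_model + p_x_given_not_model * (1 - p_model)
print(p_x)  # 0.015 -- small, but not zero; it tracks P(not M), not the model's verdict
```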

Comments sorted by top scores.

comment by buybuydandavis · 2016-01-18T04:02:06.554Z · LW(p) · GW(p)

it changes the issue to the probability of the model.

To throw out an idea I never followed up on, I think the "probability of a model" is a category error. Most models we deal with, and particularly in the context of assigning probabilities to models, are not propositions that are true or false, but maps that are more or less accurate.

I'm not sure what the implications for model testing and generalization theory would be, but I expect there would be some, and it always just irked me to see things like P(M1).

I think 4 generalizes better as

Impossible under certain assumptions does not mean impossible.

Remembering Jaynes' "background information I" is often helpful.

Replies from: AstraSequi, Lumifer
comment by AstraSequi · 2016-01-19T12:30:46.668Z · LW(p) · GW(p)

Another way to generalize 4 is

Always correct your probability estimates for the possibility that you've made an incorrect assumption.

I don't think "changes the issue" is the best way to say this, because there is always a probability that your model won't work even if it doesn't say something is impossible.

I don't know about this being a category error though. I think "map 1 is accurate with respect to X" is a valid proposition.

Replies from: buybuydandavis
comment by buybuydandavis · 2016-01-20T00:26:24.450Z · LW(p) · GW(p)

Always correct your probability estimates for the possibility that you've made an incorrect assumption.

I think that's good too - Jaynes advocated including a "something else that I didn't think of" hypothesis in your hypothesis space, to avoid accepting something strongly when all you've done is eliminate the alternatives you've considered.
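
A toy sketch of that catch-all hypothesis (the priors and likelihoods below are made up, not taken from Jaynes): even after the data effectively rules out every named alternative, the "something else" term keeps the favoured hypothesis short of certainty.

```python
# Keeping a "something else I didn't think of" hypothesis in the space.
# All priors and likelihoods are invented for illustration.
priors = {"H1": 0.4, "H2": 0.3, "H3": 0.25, "something_else": 0.05}

# Likelihood of the observed evidence under each hypothesis;
# H2 and H3 are effectively ruled out by the data.
likelihoods = {"H1": 0.8, "H2": 0.001, "H3": 0.001, "something_else": 0.2}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

print(posteriors)
# H1 ends up dominant but not certain: the catch-all keeps a few percent,
# which is the point -- eliminating named alternatives isn't proof of H1.
```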

I don't know about this being a category error though. I think "map 1 is accurate with respect to X" is a valid proposition

"Is accurate" isn't much of a proposition in itself, as it leaves out the level of accuracy.

Probability of a proposition. Propositions are true or false. Level of accuracy of a model. Models are more or less accurate.

Replies from: AstraSequi
comment by AstraSequi · 2016-01-20T02:31:35.912Z · LW(p) · GW(p)

Maybe "Is accurate enough that it doesn't change our answer by an unacceptable amount"? The level of accuracy we want depends on context.

How would you measure the accuracy of a model, other than by its probability of giving accurate answers? "Accurate" depends on what margin of error you accept, or you can define it with increasing penalties for increased divergence from reality.
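
Both notions can be written down directly. Here is a small sketch with made-up data, where the margin and the squared-error penalty are arbitrary stand-ins for whatever the context actually demands:

```python
import numpy as np

# (a) "probability of giving accurate answers" under a chosen margin of error
# (b) a penalty that grows with divergence from reality (here, squared error)
# The data and the margin are assumed for illustration.
rng = np.random.default_rng(0)
truth = rng.uniform(0, 10, size=1000)
predictions = truth + rng.normal(0, 1.0, size=1000)  # a model with noisy errors

margin = 0.5
within_margin = np.mean(np.abs(predictions - truth) <= margin)  # notion (a)
mean_squared_error = np.mean((predictions - truth) ** 2)        # notion (b)

print(f"P(|error| <= {margin}) = {within_margin:.2f}")
print(f"mean squared error    = {mean_squared_error:.2f}")
```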

comment by Lumifer · 2016-01-18T16:59:40.634Z · LW(p) · GW(p)

In the Bayesian setting where probabilities are subjective beliefs there shouldn't be too many problems with the "probability of a model" expression.

There is a related concept of "model error" which is easier to clarify. To give a simple example, imagine you're trying to model a relationship between two variables which is actually well-described by a log curve, but you are using linear regression without any transformations. Even if your sample size goes to infinity, your fit will still have a particular error component, which is known as model error.
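
A quick sketch of that example (the curve and sample size are assumed for illustration): fit a straight line to a log relationship and the residual error never goes away, no matter how much data you add.

```python
import numpy as np

# True relationship is logarithmic and noiseless; we fit a straight line anyway.
x = np.linspace(1, 100, 100_000)   # effectively "sample size to infinity"
y = np.log(x)                      # the true relationship, with no noise at all

slope, intercept = np.polyfit(x, y, deg=1)   # linear model, no transformation
residual = y - (slope * x + intercept)

print(f"RMS error of the linear fit: {np.sqrt(np.mean(residual**2)):.3f}")
# This stays well above zero however many points we add; taking log(x)
# before fitting would remove the model error entirely.
```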

What if you define "probability of a model" as 1 - (probability that replacing it with a different model will improve things)? Or, in simpler terms, that the current model is the appropriate one for the task at hand.

Replies from: buybuydandavis
comment by buybuydandavis · 2016-01-19T09:46:57.739Z · LW(p) · GW(p)

In the Bayesian setting where probabilities are subjective beliefs there shouldn't be too many problems with the "probability of a model" expression.

In Jaynes' Bayesian setting, a probability is a number you assign to a proposition. Models as generally used are not propositions.

that the current model is the appropriate one for the task at hand.

Don't like that one. For any model, you can generally conceive of an infinite number of slightly tweaked, slightly better versions, so that for any particular model P(model is the appropriate one) is 0.

What if you define "probability of a model" as 1 - (probability that replacing it with a different model will improve things)?

The probability that some "random sample" from some set of models will have improved performance?
What aggregated error function to quantify "better"? How was the domain of the model sampled for the error function?

I see an ocean of structural commitments being imposed on the problem, commitments about how you choose to think about the problem, to define a "probability of a model".

And after all that, I still don't see a proposition that you're assigning a probability to; I see a model. I could just as well define the probability of my shoe. I could have all sorts of structural commitments about the meaning of "the probability of my shoe". But in the end, that doesn't make my shoe a proposition, nor does it make the "probability of a shoe" I've just defined the same category of thing as the probability of a proposition.

The Map is not the Territory. There is no "true" map. There is no "true" model. The relevant thing for a model is how well it gets you to where you want to go.

Replies from: Lumifer
comment by Lumifer · 2016-01-19T18:23:21.936Z · LW(p) · GW(p)

It's true that models are maps. It's also true, to recall a George Box quote, that "all models are false but some are useful".

I agree that

The relevant thing for a model is how well it gets you to where you want to go

...and that, to my mind, supports the notion of the "probability of a model", or, rather, the "probability of this particular model being sufficiently good to get you to where you want to go".

I think it's a fairly practical concept -- if I'm modeling something and I am fitting several models which give me various trade-offs, it's useful for me to think in terms of, say, the probability that a linear model will be sufficient for my purposes. If I define my purposes rigorously enough, the "model is sufficient" becomes a proposition.
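
One way to cash that out, as a rough sketch (the data, the error measure and the threshold are all invented here): define "sufficient" as held-out error below some bound, and estimate how often the fitted linear model meets it under resampling.

```python
import numpy as np

# "The linear model is sufficient" := held-out RMSE below an arbitrary threshold.
# Data and threshold are made up for illustration.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + rng.normal(0, 3.0, 200)   # roughly linear with noise

threshold = 3.5
successes = 0
n_resamples = 1000

for _ in range(n_resamples):
    idx = rng.permutation(200)
    train, test = idx[:150], idx[150:]
    slope, intercept = np.polyfit(x[train], y[train], deg=1)
    rmse = np.sqrt(np.mean((y[test] - (slope * x[test] + intercept)) ** 2))
    successes += rmse < threshold

print(f"P(linear model is sufficient) ~ {successes / n_resamples:.2f}")
```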

But in a more general and more handwavy sense, I think it's fine to assign to whole maps the probability of being correct. Take a literal example, say a nautical chart. Let's say I have a chart of a coast unknown to me, and as I explore it, I find that the chart is partially correct, but partially off. It depicts this peninsula, but fails to show that rock, and the sandbar on the chart doesn't exist in reality. After a while my belief in the accuracy of the chart becomes partial, so when I go around a point and the chart says there will be shoals, I expect to actually find those shoals with a probability of X%.
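
A toy version of the chart example (features and prior chosen arbitrarily): treat each charted feature checked against reality as evidence about the chart's hit rate, and read the probability of the shoals off the updated belief.

```python
# All numbers here are invented for illustration.
alpha, beta = 1, 1                    # uniform Beta prior on the chart's hit rate
observations = [True, False, False]   # peninsula was right; the rock and sandbar weren't

for chart_was_right in observations:
    if chart_was_right:
        alpha += 1
    else:
        beta += 1

p_shoals = alpha / (alpha + beta)     # posterior mean of the hit rate
print(f"P(the charted shoals are really there) = {p_shoals:.2f}")  # 0.40 with these numbers
```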

comment by AstraSequi · 2016-01-19T12:21:40.445Z · LW(p) · GW(p)

I would add the reverse of #3: "There is evidence for it" doesn't mean much on its own either, for the same reasons.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-01-19T12:35:24.701Z · LW(p) · GW(p)

Yep. I always consider that these issues are symmetric with their negation.