# In plain English - in what ways are Bayes' Rule and Popperian falsificationism conflicting epistemologies?

post by Sandro P. · 2021-04-02T21:21:23.865Z · LW · GW

This is a question post.


Bayes' Rule dictates how much credence you should put in a given proposition in light of prior conditions/evidence. It answers the question *How probable is this proposition?*

Popperian falsificationism dictates whether a given proposition, construed as a theory, is epistemically justifiable, if only tentatively. But it doesn't say anything about how much credence you should put in an unfalsified theory (right?). It answers the question *Is this proposition demonstrably false (and if not, let's hold on to it, for now)?*

I gather that the tension has something to do with inductive reasoning/generalizing, which Popperians reject as not only false, but imaginary. But I don't see where inductive reasoning even comes into Bayes' Rule. In Arbital's waterfall example, it just is the case that "the bottom pool has 3 parts of red water to 4 parts of blue water" - which means that there just is a roughly 43% probability that a randomly sampled water molecule from that pool is red. How could a Popperian disagree?
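In code, the waterfall claim is nothing more than a ratio - a minimal sketch using the numbers from the example:

```python
# The bottom pool from Arbital's waterfall example:
# 3 parts red water to 4 parts blue water.
red_parts = 3
blue_parts = 4

# The probability that a randomly sampled molecule is red
# is just the fraction of the pool that is red.
p_red = red_parts / (red_parts + blue_parts)
print(f"P(red) = {p_red:.3f}")  # 3/7 ≈ 0.429, the "roughly 43%" above
```

No generalization from past samples is involved; the probability is read directly off the composition of the pool.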

What am I missing?

Thanks!

## Answers

## Answer by Viliam

For the record, the popular interpretation of "Popperian falsificationism" is *not* what Karl Popper actually believed. (According to Wikipedia, he did not even like the word "falsificationism" and preferred "critical rationalism" instead.) What most people know as "Popperian falsificationism" is a simplification optimized for memetic power, and it is quite simple to disprove. Then we can play *motte and bailey* with it: the *motte* being the set of books Karl Popper actually wrote, and the *bailey* being the argument of a clever internet wannabe meta-scientist about how this or that isn't scientific because it does not follow some narrow definition of falsifiability.

I have not read Popper's books, therefore I am only commenting here on the traditional internet usage of "Popperian falsificationism".

The good part is noticing that beliefs should pay rent in anticipated experiences [LW · GW]. A theory that explains everything predicts nothing. In the "Popperian" version, beliefs pay rent by saying which states of the world are *impossible*. As long as they are right, you keep them. The moment they are wrong *once*, you mercilessly kick them out.

An obvious problem: How does this work with *probabilistic* beliefs? Suppose we flip a fair coin, and one person believes there is a 50% chance of heads/tails, and the other person believes it is 99% heads and 1% tails. How exactly is each of these hypotheses falsifiable? How many times exactly do I have to flip the coin, and what results exactly do I need to get, in order to declare each of the hypotheses falsified? Or are they both unfalsifiable, and therefore both equally unscientific, neither of them better than the other?
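For contrast, here is a minimal Bayesian sketch (priors and flip sequences are hypothetical) of how the same data gets handled without any falsification threshold - the posterior odds just move, and neither hypothesis is ever declared dead:

```python
# H1: P(heads) = 0.5 (fair coin), H2: P(heads) = 0.99.
# Finite data never strictly falsifies either; it only shifts credence.
def posterior_h1(flips, prior_h1=0.5):
    """Posterior probability of H1 after a sequence of flips ('H'/'T')."""
    p_h1, p_h2 = prior_h1, 1 - prior_h1
    for f in flips:
        l1 = 0.5                         # H1 assigns 0.5 to either outcome
        l2 = 0.99 if f == "H" else 0.01  # H2 strongly expects heads
        p_h1, p_h2 = p_h1 * l1, p_h2 * l2
    return p_h1 / (p_h1 + p_h2)

# A single tails already shifts credence strongly toward the fair coin:
print(posterior_h1("T"))      # ≈ 0.980
# Five heads in a row favor H2, but H1 is merely improbable, not "refuted":
print(posterior_h1("HHHHH"))  # ≈ 0.032
```

The question "how many flips until falsification?" simply never arises; every finite sequence yields a graded answer instead.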

That is, "Popperianism" feels a bit like Bayesianism for mathematically challenged people. Its probability theory only contains three values: yes, maybe, no. Assigning "yes" to any scientific hypothesis is a taboo (Bayesians agree [LW · GW]), therefore we are left with "maybe" and "no", the latter for falsified hypotheses, the former for everything else. And we need to set the rules of the social game so that the "maybe" of science does *not* become completely worthless (i.e. equivalent to any other "maybe").

This is confusing again. Suppose you have two competing hypotheses, such as "there is a finite number of primes" and "there is an infinite number of primes". To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved. Wait, what?! How exactly would you falsify one of them *without* automatically proving the other?

I suppose the answer by Popper might be a combination of the following:

- mathematics is a special case, because it is *not* about the real world -- that is, whenever we apply math to the real world, we have two problems: whether the math itself is correct, and whether we chose the right model for the real world, and the concept of "falsifiability" only applies to the latter;
- there is always a chance that we *left out something* -- for example, it *might* turn out that the concept of "primes" or "infinity" is somehow ill-defined (self-contradictory or arbitrary or whatever), therefore one hypothesis being wrong does not necessarily imply the other being right.

Yet another problem is that scientific hypotheses actually get disproved all the time. Like, I am pretty sure there were at least a dozen popular-science articles about experimental refutations of the theory of relativity upvoted to the front page of Hacker News. The proper reaction is to ignore the news, and wait a few days until someone provides an explanation of why the experiment was set up wrong, or the numbers were calculated incorrectly. That is business as usual for a scientist, but it would pose a philosophical problem for a "Popperian": how do you justify believing in the scientific result during the interval *between* when the experiment and when its refutation were published? How long is the interval allowed to be: a day? a month? a century?

The underlying problem is that experimental outcomes are actually *not* clearly separated from hypotheses. Like, you get the raw data ("the machine X beeped today at 14:09"), but you need to combine it with some assumptions in order to get the conclusion ("therefore, the signal travelled faster than light, and the theory of relativity is wrong"). So the end result is that "data + some assumptions" disagree with "other assumptions". There are assumptions on both sides; either of them could be wrong; there is no such thing as pure falsification.
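As a toy illustration (all numbers invented), Bayes can split the blame between the theory and the auxiliary assumptions, which pure falsification cannot:

```python
# Hypotheses: theory correct (T) vs wrong (~T); setup sound (S) vs flawed (~S).
p_T, p_S = 0.999, 0.99  # strong (made-up) priors for the theory and the lab

# P(anomalous "faster than light" reading | hypothesis combination):
likelihood = {
    ("T", "S"): 1e-6,    # both fine: the reading is nearly impossible
    ("T", "~S"): 0.5,    # a flawed setup can easily produce the reading
    ("~T", "S"): 0.9,    # theory wrong, sound setup: reading expected
    ("~T", "~S"): 0.5,
}
joint = {
    ("T", "S"): p_T * p_S,
    ("T", "~S"): p_T * (1 - p_S),
    ("~T", "S"): (1 - p_T) * p_S,
    ("~T", "~S"): (1 - p_T) * (1 - p_S),
}
post = {k: joint[k] * likelihood[k] for k in joint}
z = sum(post.values())
p_theory_wrong = sum(v for k, v in post.items() if k[0] == "~T") / z
print(f"P(theory wrong | anomaly) = {p_theory_wrong:.3f}")  # ≈ 0.152
```

With these priors, most of the posterior blame lands on the experimental setup, which matches the "wait a few days" reaction described above.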

Sorry, I got carried away...

## ↑ comment by TAG · 2021-04-03T13:53:23.030Z · LW(p) · GW(p)

> This is confusing again. Suppose you have two competing hypotheses, such as "there is a finite number of primes" and "there is an infinite number of primes". To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved.

It's been known for two thousand years that there are infinitely many primes.
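Euclid's classic argument, for reference: suppose the primes \(p_1, \dots, p_n\) were a complete list, and consider

```latex
\[
  N = \prod_{i=1}^{n} p_i + 1 .
\]
```

No \(p_i\) divides \(N\) (each leaves remainder 1), so any prime factor of \(N\) lies outside the list, contradicting the assumption that the list was complete.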

## ↑ comment by Sandro P. · 2021-04-05T23:39:54.345Z · LW(p) · GW(p)

Thanks for your generous reply. Maybe I understand the *bailey* and would need to acquaint myself with the *motte* to begin to understand what is meant by those who say it's being 'dethroned by the Bayesian revolution'.

## ↑ comment by Viliam · 2021-04-06T19:12:43.237Z · LW(p) · GW(p)

Sorry for jargon. But it's a useful concept, so here is the explanation:

> A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of pleasantly habitable land (the Bailey), which in turn is encompassed by some sort of a barrier, such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible, and so neither is the Bailey. Rather, one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.
>
> The writers of the paper compare this to a form of medieval castle, where there would be a field of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich. If an enemy approached, you would retreat to the motte and rain down arrows on the enemy until they gave up and went away. Then you would go back to the bailey, which is the place you wanted to be all along.
>
> So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you retreat to an obvious, uncontroversial statement, and say that was what you meant all along, so you're clearly right and they're silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

-- All In All, Another Brick In The Motte

The latter also contains a few examples.

## Answer by Zac Hatfield Dodds

Considered *as an epistemology*, I don't think you're missing anything.

To reconstruct Popperian falsification from Bayes, see that if you observe something that some hypothesis gave probability ~0 ("impossible"), that hypothesis is almost certainly false - it's been "falsified" by the evidence. With a large enough hypothesis space you can recover Bayes from Popper - that's Solomonoff Induction [LW · GW] - but you'd never want to in practice.
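A minimal sketch of that reconstruction (all numbers hypothetical): a hypothesis that assigned probability ~0 to the observed outcome gets crushed by a single Bayesian update.

```python
def update(prior, likelihood, p_evidence):
    """Bayes' Rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / p_evidence

prior = 0.9          # we were quite confident in H
likelihood = 1e-9    # but H said the observed outcome was "impossible"
p_evidence = 0.2     # the outcome itself was not that surprising overall

posterior = update(prior, likelihood, p_evidence)
print(posterior)     # ≈ 4.5e-9: "falsified" in all but name
```

Popperian rejection falls out as the limiting case where the likelihood is exactly zero; Bayes just handles the in-between cases too.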

For more about science - as institution, culture, discipline, human activity, etc. - and ideal Bayesian rationality, see the Science and Rationality sequence [? · GW]. I was going to single out particular essays, but honestly the whole sequence is probably relevant!

## Answer by tkpwaeub

I'm not sure Bayes' Rule *dictates* anything beyond its plain mathematical content, which isn't terribly controversial:
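That content, for reference, is just the standard identity relating conditional probabilities:

```latex
\[
  P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
\]
```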

When people speak of *Bayesian inference*, they are talking about a mode of reasoning that *uses* Bayes' Rule a lot, but it's mainly motivated by a different "ontology" of probability.

As to whether Bayesian inference and Popperian falsificationism are in conflict - I'd imagine that depends very much on the subject of investigation (does it involve a need to make immediate decisions based on limited information?) and the temperaments of the human beings trying to reach a consensus.

## ↑ comment by Charlie Steiner · 2021-04-08T02:34:03.018Z · LW(p) · GW(p)

Hm. I don't think people who talk about "Bayesianism" in the broad sense are using a different ontology of probability than most people. I think what makes "Bayesians" different is their willingness to use probability *at all*, rather than some other conception of knowledge.

Like, consider the weird world of the "justified true belief" definition of knowledge and the mountains of philosophers trying to patch up its leaks. Or the FDA's stance on whether covid vaccines work in children. It's not that these people would deny the proof of Bayes' theorem - it's just that they wouldn't think to apply it here, because they aren't thinking of the status of some claim as being a probability.

## ↑ comment by TAG · 2021-04-08T14:57:05.994Z · LW(p) · GW(p)

> Like, consider the weird world of the "justified true belief" definition of knowledge and the mountains of philosophers trying to patch up its leaks.

What were the major problems with JTB before Gettier? There were problems with equating knowledge with certainty... but then pretty much everyone moved to fallibilism. Without abandoning JTB. So JTB and probabilism, broadly defined, aren't incompatible. There's nothing about justification, or truth, or belief that can't come in degrees. And regarding all three of them as non-binary is a richer model than just regarding belief as non-binary.

## ↑ comment by Charlie Steiner · 2021-04-08T15:37:18.565Z · LW(p) · GW(p)

I'm not really sure about the history. A quick search turns up Russell making similar arguments at the turn of the century, but I doubt there was the sort of boom there was after Gettier - maybe because probability wasn't developed enough to serve as an alternative ontology.

## ↑ comment by TAG · 2021-04-08T15:59:28.400Z · LW(p) · GW(p)

It remains the case that JTB isn't that bad, and Bayes isn't that good a substitute.

## ↑ comment by Charlie Steiner · 2021-04-08T17:01:49.463Z · LW(p) · GW(p)

"Classic flavor" JTB is indeed that bad. JTB shifted to a probabilistic ontology is either Bayesian, wrong, or answering a different question altogether.

## No comments
