Theories That Can Explain Everything

post by Chris_Leong · 2020-01-02T02:12:28.772Z · LW · GW · 6 comments

It's generally accepted here that theories are valuable to the extent that they provide testable predictions. Being falsifiable means that incorrect theories can be discarded and replaced with theories that better model reality (see Making Beliefs Pay Rent [LW · GW]). Unfortunately, reality doesn't always play nice: we will sometimes have excellent theoretical reasons for believing a theory, yet that theory will have far too many degrees of freedom to be easily falsifiable.

The prototypical examples are the kinds of hypotheses produced by evolutionary psychology. Clearly all aspects of humanity have been shaped by evolution, and the idea that our behaviour is an exception would be truly astounding. In fact, I'd say that it is something of an anti-prediction.

But what use is a theory that doesn't make any solid predictions? Firstly, believing in such a theory will normally have a significant impact on your priors, even if no single observation would provide strong evidence of its falsehood. Secondly, if the existing viable theories all claim A and you propose a viable theory that would be compatible with A or B, then that would make B viable again. And sometimes that can be a worthy contribution in and of itself. Indeed, a funny situation can arise where people nominally reject a theory for not sufficiently constraining expectations, while really opposing it because of how people's expectations would adjust if the theory were true.
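
To make that concrete, here is a toy sketch (the theory labels T1 and T2 and all the numbers are invented purely for illustration): if you average over the theories you take seriously, then giving any credence to a theory that permits B pulls the overall probability of B away from zero, even though that theory on its own barely constrains expectations.

```python
# Toy sketch with made-up numbers: admitting a theory that is compatible
# with either A or B makes B viable again, even though that theory on its
# own barely constrains expectations.

p_B_given_T1 = 0.01   # T1: the existing theories, which all point to A
p_B_given_T2 = 0.50   # T2: the proposed theory, compatible with A or B

def p_B(credence_in_T2):
    """Overall probability of B, mixing over the two candidate theories."""
    return (1 - credence_in_T2) * p_B_given_T1 + credence_in_T2 * p_B_given_T2

print(p_B(0.0))  # 0.01  -- before taking T2 seriously, B looks all but ruled out
print(p_B(0.3))  # 0.157 -- 30% credence in T2 already makes B a live option
```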

See also: Building Intuitions on Non-Empirical Arguments in Science

6 comments

comment by gjm · 2020-01-02T13:40:16.310Z · LW(p) · GW(p)

Some "theories that can explain everything" may actually have the property that they can explain any individual observation but constrain what combinations of observations we can observe.

Consider, for instance, a vague but strongly adaptationist version of evolutionary psychology: it says that all features of human thought and behaviour have their origins in evolutionary advantage. Pretty much any specific feature of thought or behaviour can surely be given some sort of just-so-story explanation that will fit this theory, but it might be that feature 1 and feature 2 require mutually incompatible just-so stories, in which case the theory will forbid them from occurring together; or at least that no one is able to come up with a plausibly-compatible pair of stories, in which case the theory will predict that features 1 and 2 are unlikely to occur together.

Arguably all theories are actually somewhat like this.

comment by Shmi (shminux) · 2020-01-02T03:12:28.131Z · LW(p) · GW(p)
if the existing viable theories all claim A and you propose a viable theory that would be compatible with A or B, then that would make B viable again

Doesn't it mean that this theory makes a prediction related to B? Maybe you want to give an example of what you mean.

Replies from: Chris_Leong
comment by Chris_Leong · 2020-01-02T12:24:59.428Z · LW(p) · GW(p)

Saying that anything is possible is a prediction, but a trivial one. Nonetheless, it changes expectations if previously only A seemed possible.

comment by Dagon · 2020-01-03T16:49:34.933Z · LW(p) · GW(p)

[note: not sure where I saw this concept, and I haven't explored it enough to know if it's useful]

Some things called "theories" aren't predictive, but are explanatory. Such models may be useful for organizing your beliefs, rather than for updating your beliefs.

Replies from: Chris_Leong
comment by Chris_Leong · 2020-01-03T23:54:27.839Z · LW(p) · GW(p)

Interesting idea. What is the use of organising beliefs without updating them?

Replies from: Dagon
comment by Dagon · 2020-01-04T21:10:34.555Z · LW(p) · GW(p)

The idea would be that these kinds of frameworks can improve the salience or accessibility of information that is used when evaluating or executing more predictive models. Human brains can't actually access all the details of all the evidence they have experienced, so some indexing is necessary to help determine which details remain available.

Thinking more about it, though, this may be just a restatement of what ALL models do - they're not evidence in themselves, they're filters on evidence to make the quantity manageable and the weightings useful.