Some limitations of reductionism about epistemology

post by Richard_Ngo (ricraz) · 2022-02-14T08:50:02.419Z · LW · GW · 7 comments

This post is largely based on a lightning talk I gave at a Genesis event on metacognition, with some editing to clarify and expand on the arguments.

Reductionism is the strategy of breaking things down into smaller pieces, then trying to understand those smaller pieces and how they fit together into larger pieces. It’s been an excellent strategy for physics, for most of science, for most of human knowledge. But my claim is that, when the thing we're trying to understand is how to think, being overly reductionist has often led people astray, particularly in academic epistemology.

I’ll give three examples of what goes wrong when you try to be reductionist about epistemology. First example: we often think of knowledge in terms of sentences or propositions with definite truth-values - for example, “my car is parked on the street outside”. Philosophers have debated extensively what it means to know that such a claim is true; I think the best answer is the bayesian one, where we assign credences to propositions based on our evidence. Let’s say I have 90% credence that my car is parked on the street outside, based on leaving it there earlier - and let’s assume it is in fact still there. Then whether we count this as “knowledge” or not is mainly a question about what threshold we should use for the definition of “knows” (one which will probably change significantly depending on the context).
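To make that picture concrete, here’s a minimal sketch of the setup (the likelihoods and the 0.95 threshold are illustrative numbers I’ve made up, not claims about the right values):

```python
# A minimal sketch of the bayesian picture above. The likelihoods and the
# knowledge threshold are illustrative assumptions, not the "right" values.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior credence in a proposition after one piece of evidence."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Proposition: "my car is parked on the street outside."
prior = 0.9  # credence from having left it there earlier
# Evidence: I glance out the window and see something car-shaped in that spot.
posterior = bayes_update(prior, likelihood_if_true=0.8, likelihood_if_false=0.2)

def counts_as_knowledge(credence, threshold=0.95):
    """Whether a credence counts as 'knowledge' is just a threshold choice,
    and the appropriate threshold varies with context."""
    return credence >= threshold

print(f"posterior credence: {posterior:.3f}")                     # ~0.973
print(f"counts as knowledge: {counts_as_knowledge(posterior)}")   # True at this threshold
```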

But although bayesianism makes the notion of knowledge less binary, it still relies too much on a binary notion of truth and falsehood. To elaborate, let’s focus on philosophy of science for a bit. Could someone give me a probability estimate that Darwin’s theory of evolution is true? [Audience answer: 97%] Okay, but what if I told you that Darwin didn’t know anything about genetics, or the actual mechanisms by which traits are passed down? So I think that 97% points in the right direction, but it’s less that the theory has a 97% chance of being totally true, and more like a 97% chance of being something like 97% true. If you break down everything Darwin said into a list of propositions - that offspring inherit traits from their parents, and a hundred others - then almost certainly at least one of them is false. That doesn’t change the fact that overall, the theory is very close to true (even though we really have no idea how to measure or quantify that closeness).
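To see how those two claims come apart, here’s a toy calculation (the numbers - 100 propositions, 99% credence in each, independence between them - are made up purely for illustration, not estimates about Darwin’s actual theory):

```python
from math import comb

# Toy model: a "theory" as a bundle of propositions. All numbers here are
# illustrative assumptions (independence included), not estimates about Darwin.
n_props = 100   # propositions the theory breaks down into
p_each = 0.99   # credence that any single proposition is true

# Binary question: is the *entire* theory true, i.e. every proposition true?
p_all_true = p_each ** n_props
print(f"P(every proposition true)  = {p_all_true:.2f}")    # ~0.37

# Graded question: how much of the theory is true?
expected_fraction = p_each
print(f"Expected fraction true     = {expected_fraction:.2f}")   # 0.99

# Probability that at least 97 of the 100 propositions are true
# (i.e. at most 3 are false), assuming independence:
p_mostly_true = sum(
    comb(n_props, k) * (1 - p_each) ** k * p_each ** (n_props - k)
    for k in range(4)
)
print(f"P(at least 97% of it true) = {p_mostly_true:.2f}")   # ~0.98
```

On the binary reading the theory is probably not (exactly) true; on the graded reading it’s overwhelmingly likely to be almost entirely true - which is the distinction the binary framing loses.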

I don’t think this is a particularly controversial or novel claim. But it’s surprising that standard accounts of bayesianism don’t even try to account for approximate truth. And I think that’s because people have often been very reductionist in trying to understand knowledge by looking at the simplest individual cases: single propositions with few ambiguities or edge cases. By contrast, when you start looking into philosophy of science, and how theories like Newtonian gravity can be very powerful and accurate approximations to an underlying truth that looks very different, the binary notion of truth and falsehood becomes much less relevant.

Second example: Hume’s problem of induction. Say you’re playing billiards, and you hit a ball towards another ball. You expect them to bounce off each other. But how do you know that they won’t pass straight through each other, or both shoot through the roof? The standard answer: we’ve seen this happen many times before, and we expect that things will stay roughly the same. But Hume points out that this falls short of a deductive argument; it’s just an extrapolation. Since then, philosophers have debated the problem extensively. But they’ve done so in a reductionist way which focuses on the wrong things. The question of whether an individual ball will bounce off another ball is actually a question about our whole system of knowledge: I believe the balls will bounce off each other because I believe they’re made out of atoms, and I have some beliefs about how atoms repel each other; I believe the balls won’t shoot through the roof due to my beliefs about gravity. To imagine the balls not bouncing off each other, you have to imagine a whole revolution in our scientific understanding.

Now, Hume could raise the same objection in response: why can’t we imagine that physics has a special exception in this one case, or maybe that the fundamental constants fluctuate over time? If you push the skepticism that far, I don’t think we have any bulletproof response to it - but that’s true for basically all types of skepticism. Nevertheless, thinking about induction in relation to models of the wider world, rather than individual regularities, is a significant step forward. For example, it clears up Nelson Goodman’s confusion about his New Riddle of Induction. Broadly speaking, the New Riddle asks: why shouldn’t we do induction on weird “gerrymandered” concepts instead of our standard ones? For any individual concept, that’s hard to answer - but when you start to think in a more systematic way, it becomes clearer that trying to create a model of the world in terms of gerrymandered concepts is hugely complex.
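As a toy illustration of that last point, here’s (roughly) Goodman’s standard “grue” example in code - the cutoff date is arbitrary, and the point is only to show where the hidden complexity of a gerrymandered predicate lives:

```python
# Goodman's "grue" (roughly): green if examined before some cutoff time T, blue after.
# The cutoff year is an arbitrary placeholder; what matters is where the extra
# structure has to be smuggled in.

T = 2030

def green(color):
    return color == "green"

def blue(color):
    return color == "blue"

def grue(color, year):
    # "grue" can only be defined by routing through the standard predicates
    # plus an explicit time dependence...
    return green(color) if year < T else blue(color)

def bleen(color, year):
    # ...and its partner "bleen" needs the same cutoff, mirrored.
    return blue(color) if year < T else green(color)

# The regularity "emeralds are green" is one time-independent law. Restated in
# gerrymandered terms it becomes "emeralds are grue before T and bleen after T":
# a world-model built from grue/bleen as primitives has to carry the cutoff
# around everywhere, which is the sense in which it is hugely more complex.
```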

Third example: in the history of AI, one of the big problems people have faced is the problem of symbol grounding: what does it mean for a representation inside my AI to correspond to the real world? What does it mean for an AI to have a concept of a car - what makes the internal variable in my AI map to cars in the real world? Another example comes from neuroscience - you may have heard of Jennifer Aniston neurons, which fire in response to a single specific person, across a range of modalities. How does this symbolic representation in your brain relate to the real world?

The history of AI is the history of people trying to solve this from the ground up: start with a few concepts, add some more, branch out, do a search through them, and so on. This research program, known as symbolic AI, failed pretty badly. And we can see why when we think more holistically. The reason that a neuron in my brain represents my grandmother has nothing to do with that neuron in isolation: it’s that it’s connected to my arms, which make me reach out and hug her when I see her; to the speech centers in my brain, which remind me of her name when I talk about her; and to the rest of my brain, which brings up memories when I think of her. These aren’t things you can figure out by looking at the individual case, nor things you can design into the system on a step-by-step basis, as AI researchers used to try to do.

So these are three cases where, I claim, people have been reductionist about epistemology when they should instead have taken a much more systems-focused approach.

7 comments

Comments sorted by top scores.

comment by Rohin Shah (rohinmshah) · 2022-02-14T09:23:23.895Z · LW(p) · GW(p)

Huh, that isn't at all what I would mean by "reductionist epistemology". For me it would be something like "explain complicated phenomena in terms of simpler, law-based parts". (E.g. explain motion of objects through billiard ball physics; explain characteristics and behavior of organisms through the theory of evolution, etc.) Looking at the simplest individual cases can be a recipe for success at reductionist epistemology, but as you point out, often it is not.

For your first example, it still seems like Bayesianism is better than anything else out there -- the fact that we agree that there is something to the concept of "97% true" just means that there is still more to be done.

For your second example, I would say it's a success of reductionist epistemology: the best answer I know of to it is Solomonoff induction, which posits the existence of hypotheses and then uses Bayesian updating. (Or perhaps you prefer logical induction, which involves a set of traders and finding prices that are inexploitable.) There's plenty of reasons to be unsatisfied with Solomonoff induction, but I like it more than anything else out there, and it seems like a central example of reductionist epistemology.
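For concreteness, here's a toy sketch of that picture - a tiny hand-picked hypothesis class standing in for the space of all programs, with made-up "program lengths" and predictors, since real Solomonoff induction is uncomputable:

```python
# Toy sketch: Bayesian updating over length-weighted hypotheses. The hypothesis
# class, "program lengths", and predictors below are all made up for illustration;
# real Solomonoff induction ranges over every program and is uncomputable.

hypotheses = {
    # name: (program length in bits, predictor giving P(next bit = 1 | history))
    "mostly_ones":  (3, lambda history: 0.9),
    "mostly_zeros": (3, lambda history: 0.1),
    "alternating":  (5, lambda history: 0.9 if len(history) % 2 == 0 else 0.1),
}

def posterior(observed_bits):
    """2^-length prior on each hypothesis, multiplied by its likelihood on the data."""
    weights = {}
    for name, (length, predict) in hypotheses.items():
        weight = 2.0 ** -length
        history = ""
        for bit in observed_bits:
            p_one = predict(history)
            weight *= p_one if bit == "1" else 1 - p_one
            history += bit
        weights[name] = weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior("101010"))  # the alternating hypothesis ends up dominating
```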

I agree that the third example is a reasonable attempt at doing reductionist epistemology, and I'd say it didn't work out. I don't think it's quite as obvious ex ante as you seem to think that it was destined to fail. But mostly I just want to say that of course some attempts to do reductionist epistemology are going to be wrongheaded and fail; reductionist epistemology is much more the claim that whatever succeeds will look like "explaining complicated phenomena in terms of simpler, law-based parts". (This is similar to how Science says much about how hypotheses can be falsified, but doesn't say much about how to find the correct hypotheses in the first place.)

I also like what little of systems theory I've read, and it seems quite compatible with (my version of) reductionist epistemology. Systems theory talks about various "levels of organization", where the concepts that make sense of each level are very different from each other, and high-level concepts are "built out of" lower-level concepts. I think systems theory is usually concerned with the case where the low-level concepts + laws are known but the high-level ones are not (e.g. chaos theory) or where both levels are somewhat known but it's not clear how they relate (e.g. ecosystems, organizations), whereas reductionist epistemology is concerned with the case where we have a confusing set of observations, and says "let's assume our current concepts are high-level concepts, and invent a set of low-level concepts + rules that explain the high-level concepts" (e.g. atoms invented to explain various chemical reactions, genes invented to explain the Mendelian pattern, Bayesianism invented to explain various aspects of "good reasoning").

Replies from: ricraz, gworley
comment by Richard_Ngo (ricraz) · 2022-02-15T03:39:37.001Z · LW(p) · GW(p)

the fact that we agree that there is something to the concept of "97% true" just means that there is still more to be done

My point is, specifically, that being overly reductionist has made it harder for people to do that work, because they keep focusing on atomic propositions, about which claims like "97% true" are much less natural.

For your second example, I would say it's a success of reductionist epistemology

In this case, Solomonoff induction is less reductionist than the alternative, because it postulates hypotheses over the whole world (aka things like laws of physics), rather than individual claims about it (like "these billiard balls will collide").

I don't think it's quite as obvious ex ante as you seem to think that it was destined to fail

Oh yeah, I don't think it was obvious ex ante. But insofar as it seems like reductionism about epistemology fails more often than reductionism about other things, that seems useful to know.

reductionist epistemology is concerned with the case where we have a confusing set of observations, and says "let's assume our current concepts are high-level concepts, and invent a set of low-level concepts + rules that explain the high-level concepts" (e.g. atoms invented to explain various chemical reactions, genes invented to explain the Mendelian pattern, Bayesianism invented to explain various aspects of "good reasoning").

In hindsight I should have said "reductionism about epistemology", since I'm only talking about applying reductionism to epistemology itself, not the epistemological strategy of applying reductionism to some other domain. I've changed the title to clarify, as well as talking about "some limitations" of it rather than being against the thing overall.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2022-02-15T12:58:06.589Z · LW(p) · GW(p)

Ah, I'm much more on board with "reductionism about epistemology" having had limited success, that makes sense.

comment by Gordon Seidoh Worley (gworley) · 2022-02-15T00:17:14.814Z · LW(p) · GW(p)

This is a classic problem. "Reductionism" means several different related things in philosophy.

Replies from: TAG
comment by TAG · 2022-02-15T00:42:03.467Z · LW(p) · GW(p)

The Oxford Companion to Philosophy calls reductionism "one of the most used and abused terms in the philosophical lexicon" and suggests a three-part division:

Ontological reductionism: a belief that the whole of reality consists of a minimal number of parts.

Methodological reductionism: the scientific attempt to provide explanation in terms of ever smaller entities.

Theory reductionism: the suggestion that a newer theory does not replace or absorb an older one, but reduces it to more basic terms. Theory reduction itself is divisible into three parts: translation, derivation and explanation. -- WP

comment by Leo P. · 2022-02-15T13:34:53.550Z · LW(p) · GW(p)

But although bayesianism makes the notion of knowledge less binary, it still relies too much on a binary notion of truth and falsehood. To elaborate, let’s focus on philosophy of science for a bit. Could someone give me a probability estimate that Darwin’s theory of evolution is true?

What do you mean by that question? Because the way I understand it, the probability is "zero": the probability that, in the vast hypothesis space, Darwin's theory of evolution is the one that's true, and not some slightly modified variant, is completely negligible. My main problem is that "is theory X true?" is usually a question which does not carry any meaning. You can't answer it in a vacuum without specifying against which other theories you're "testing" it (or here, asking the question).

If I understand correctly, what you're saying with the "97% chance of being 97% true" is this: the probability is 97% that the true theory lies within some region of the hypothesis space, where that region consists of theories sharing 97% of the properties of "Darwin's point" (whatever that may mean). Am I understanding this correctly?

comment by Luke Stebbing (LukeStebbing) · 2022-02-14T13:17:53.363Z · LW(p) · GW(p)

There’s a Bayesian-adjacent notion of closeness to the truth: observations narrow down the set of possible worlds, and two hypotheses that heavily overlap in the worlds they leave possible are “close”.
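A toy version of that notion, just to pin it down (the world space and the two hypotheses below are arbitrary placeholders):

```python
# Toy version of "closeness as overlap in possible worlds". The space of worlds
# and the two hypotheses are arbitrary placeholders.

worlds = set(range(1000))  # stand-in for a space of possible worlds

hypothesis_a = {w for w in worlds if w % 2 == 0}                   # worlds A allows
hypothesis_b = {w for w in worlds if w % 2 == 0 and w % 100 != 0}  # A, minus rare exceptions

def closeness(h1, h2):
    """Overlap (Jaccard index) between the worlds two hypotheses leave possible."""
    return len(h1 & h2) / len(h1 | h2)

print(closeness(hypothesis_a, hypothesis_b))  # 0.98: heavily overlapping, so "close"
```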

But the underlying notion of closeness to the truth is underdetermined. If we were relativistic beings, we’d privilege a different part of the observation set when comparing hypotheses, and Newtonian gravity wouldn’t feel close to the truth; it would feel obviously wrong and be rejected early (or, more likely, never considered at all, because we aren’t actually logically-omniscient Bayesians).