All knowledge is circularly justified
post by Chris_Leong · 2019-06-04T22:39:20.766Z · LW · GW · 19 comments
Many philosophers have tried to find the foundations of our knowledge, but why do we think there are any? The framing of foundations implies a separate bottom layer of knowledge from which everything is built up. And while this is undoubtedly a useful model in many contexts, why should we believe that it is the complete and literal truth rather than merely a simplification?
Consider:
1) If we dig deep enough into any of our truth claims, we'll eventually reach a point at which they are justified by intuition.
2) The reliability of intuition, or of particular intuitions, is not merely taken as basic or for granted; it can instead be partially justified by arguments from experience and by evolutionary arguments.
3) However, both empirical verification and evolutionary arguments themselves rely on assumptions that are justified by intuition.
This is circular, but is this necessarily a problem? If your choice is a circular justification or eventually hitting a level with no justification, then the circular justification suddenly starts looking pretty attractive. In other words, we have to be comparative: consider what the alternative to circular epistemology is, rather than judging it in isolation.
Is this important? It seems to depend on context. For applied rationality, not so much. But I would like to suggest that the more philosophical areas of the rationalist project would look quite different if they were built upon a circular epistemology.
19 comments
Comments sorted by top scores.
comment by jessicata (jessica.liu.taylor) · 2019-06-04T23:31:16.383Z · LW(p) · GW(p)
This seems like a restatement of Where Recursive Justification Hits Bottom [LW · GW].
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2019-06-05T08:30:35.467Z · LW(p) · GW(p)
Yeah, I think you're right. I read that post a while back, but forgot about it until you mentioned it. Nonetheless, I think I'll find this post useful, as he discusses a few different ideas there, while sometimes I want to pick out just one. And apart from that one post, these ideas seem to have been mainly ignored, when there should be a whole host of consequences.
comment by romeostevensit · 2019-06-05T16:52:21.054Z · LW(p) · GW(p)
comment by Shmi (shminux) · 2019-06-05T03:16:37.972Z · LW(p) · GW(p)
It's worth remembering that we are some 90% animal brains (and 98.8% chimps), so if you dig deeper, you hit the limits of introspection and slide into rationalizing without realizing it. I guess it's what you are saying. The best we can do as aspiring (post-)rationalists is to point at the vague blob of intuition and call it intuition/subconscious/instincts rather than "knowledge".
Replies from: Evan_Gaensbauer, Chris_Leong
↑ comment by Evan_Gaensbauer · 2019-06-10T07:01:24.198Z · LW(p) · GW(p)
Have you looked at cognitive science before? I haven't looked at it extremely deeply, but I think it can offer routes to empirically based insights into how the human mind-brain operates. However, unifying the insights cognitive science can offer with human consciousness and other difficult issues, like a longing for meaning, is a whole other set of very hard problems.
↑ comment by Chris_Leong · 2019-06-05T08:32:33.497Z · LW(p) · GW(p)
"so if you dig deeper, you hit the limits of introspection and slide into rationalizing without realizing it. I guess it's what you are saying." - that's not what I'm saying. It's almost impossible to achieve, but I think it is possible to understand the exact extent to which you are rationalising.
comment by Gordon Seidoh Worley (gworley) · 2019-06-06T00:47:27.938Z · LW(p) · GW(p)
You might like the work of Roderick Chisholm on this topic. He spent a good deal of effort on addressing the issue of epistemic circularity (the issue created by the problem of the criterion) and gives what is, in my opinion, one of the better and more technical treatments of the topic. His work also lets us make a distinction between particularism (making minimal leaps of faith) and pragmatism (making any leaps of faith), which I find useful because in practice most people seem to be pragmatists (they have other things to do than wrestle with epistemology) while thinking they are particularists because their particular leaps of faith (the facts they assume without justification) are intuitive to them and they can't think of a way to make them smaller.
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2019-06-06T11:08:11.540Z · LW(p) · GW(p)
Where would you start with his work?
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-06-06T19:05:27.506Z · LW(p) · GW(p)
Actually, good thing you asked, because I gave wrong information in my original comment. Chisholm is an expert on the problem of the criterion, but I was actually thinking of William Alston in my comment. Here are two papers, one by Alston and one by another author, that I've referenced in the past and found useful:
William P. Alston. Epistemic Circularity. Philosophy and Phenomenological Research, 47(1):1, September 1986.
Jonathan Dancy. Ethical Particularism and Morally Relevant Properties. Mind, XCII(368):530–547, 1983.
comment by Evan_Gaensbauer · 2019-06-10T06:58:21.310Z · LW(p) · GW(p)
I haven't read a lot about it, but this seems related to a kind of problem in philosophy that I know as 'grounding problems', e.g., the question of 'how do we ground truth?' On Wikipedia, the article I found describing it calls it the symbol grounding problem. On the Stanford Encyclopedia of Philosophy, this kind of problem is known as the problem of metaphysical grounding. For rationalists, one application of the question of metaphysical grounding is what makes propositions true. That constitutes my reading on the subject, but those links should provide further reading resources. Anyway, the connection between the question of how to ground knowledge and this post is that if knowledge can't be grounded, it seems by default it can only be circularly justified. Another way to describe this issue is to see it as the proposition that all worldviews entail some kind of dogma to justify their own knowledge claims.
comment by hereisonehand · 2019-06-05T17:47:38.226Z · LW(p) · GW(p)
Have you checked out any work on coherentist theories of epistemic justification? I definitely haven't done the work to have an opinion on this, but I remember this dichotomy (foundationalism v. coherentism) being referred to in old introductory epistemology coursework.
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2019-06-05T22:29:36.558Z · LW(p) · GW(p)
I've heard of it, but I haven't read into it, so I avoided using the term.
Replies from: hereisonehand
↑ comment by hereisonehand · 2019-06-05T23:06:48.004Z · LW(p) · GW(p)
From your post:
This is circular, but is this necessarily a problem? If your choice is a circular justification or eventually hitting a level with no justification, then the circular justification suddenly starts looking pretty attractive.
I think the coherentist article I linked to has some useful perspective here. The quote below is from a section in that article on regress. The first paragraph outlines a view similar to yours and raises an important objection against the circular justification view. The 2nd paragraph raises a potential response.
What is the coherentist’s response to the regress? The coherentist can be understood as proposing that nothing prevents the regress from proceeding in a circle. Thus, A can be a reason for B which is a reason for C which is a reason for A. If this is acceptable, then what we have is a chain of reasons that is never-ending but which does not involve an infinite number of beliefs. It is never-ending in the sense that for each belief in the chain there is a reason for that belief also in the chain. Yet there is an immediate problem with this response due to the fact that justificatory circles are usually thought to be vicious ones. If someone claims C and is asked why she believes it, she may reply that her reason is B. If asked why she believes B, she may assert A. But if prompted to justify her belief in A, she is not allowed to refer back to C, which in the present justificatory context is still in doubt. If she did justify A in terms of C nonetheless, her move would lack any justificatory force whatsoever.
The coherentist may respond by denying that she ever intended to suggest that circular reasoning is a legitimate dialectical strategy. What she objects to is rather the assumption that justification should at all proceed in a linear fashion whereby reasons are given for reasons, and so on. This assumption of linearity presupposes that what is, in a primary sense, justified are individual beliefs. This, says the coherentist, is simply wrong: it is not individual beliefs that are primarily justified, but entire belief systems. Particular beliefs can also be justified but only in a secondary or derived sense, if they form part of a justified belief system. This is a coherence approach because what makes a belief system justified, on this view, is precisely its coherence. A belief system is justified if it is coherent to a sufficiently high degree.
Of course, the key coherentist claim is that an entire belief system can be justified without individual beliefs being justified because the property of being justified is a property of belief systems that emerges from the coherence of multiple beliefs.
comment by Slider · 2019-06-05T14:52:20.375Z · LW(p) · GW(p)
I don't think 1) is so safe a bet. There is at least the possibility of an infinite chain to be ruled out. Furthermore, if this refers to actual chains of questions of "why?", there can be other kinds of termination points. You can encounter confusion, and you can encounter lack of imagination ("What, you can question that?").
Wittgenstein has an argument about how to consult a table, and how it seems problematic that you would need infinite tables in order to use one. Instead, there is some level where you just are competent, able to perform the operations without instruction. Intuition as a word would suggest that there is some abstract thing that you just feel is "true". But another kind of termination is where you just function that way. One can, for example, understand the firing of an eye cell as a thought (at least on a primitive level) that happens upon photon collision. "In the mind world" there is no preceding thought. It just happens, just like a cosmic ray could bug out and malfunction an inner brain piece (which would in effect only be a photoreceptor for a very different wavelength, of course without supporting structures like lenses etc.).
But loopiness doesn't need to imply circularity. If you start with some system and make it reflect on itself until it stabilises, the end result might seem circular. However, it might be possible to reverse-engineer the reflection, and there are reflection balances that have only a finite history. With chicken and egg, you find dinosaurs, which is a story much longer than the lifespan of a chicken but still some finite number of generations. With parents, you can go back to the start of sexual reproduction. With offspring, you can go back to multicellular life. With lineage, you can go back to the start of cell membranes. With reproduction, you can go back to autocatalysis. Progressively, surprisingly longer stories, but they end up being linear instead of their initially seeming circular nature. And there is a direction to the loopiness; it's only loopy towards "the future". Thus, when you are tracking where a thought comes from, or its justification (if it is not constructive in the sense that the discovery process builds an entity that didn't exist before, i.e. you make up the why as you answer the question instead of discovering something preexisting), you know you are going "in the wrong direction" and the circularity could end at any step.
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2019-06-13T14:26:38.091Z · LW(p) · GW(p)
I don't have a rigorous argument against an infinite chain, but here's my current set of intuitions: Let's suppose that we have an infinite chain of reasons. Where does the chain come from? Does it pop out of nowhere? Or is there some intuition or finite collection of intuitions that we can posit as an explanation for the chain? While it is technically possible that the infinite chain could require infinitely many different intuitions to justify, this seems rather unlikely to me. What then if we accept that there is an intuition or there are intuitions behind the chain? Well, now we ask why these intuitions are reliable. And if we hit an infinite chain again, we can try the same trick, and so on until we actually find a cycle.
Replies from: Slider
↑ comment by Slider · 2019-06-13T20:44:28.540Z · LW(p) · GW(p)
Sure, you can carry on trying, but you are not guaranteed to succeed. You could go increasingly meta without finding a loop.
If meta-justifications are just not ad hoc, you could employ them against loops. If I were unsatisfied with a level of justification being circular, I could insist that there must be a further level of intuitions that warrants the situation, which themselves don't have the loopy nature.
I don't really think that infinite chains are a good approach, but I am not convinced that the investigation is cast in solid enough logic that it makes explicit the reasons to take its findings seriously. A method of exhaustion with open vents is comparatively weak.
comment by habryka (habryka4) · 2019-06-04T23:02:15.574Z · LW(p) · GW(p)
Edit note: Removed large amounts of trailing whitespace that I presume were not intentional.
comment by TAG · 2019-06-05T09:56:00.977Z · LW(p) · GW(p)
If circular justification has some nonzero level of validity, then you might as well use it. But it's not clear that it has nonzero validity.
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2019-06-05T10:27:08.938Z · LW(p) · GW(p)
My response here is along the lines of Pascal's Wager: if it has no validity, then nothing has validity, since everything is circular, so we may as well assume validity. But of course that is circularly justified as well.