Circular Reasoning
post by abramdemski · 2024-08-05T18:10:32.736Z · LW · GW · 37 comments
The opinions here are my own, but owe some salience-work to Sahil.
The idea that circular reasoning is bad is widespread. However, this reputation is undeserved. While circular reasoning should not be convincing (at least not usually), it should also not be considered invalid.
Circular Reasoning is Valid
The first important thing to note is that circular reasoning is logically valid. A implies A. Longer circular arguments (like A implies B implies A) may introduce an invalidity (perhaps A does not imply B); but if so, invalidity is not due to the circularity.
If circularity itself is to be critiqued, it must be by some other standard than logical validity.
I think it's fair to say that the most relevant objection to valid circular arguments is that they are not very good at convincing someone who does not already accept the conclusion. You are talking to another person, and need to think about communicating with their perspective [LW · GW]. Perhaps the reason circular arguments are a common 'problem' is that they are valid: people naturally think about what should be a convincing argument from their own perspective, rather than the other person's.
However, notice that this objection to circular reasoning assumes that one party is trying to convince the other. This is arguments-as-soldiers [? · GW] mindset.[1] If two people are curiously exploring each other's perspectives, then circular reasoning could be just fine!
Furthermore, I'll claim: valid circular arguments should actually be considered as a little bit of positive evidence for their positions!
Let's look at a concrete example. I don't think circular arguments are quite so simple as "A implies A"; the circle is usually a bit longer. So, consider a more realistic circular position:[2]
Alice: Why do you believe in God?
Bob: I believe in God based on the authority of the Bible.
Alice: Why do you believe what the Bible says?
Bob: Because the Bible was divinely inspired by God. God is all-knowing and good, so we can trust what God says.
Here we have a two-step loop, A->B and B->A. The arguments are still logically fine; if the Bible tells the truth, and the Bible says God exists, then God exists. If the Bible were divinely inspired by an all-knowing and benevolent God, then it is reasonable to conclude that the Bible tells the truth.
If Bob is just honestly going through his own reasoning here (as opposed to trying to convince Alice), then it would be wrong for Alice to call out Bob's circular reasoning as an error. The flaw in circular reasoning is that it doesn't convince anyone; but that's not what Bob is trying to do. Bob is just telling Alice what he thinks.
If Alice thinks Bob is mistaken, and wants to point out the problems in Bob's beliefs, it is better for Alice to contest the premises of Bob's arguments rather than contest the reasoning form. Pointing out circularity only serves to remind Bob that Bob hasn't given Alice a convincing argument.
You probably still think Bob has made some mistake in his reasoning, if these are his real reasons. I'll return to this later.
Circular Arguments as Positive Evidence
I claimed that valid circular arguments should count as a little bit of evidence in favor of their conclusions. Why?
Imagine that the Bible claimed itself to be written by an evil and deceptive all-knowing God, instead of a benign God:
Alice: Why do you believe in God?
Bob: Because the Bible tells me so.
Alice: Why do you believe the Bible?
Bob: Well... uh... huh.
Sometimes, belief systems are not even internally consistent. You'll find a contradiction[3] just thinking through the reasoning that is approved of by the belief system itself. This should make you disbelieve the thing.
Therefore, by the rule we call conservation of expected evidence [? · GW], reasoning through a belief system and deriving a conclusion consistent with the premise you started with should increase your credence. It provides some evidence that there's a consistent hypothesis here; and consistent hypotheses should get some credence, EG, proportional to their complexity [? · GW].[4]
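To put toy numbers on this, here is a minimal Bayesian sketch (the probabilities and variable names are invented purely for illustration; nothing hangs on the particular values): treat "this belief system is internally consistent" as a hypothesis, and suppose that if it were inconsistent, any given tour through its reasoning would have some chance of surfacing the contradiction. Tracing one such tour and finding no contradiction then forces a small upward update, exactly as conservation of expected evidence requires.

```python
# Hypothetical numbers, for illustration only.
prior_consistent = 0.5                   # credence that the belief system is internally consistent
p_find_contra_if_inconsistent = 0.3      # chance one reasoning tour exposes a contradiction, if one exists
p_find_contra_if_consistent = 0.0        # a genuinely consistent system yields none

# Observation: we traced the reasoning and found NO contradiction.
p_obs_if_consistent = 1 - p_find_contra_if_consistent      # 1.0
p_obs_if_inconsistent = 1 - p_find_contra_if_inconsistent  # 0.7

p_obs = (p_obs_if_consistent * prior_consistent
         + p_obs_if_inconsistent * (1 - prior_consistent))
posterior_consistent = p_obs_if_consistent * prior_consistent / p_obs

print(round(posterior_consistent, 3))  # 0.588 -- a small update upward from 0.5
```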
Inevitable Circularity
After all this, you might say something like: sure, circular arguments are valid, but they don't provide justification for beliefs. Bob is still wrong to answer "why" questions in a circular way.[5]
The "Regress Argument" in epistemology goes as follows:
- Every belief requires justification.
- However, any justification must itself rest on other beliefs.
- Therefore, any chain of justification must lead to an infinite regress. (By 'infinite regress' I mean either a circle or an infinite nonrepeating chain.)
- However, an infinite regress does not count as a working justification, either.
- Therefore, no proper justification can be given.
The point is, you have to live with at least one of:[6]
- Some beliefs do not normatively require justification;
- Some justifications do not rest on beliefs;
- Some justification chains are allowed to be circular;
- Some justification chains are allowed to be infinite and non-repeating;
- No beliefs are permissible.
There are a few common schools of thought on this:
Foundationalism:[7] There are special "foundational" beliefs. These beliefs might not need justification, or are uniquely open to circular justification (EG, can be justified by themselves), or are uniquely justifiable in a way that does not rest on further beliefs (EG, they are justified by virtue of being true). All other beliefs need to be justified in ways that obey all the axioms of the regress argument.
Coherentism: Circular justification is allowed in some fashion.
Infinitism: Infinite chains of justification are allowed. This position seems rare.
Foundherentism: Some combination of foundationalism and coherentism. Foundationalists concede that foundations are still open to questioning, and thus have nontrivial justifying beliefs. Coherentists concede that some beliefs are more foundational than others. This seems to be the most popular position.
Overall, I would endorse some variety of foundherentism. My main point in this essay, however, is to argue the coherentist part. When I first encountered the regress argument (in undergraduate philosophy), I strongly identified as a foundationalist. I suspect many LessWrongers will have similar feelings. However, in retrospect I think it's pretty clear that any foundations are also subject to justificatory work, and the sort of justification needed is of the same kind as is needed for everything else. Therefore, coherentism.[8]
To get one misunderstanding out of the way: coherence is not a criterion [LW · GW] in the sense of something that can tell you what is true, or what to believe. Just as circular arguments can't convince you of their conclusions, coherence can't tell you which coherent perspective to take. It's more of a rationality condition. Nor is it saying that all coherent positions are equally true, or equally good; only equally justified, from their own perspectives.
Brief Aside on Other Circles
John Wentworth has a thematically similar post on circular definitions [LW · GW], which have a similarly bad reputation. He similarly points out that they can be mathematically fine. We could also argue as above, that if you have the belief that everything should be definable, and also that definitions should not be circular, you'll run into trouble eventually.
But then what do you say to Bob?
Compare to: but then what do you say to the republican? [LW · GW]
I think the temptation to outlaw circular arguments as a form of justification comes mainly from trying to construct an objective third-person perspective [LW · GW] to judge disagreements. In other words, the concept "justification" is doing double duty. We cannot consistently use it for both honest philosophical examination and constructing arguments to persuade others.
As I said earlier, I think the right thing to do is to question the premises of a circular argument (which of course also means questioning the conclusion), rather than objecting to the argument form.
It is also worth pointing out, and avoiding, the double-counting of evidence that can result from a form of circular reasoning.
Don't Double Count
One way to think about "re-examining beliefs" is that you somehow blot out a specific belief, and then re-estimate it from your other beliefs.
In the case of Bayesian networks, this intuition can be formalized as the Belief Propagation algorithm.
In Probabilistic Reasoning in Intelligent Systems, Pearl sets up an analogy between his Belief Propagation algorithm and people who are passing messages in order to count the total number of people. To cut out as many details as possible while conveying the basic idea: you can follow a simple procedure where each person adds 1 to a piece of paper (counting themselves), but this only works if you avoid loops (which could cause people to count themselves twice). The analogy between this procedure and Belief Propagation justifies thinking about probabilistic inference as "counting" evidence. In the context of Belief Propagation, circular reasoning can result in double-counting of evidence (or triple-counting, quadruple-counting, etc; but "double counting" has become a common phrase to point at over-counting of evidence.)
So, if you have some degree of credence in the Bible, this might propagate to give you some degree of credence in God, and vice versa; but if you let this reasoning form a self-reinforcing loop, there's a problem.
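Here is a toy rendering of Pearl's note-passing analogy (my own sketch, not his exact algorithm; the function name and the little graphs are made up for illustration): each person reports to each neighbor "1, for myself, plus everything I've heard from my other neighbors." On a tree this settles on the exact headcount; add one extra acquaintance that closes a loop, and the same people get counted over and over.

```python
# Toy message-passing headcount. msg[(i, j)] is i's report to j:
# "1 (myself) plus everything I've heard from my neighbors other than you."
def count_via_messages(edges, n, rounds=20):
    neighbors = {i: set() for i in range(n)}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    msg = {(i, j): 0 for i in neighbors for j in neighbors[i]}
    for _ in range(rounds):
        # synchronous update: every report is recomputed from last round's reports
        msg = {(i, j): 1 + sum(msg[(k, i)] for k in neighbors[i] if k != j)
               for (i, j) in msg}
    # person 0's estimate of the total: themselves plus everything they've heard
    return 1 + sum(msg[(k, 0)] for k in neighbors[0])

tree = [(0, 1), (1, 2), (1, 3)]     # 4 people, no loops
loopy = tree + [(2, 3)]             # same 4 people, one extra edge closes a loop

print(count_via_messages(tree, 4))   # 4  -- correct
print(count_via_messages(loopy, 4))  # far more than 4 -- the loop counts people repeatedly
```

Belief Propagation is exact on trees for the same reason the note-passing is; on loopy networks the naive version over-counts evidence in just the way described above.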
Reflective Loops
The content of this essay is rather similar to Where Recursive Justification Hits Bottom [LW · GW]. However, Eliezer tries to draw a distinction between the form of reasoning he is accepting and circular logic:
So, at the end of the day, what happens when someone keeps asking me "Why do you believe what you believe?"
At present, I start going around in a loop at the point where I explain, "I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct."
But then... haven't I just licensed circular logic?
Actually, I've just licensed reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else.
He proceeds to check for parallels between the circular line of reasoning he in fact approves of, and other circular arguments which he disapproves of:
Is this the same as the one who says, "I believe that the Bible is the word of God, because the Bible says so"?
Couldn't they argue that their blind faith must also have been placed in them by God, and is therefore trustworthy?
However, on my reading, he mostly fails to point out a structural difference between the reasoning, and instead ends up taking issue with the specific claims of the religious perspective. Although he argues that circular reasoning doesn't work as a way of arriving at knowledge which you don't already have, he ultimately seems to realize he hasn't fully specified how to distinguish the reasoning which he endorses from circular reasoning:
Everything, without exception, needs justification. Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops. I do think that reflective loops have a meta-character which should enable one to distinguish them, by common sense, from circular logics. But anyone seriously considering a circular logic in the first place, is probably out to lunch in matters of rationality; and will simply insist that their circular logic is a "reflective loop" even if it consists of a single scrap of paper saying "Trust me". Well, you can't always optimize your rationality techniques according to the sole consideration of preventing those bent on self-destruction from abusing them.
On my reading, Eliezer is offering two main suggestions:
- "Hold nothing back": you must reason using all the information at your disposal. The naive way to avoid problematic circular reasoning is to try to be a "philosopher of perfect emptiness" who "assumes nothing". However, there are no universally compelling arguments [LW · GW]. If you assume nothing, you can conclude nothing. Furthermore, it is incorrect to try and figure out the truth using anything less than your full knowledge. The correct way to avoid the problematic sort of circular argument is not to avoid assuming the consequent, but rather, to use the fullness of your knowledge to re-evaluate your assumptions.
- Acceptable circular arguments can be contrasted with unacceptable circular arguments via some "meta-character" which Eliezer points at using the term "reflective loop". Based on his examples, I think what Eliezer means here is that "reflective loop" arguments are the ones which go through the epistemology itself: Eliezer's examples always pass through some fact about the process by which he reaches conclusions. The epistemological beliefs are justified in a way which fans back out to a broader picture of the world (eg, facts about physics or evolution may be invoked in the justification of epistemic practices). These broad considerations would again be justified by epistemological claims.
In terms of my earlier classification, the second idea seems to be a form of foundationalism: epistemological considerations play a special foundational role, and circular arguments are OK only if they pass through these foundational statements. Foundational statements need to be justified in a special way;[9] EG we can't say that a reasoning step is reliable because its specific conclusion is correct (even though this inference might be perfectly valid). Instead, we are supposed to consider it as an instance of a class [LW · GW]. This seems to be the same sort of outside-view reasoning which Eliezer argues for in Ethical Injunctions [LW · GW] and argues against in Inadequate Equilibria [? · GW].
On the other hand, the first point seems to be saying that circular arguments are nothing special, and you always need to evaluate things in the same way (namely, using all your information, which results in some circles forming). So perhaps my reading of the advice is a bit contradictory.
In any case, it's at least clear that there is some concession to circles, although it isn't clear exactly how far that concession goes.
Conclusion
Circular reasoning is valid, and moreover, circular justifications seem necessary in practice. The foundationalist project of building all justifications on some centralized foundation is perhaps productive, but as foundations also need justification, circular justifications or infinite justification chains are mathematically inevitable, and circles seem inevitable in practice. Justification must therefore be seen as a coherence condition, which throws out some belief systems as irrational, but does not point to a unique correct belief system. Specific circular arguments should be critiqued on the merits of their assumptions and reasoning steps, just like any other arguments, rather than by the circularity itself.
You can support me on Patreon.
- ^
Well, perhaps this statement is a bit too strong. Mathematicians avoid circular reasoning when proving theorems, but I wouldn't accuse them of making an arguments-as-soldiers mistake.
However, if you're curiously inquiring about someone's belief system, and they employ circular reasoning, and then you object to their reasoning on the grounds that it is circular, then I'd accuse you of making an arguments-as-soldiers mistake. Their reasoning doesn't have to be designed to convince you.
- ^
This example is also used in Where Recursive Justification Hits Bottom [LW · GW].
- ^
Here, I include "probabilistic contradiction", such as a deceptive God writing a truthful Bible. It isn't actually logically inconsistent, just improbable.
- ^
Notice that the circularity of the argument doesn't play a special role here. Any path through the structure of the belief system could find a contradiction. So any illustration of how the belief system reasons which doesn't illustrate a contradiction should raise your credence in the belief system (at least slightly), if you expected that there might be a contradiction along that path before checking.
- ^
You might want to say something like:
Reader: Sure, Bob's position might be logically consistent. If Bob was born with a prior strongly favoring the Bible and God then yeah, maybe Bob could technically be perfectly rational here. But no one is born with that prior! Alice is asking Bob why Bob believes these things. It might be accurate as a matter of historical record that Bob came to believe in God due to reading the Bible. But if so, it cannot also be that he believed what the Bible said because it was written by God! Either he came to believe one thing first, or the other (or both at once, which also invalidates the story that each belief caused the other).
I would agree that, in practice, Bob has made a mistake. However, I don't think it makes sense to focus so much on the descriptive explanation of how Bob came to believe a thing.
Alice's question of "why" Bob believes a thing is ambiguous; Bob could be answering with the descriptive, historical reason why he came to a conclusion, but he could also be answering in a normative way, giving his currently endorsed explanation of why he should believe.
For example, I once heard a philosophy student say that he originally endorsed a philosophy (presentism) because it felt similar to other things he believed. However, his adviser told him that was not a very good reason to believe it.
It may be the case that this is where the story ended for this particular philosophy student (IE, they were simply admitting that they had non-normative reasons for their belief). However, imagine that the student's advisor gave him better reasons for the beliefs, which the student came to normatively endorse.
The cause of the student's beliefs would still be "I thought presentism felt similar to other things I believed." But the student's justification for the belief could have changed.
So: if Bob might be talking about justification (normative rather than historical "reasons" for beliefs), then the complaint-from-the-reader I postulated above seems misplaced.
You might adjust your complaints about Bob to account for this distinction:
Reader: Sure, Bob's position might be logically consistent. However, beliefs need to be justified! Bob cannot be answering the historical-why, as previously argued. So I interpret Bob as attempting to offer a normative-why. However, Bob fails to do this, because normative-why answers aren't allowed to contain cycles any more than historical-why answers are. Although there is no logical fallacy here, there is nonetheless a fallacy of justification: justifications are not allowed to be circular.
If this reply is tempting to you, then you are making the central mistake this post is trying to argue against. Your requirements for "justification" may seem locally reasonable, but they imply a global problem.
- ^
Mathematics does quite well with just #1 and #2. In the context of mathematics, cases of #1 are "axioms" -- the designated set of propositions which can be assumed without any justification, in the service of justifying everything else. However, mathematical justifications need to be made up of inference rules as well as axioms (otherwise, we don't know how to check whether one statement is the consequence of another). This can give rise to examples of #2, as it is sometimes possible to prove things without any axioms (by using only the inference rules).[10]
However, mathematical proofs are about convincing another party of something they don't already accept. Also, mathematics is quite comfortable always operating in the hypothetical -- it's fine for axioms to remain unjustified, because later (when we apply the math) we will check whether the axioms hold true (or true enough) for particular cases.
In a less hypothetical mode of reasoning, more about what's true than what can be defended, I think it makes sense to reject #1 and #2 (reasserting that we desire, normatively, for all our beliefs to be justifiable), but embrace #3 (and perhaps #4). We can't pursue the justification of all beliefs at all times, but we do want all of our beliefs to be open to re-examination; therefore, I think we need to be open to an infinite regress.
My reasons for endorsing an infinite regress, rather than postulating that the chain ends somewhere, are practical: I'm open to the idea of codifying excellent foundational theories (such as Bayesianism, or classical logic, or set theory, or what-have-you) which justify a huge variety of beliefs. However, it seems to me that in practice, such a foundation needs its own justification. We're not going to find a set of axioms which just seem obvious to all humans once articulated. Rather, there's some work to be done to make them seem obvious.
Overall, though, I suppose I endorse a sort of pluralism about justification: "justification" can work different ways in different contexts, and the seeming paradox in the Regress Argument results from an overly simplistic and unrealistic view of what justification is.
- ^
Gordon Worley calls this particularism [LW · GW], presumably because some philosophers use the term.
- ^
Coherentism also fits much better with the probabilist worldview; notice that the basic rationality arguments for Bayesians are "coherence theorems" rather than, say, soundness and completeness theorems.
- ^
I get the feeling that Eliezer is gesturing in the direction of Tiling [? · GW]. The emphasis on the general efficacy of epistemic principles rather than their efficacy in a special case resembles Vingean reflection [LW · GW]. I think for Eliezer, a full analysis of the Tiling Agents problem (EG, something like a reflective Bayesian [LW · GW] with a full Vingean Tiling result justifying its cognition to itself) would also be a complete response to the regress argument, showing exactly how agents should think about their own self-justification.
- ^
An axiom a=a is interchangeable with an inference rule which lets us conclude a=a. Axioms and inference rules can also be interchangeable in more complex ways. Therefore, the distinction between #1 and #2 here does not seem very important. The Wikipedia article on the regress argument mentions no such distinction; I'm being extra pedantic by including it.
One example where I might want to invoke this distinction is sensory perception. It might be sensible to say that a belief about direct perception is justified not by some belief, but rather by the fact that you have perceived it. In symbols, X justifies P(X)=1 in cases where X causes P(X)=1 (or at least, in some such cases).
- ^
Gordon Worley discusses "the problem of the criterion [LW · GW]", which seems to me like a version of the regress argument which is worse in that it confuses several issues (see my comment [LW(p) · GW(p)] for some issues I think it confuses). However, his discussion of possible solutions to the problem seems on-point for the regress argument.
37 comments
Comments sorted by top scores.
comment by sunwillrise (andrei-alexandru-parfeni) · 2024-08-05T19:54:01.887Z · LW(p) · GW(p)
Therefore, by the rule we call conservation of expected evidence [? · GW], reasoning through a belief system and deriving a conclusion consistent with the premise you started with should increase your credence.
This is correct but (slightly) less relevant to the point you are trying to make in this particular section [LW · GW] than it might come across at first. Valid reasoning increases your credence, but circular reasoning, by itself, does not.
"Circular reasoning is valid" [LW · GW] only means that circularity does not create any additional logical invalidity issues; it does not mean that a line of logical reasoning is made to be valid just because it contains circularity. Indeed, just add a bit of circularity in between (or as part of) a few other steps of otherwise invalid reasoning: the validity does not change. The titles of the sections in your post, as well as the explanation you gave in the part I quoted, give off a slight impression that you are conflating circularity and validity,[1] while in reality the latter screens off [LW · GW] the effects of the former.
Of course, you can say that observing a single instance of circularity and determining that the argument carries through locally [LW · GW] gives a modicum of evidence towards the overall line of reasoning being correct.[2] But the magnitude of this update should depend a fair bit on how the argument is presented to you [? · GW], given that my impression in real life is that circular steps are not generally the true load-bearing part in most complex chains of reasoning.
Infinitism: Infinite chains of justification are allowed. This position seems rare.
It might be rare to a non-Bayesian [? · GW], but if you take the position that you must update on observations seriously, it seems rather straightforward to me to say that there is an infinite sigma-algebra of events that you assign probabilities to over time (if you are an idealized, as opposed to bounded [LW(p) · GW(p)], reasoner), and an infinite set of possible observations that you can update on. Everything affects everything else, in the reinterpretation of "justification" as conditioning. And your priors relative to a certain Bayesian updating are really your posteriors from the Bayesian update that came immediately before it, which were modified by the update right before that, and on, and on, and on. (And there is nothing theoretically wrong with updating continuously over time, either, such that when you start with a prior at time t=0, at t=1 you have already gone through an "infinite" chain of justification-updates.)
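To make the "yesterday's posterior is today's prior" picture concrete, here is a minimal sketch (all numbers and names invented purely for illustration, not part of the original comment): each update's output feeds straight into the next update as its prior, so the current credence drags the whole history of updates behind it.

```python
def bayes_step(prior, p_obs_if_h, p_obs_if_not_h):
    """One update: P(H | observation) from P(H) and the two likelihoods."""
    num = p_obs_if_h * prior
    return num / (num + p_obs_if_not_h * (1 - prior))

credence = 0.5                                       # the prior you happen to start with
observations = [(0.8, 0.4), (0.6, 0.5), (0.9, 0.2)]  # made-up likelihood pairs

for p_if_h, p_if_not_h in observations:
    credence = bayes_step(credence, p_if_h, p_if_not_h)  # this posterior is the next step's prior
    print(round(credence, 3))                            # 0.667, 0.706, 0.915
```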
Of course, you can say the idea of "justification" doesn't really carve reality at the joints [LW · GW] once you adopt a Bayesian epistemology. You just select a large-enough probability space to contain events you care about, start with your prior, make sure you are capable of updating [LW · GW], and then you just update. Who needs any "justification"? Just run the math, bro [? · GW].[3]
Alternatively, you can also simply abandon Bayesianism entirely in favor of another theory [? · GW] that builds on top of it (which I get the impression is the approach you endorse?)
I think the temptation to outlaw circular arguments as a form of justification comes mainly from trying to construct an objective third-person perspective [LW · GW] to judge disagreements. In other words, the concept "justification" is doing double duty. We cannot consistently use it for both honest philosophical examination and constructing arguments to persuade others.
Ah, but now that we bring real life into this,[4] a couple of additional problems (which were not relevant back in nice-old math-land) start coming into play:
Firstly, the fact that somebody used a circular argument in favor of a conclusion they support should, in most circumstances, be treated as (usually really weak) evidence against their conclusion, because a conclusion that is correct is more likely than one that is false to have much better justifications than circularity.
In a conversation where another person explains why they believe something, or they are trying to convince you of it, they are more likely than not to try to find the best arguments they can.[5] So, conditional on you observing that they have nonetheless selected an argument that uses circularity, it becomes much more likely that they do not have a better argument to give than that, which lowers your credence that they have "good reason" to believe it, and also lowers your expectation that the conclusion is correct.[6]
This somewhat ties into how clever arguers [LW · GW] can be used successfully as part of a truthseeking scheme even if they themselves are not truthseeking and are only trying to persuade you for personal reasons. Just do what the legal system does (at least in an adversarial system): put clever arguers on both sides, make sure they are incentivized to fight for their positions as hard as possible (while not violating ethical norms of behavior), and then read their arguments: an arguer advocating for the "false" side might nonetheless seem convincing in an absolute sense, but they will (on average) be less convincing than you would expect them to be, conditional on them being clever arguers. Bertrand Russell has said [LW(p) · GW(p)] that "In a man whose reasoning powers are good, fallacious arguments are evidence of bias"; I would modify that to "weak evidence that his conclusion is false."
Secondly, and perhaps more straightforwardly, people want to ban circular arguments because they dislike circular arguments. And they dislike circular arguments because they are used very often by people who are so inexperienced at reasoning that they don't understand what the problem with circularity even is. By which I mean, people who don't understand that, for example and as you said, double counting [LW · GW] is an issue.
It's the same reason people sometimes try to ban other formally fallacious lines of reasoning in organized debates (or try to get moderators to publicly call them out): sure, you could go with a full-on libertarian approach and say that banning (or even mentioning) fallacies is unnecessary because all participants in the discussion will readily identify the reasoning as fallacious all by themselves and thus immediately reverse their updates towards the conclusion, but in practice we know that this obviously doesn't work and all human beings, regardless of how rational, are susceptible [LW · GW] to changing their views in an epistemically unjustifiable manner because of strongly-presented evidence that's not "logical."
So if someone makes an argument that's circular, the thinking goes, someone needs to point this out immediately so that people don't get fooled into assigning higher probative value to it than it deserves. Or better yet, to disallow such argumentation entirely.[7]
- ^
To be clear, I think this is just an impression I get from your specific writing choices here; I strongly suspect you are not actually making such an error in your thinking.
- ^
Because if you have 10 logical steps, in order for the argument to be valid you need all 10 of them to be valid, so seeing that one is valid makes it more likely that the entirety of it is.
- ^
Unless you're dealing with problems of embeddedness [? · GW] or reflective reasoning [LW · GW] or indexicality [LW · GW] or anthropics [? · GW]... Hmm, kind of a lot of stuff, actually.
- ^
By talking about the "temptation" and thinking of actual people on this matter.
- ^
Of course, they often expect short inferential distances [LW · GW] and don't tailor the argument to be as persuasive to the other person as possible, etc., so this is not guaranteed, but in expectation, the sign of the update you should make in such a spot seems clear to me.
- ^
As already mentioned, this should be a small update because you should have already expected that most arguments would be bad [LW(p) · GW(p)] and most arguers would be incompetent, regardless of the truth of the position they are arguing for. But, ceteris paribus, "If you want to convince people of something, it's much easier if it's true", as Paul Graham has said.
- ^
I am not saying I necessarily endorse this perspective, of course.
↑ comment by abramdemski · 2024-08-06T18:23:08.542Z · LW(p) · GW(p)
Of course, you can say the idea of "justification" doesn't really carve reality at the joints [LW · GW] once you adopt a Bayesian epistemology. You just select a large-enough probability space to contain events you care about, start with your prior, make sure you are capable of updating [LW · GW], and then you just update. Who needs any "justification"? Just run the math, bro [? · GW].[3]
Alternatively, you can also simply abandon Bayesianism entirely in favor of another theory [? · GW] that builds on top of it (which I get the impression is the approach you endorse?)
I admit I am a bit inconsistent about how I use the term "Bayesian"; sometimes I use it to point to the "classic Bayesian picture" (like Dogmatic Probabilism here [LW · GW] and Reductive Utility here [LW · GW]). However, I think the core idea of Bayesian philosophy is subjective probability, which I still think is quite important (even within, say, infrabayesianism). Granted, I also think frequentist notions of probability can be useful and have an important role to play (EG, in the theory of logical induction).
Anyway, I think there's something right about "justification doesn't carve reality at its joints", but I think stopping at that would be a mistake. Yes, I think justification works different ways in different contexts, and the seeming paradox of the regress argument comes mostly from conflating the "trying to convince someone else" context with the "examining your own beliefs" context. However, I do think there's something interesting going on with both of those notions of justification, and it seems potentially fruitful to examine them rather than throw them out. "Just run the math" only works if you're not in the business of examining your beliefs (eg, the faith you have in the math) or justifying said math to others.
↑ comment by abramdemski · 2024-08-06T18:03:04.092Z · LW(p) · GW(p)
It might be rare to a non-Bayesian [? · GW], but if you take the position that you must update on observations seriously, it seems rather straightforward to me to say that there is an infinite sigma-algebra of events that you assign probabilities to over time (if you are an idealized, as opposed to bounded [LW(p) · GW(p)], reasoner), and an infinite set of possible observations that you can update on. Everything affects everything else, in the reinterpretation of "justification" as conditioning. And your priors relative to a certain Bayesian updating are really your posteriors from the Bayesian update that came immediately before it, which were modified by the update right before that, and on, and on, and on. (And there is nothing theoretically wrong with updating continuously over time, either, such that when you start with a prior at time t=0, at t=1 you have already gone through an "infinite" chain of justification-updates.)
I'm not really sure what infinite chain of justification you are imagining a Bayesian position to suggest. A posterior seems justified by a prior combined with evidence. We can view this as a single justificatory step if we like, or as a sequence of updates. So long as the amount of information we are updating on is finite, this seems like a finite justification chain. I don't think it matters if the sigma-algebra is infinite or if the space of possible observations is infinite, although as you seem to recognize, neither of these assumptions seems particularly plausible for bounded beings. The idea that everything affects everything else doesn't actually need to make the justificatory chain infinite.
I think you might be conflating the idea that Bayesianism in some sense points at idealized rationality, with the idea that everything about a Bayesian needs to be infinite. It is possible to specify perfect Bayesian beliefs which can be calculated finitely. It is possible to have finite sigma algebras, finite amounts of information in an update, etc.
The idea of continuous updating is interesting, but not very much a part of the most common Bayesian picture. Also, its relationship to infinite chains seems more complex than you suggest. If I am observing, say, a temperature in real time, then you can model me as having a prior which gets updated on an observation of [the function specifying how the temperature has changed over time]. This would still be a finite chain. If you insist that the proper justification chain includes all my intermediate updates rather than just one big update, then it ceases to be a "chain" at all (because ALL pairs of distinct times have a third time between them, so there are no "direct" justification links between times whatsoever -- any link between two times must summarize what happened at infinitely many times between).
↑ comment by abramdemski · 2024-08-06T18:02:54.222Z · LW(p) · GW(p)
The titles of the sections in your post, as well as the explanation you gave in the part I quoted, give off a slight impression that you are conflating circularity and validity, while in reality the latter screens off [LW · GW] the effects of the former.
I agree that that's a bad misunderstanding that I want to avoid, so although I'm a bit incredulous at people reading it this way, I'll try to edit to make this misunderstanding less plausible.
comment by AnthonyC · 2024-08-06T14:17:21.677Z · LW(p) · GW(p)
I'd add that one feature of circular reasoning is that it points out beliefs that follow from one another in ways that may not be obvious to some reasoners. If you accept any of the statements of the loop as a premise, then the rest of the loop follows. If the logic is valid, then the loop is a package deal. You can't use it to persuade someone of something, but you can use it to point out cases where someone's beliefs are internally inconsistent. This does not tell you whether to accept the whole loop or none of it, of course, but it does suggest you should pick one or the other, unless you can show that it's not really a logically valid loop.
comment by Mr Frege · 2024-08-06T03:12:06.452Z · LW(p) · GW(p)
"The first important thing to note is that circular reasoning is logically valid. A implies A. If circular arguments are to be critiqued, it must be by some other standard than logical validity."
I challenge that premise: "A implies A" is not circular; this confuses a logical law with a valid deductive inference---reasoning, circular or otherwise, is about argument/inference. The logical law "A implies A" doesn't say that one believes A; no position on the belief/truth of A is being made. This is different from having the belief A, and inferring A, which is just saying that you can infer what you already believe. What would be circular would be the law "(A implies A) implies A", as that would allow one to infer A from being able to infer A from A. That, however, isn't a logical law.
Replies from: abramdemski

↑ comment by abramdemski · 2024-08-06T20:10:17.623Z · LW(p) · GW(p)
I think you are interpreting me as saying the proposition A -> A, which is a statement rather than an argument. What I meant was A |- A, the argument from A to A. Although I didn't think the distinction was so important to focus on in this essay.
You can define circular logic as (A -> A) -> A if you want, but I think this will be an uncharitable interpretation of most real-life arguments that people would call circular. It also doesn't fit the geometric intuition behind 'circular' well. A |- A leads back around to where it started, while (A -> A) -> A is doing something else.
The Wikipedia article on circular reasoning sides with me on the issue:

Circular reasoning (Latin: circulus in probando, "circle in proving";[1] also known as circular logic) is a logical fallacy in which the reasoner begins with what they are trying to end with.[2] Circular reasoning is not a formal logical fallacy, but a pragmatic defect in an argument whereby the premises are just as much in need of proof or evidence as the conclusion, and as a consequence the argument fails to persuade.

Replies from: Mr Frege
↑ comment by Mr Frege · 2024-08-08T13:24:02.584Z · LW(p) · GW(p)
"Although I didn't think the distinction was so important to focus on in this essay." The distinction is important.
Do you agree that in logic, A |- A says something akin to "from a true A (a non-logical axiom), you may infer the truth of A"? If so, I don't see any circularity there; anything on the left hand side of the |- is an 'additional' axiom of your theory, hence accepted as true. Circular reasoning, by my interpretation, is where you infer/assume as true what you're in the process of trying to prove to be true. This is not the case with A |- A, in which the occurrence of A on the left of the |- means you are taking it as true to begin with. That's why I assumed you meant A -> A when you said "A implies A".
The law (A -> A) -> A would allow you to infer A from A -> A, which would, indeed, allow you to infer A without it having been established as true, doing so, rather, solely from the premise that if it were true, it would be true, thereby concluding that it is true. This is closer to the interpretation of circular reasoning I gave above.
"It also doesn't fit the geometric intuition behind 'circular' well. leads back around to where it started". I think this appeal to geometric intuition is misleading, which might explain the confusion. All A |- A it is saying is that if you already know/believe A, then you may infer it. In a formal system it may take several steps to derive ('lead to') A from A, but I don't think that circular reasoning should be tied to proof theory.
Replies from: Mr Frege

↑ comment by Mr Frege · 2024-08-09T07:50:56.621Z · LW(p) · GW(p)
I think the issue might be that I'm interpreting circular reasoning as something stronger than you; ie, in the pernicious sense which explains why "The idea that circular reasoning is bad is widespread".
I suspect that according to your interpretation all valid deductive reasoning is circular in some way, circularity thus being necessary for valid deductive reasoning. In this regard, circularity would be a desirable attribute.
In contrast, my interpretation is one in which in the process of affirming a belief, one presupposes something that would require to have already affirmed the same belief; what is sometimes called "begging the question".
In this context, I don't regard A |- A (circular in your sense, but not in mine) as problematic, as it just involves inferring something that has already been affirmed.
comment by Dagon · 2024-08-05T23:50:51.556Z · LW(p) · GW(p)
While circular reasoning should not be convincing (at least not usually), it should also not be considered invalid.
I don't think that people who claim circular arguments are invalid tend to use "valid" in the rigorous logical sense, but as a synonym for "convincing" or "useful as an argument". I know I don't.
If the best evidence for a belief is circular, it's probably wrong. Honestly, on this level, all models are wrong - I should say "not terribly useful".
comment by Yoav Ravid · 2024-08-05T21:14:21.251Z · LW(p) · GW(p)
I haven't clicked this fast on a LessWrong post in a long while. I've been waiting for someone to seriously tackle this issue ever since I read the sequences six years ago. So thanks! Now I'm going to take some time to think about it :)
comment by Steven Byrnes (steve2152) · 2024-08-05T20:50:36.991Z · LW(p) · GW(p)
Yeah I agree. If someone has a bunch of beliefs, and they all hang together in a self-consistent way, that’s a reason to believe that they’re correct, other things equal. (UPDATE: I’m saying it’s a pro tanto reason—obviously it’s not a proof of correctness!)
This applies to both little things—if you offer a bunch of claims and definitions about how capacitors work, and they all hang together in a self-consistent way, that’s a reason to take those claims and definitions seriously—and big things—if you offer a whole worldview, including very basic things like how do we interpret observations and make predictions and what’s the nature of reality etc., and everything in it hangs together in a self-consistent way, then that’s a reason to take that worldview seriously.
Replies from: jmh

↑ comment by jmh · 2024-08-06T01:43:28.662Z · LW(p) · GW(p)
I'm not sure I agree on this. Pretty much all logical arguments hang together in a self-consistent way, but that does not ensure they are true conclusions. This seems to be a confusion between valid and true.
I think what self-consistency means is that one needs to dig deeper into the details of the underlying premises to know if you have a true conclusion. The inconsistent argument just tells us we should not rely on that argument but doesn't really tell us if the conclusion is true or not.
Replies from: steve2152

↑ comment by Steven Byrnes (steve2152) · 2024-08-06T02:07:45.518Z · LW(p) · GW(p)
I edited to clarify that “reason to believe that they’re correct, other things equal” and “reason to take seriously” is meant in the sense of “a pro tanto reason” not “an incontrovertible proof”. Sorry, I thought that was obvious. (Note that it was also explained in the OP.)
To give some examples:
- If you ask a crackpot physicist and a real physicist to each define 10 electromagnetism-related terms and then make 20 substantive claims using those terms, I would bet that the crackpot has a higher probability of saying multiple things that contradict each other. (Not a 100% probability, just higher.)
- …and if the crackpot physicist really said only things that hung together perfectly and self-consistently, including after follow-up questions, then I would start to entertain possibilities like “maybe they’re describing true things but starting from idiosyncratic nonstandard definitions?” or “maybe they’re consistently describing a certain approximation to electromagnetism?” etc.
- Likewise, I would bet on myself over a biblical literalist to be able to make lots of complex claims about the nature of the universe, and humanity, etc., including follow-up questions, in a way that hangs together without anything being internally inconsistent.
comment by TAG · 2024-08-08T14:18:58.825Z · LW(p) · GW(p)
I think it’s fair to say that the most relevant objection to valid circular arguments is that they are not very good at convincing someone who does not already accept the conclusion.
I think the most relevant objection is quodlibet. Simple circular arguments can be generated for any conclusion. Since they are formally equivalent, they must have equal justificatory (probability raising) power, which must be zero. That doesn't quite mean they are invalid... it could mean there are valid arguments with no justificatory force.
@Seed Using something like empiricism or instrumentalism to avoid the Regress Problem works for a subset of questions only. For instance, questions about the correct interpretation of observations can't be answered by observations. (Logical Positivism tried to make a feature of the bug, by asserting that the questions it can't answer were never meaningful).
In a sense, there are multiple solutions to the Regress problem -- but they all involve giving something up, so there are no completely satisfactory solutions.
The desiderata of an episteme are, roughly:

- Certainty. A huge issue in early modern philosophy which has been largely abandoned in contemporary philosophy.
- Completeness. Everything is either true or false, nothing is neither.
- Consistency. Nothing is both true and false.
- Convergence. Everyone can agree on a single truth.

Rationalists have already given up Certainty, and might well have to give up on Convergence (single truth-ism) as well, if they adopt Coherentism. Or Completeness, if they adopt instrumentalism.
comment by Algon · 2024-08-06T20:02:56.707Z · LW(p) · GW(p)
I have the intuition that a common problem with circular reasoning is that it's logically trivial. E.g. A |- A has a trivial proof. Before you do the proof, you're almost sure it is the case, so your beliefs practically don't change. When I ask why I believe X, I want a story for why this credence and not some other substantially different counterfactual credence. Which a logically trivial insight does not help provide.
EDIT: inserted second "credence" and "help".
comment by Seed (Donqueror) · 2024-08-05T21:31:08.356Z · LW(p) · GW(p)
I think it's fair to say that the most relevant objection to circular arguments is that they are not very good at convincing someone who does not already accept the conclusion.
All circular reasoning which is sound is tautological and cannot justify shifting expectation.
The point is, you have to live with at least one of:
No branch of this disjunction applies. Justifications for assumptions bottom out in EV of the reasoning, and so are justified when the EV calculation is accurate. A reasoner can accept less than perfect accuracy without losing their justification -- the value of reasoning bottoms out in the territory, not the map, and so "survived long enough to have the thought" and similar are implicitly contributing the initial source of justification.
Circular arguments fail to usefully constrain our beliefs; any assumptions we managed to justify based on evidence of EV will assign negative EV for circular arguments, and so there is no available source of justification from existing beliefs for adopting a circular argument, while there is for rejecting them.
Coherentism: Circular justification is allowed in some fashion.
Only insofar as a reasoner can choose not to require that anything requiring cognitive work pay rent to justify the expenditure. Optimal bounded reasoning excludes entertaining circular arguments based on expectation of wasting resources.
circular justifications seem necessary in practice
I didn't see any arguments which point to that, unless you mean the regress argument / disjunction (edit: or this?):
Therefore, by the rule we call conservation of expected evidence [? · GW], reasoning through a belief system and deriving a conclusion consistent with the premise you started with should increase your credence.
Two independent justifications:
- One starts with negative EV for having engaged in reasoning-that-requires-resources at all, and so the conclusion must at least pay that off to be a justified way to reason.
- A circular argument does not constitute evidence relative to the premises, so conservation of expected evidence does not prescribe an update, except perhaps an extremely small one about the consistency of (and thus EV of) the assumptions that led up to that point -- the argument is evidence that those rules didn't lead to a contradiction, but not about the conclusion.
↑ comment by abramdemski · 2024-08-06T19:08:19.686Z · LW(p) · GW(p)
circular justifications seem necessary in practice
I didn't see any arguments which point to that unless you mean the regress argument / disjunction
Yes, I agree: the essay doesn't really contain a significant argument for this point. "Seem necessary in practice" is more of an observation, a statement of how things seem to me.
The closest thing to a positive argument for the conclusion is this:
However, in retrospect I think it's pretty clear that any foundations are also subject to justificatory work, and the sort of justification needed is of the same kind as is needed for everything else. Therefore, coherentism.
And this, which is basically the same argument:
My reasons [...] are practical: I'm open to the idea of codifying excellent foundational theories (such as Bayesianism, or classical logic, or set theory, or what-have-you) which justify a huge variety of beliefs. However, it seems to me that in practice, such a foundation needs its own justification. We're not going to find a set of axioms which just seem obvious to all humans once articulated. Rather, there's some work to be done to make them seem obvious.
I also cite Eliezer stating a similar conclusion:
Everything, without exception, needs justification. Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops.

Replies from: Donqueror
↑ comment by Seed (Donqueror) · 2024-08-08T00:43:26.903Z · LW(p) · GW(p)
I think it's pretty clear that any foundations are also subject to justificatory work
EV is the boss turtle at the bottom of the turtle stack. Dereferencing justification involves a boss battle.
there's some work to be done to make them seem obvious
There's work to show how justification for further things follows from a place where EV is in the starting assumptions, but not to take on EV as an assumption in the first place, as people have EV-calculatingness built into their behaviors as can be noticed to them.
Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops
I reject this! Justification results from EV calculations and only EV calculations, as trust values for assumptions in contexts.
↑ comment by abramdemski · 2024-08-06T18:51:05.667Z · LW(p) · GW(p)
All circular reasoning which is sound is tautological and cannot justify shifting expectation.
Does your perspective on this also imply that mathematical proofs should never shift one's beliefs? It sounds like you are assuming logical omniscience.
Also, it is possible for a circular argument like "A; A -> B; so, B; and also, B -> A; therefore, A" to be sound without being tautological. The implications can be contingently true rather than tautologically. The premise A can be contingently true rather than tautologically.
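As a quick sanity check of this (an illustrative brute-force sketch I'm adding for concreteness, not part of the original exchange; the helper names are made up): enumerate the four truth assignments and verify both that the premises {A, A -> B, B -> A} entail A in every row, and that those premises are contingent rather than tautological.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def premises(a, b):
    # the circular argument's premises: A, A -> B, B -> A
    return a and implies(a, b) and implies(b, a)

rows = list(product([False, True], repeat=2))

valid = all(a for a, b in rows if premises(a, b))    # the premises entail the conclusion A
tautological = all(premises(a, b) for a, b in rows)  # are the premises true in every row?
satisfiable = any(premises(a, b) for a, b in rows)   # can they be (contingently) true?

print(valid, tautological, satisfiable)  # True False True
```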
Replies from: Donqueror

↑ comment by Seed (Donqueror) · 2024-08-07T23:02:50.429Z · LW(p) · GW(p)
One trusts proofs contextually, as a product of the trusts of the assumptions that led to it in the relevant context. Insofar as Bayesianism requires justification, it can be justified as a dependency in EV calculations.
We're not going to find a set of axioms which just seem obvious to all humans once articulated.
People understand EV intuitively as a justification for believing things, so this doesn't ring true to me.
The premise A can be contingently true rather than tautologically.
True, I should have indicated I was rejecting it on the basis of repetition. One could reject any repetition of what is already a given in a proof and not lose access to any conclusions. Repetitions contain tautologies (edit: more importantly, repetitions contain sound circular arguments), and I'm ruling out repetitions with EV as the justification. Anything updateful about an argument with circular reasoning is contained in the tree(s) formed by disallowing repetition.
↑ comment by abramdemski · 2024-08-06T19:01:28.508Z · LW(p) · GW(p)
Circular arguments fail to usefully constrain our beliefs; any assumptions we managed to justify based on evidence of EV will assign negative EV for circular arguments, and so there is no available source of justification from existing beliefs for adopting a circular argument, while there is for rejecting them.
As mentioned in AnthonyC's comment, circular arguments do constrain beliefs: they show that everything in the circle comes as a package deal. Any point in the circle implies the whole.
Replies from: Donqueror

↑ comment by Seed (Donqueror) · 2024-08-07T23:30:41.544Z · LW(p) · GW(p)
Multiple argument chains without repetition can demonstrate anything a circular argument can. No beliefs are constrained when a circular argument is considered relative to the form disallowing repetition (which could avoid costly epicycles). The initial givens imply the conclusion, and they carry through to every point in the argument, implying the whole.
↑ comment by abramdemski · 2024-08-06T18:55:59.698Z · LW(p) · GW(p)
No branch of this disjunction applies. Justifications for assumptions bottom out in EV of the reasoning, and so are justified when the EV calculation is accurate. A reasoner can accept less than perfect accuracy without losing their justification -- the value of reasoning bottoms out in the territory, not the map, and so "survived long enough to have the thought" and similar are implicitly contributing the initial source of justification.
I can easily interpret this as falling into branches of the disjunction, and I am not sure how to interpret it as falling into none of the branches. It seems most naturally like a "justification doesn't always rest on further beliefs" type view ("the value of reasoning bottoms out in territory, not map").
Replies from: Donqueror

↑ comment by Seed (Donqueror) · 2024-08-08T00:17:26.612Z · LW(p) · GW(p)
Some beliefs do not normatively require justification;
Beliefs have to be justified on the basis of EV, such that they fit in a particular way into that calculation, and justification comes from EV of trusting the assumptions. Justification could be taken to mean having a higher EV for believing something, and one could be justified in believing things that are false. Any uses of justification to mean something not about EV should end up dissolving; I don't think justification remains meaningful if separated.
Some justifications do not rest on beliefs
Justification rests on beliefs as inputs to EV calculations.
Some justification chains are allowed to be circular
No conclusion requires its justification to be circular.
Some justification chains are allowed to be infinite and non-repeating
No infinite chains are required; they bottom out in observations as beliefs to be input into EV calculations.
No beliefs are permissible.
Beliefs are required for EV calculations.
comment by Benaya Koren · 2024-08-06T10:33:49.920Z · LW(p) · GW(p)
I think it would be helpful, when dealing with such foundational topics, to tabu "justification", "validity", "reason" and some related terms. It is too easy to stop the reduction there, and forget to check what their cause and function are in our self-reflecting epistemic algorithm.
The question shouldn't be whether circular arguments are "valid" or give me "good reason to believe", but whether I may edit the parts of my algorithm that handle circular arguments, and as a result expect (according to my current algorithm) to have stronger conviction in more true things.
Your Bayesian argument, that if the claim were false the circle would likely end in contradiction, I find convincing, because I am already convinced to endorse this form of Bayesian reasoning. As a normative principle, it has properties that I have already learned to make sense of according to earlier heuristics that were hopefully good, including the heuristic that my heuristics are sometimes bad and that I want to be reasonably robust to that fact. Also, this principle may not be implemented absolutely without sacrificing other things that I care about more.
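To make that Bayesian point concrete, here is a toy update on the observation "the circle of implications checks out without contradiction"; the probabilities are invented placeholders, not numbers from the post or this comment:

```python
# Toy Bayesian update: observing that a circle of implications survives
# scrutiny without contradiction. All numbers are illustrative assumptions.
prior_true = 0.5              # prior that the claims in the circle are true
p_consistent_if_true = 0.99   # a true circle almost always checks out
p_consistent_if_false = 0.80  # a false circle often checks out too, but less reliably

evidence = (p_consistent_if_true * prior_true
            + p_consistent_if_false * (1 - prior_true))
posterior_true = p_consistent_if_true * prior_true / evidence

print(f"prior:     {prior_true:.3f}")      # 0.500
print(f"posterior: {posterior_true:.3f}")  # ~0.553 -- a small positive update
```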
Replies from: abramdemski↑ comment by abramdemski · 2024-08-06T19:12:34.675Z · LW(p) · GW(p)
Yeah, I would have liked to dig much deeper into what in the world[1] "justification" points at, but I thought the post would get too long and complex for the simple point being made.
[1] (I mean: what thing-in-the-world is being pointed at; what are the real phenomena behind "justification"; why we use such a concept)
↑ comment by Benaya Koren · 2024-08-06T21:17:16.239Z · LW(p) · GW(p)
Would very much like to read such a post. I have the basic intuition that it is a soft form of "witness" (as in complexity/cryptography), but it is not very developed.
comment by Seth Herd · 2024-08-05T19:30:35.431Z · LW(p) · GW(p)
The epistemological beliefs are justified in a way which fans back out to a broader picture of the world (eg, facts about physics or evolution may be invoked in the justification of epistemic practices).
Yes; I think any plausible theory of epistemics must hold that knowledge is "rhizomatic": beliefs are not justified in a chain, as classic philosophers supposed, but in branching and rejoining chains with lots of structure but without any strict hierarchy, bottom, or most-privileged level.
The best beliefs are the most consistent set, including consistency with actual empirical observations (like: I see things work this way, and I hear people express these beliefs and report other empirical results).
Edit:
despite that, circular arguments in a tightly closed loop with no reference to empirical observations seem like so little evidence that they're almost worthless - and in most situations are actually evidence of a less-rational thinker making the arguments, which in turn makes everything they argue more suspect. That is how I think we usually take them, and it's still correct.
Pointing out "that's a circular argument!" is probably pointless only because you're spending your time trying to convince an irrational person to be rational. It's kind of like trying to wrestle your greased pig back into the pen - it seems like an exhausting waste of time until you realize that the pig likes it (or something like that - I'm gleefully misremembering that aphorism). Except that this pig doesn't like it, and neither do you, and now you're both covered in grease and mud and irritated with each other.
Or alternately, you're talking to a perfectly rational person, whose beliefs are justified by such a complex rhizomatic chain that they couldn't possibly explain them to you in a brief exchange, so they just used a circular argument to express (consciously or unconsciously) that they're not interested in trying to justify their beliefs to you right now.
Replies from: faul_sname↑ comment by faul_sname · 2024-08-05T20:40:18.465Z · LW(p) · GW(p)
despite that, circular arguments in a tightly closed loop with no reference to empirical observations seem like so little evidence that they're almost worthless
In such situations, "that argument seems to be unsupported by empirical evidence" seems to me like a better counterargument than "that argument is circular".
Replies from: Seth Herd↑ comment by Seth Herd · 2024-08-05T21:10:25.561Z · LW(p) · GW(p)
They mean the same thing, "you're not making sense", so they'll probably dislike it just as much.
Replies from: faul_sname↑ comment by faul_sname · 2024-08-05T21:21:47.639Z · LW(p) · GW(p)
Possible, although I think you probably reach that point much faster if you can establish that your conversational partner disagrees with the idea that arguments should be supported or at least supportable by empirical evidence.
comment by transhumanist_atom_understander · 2024-12-19T21:16:46.424Z · LW(p) · GW(p)
I wonder if there's also an analogy to the Gibbs sampling algorithm here.
For a believer, it will mostly bounce back and forth between "Assuming God is real, the bible is divinely inspired" and "Assuming the bible is divinely inspired, God must be real". But if these are not certainties, occasionally it must generate "Assuming God is real, the bible is actually not divinely inspired". And then from there, probably to "Assuming the bible is not divinely inspired, God is not real." But then also occasionally it can "recover", generating "Assuming the bible is not divinely inspired, God is actually real anyway." So you need that conditional probability too. But given all the conditional probabilities, the resulting chain generates the joint distribution over whether or not the bible is divinely inspired and whether or not God is real.
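A small sketch of that analogy as code, with two binary variables standing for "God is real" and "the Bible is divinely inspired"; the conditional probabilities are made-up numbers purely for illustration:

```python
import random

# P(B = True | G) and P(G = True | B); illustrative values only.
p_B_given_G = {True: 0.95, False: 0.05}
p_G_given_B = {True: 0.95, False: 0.05}

def gibbs(n_steps, seed=0):
    rng = random.Random(seed)
    G, B = True, True   # start inside the "believer" loop
    counts = {(g, b): 0 for g in (True, False) for b in (True, False)}
    for _ in range(n_steps):
        B = rng.random() < p_B_given_G[G]   # resample B given the current G
        G = rng.random() < p_G_given_B[B]   # resample G given the new B
        counts[(G, B)] += 1
    return {state: c / n_steps for state, c in counts.items()}

print(gibbs(100_000))
# The chain spends most of its time in (True, True) or (False, False) and
# occasionally "recovers" through the mixed states, approximating the joint
# distribution over the two claims implied by the conditionals.
```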
comment by Flying Pen and Paper (flying-pen-and-paper) · 2024-08-09T19:45:14.473Z · LW(p) · GW(p)
Avtur Chin writes:
'From A follows A' can be in two meta-stable situations; If A is false and if A is true. But typically circular logic is used to prove that A is true and ignores the other situation.
which I agree with. This can be contrasted with “From A does not follow not A”, which I believe entails A (as a false statement implies everything?).
When trying to prove a logical statement B from A, we generally have a sense of how much B resembles A, which we could interpret as a form of distance. Both A and not A resemble A very much.
I'll formalise this mathematically in the following sense. The set of logical statements forms a metric space where the metric is (lack of) resemblance. The metric is normalised so that A and not A lie in the unit ball centred at A, denoted by U. When reading a proof from premise A, we move from one point to the next, describing a walk.
Therefore, we might think of circular reasoning as an excursion (that is to say, a walk that returns to where it started) of logical statements. A proof that A implies not A is then an almost-excursion; it returns to U.
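Here is a minimal formalization of the construction above; the walk notation X_0, ..., X_n is my own labeling of what the two preceding paragraphs describe:

```latex
% (S, d): a metric space of logical statements, where d measures lack of resemblance,
% normalized so that the unit ball around A contains both A and \neg A:
\[
U = \{\, X \in S : d(A, X) \le 1 \,\}, \qquad A, \neg A \in U.
\]
% A proof from premise A is a walk of statements, each step derived from the last:
\[
A = X_0 \to X_1 \to \dots \to X_n.
\]
% Circular argument: an excursion, X_n = A (the walk returns to its starting point).
% A proof that A implies \neg A: an almost-excursion, X_n = \neg A \in U
% (the walk returns to the unit ball U, but not to A itself).
```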
If there is reason to believe that (1) excursions and almost-excursions are comparably likely, in some sense left undefined, and (2) lots of excursions are recorded but no almost-excursions, then this seems to lend evidence to the proposition that A does not imply not A, i.e. that A is true.
The issue is obviously that it’s not clear that (1) is true. However, intuitively, it does seem at least more likely to be true if conditioned on the walk becoming very distant from A before its return to U.
comment by Aleksander (Omnni) · 2024-08-09T18:30:19.342Z · LW(p) · GW(p)
I would suggest but one fundamental statement which we can accept without circular or infinite reasoning: the things which I observe reflect reality in some consistent and knowable way. (This exact wording has some problems; I ask you to ignore them.) This statement can be untrue, but if it were, it would be reasonable to say that we are completely uncertain about the state of reality. Thus we can describe two possible mindsets, based on whether the claim is true: (1) we have no idea what is going on; (2) the mindset that we generally take (too long to describe). It's clear that (1) is completely useless at predicting the one thing that we care about: moving towards pleasure and away from pain. Even if the claim is false, (1) will not be more effective at predicting pain than (2), on average. If the claim has some arbitrarily small chance of being true, we expect a higher return from acting as if it is correct than from acting any other way. Thus, we should always act as if the claim is true and use mindset (2).
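A toy expected-return version of that argument (the probability and payoffs below are placeholders of my own, not anything from the comment):

```python
# Compare the two mindsets under a small chance that the fundamental claim is true.
# All numbers are illustrative assumptions.
p_claim_true = 0.01   # "arbitrarily small" chance that observations track reality

# Payoffs conditional on whether the claim is true. If it is false, neither
# mindset predicts pleasure/pain better than the other, so both get 0.
payoff = {
    "mindset (1): no idea what is going on": {"true": 0.0, "false": 0.0},
    "mindset (2): act as if the claim holds": {"true": 1.0, "false": 0.0},
}

for mindset, v in payoff.items():
    ev = p_claim_true * v["true"] + (1 - p_claim_true) * v["false"]
    print(f"{mindset}: expected return = {ev:.4f}")

# Mindset (2) never does worse and does strictly better whenever the claim has
# any nonzero chance of being true, so it weakly dominates mindset (1).
```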