Grounding self-reference paradoxes in reality

post by Fiora from Rosebloom · 2024-09-29T05:50:30.559Z · LW · GW · 3 comments

This post is not a good intro to formal logic.

I'm bothered by how often self-reference paradoxes (e.g. "this sentence is false") are touted as gaping holes in logic, glaring flaws in one of our trustiest tools for grokking reality (namely formal logic). It's obvious that something interesting is going on here; the fact that, say, Gödel's incompleteness theorem even exists provides interesting insight into the structure of first-order logic, and in this post I want to explore that insight in more detail.

However, sometimes I'll see people making arguments like "Gödel's incompleteness theorem demonstrates that a certain true statement can't be proven within the system that expresses it; the fact that humans are capable of recognizing that the Gödel sentence is true anyway suggests that our minds transcend mere computational logic, and thus can't be emulated on computer hardware no matter how advanced." This is pretty close to a central example of the attitude I take issue with.

So what's the misunderstanding of self-reference I claim such people are making? Well, let's stay zeroed in on the Gödel theorem and the liar's paradox in particular for now.[1] One can force these types of issues to arise by constructing a logical sentence such that, if you assume it to be either true or false and then apply certain "deduction rules", you will inevitably conclude that the sentence has the opposite of the truth value you initially assumed.

So for example, if you assume "this statement is false" is true, then according to whatever rules of inference are implied in the logic we apply to natural language statements, we end up with the thought that the statement must be false.[2] This new statement, that "this statement is false" is false, then gets run through our implicit rules of inference again, and produces the statement that "this statement is false" must be true, and it goes around in a circle forever.

(Gödel's proof of his incompleteness theorem, or at least the proof sketched on this Wikipedia page, is similar. The main difference is that, instead of going in a circle forever, we simply observe that a proof of the Gödel sentence could be converted into a proof of its negation, which means the system could prove a contradiction, and the buck stops there. You could totally write an automated theorem prover which did do Gödel logic in a circle forever, though, overturning the truth-value of the Gödel sentence with each cycle rather than halting and catching fire the moment the contradiction is spotted.)
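To make the flip-flopping concrete, here's a toy sketch in Python (my own illustration, not anything from Gödel's actual proof) of such a non-halting "prover", where each pass through the loop overturns the assumed truth-value:

```python
# Toy model of the liar's-paradox loop: the sentence's one "rule of
# inference" is "my truth-value is the opposite of whatever you assumed".

def liar_step(assumed: bool) -> bool:
    """Applying 'this statement is false' to an assumed truth-value."""
    return not assumed

value = True  # the initial assumption; starting from False works the same way
for step in range(6):
    print(f"step {step}: assumed {value}, deduction yields {not value}")
    value = liar_step(value)
# Output alternates forever: True -> False -> True -> ... with no fixed point.
```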

So again, the self-reference issue arises when the implications of a sentence always overturn its original truth-value, no matter what that original value was assumed to be. This is a problem because it violates the law of non-contradiction; something can't be both true and false at the same time.

However, I claim that these paradoxical sentences don't present any real problem for our capacity to understand reality, or even to represent it in a formal system. I have to get a bit technical to demonstrate what I mean, sadly, so bear with me. Suppose that we have a giant table of 1s and 0s, representing the full history of reality, the full history of its physical configurations. Each column of the table represents the world as it stands at a given instant in time; each zero represents answering "no" to some question about the state of the world, and each one represents answering "yes" to some question about the state of the world. In this framework, modulo concerns about reality having infinitely many details, you should be able to exhaustively describe every state the universe will ever be in.[3]

Now, if we introduce logic gates into this framework, we can also use it to represent the laws of physics, or the rules for evolving the state of reality over time. (This isn't unlike applying rules of inference in formal logic, an observation we'll return to later.) You can imagine a massive series of logic gates connecting each column in our table to the one immediately to its right; the gates could be set up such that, by simply inputting one complete configuration of reality at the start, later configurations could be derived automatically by a cascade of mechanically triggered logic gates.[4]
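As a concrete (and heavily hedged) sketch of this picture, here's a toy Python version: the "world" is three bits, the "laws of physics" are an arbitrary little gate network I made up for illustration, and each derived state is one more column in the table:

```python
# Toy "world-table": each state is a column; a fixed gate network
# (the made-up "laws of physics") derives each column from the last.

def AND(a, b): return a & b
def NOT(a):    return 1 - a
def XOR(a, b): return (a | b) & NOT(AND(a, b))  # built from basic gates

def physics(state):
    """An arbitrary 3-bit 'law of physics' wired from gates."""
    a, b, c = state
    return (XOR(a, b), AND(b, NOT(c)), NOT(a))

state = (1, 0, 1)                 # the initial configuration of "reality"
table = [state]
for _ in range(5):                # later columns cascade mechanically
    state = physics(state)
    table.append(state)

for t, column in enumerate(table):
    print(f"t={t}: {column}")
```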

Between the "bits as answers to yes-or-no questions" frame and the capacity of logic gates to evolve the values of bitstrings in arbitrary ways, I think it's clear that this formal system can in principle be used to represent the physical world exhaustively, or at least that you'd only fail due to not having access to enough physical hardware (or knowledge of the universe) to represent reality using this method. So that seems to blow a hole in the idea that the liar's paradox and/or Gödel's theorem prove that we're incapable of understanding the material universe, or even building formal systems which represent it.

Now, there is another strain of arguments for the significance of these paradoxes, which suggests that there's an incompleteness in the realm of purely abstract logic, itself presented as a domain in which our ability to find Truth matters. I'm not going to get into the weeds of the metaphysical debate surrounding this position, partly because I don't feel my view is fully developed yet. (For those who want to learn more on their own, the keyword is "mathematical Platonism".) Instead, I'm going to just assume that the material universe we normally refer to as "the real world" is the only "real" reality, and try to clarify the relationship between logic and reality within that framework.

Humans probably develop their logical faculties as a way of improving their ability to model the physical world. Lots of our world models are probably already developed by predictive processing: humans predict their own incoming sense data, and update their theory of how the world works based on errors in their predictions. However, empirically, we seem to be able to further refine our models purely by reflecting on evidence we've already gathered, for example by doing math inside our heads to predict which answer the math professor will deem to be correct. (Note that this cashes out as a concrete prediction about incoming sensory experience, not just an abstract byproduct of applying mathematical laws.)

When it comes to formal logical systems in particular, we might say that insofar as they have a practical purpose at all, it ought to be to help us make effective use of limited information about the reality-table we discussed earlier (or really, more like a high-level, abstracted version of it), in the sense that we can apply rules of inference to the bits we can access to recover some of the bits we haven't yet been given.[5] For example, the "modus ponens" inference rule lets us take two sentences we know to be true, like "If it's raining, it's cloudy" and "It's raining", and conclude that "It must be cloudy" is true as well. A less classic example may be using approximate laws of physics as inference rules to predict future observations about, e.g., the positions of the planets in the night sky. In both cases, formal logic is helping us recover answers to questions about the configuration of reality (i.e., it's not just abstract conceptual free-play).
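For instance, here's a minimal sketch (my own toy, with the rain/clouds example above hard-coded) of inference rules recovering bits we weren't directly given:

```python
# Tiny forward-chaining engine: modus ponens applied until no new
# facts (bits of the world-table) can be recovered.

known = {"raining"}                 # bits we've directly observed
rules = [("raining", "cloudy")]     # "if it's raining, it's cloudy"

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in known and consequent not in known:
            known.add(consequent)   # a bit we were never given, recovered
            changed = True

print(known)                        # {'raining', 'cloudy'}
```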

Now, there's an interesting and important dual aspect to the tables of facts that logic helps us assemble. On one level, they're supposed to represent a physical reality where, at any given time-step, any given claim about the configuration of the world is either true or false. There's not supposed to be any of the "If it's true, then it's false"-style looping from the liar's paradox and similar; there's a fact about how the world is at one moment of time, and how it is at the next, but within any single time-step a given cell in the world-table is only ever one or zero. So in the reality that logical symbols are supposed to represent, flip-flopping situations like the liar's paradox are metaphysically impossible.

(Indeed, awareness that facts about reality have this property (consistency within timesteps) is the reason we find logical contradictions problematic in the first place!)

However, any representation of reality that we humans might construct will itself inevitably exist across units of time. There, individual cells in the tables we construct actually can change across time. This introduces the possibility that, for some algorithmic reason, a given cell's truth-value will begin to flip-flop. This is what's going on in Gödel's incompleteness theorem: the deduction rules of first-order logic are such that, in the course of tracing out the logical implications of the Gödel sentence, the cell containing its truth-value will inevitably have its value flipped. If we're trying to read columns of the table as representing the world at given time-steps, this kind of flip-flopping amounts to a violation of the law of non-contradiction, which we rightly see as a major flaw. So it's true that these logical systems break down under the right circumstances.

But that still doesn't mean it's impossible to build a logical system that makes sense of why these self-reference paradoxes happen, and sees through them! We can still, in our minds, build a mental model whose structure resembles the world-tables I was talking about earlier, one whose starting state we would, in natural language, describe as "The Gödel sentence is being fed into a system of logical deduction". From there, we can also make use of our intuitions about physical causality to predict how that logical deduction system will react to the sentence; i.e., that system (which might be a computer or a human brain or something) will fail to end up in a physical configuration we'd classify as "containing a satisfactory proof of the Gödel sentence."
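In toy form (again, my own sketch rather than anything rigorous), "seeing through" the paradox looks like simulating the deduction system as a process and noticing, from outside, that it loops without ever reaching a "proved" state:

```python
# Simulate the deduction system as a physical process and watch it
# from outside: it cycles through states without ever proving the sentence.

def deduce(value: bool) -> bool:
    return not value              # the liar-style flip from earlier

value, seen = True, set()
while value not in seen:          # stop once a state repeats
    seen.add(value)
    value = deduce(value)

# A revisited state means the derivation is in a loop: we can predict the
# system never halts in a "contains a satisfactory proof" configuration.
print("deduction loops forever without settling:", value in seen)  # True
```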

In other words, since these natural language sentences are just abstract descriptions of physical processes, we can represent those processes using something like the formal system for physics I described earlier!

My belief is that this kind of thinking is what accounts for our ability to "tell that the Gödel sentence is true", in the manner that trips up people who take the incompleteness theorem as an argument against physicalism. We don't take the Gödel sentence as a primitive and evolve it strictly through the deduction rules of first-order logic. Instead, we treat it as an abstract object in physical reality, and evolve our world-simulation according to our grasp of the laws of physics. This evolution leads to a new state of the world we'd classify as "the deductive logical system has failed to prove the Gödel sentence."

(We can choose to apply the inference rules of formal logic, rather than our intuitions from physics; in doing so we end up as confused as the incompleteness theorem suggests we should be. But we can also choose to interpret the situation differently, and escape the problematic framing.)


Before closing this essay, I'd like to complain about a standard phrasing of Gödel's incompleteness theorem itself: "Gödel's incompleteness theorem shows that in any sufficiently powerful formal system, there are true statements that cannot be proven within the system." Note the phrase "sufficiently powerful formal system". Here, power isn't referring to what set of functions the system is capable of expressing. Indeed, circuits built from simpler systems, like propositional/Boolean logic, can compute any finite function, as I sketched in an earlier footnote.[4] This means they can express an equivalent to any function that will ever be computed in practice.

Instead, here, the "power" of systems that allow for Gödelian incompleteness (such as first-order logic) refers to their capacity to represent abstract categories. This makes it somewhat more natural for humans to translate their native conceptual vocabularies into the terms of first-order logic and similar systems.

For technical reasons I won't get into, the ability to handle abstract categories like this is integral to Gödel's incompleteness proof. However, even though this additional "power" does in fact make first-order logic easier to work with for some purposes, it comes mostly from adding in features which resemble the process of human reasoning; it doesn't enable any computations that aren't available otherwise.

In other words, Gödel's incompleteness theorem arises out of an attempt to make logic less unwieldy, not more computationally capable. It's entirely possible to use "weaker" systems of logic to simulate the universe perfectly. This includes universes wherein, at some level of abstraction, simulated logical deduction systems are getting caught up on Gödelian incompleteness as it was originally presented. All of this is just another angle on why this type of self-reference paradox isn't a hole in our basic capacity to understand reality.

  1. ^

We'll deal with Löb's theorem and its natural language equivalent in the appendix. [I didn't write an appendix.]

  2. ^

    Tangent: Note the affinity between "rules of inference" and the concept of "inference" in neural networks, where inference is the process of a neural network generating outputs. If human thoughts work anything like neural networks (a conjecture I argue for here), you might think of the contents of our consciousness as the "prompt", and us generating our next thoughts via an "inference" process (which then get fed back into our consciousness to produce more thoughts, not unlike how language models feed their next-token predictions back into their own context over and over). Logical statements, like "this sentence is true", are a special case of our cognitive "prompts" in general, and the "rules of inference" by which we derive additional logical sentences can be seen as a special case of the complex, neural network-like function which produces the "inferences" (future thoughts) from our cognitive "prompts" (our present thoughts) in general.

  3. ^

I have strong opinions on the concept of infinity which make me less concerned (to put it bluntly, I think the concept of infinity is bullshit and a plague on mathematics lol). I'll present that take in a future post, though; it's too much to summarize here.

  4. ^

As a quick proof that logic gates can be used to compute arbitrary (finite) functions, including discretized laws of physics: you can build a lookup table for any function you want using just AND, NOT, and OR gates, and OR can itself be built from AND and NOT via De Morgan's laws. Have bits representing your input; for each input pattern that should produce a 1 on a given output bit, build one massive AND gate over all the input bits, with NOT gates inverting whichever bits need to be zero for that pattern. Each such AND gate fires only on its one input pattern, so by OR-ing together the AND gates for a given output bit, you can build an arbitrary lookup table.
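Here's that construction in toy Python form (a sketch under this footnote's assumptions, with 2-bit XOR as the example function): one AND-of-possibly-NOTed-inputs per input pattern that should output 1, OR'd together, with OR itself derived from AND and NOT.

```python
# Lookup table from gates: OR is derived from AND and NOT (De Morgan),
# so AND and NOT really are sufficient primitives.

def AND(a, b): return a & b
def NOT(a):    return 1 - a
def OR(a, b):  return NOT(AND(NOT(a), NOT(b)))

def minterm(inputs, pattern):
    """An AND gate that fires only on one specific input pattern;
    NOT gates invert the bits that must be zero."""
    out = 1
    for bit, want in zip(inputs, pattern):
        out = AND(out, bit if want else NOT(bit))
    return out

ones = [(0, 1), (1, 0)]     # the input patterns mapping to 1 (here: XOR)

def lookup(inputs):
    out = 0
    for pattern in ones:    # one minterm per 1-row of the truth table
        out = OR(out, minterm(inputs, pattern))
    return out

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", lookup(x))   # reproduces the XOR truth table
```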

  5. ^

Re: abstractions, one major limit of formal logic in general is that it can't easily deal with the leaky abstractions we use in everyday life (including in the "logical" thoughts that these formal systems are based on). Ideas like "it's raining" and especially "it's cloudy" are not binary categories; at best they're gradients, and systems like first-order logic have no tractable way of expressing this. (This was a major problem for the symbolic paradigm in AI research, which often tried to use first-order logic to simulate human reasoning.) I expect the human brain deals with these vague abstractions the same way neural networks seem to, which is that "concepts" are really these super messy patterns of neural activation; however, those concepts are accessed by certain combinations of "tokens" in a "prompt", and those tokens are much more easily definable at the hardware level. (I'm gambling on there being a strong analogy between LLM context windows and human consciousness, or at least the neural correlates thereof; I'm claiming that consciousness is composed of something like tokens, and that that's what "prompts" our logical inferences.)

3 comments

comment by notfnofn · 2024-09-29T22:28:07.579Z · LW(p) · GW(p)

However, I claim that these paradoxical sentences don't present any real problem for our capacity to understand reality, or even to represent it in a formal system. I have to get a bit technical to demonstrate what I mean, sadly, so bear with me. Suppose that we have a giant table of 1s and 0s, representing the full history of reality, the full history of its physical configurations. Each column of the table represents the world as it stands at a given instant in time; each zero represents answering "no" to some question about the state of the world, and each one represents answering "yes" to some question about the state of the world. In this framework, modulo concerns about reality having infinitely many details, you should be able to exhaustively describe every state the universe will ever be in.[3]

Now, if we introduce logic gates into this framework, we can also use it to represent the laws of physics, or the rules for evolving the state of reality over time. (This isn't unlike applying rules of inference in formal logic, an observation we'll return to later.) You can imagine a massive series of logic gates connecting each column in our table to the one immediately to its right; the gates could be set up such that, by simply inputting one complete configuration of reality at the start, later configurations could be derived automatically by a cascade of mechanically triggered logic gates.[4]

Between the "bits as answers to yes-or-no questions" frame and the capacity of logic gates to evolve the values of bitstrings in arbitrary ways, I think it's clear that this formal system can in principle be used to represent the physical world exhaustively, or at least that you'd only fail due to not having access to enough physical hardware (or knowledge of the universe) to represent reality using this method. So that seems to blow a hole in the idea that the liar's paradox and/or Gödel's theorem prove that we're incapable of understanding the material universe, or even building formal systems which represent it.

Is the point of this whole thing that if you have a finite system that evolves according to a finite set of rules for a finite number of steps, then you can prove anything about the system?

comment by Fiora from Rosebloom · 2024-09-30T00:04:41.097Z · LW(p) · GW(p)

that's a really good way of putting it yeah, thanks.

and then, there's also something in here about how in practice we can approximate the evolution of our universe with our own abstract predictions well enough to understand the process by which a physical substrate gets tripped up by a self-reference paradox. which is the explanation for why we can "see through" such paradoxes.

comment by notfnofn · 2024-09-30T01:19:14.889Z · LW(p) · GW(p)

Okay, I read to the end and I'm a little skeptical that you properly understand the incompleteness theorem. Are you aware, for instance, that the incompleteness theorem prohibits us from even proving all first-order statements about the natural numbers using any consistent mathematical framework (regardless of how powerful it is)? And that it was later shown that no consistent mathematical framework can even prove the existence/non-existence of solutions to Diophantine equations? The reason I ask is that you brought up the necessity of abstract categories and I'm not really sure what you meant by that. It also seemed that you might be unaware that no mathematical framework can resolve the halting problem for any Turing-complete system (this is essentially a tautology). Am I misunderstanding what you meant?