The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review

post by jessicata (jessica.liu.taylor) · 2024-03-27

This is a link post for https://unstablerontology.substack.com/p/the-cognitive-theoretic-model-of

Contents

  Abstract
  Introduction
  On Theories, Models and False Dichotomies
  Determinacy, Indeterminacy and the Third Option
  The Future of Reality Theory According to John Wheeler
  Some Additional Principles
  The Reality Principle
  Syndiffeonesis
  The Principle of Linguistic Reducibility
  Syntactic Closure: The Metaphysical Autology Principle (MAP)
  Syntactic Comprehensivity-Reflexivity: the Mind Equals Reality Principle (M=R)
  Syntactic Coherence and Consistency: The Multiplex Unity Principle (MU)
  The Principle of Hology (Self-composition)
  Duality Principles
  The Principle of Attributive (Topological-Descriptive, State-Syntax) Duality
  Constructive-Filtrative Duality
  Conspansive Duality
  The Extended Superposition Principle
  Supertautology
  Reduction and Extension
  The Principle of Infocognitive Monism
  Telic Reducibility and Telic Recursion
  The Telic Principle
  Conclusion

About 15 years ago, I read Malcolm Gladwell's Outliers. He profiled Chris Langan, an extremely high-IQ person, claiming that he had only mediocre accomplishments despite his high IQ. Chris Langan's theory of everything, the Cognitive Theoretic Model of the Universe, was mentioned. I considered that it might be worth checking out someday.

Well, someday has happened, and I looked into CTMU, prompted by Alex Zhu (who also paid me for reviewing the work). The main CTMU paper is "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory".

CTMU has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The paper itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology and has few citations relative to most academic works (partly because the author is outside the normal academic system). The work is incredibly ambitious, attempting to rebase philosophical metaphysics on a new unified foundation. As a short work, it can't fully deliver on this ambition; it can provide a "seed" of a philosophical research program aimed at understanding the world, but few implications are drawn out.

In reading the work, there is a repeated sense of "what?", staring at terms, and then "ohhh" as something clicks. These insights may actually be the main value of the work; at the end I still don't quite see how everything fits together in a coherent system, but there were a lot of clicks along the way nonetheless.

Many of the ideas are similar to ideas found elsewhere, such as "anthropics" and "acausal interaction", but with less apparent mathematical precision, such that it's harder to see exactly what is being said, and easier to round off to something imprecise and implausible.

There is repeated discussion of "intelligent design", and Langan claims that CTMU proves the existence of God (albeit with a very different conceptualization than traditional religions). From the perspective of someone who witnessed the evolution / intelligent design debate of the 90s-00s, siding with the "intelligent design" branch seems erroneous, although the version presented here differs quite a lot from more standard intelligent design argumentation. On the other hand, the "evolutionists" have gone on to develop complex and underspecified theories of anthropics, multiverses, and simulations, which bring some amount of fundamental or nearly-fundamental mind and agency back into the picture.

I didn't finish summarizing and reviewing the full work, but what I have written might be useful to some people. Note that this is a very long post.

Abstract

Perception is a kind of model of reality. Information about reality includes information about the information processor ("one's self"), which is called reflexivity. The theory identifies mental and physical reality, in common with idealism. CTMU is described as a "supertautological reality-theoretic extension of logic"; logic deals in tautologies, and CTMU somehow deals in meta-tautologies. It is based in part on computational language theory (e.g. the work of Chomsky, and type theory). Central to CTMU is the Self-Configuring Self-Processing Language (SCSPL), a language that can reflect on itself and configure its own execution, perhaps analogous to a self-modifying program. SCSPL encodes a form of "dual-aspect monism" consisting of "infocognition", integrated information and cognition. CTMU states that the universe comes from "unbounded telesis" (UBT), a "primordial realm of infocognitive potential free of informational constraint"; this may be similar to a language in which the physical universe could be "specified", or perhaps even prior to a language. CTMU features "telic recursion" involving agent-like "telors" that are "maximizing a generalized self-selection parameter", in an anthropic way that is like increasing their own anthropic probability, or "measure", in a way that generalizes evolutionary self-reproduction. It includes interpretations of physical phenomena such as quantum mechanics ("conspansion") and "temporal directionality and accelerating cosmic expansion". It also includes an interpretation of "intelligent design" as it conceives of agent-like entities creating themselves and each other in a recursive process.

Introduction

The introduction notes: "Among the most exciting recent developments in science are Complexity Theory, the theory of self-organizing systems, and the modern incarnation of Intelligent Design Theory, which investigates the deep relationship between self-organization and evolutionary biology in a scientific context not preemptively closed to teleological causation."

Complexity theory, in contrast to traditional physical reductionism, gives rise to "informational reductionism", which takes information, rather than, say, atoms, as foundational. However, this reductionism has similar problems to other reductionisms. Separating information and matter, Langan claims, recapitulates Cartesian dualism; therefore, CTMU seeks to unify these, developing "a conceptual framework in which the relationship between mind and matter, cognition and information, is made explicit."

DNA, although a form of information, is also embedded in matter, and would not have material effects without being read by a material "transducer" (e.g. the cell's transcription and translation machinery). Reducing everything to information, therefore, neglects the material embodiment of information processors.

Intelligent design theory involves probabilistic judgments such as "irreducible complexity", the idea that life is too complex and well-organized to have been produced randomly by undirected evolution. Such probabilistic judgments rely on either a causal model (e.g. a model of how evolution would work and what structures it could create), or some global model that yields probabilities more directly.

Such a global model would have certain properties: "it must be rationally derivable from a priori principles and essentially tautological in nature, it must on some level identify matter and information, and it must eliminate the explanatory gap between the mental and physical aspects of reality.  Furthermore, in keeping with the name of that to be modeled, it must meaningfully incorporate the intelligence and design concepts, describing the universe as an intelligently self-designed, self-organizing system."

Creating such a model would be an ambitious project. Langan summarizes his solution: "How is this to be done? In a word, with language."

This recalls Biblical verses on the relation of God to language. John 1:1 states: "In the beginning was the Word, and the Word was with God, and the Word was God." (NIV). Theologian David Bentley Hart alternatively translates this as: "In the origin there was the Logos, and the Logos was present with God, and the Logos was god." The Greek term "logos" means "word" and "speech" but also "reason", "account", and "discourse".

Derrida's frequently-misunderstood "there is nothing outside the text" may have a similar meaning.

Langan continues: "Not only is every formal or working theory of science and mathematics by definition a language, but science and mathematics in whole and in sum are languages."

Formal logic as a language is a standard mathematical view. Semi-formal mathematics is more like a natural language than a formal language, being a method of communication between mathematicians that assures them of formal correctness. All mathematical discourse is linguistic but not vice versa; mathematics lacks the ability to refer to what is ill-defined, or what is empirical but indiscernible.

Science expands mathematics to refer to more of empirical reality, models of it, and elements of such. Like mathematics, science is a language of precision, excluding from the discourse sufficiently ill-defined or ambiguous concepts; this makes science unlike poetry.

Perhaps the empirical phenomena predicted by scientific discourse are not themselves language? Langan disagrees: "Even cognition and perception are languages based on what Kant might have called 'phenomenal syntax'".

Kant, famously, wrote that all empirical phenomena must appear in spacetime. This provides a type constraint on empirical phenomena, as in type theory. Finite spacetime phenomena, such as images and videos, are relatively easy to formalize in type theory. In a type theoretic language such as Agda, the type of 100 x 100 black-and-white bitmaps may be written as "Vector (Vector Bool 100) 100", where "Bool" is the type of Booleans (true/false), and "Vector A n" is a list of n elements each of type A.
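
As a rough Python analogue (the names here are mine, and Python can only check the fixed size dynamically rather than in the type itself):

```python
# Rough Python analogue of the Agda type "Vector (Vector Bool 100) 100".
# Unlike Agda, Python cannot encode the length 100 in the type, so the
# shape constraint is checked dynamically. Names here are illustrative.
Bitmap = list[list[bool]]  # intended shape: 100 rows of 100 booleans

def is_valid_bitmap(image: Bitmap, size: int = 100) -> bool:
    """Check that `image` conforms to the fixed-size black-and-white bitmap type."""
    return len(image) == size and all(
        len(row) == size and all(isinstance(x, bool) for x in row)
        for row in image
    )

blank = [[False] * 100 for _ in range(100)]
assert is_valid_bitmap(blank)
```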

AI algorithms process inputs that are formatted according to the algorithm; for example, a convolutional neural network processes a rectangular array. So, the concept of a formal language applies to what we might think of as the "raw sense data" of a cognitive algorithm, and also to intermediate representations used by such algorithms (such as intermediate layers in a convolutional neural network).

Langan conceptualizes the laws of nature as "distributed instructions" applying to multiple natural objects at once (e.g. gravity), and together as a "'control language' through which nature regulates its self-instantiations". This recalls CPU or GPU concepts, such as instructions that are run many times in a loop across different pieces of data, or programs or circuitry replicated across multiple computing units.
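
A toy sketch of the idea, with gravity standing in for a "distributed instruction" (the code and names are illustrative, not from the paper):

```python
# Toy "distributed instruction": a single rule evaluated uniformly over many
# objects, loosely analogous to a SIMD instruction or a shader applied to
# every data element. Values and names are illustrative.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = [
    {"mass": 5.0, "distance": 2.0},
    {"mass": 10.0, "distance": 3.0},
    {"mass": 1.0, "distance": 1.5},
]

def gravitational_acceleration(body: dict) -> float:
    """Acceleration induced at the origin by one body: a = G m / r^2."""
    return G * body["mass"] / body["distance"] ** 2

# The same "instruction" is applied across all objects at once.
accelerations = [gravitational_acceleration(b) for b in bodies]
```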

The nature of these laws is unclear; for example, it is unclear "where" (if anywhere) they are located. There is an inherent difficulty in asking this question, similar to an intelligent program running on a CPU asking where the CPU's circuits are: it is a wrong question to ask which memory register the circuits are located in, and analogously it may be a wrong question for us to ask at what physical coordinates the law of gravity is, although the CPU case shows that the "where" question may nonetheless have some answer.

Langan seeks to extend the empirical-scientific methods of physics and cosmology to answer questions that they cannot: "science and philosophy do not progress by regarding their past investments as ends in themselves; the object is always to preserve that which is valuable in the old methods while adjoining new methods that refine their meaning and extend their horizons."

In the process, his approach "leaves the current picture of reality virtually intact", but creates a "logical mirror image" or "conspansive dual" to the picture to create a more complete unified view (here, "conspansion" refers to a process of reality evolving that can be alternately viewed, dually, as space expanding or matter contracting).

On Theories, Models and False Dichotomies

Langan describes "reality theory": "In the search for ever deeper and broader explanations, science has reached the point at which it can no longer deny the existence of intractable conceptual difficulties devolving to the explanatory inadequacies of its fundamental conceptual models of reality. This has spawned a new discipline known as reality theory, the study of the nature of reality in its broadest sense... Mainstream reality theory counts among its hotter foci the interpretation of quantum theory and its reconciliation with classical physics, the study of subjective consciousness and its relationship to objective material reality, the reconciliation of science and mathematics, complexity theory, cosmology, and related branches of science, mathematics, philosophy and theology."

Common discourse often uses the concept of "real" to distinguish conceptions by whether or not they have "actual" referents, but it is not totally clear how to define "real" or how such a concept relates to scientific theories or their elements. Reality theory includes a theory of its own application: since reality theory seeks to describe the "real" and is in some sense itself "real", it must describe how it relates to the reality it describes.

Over time, continuum physics has "lost ground" to discrete computational/informational physics, in part due to the increased role of computer simulations in the study of physics, and in part due to quantum mechanics. Langan claims that, although discrete models have large advantages, they have problems with "scaling", "nonlocality" (perhaps referring to how discrete models allow elements (e.g. particles) at nonzero distance from each other to directly influence each other), lack of ability to describe the "medium, device, or array in which they evolve", the "initial states", and "state-transition programming".

I am not totally sure why he considers discrete models to be unable to describe initial states or state-transition programming. Typically, such states and state transitions are described by discrete computer or mathematical specifications/instructions. A discrete physical model, such as Conway's game of life, must specify the initial state and state transitions, which are themselves not found within the evolving list of states (in this case, binary-valued grids); however, this is also the case for continuous models.
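
A minimal Game of Life sketch makes the point concrete: the initial state and the transition rule are specified outside the sequence of grids they generate (names are mine):

```python
# Minimal Game of Life sketch (illustrative): the transition rule (`step`) and
# the initial state are specified outside the sequence of grids they generate.
from collections import Counter

def step(grid: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One state transition; `grid` is the set of live cell coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in grid
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in grid)
    }

initial_state = {(0, 1), (1, 1), (2, 1)}  # a "blinker"
states = [initial_state]
for _ in range(3):
    states.append(step(states[-1]))
# Each grid in `states` is static; the rule `step` is not itself one of the grids.
```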

Langan also claims that discrete physical models are "anchored in materialism, objectivism, and Cartesian dualism"; such models typically model the universe from "outside" (a "view from nowhere") while leaving unclear the mapping between such a perspective and the perspectives of agents within the system, leading to anthropic paradoxes.

Langan notes that each of the classical and informational pictures of reality, while well-defined, lacks the ability to "account for its own genesis". CTMU seeks to synthesize classical with quantum models, attaining the best of both worlds.

Determinacy, Indeterminacy and the Third Option

Both classical and computational models have a mix of causality and stochasticity: a fully deterministic model would fail to account for phenomena that seem "fundamentally unexplainable" such as quantum noise. While causality and stochasticity seem to exhaust all possible explanations for empirical phenomena, Langan suggests self-determinacy as a third alternative, in which "a system determines its own composition, properties, and evolution independently of external laws and structures".

This suggests cyclic time as a possible analogy, or anthropics, in which the conditions of mental representation themselves determine the empirical, material circumstances such minds find themselves in.

Langan notes cybernetic feedback (in which various entities regulate each other with positive and negative feedback, finding an equilibrium) as a possible analogy. However, he rejects this, since cybernetic feedback between entities "is meaningless where such entities do not already exist". Accordingly, "the feedback is ontological in nature and therefore more than cybernetic."

Ontological feedback is a rather confusing concept. One visualization is to imagine a map that represents a world, and itself as a part of a world with a plausible origin; whenever such a map fails to find a plausible origin of itself, it (and the world it describes) fails to exist. This is in some ways similar to anthropic self-selection.

Ontological feedback is "cyclical or recursive", but while ordinary recursion (e.g. in a recursive algorithm) runs on informational components that already exist, ontological feedback deals with components that do not yet exist; therefore, a new type of feedback is required, "telic feedback".

Langan writes: "The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement". Generalized utility may be compared to "anthropic measure" or "evolutionary measure", but it isn't exactly the same. Since some systems exist and others don't, a "currency" is appropriate, like probability is a currency for anticipations.

Unlike probabilities over universe trajectories, telic feedback doesn't match an acyclic time ordering: "In effect, the system brings itself into existence as a means of atemporal communication between its past and future". There may be some relation to Newcomblike scenarios here, in which one's existence (e.g. ability to sustain one's self using money) depends on acausal coordination across space and time.

Unlike with Newcomblike scenarios and ordinary probability theory, telic feedback deals with feedback over syntax, not just state. The language the state is expressed in, not merely the state itself, depends on this feedback. Natural languages work somewhat like this, in that the syntax of the English language depends on the state trajectory of historical evolution of language and culture over time; we end up describing this historical state trajectory using a language that is in large part a product of it.

Similarly, the syntax of our cognitive representations (e.g. in the visual cortex) depends on the state trajectories of evolution, a process that is itself described using our cognitive representations.

Even the formal languages of mathematics and computer science depend on a historical process of language and engineering; while it is tempting to say that Turing's theory of computation is "a priori", it cannot be fully a priori while being Turing's. Hence, Langan describes telic feedback as "protocomputational" rather than "computational", as a computational theory would assume as given syntax for describing computations.

The closest model to "telic feedback" I have encountered in the literature is Robin Hanson's "Uncommon Priors Require Origin Disputes", which argues that different agents must share priors as long as they have compatible beliefs about how each agent originated. The similarity is imagining that different agents create representations of the world that explain their own and others' origins (e.g. explaining humans as evolved), and these representations come together into some shared representation (which in Hanson's formulation is a shared belief state, and in Langan's is the universe itself), with agents being more "likely" or "existent" the more plausible their origin stories are (e.g. Hanson might appeal to approximately Bayesian beliefs being helpful for survival).

My own analogy for the process: travelers from different places find themselves at a common location with little language in common; they form a pidgin and tell origin stories about themselves, with stories rejected if too implausible, so that each traveler is optimizing to make their type of person seem plausible in the shared language.

The Future of Reality Theory According to John Wheeler

John Wheeler is a famous physicist who coined the term "black hole". Langan discusses Wheeler's views on philosophy of science in part because of the similarity of their views.

In Beyond the Black Hole, Wheeler describes the universe as a "self-excited circuit", analogized to a diagram of a U with an eye on the left branch of the U looking at the right branch of the U, representing how subject is contiguous with object. Viewing the universe as a self-excited circuit requires cognizing perception as part of the self-recognition of reality, and physical matter as informational and therefore isomorphic to perception.

Wheeler describes the universe as "participancy": the participation of observers is part of the dynamics of the universe. A participatory universe is "a 'logic loop' in which 'physics gives rise to observer participancy; observer-participancy gives rise to information; and information gives rise to physics.'"

The "participatory principle" is similar to the anthropic principle but stronger: it is impossible to imagine a universe without observers (perhaps because even the vantage point from which the universe is imagined from is a type of observer).  According to the participatory principle, "no elementary phenomenon is a phenomenon until it is an observed (or registered) phenomenon", generalizing from quantum mechanics' handling of classical states.

Wheeler considers the question of where physical laws come from ("law without law") to be similar to the question of how order comes from disorder, with evolution as an example. Evolution relies on an orderly physics in which the organisms exist, and there is an open question of whether the physical laws themselves have undergone a process analogous to evolution that may yield orderly physics from a less orderly process.

Wheeler also considers explaining how the macroscopic universe we perceive comes from the low-level information processing of quantum mechanics ("It from bit"). Such an explanation must explain "How come existence?": why do we see continuous time and space given a lack of physically fundamental time and space? It must also explain "How come the quantum?", how does quantum mechanics relate to the world we see? And finally, it must explain "How come the 'one world' out of many observer-participants": why do different agents find themselves in "the same" world rather than solipsistic bubbles?

The "It from bit" explanation must, Wheeler says, avoid three errors: "no tower of turtles" (infinite regresses which ultimately fail to explain), "no laws" (lack of pre-existing physical laws governing continuum dynamics), "no continuum" (no fundamental infinitely divisible continuum, given lack of mathematical or physical support for such an infinity), "no space or time" (space and time lack fundamental existence: "Wheeler quotes Einstein in a Kantian vein: 'Time and space are modes by which we think, and not conditions in which we live'").

Wheeler suggests some principles for constructing a satisfactory explanation. The first is that "The boundary of a boundary is zero": this is an algebraic topology theorem showing that, when taking a 3d shape, and then taking its 2d boundary, the boundary of the 2d boundary is nothing, when constructing the boundaries in a consistent fashion that produces cancellation; this may somehow be a metaphor for ex nihilo creation (but I'm not sure how).
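
One standard way to state the theorem, for a filled triangle [abc] with consistently oriented boundaries:

$$\partial_{n-1} \circ \partial_n = 0, \qquad \text{e.g. } \partial[abc] = [bc] - [ac] + [ab], \quad \partial(\partial[abc]) = ([c]-[b]) - ([c]-[a]) + ([b]-[a]) = 0.$$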

The second is "No question? No answer", the idea that un-asked questions don't in the general case have answers, e.g. in quantum measurement the measurement being made ("question") changes future answers, so there is no definite state prior to measurement. This principle implies a significant degree of ontological entanglement between observers and what they observe.

The third is "The Super-Copernican Principle", stating that no place or time ("now") is special; our universe is generated by both past and future. It is rather uncontroversial that past observers affect present phenomena; what is rather more controversial is the idea that this isn't enough, and the present is also influenced by future observers, in a pseudo-retrocausal manner. This doesn't imply literal time travel of the sort that could imply contradictions, but is perhaps more of an anthropic phenomenon: the observations that affect the future "exist more" in some sense; observations are simultaneously summaries of the past and memories interpreted by the future. Sometimes I think my observations are more likely to come from "important" agents that influence the future (i.e. I think I'm more important than a random person), which, confusingly, indicates some influence of future observers on present measure.

The fourth is "Consciousness", stating that it's hard to find a line between what is conscious and unconscious; the word "who" archetypally refers to humans, so overusing the concept indicates anthropocentrism.

The fifth is "More is different": there are more properties of a system that is larger, due to combinatorial explosion. Quantitative differences produce qualitative ones, including a transition to "multi-level logical structures" (such as organisms and computers) at a certain level of complexity.

Langan notes: "Virtually everybody seems to acknowledge the correctness of Wheeler's insights, but the higher-order relationships required to put it all together in one big picture have proven elusive." His CTMU attempts to hit all desiderata.

Some Additional Principles

According to Langan, Descartes argued that reality is mental (rationalism), but went on to assert mind-body dualism, which is contradictory (I don't know enough about Descartes to evaluate this statement). Berkeley, an empiricist, said reality is perceptual, an intersection of mind and matter; if perception is taken out of one's conception of reality, what is left is pure subjective cognition. Langan compares eliminativism, an attempt to subtract cognition from reality, to "trying to show that a sponge is not inherently wet while immersing it in water": as cognitive entities, we can't succeed in eliminating cognition from our views of reality. (I basically agree with this.)

Hume claims causation is a cognitive artifact, raising the "problem of induction" in its place. Langan comments that "the problem of induction merely implies that a global theory of reality can only be established by the rational methods of mathematics, specifically including those of logic." This seems like it may be a misread of Hume, given that Hume argued that deductive reasoning is insufficient for deriving a global theory of reality (including causal judgments).

Kant asserted that "unprimed cognition" exists prior to particular contexts (e.g. including time, space, and causation), but also asserts the existence of a disconnected "noumenal" realm, which Langan argues is irrelevant and can be eliminated.

Scientists interpret their observations, but their interpretations are often ad hoc and lack justification. For example, it is unclear how scientists come up with their hypotheses, although cognitive science includes some study of this question, e.g. Bayesian brain theory. Langan investigates the question of attribution of meaning to scientific theories by walking through a sequence of more powerful logics.

"Sentential logic" is propositional calculus; it reasons about truth values of various sentences. Propositional tautologies can be composed, e.g. X & Y is tautological if X and Y are. Predicate logic extends propositional logic to be able to talk about a possibly-infinite set of objects using quantifiers, assigning properties to these objects using predicates. Model theory further extends predicate logic by introducing universes consistent with axiom schemas and allowing reasoning about them.

Langan claims reality theory must emulate 4 properties of sentential logic: absolute truth (truth by definition, as propositional calculus defines truth), closure (the logic being "wholly based on, and defined strictly within the bounds of, cognition and perception"), comprehensiveness (that the logic "applies to everything that can be coherently perceived or conceived"), and consistency ("designed in a way that precludes inconsistency").

While logic deals in what is true or false, reality theory deals in what is real or unreal (perhaps similar to the epistemology/ontology distinction). It must "describe reality on a level that justifies science, and thus occupies a deeper level of explanation than science itself"; it must even justify "mathematics along with science", thus being "metamathematical". To do this, it must relate theory and universe under a "dual aspect monism", i.e. it must consider theory and universe to be aspects of a unified reality.

Logicians, computer scientists, and philosophers of science are familiar with cases where truth is ambiguous: logical undecidability (e.g. Gödel's incompleteness theorem), NP-completeness (computational infeasibility of finding solutions to checkable problems), Löwenheim-Skolem (ambiguity of cardinalities of models in model theory), Duhem-Quine (impossibility of testing scientific theories in isolation due to dependence on background assumptions). Langan claims these happen because the truth predicate comes apart from "attributive mappings" that would assign meaning to these predicates. He also claims that falsificationist philosophy of science "demotes truth to provisional status", in contrast to tautological reasoning in logic. (On the other hand, it seems unclear to me how to get any of science from tautologies, given the empirical nature of science.)

Langan desires to create an "extension" to tautological logic to discuss physical concepts such as space, time, and law. He notes a close relationship between logic, cognition, and perception: for example, "X | !X" when applied to perception states that something and its absence can't both be perceived at once (note that "X | !X" is equivalent to "!(X & !X)" in classical but not intuitionistic logic).
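
A quick truth-table check of the classical equivalence (which, as noted, does not carry over to intuitionistic logic, where there are no finite truth tables):

```python
# Two-valued (classical) truth-table check that "X | !X" is a tautology and
# agrees with "!(X & !X)". Intuitionistic logic has no finite truth tables,
# so this check says nothing about the intuitionistic case.
for X in (True, False):
    excluded_middle = X or (not X)
    non_contradiction = not (X and (not X))
    assert excluded_middle is True
    assert excluded_middle == non_contradiction
```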

Sentential logic, however, is incomplete on its own, since it needs a logician to interpret it. Nature, on the other hand, interprets itself, having "self-processing capability". Accordingly, reality theory should include a mental component in its logic, allowing the logic to process itself, as if by an external mind but instead by itself.

Langan states his main angle of attack on the problem: "the way to build a theory of reality is to identify the properties that it must unconditionally possess in order to exist, and then bring the theory into existence by defining it to possess these properties without introducing merely contingent properties that, if taken as general, could impair its descriptive relationship with the real universe (those can come later and will naturally be subject to empirical confirmation)."

These properties will include the "3 C's": "comprehensiveness (less thorough but also less undecidable than completeness), closure, and consistency". These will correspond to three principles, the "3 M's": "M=R, MAP and MU, respectively standing for the Mind Equals Reality Principle, the Metaphysical Autology Principle, and the Multiplex Unity Principle." Briefly, M=R "dissolves the distinction between theory and universe... [making] the syntax of this theory comprehensive", MAP "tautologically renders this syntax closed or self-contained", and MU "tautologically renders this syntax, and the theory-universe complex it describes, coherent enough to ensure its own consistency".

CTMU's definitions of concepts are unavoidably recursive, perhaps similarly to mutually recursive definitions in mathematics or programming. Langan claims: "Most theories begin with axioms, hypotheses and rules of inference, extract implications, logically or empirically test these implications, and then add or revise axioms, theorems or hypotheses. The CTMU does the opposite, stripping away assumptions and 'rebuilding reality' while adding no assumptions back." This recalls Kant's project of stripping away and rebuilding metaphysics from a foundation of what must be the case a priori.

The Reality Principle

Reality contains all and only that which is real; if something else influenced reality, it would be part of reality. As a definition this is circular: if we already accept the reality of a single thing, then reality of other things can be derived from their influence on that thing. The circularity invites some amount of ontological dispute over which foundational things can be most readily accepted as real. Langan considers an alternative definition: "Reality is the perceptual aggregate including (1) all scientific observations that ever were and ever will be, and (2) the entire abstract and/or cognitive explanatory infrastructure of perception". This definition seems to lean idealist in defining reality as a perceptual aggregate, expanding from scientific observation in the direction of mind rather than matter.

Syndiffeonesis

Langan writes: "Syndiffeonesis implies that any assertion to the effect that two things are different implies that they are reductively the same; if their difference is real, then they both reduce to a common reality and are to that extent similar. Syndiffeonesis, the most general of all reductive principles, forms the basis of a new view of the relational structure of reality."

As an example, consider apples and oranges. They're different, but what lets us know that they are different? Since they are both plants, they have DNA that can be compared to show that they are different. Also, since they both have a shape, their shapes can be compared and found to be different. Since they both have a taste, they can be tasted to tell that they are different. Each of these comparisons showing difference requires apples and oranges to have something in common, demonstrating syndiffeonesis.

This principle can be seen in type theory; generally, to compare two terms for equality, the terms must have the same type, e.g. 5 and 6 can be found to be unequal since they are both natural numbers.

The commonality is in medium and syntax: "The concept of syndiffeonesis can be captured by asserting that the expression and/or existence of any difference relation entails a common medium and syntax, i.e. the rules of state and transformation characterizing the medium." Syntax can be seen in type theory, since terms that can be compared for equality are both written in the same type theoretic syntax. Medium is less straightforward; perhaps apples and oranges both existing in spacetime in the same universe would be an example of a common medium.

Langan claims: "Every syndiffeonic relation has synetic and diffeonic phases respectively exhibiting synesis and diffeonesis (sameness and difference, or distributivity and parametric locality), and displays two forms of containment, topological and descriptive." The common medium and syntax goes with the synetic phase, while the difference relation goes with the diffeonic phase. One can imagine comparing two things by finding the smallest common "supertype" of both (e.g. fruit for apples/oranges); in this case the specification from "something" to "a fruit" is synetic (in common between apples and oranges, specifying a common medium and syntax), and the specification from "a fruit" to "apple" or "orange" is diffeonic (showing that they are different fruits).

If two things aren't expressed in the same syntax, then the fact that their syntaxes are different itself is a diffeonic relation indicating an underlying, more base-level common syntax and medium. For example, while Python programs are syntactically different from Forth programs, they are both expressed as text files. Python files and apples have even less similar syntax, but both exist in physical space, and can be displayed visually. Langan adds: "Any such syndiffeonic regress must terminate, for if it did not, there would be no stable syntax and therefore no 'relation' stable enough to be perceived or conceived."

Langan uses the notation "X ∽ Y" to indicate the common medium shared by X and Y, with smallest common supertypes possibly being an example. If X and Y are different laws (e.g. physical), then X ∽ Y denotes a common set of laws that both X and Y are expressed in; for example, many different physical laws can be expressed as instances of energy conservation.

By using the ∽ operator to iteratively find a common medium for all possible perceptible and cognizable things, the universal base medium and syntax of reality is found. This is perhaps similar to a generative grammar of concepts, and is elaborated on in the SCSPL section.

The Principle of Linguistic Reducibility

Following from the discussion of a common medium of reality, Langan writes: "Reality is a self-contained form of language". It has representations of object-like individuals, space-like relations and attributes, and time-like functions and operators. Our theories of physics have these; physics is a language that can express many different specific physical theories.

Langan argues: "because perception and cognition are languages, and reality is cognitive and perceptual in nature, reality is a language as well." In typical moments, a person is aware of entities which are related and/or have functions applied to them, which could be analogized to language processing. Langan also adds, "whereof that which cannot be linguistically described, one cannot perceive or conceive", following Wittgenstein's "whereof one cannot speak, one must be silent".

Theories of everything attempt to reduce everything to a language. They point to objective matter, but such "pointing" is itself something contained in the whole, sharing structure with the theory; for example, a theory of mass may involve procedures for measuring mass, which tie the theory of mass with the objective subject matter. Such relation between theory and reality suggests the syndiffeonic relation "Language ∽ Reality".

The term "reality" can be analyzed as a linguistic construct: in what cases do words like "real" or "reality" show up, and when are these valid? Sometimes "real" shows up to indicate an inadequacy of a conception, e.g. inability to explain some empirical phenomenon, which is considered "real" unlike the predictions of the wrong theory.

Langan is optimistic about understanding reality linguistically. If we understand reality as a linguistic element, does it follow that we understand reality? It is empirically always the case that our linguistic theories are inadequate in some way, failing to predict some phenomena, or imposing a wrong ontology that has holes; but even these failures can be understood as relations of the linguistic theory to something that can be incorporated into later linguistic theories.

CTMU considers the base elements of reality to be "syntactic operators" that transform linguistic entities including themselves; reality is therefore conceptualized as a dynamical process transforming linguistic content such as theories. Insofar as our theories better approximate the real over time, there must be some sense in which reality is similar to a "syntactic operator", although the details of the theory remain to be seen.

Syntactic Closure: The Metaphysical Autology Principle (MAP)

Langan writes: "All relations, mappings and functions relevant to reality in a generalized effective sense, whether descriptive, definitive, compositional, attributive, nomological or interpretative, are generated, defined and parameterized within reality itself." As a result, reality is closed; there is no way of describing reality except in terms of anything real.

The Metaphysical Autology Principle implies this sort of closure: reality theory must "take the form of a closed network of coupled definitions, descriptions, explanations and interpretations that refer to nothing external to reality itself".

Autology is the study of one's self; reality studies itself in the sense of containing predicates about itself and informational manipulators (such as human scientists) that apply these predicates to reality. Reality theory requires a 2-valued logic distinguishing what is real or not real, e.g. it may contain the statement "a predicate of something real is real".

As an example, consider measuring the size of the universe with a unit length. With a standard ruler, it is possible to measure medium-sized objects, and with theory, it is possible to extrapolate to estimate the size of large objects such as the earth or solar system, or even the entire universe. However, the unit length (the standard ruler) is an object in the universe. There is no "view from nowhere" that contains a measuring unit that can be used to measure the universe. Reality is understood in terms of its own components.

What if there is something like a view from nowhere, e.g. an outer universe simulating ours? "If an absolute scale were ever to be internally recognizable as an ontological necessity, then this would simply imply the existence of a deeper level of reality to which the scale is intrinsic and by which it is itself intrinsically explained as a relative function of other ingredients." So we include the outer universe in "reality" and note that the outer unit is still part of reality.

An approximate Solomonoff inductor predicts the reality generating its percepts as if it's external. But, as theorists reasoning about it, we see that there's a (Solomonoff inductor, external matter, I/O relation) system, so we know that the inductor is part of reality. Then we look at ourselves looking at this system and note that our reasoning about this inductor is, too, part of reality.

Langan defines the "recurrent fallacy" to be: "The existence of the universe is given and therefore in no need of explanation." "Is given" hides what needs to be explained, which should be part of reality; explaining reality in terms of reality implies some sort of cyclicality as discussed earlier.

If the universe were inexplicable, that would imply that it came into being by magic; if there is no magic, the "five whys" must bottom out somewhere. I am less certain than Langan that there are no "magic" unexplained phenomena like fundamental randomness (e.g. in anthropics), but I understand that such explanations are inherently less satisfactory than successful deterministic ones.

Syntactic Comprehensivity-Reflexivity: the Mind Equals Reality Principle (M=R)

Langan defines the M=R principle: "The M=R or Mind Equals Reality Principle asserts that mind and reality are ultimately inseparable to the extent that they share common rules of structure and processing." This is closely related to linguistic reducibility and can be represented as "Mind ∽ Reality".

Separating mind and reality (e.g. Cartesian dualism) assumes the existence of a common medium translating between them. If the soul were in another dimension connected to the pineal gland, that dimension would presumably itself be in some ways like physical spacetime and contain matter.

Langan writes: "we experience reality in the form of perceptions and sense data from which the existence and independence of mind and objective external reality are induced". This is similar to Kant's idea that what we perceive are mental phenomena, not noumena. Any disproof of this idea would be cognitive (as it would have to be evaluated by a mind), undermining a claim of mind-independence. (I am not sure whether this is strictly true; perhaps it's possible to be "hit" by something outside your mind that is not itself cognition or a proof, which can nonetheless be convincing when processed by your mind?). Perceptions are, following MAP, part of reality.

He discusses the implications of a Kantian phenomenon/noumenon split: "if the 'noumenal' (perceptually independent) part of reality were truly unrelated to the phenomenal (cognition-isomorphic) part, then these two 'halves' of reality would neither be coincident nor share a joint medium relating them. In that case, they would simply fall apart, and any integrated 'reality' supposedly containing both of them would fail for lack of an integrated model." Relatedly, Nietzsche concluded that the Kantian noumenon could be dropped, as it is by definition unrelated to any observable phenomena.

Syntactic Coherence and Consistency: The Multiplex Unity Principle (MU)

Langan argues: "we can equivalently characterize the contents of the universe as being topologically 'inside' it (topological inclusion), or characterize the universe as being descriptively 'inside' its contents, occupying their internal syntaxes as acquired state (descriptive inclusion)."

Topological inclusion is a straightforward interpretation of spacetime: anything we see (including equations on a whiteboard) is within spacetime. On the other hand, such equations aim to "capture" the spatiotemporal universe; to the extent they succeed, the universe is "contained" in such equations. Each of these containments enforces consistency properties, leading to the conclusion that "the universe enforces its own consistency through dual self-containment".

The Principle of Hology (Self-composition)

Langan writes: "because reality requires a syntax consisting of general laws of structure and evolution, and there is nothing but reality itself to serve this purpose, reality comprises its own self-distributed syntax under MU". As a special case, the language of theoretical physics is part of reality and is a distributed syntax for reality.

Duality Principles

Duality commonly shows up in physics and mathematics. It is a symmetric relation: "if dualizing proposition A yields proposition B, then dualizing B yields A." For example, a statement about points (e.g. "Two non-coincident points determine a line") can be dualized to one about lines ("Two non-parallel lines determine a point") and vice versa.

Langan contrasts spatial duality principles ("one transposing spatial relations and objects") with temporal duality principles ("one transposing objects or spatial relations with mappings, functions, operations or processes"). This is now beyond my own understanding. He goes on to propose that "Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures", which is even further beyond my understanding.

The Principle of Attributive (Topological-Descriptive, State-Syntax) Duality

There is a duality between sets and relations/attributes. The set subset judgment "X is a subset of Y" corresponds to a judgment of implication of attributes, "Anything satisfying X also satisfies Y". This relates back to duality between topological and descriptive inclusion.

Sets and logic are described with the same structure, e.g. logical and corresponds with set intersection, and logical or corresponds with set union. Set theory focuses on objects, describing sets in terms of objects; logic focuses on attributes, describing constraints to which objects conform. The duality between set theory and logic, accordingly, relates to a duality between states and the syntax to which these states conform, e.g. between a set of valid grammatical sentences and the logical grammar of the language.
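
A small sketch of this duality over a finite universe, assuming my reading that extensions play the role of "state" and predicates the role of "syntax":

```python
# Toy illustration over a finite universe: the subset relation between
# extensions ("states") matches implication between the defining predicates
# ("syntax"), and logical and/or match intersection/union.
universe = range(100)

def is_multiple_of_4(n): return n % 4 == 0
def is_even(n): return n % 2 == 0

X = {n for n in universe if is_multiple_of_4(n)}  # extension of the predicate
Y = {n for n in universe if is_even(n)}

# "X is a subset of Y" iff "anything satisfying X also satisfies Y".
assert (X <= Y) == all(not is_multiple_of_4(n) or is_even(n) for n in universe)

# Logical and/or on predicates correspond to intersection/union of extensions.
assert {n for n in universe if is_multiple_of_4(n) and is_even(n)} == X & Y
assert {n for n in universe if is_multiple_of_4(n) or is_even(n)} == X | Y
```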

Langan writes that the difference between set theory and logic "hinges on the univalent not functor (~), on which complementation and intersection, but not union, are directly or indirectly defined." It is clear that set complement is defined in terms of logical not. I am not sure what definition of intersection Langan has in mind; perhaps the intersection of A and B is the subset of A that is not outside B?

Constructive-Filtrative Duality

Construction of sets can be equivalently either additive (describing the members of the set) or subtractive (describing a restriction of the set of all sets to ones satisfying a given property). This leads to constructive-filtrative duality: "CF duality simply asserts the general equivalence of these two kinds of process with respect to logico-geometric reality...States and objects, instead of being constructed from the object level upward, can be regarded as filtrative refinements of general, internally unspecified higher-order relations."
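
In set-builder terms the two directions are just enumeration versus comprehension; a trivial sketch:

```python
# The same set described constructively (listing members) and filtratively
# (restricting a larger domain by a predicate).
constructive = {0, 2, 4, 6, 8}                     # additive: list the members
filtrative = {n for n in range(10) if n % 2 == 0}  # subtractive: restrict range(10)
assert constructive == filtrative
```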

CF duality relates to the question of how it is possible to get something from nothing. "CF duality is necessary to show how a universe can be 'zero-sum'; without it, there is no way to refine the objective requisites of constructive processes 'from nothingness'.  In CTMU cosmogony, 'nothingness' is informationally defined as zero constraint or pure freedom (unbound telesis or UBT), and the apparent construction of the universe is explained as a self-restriction of this potential."

In describing the universe, we could either have said "there are these things" or "the UBT is restricted in this way". UBT is similar to the God described in Spinoza's Ethics, an infinite substance of which every finite thing is a modification, and to the Tegmark IV multiverse.

As an application, consider death: is death a thing, or is it simply that a life is finite? A life can be constructed as a set of moments, or as the set of all possible moments restricted by, among other things, a finite lifespan; death then acts as a filter excluding the moments after the time of death from being included in the life, similar to Spinoza's idea that anything finite is derived by bounding something infinite.

Conspansive Duality

There is a duality between cosmic expansion and atom shrinkage. We could either posit that the universe is expanding and the atoms are accordingly getting further apart as space stretches, or we could equivalently posit that atoms themselves are shrinking in a fixed-size space, such that the distances between atoms increase relative to the sizes of each atom.
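
One way to see the equivalence (my gloss, not the paper's formalism): only dimensionless ratios such as inter-atomic distance over atomic size are observable, and both descriptions give that ratio the same time dependence, with $a(t)$ the scale factor:

$$\frac{d(t)}{r(t)} \;=\; \underbrace{\frac{a(t)\,d_0}{r_0}}_{\text{space expands, atoms fixed}} \;=\; \underbrace{\frac{d_0}{r_0/a(t)}}_{\text{space fixed, atoms shrink}}$$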

This is an instance of an ectomorphism: "Cosmic expansion and ordinary physical motion have something in common: they are both what might be called ectomorphisms. In an ectomorphism, something is mapped to, generated or replicated in something external to it." For example, a set of atoms may be mapped to a physical spacetime that is "external" to the atoms.

Langan critiques ectomorphisms: "However, the Reality Principle asserts that the universe is analytically self-contained, and ectomorphism is inconsistent with self-containment." Since spacetime is part of reality, mapping atoms to spacetime is mapping them into reality; however, it is unclear how to map spacetime itself to any part of reality. See also Zeno's Paradox of Place.

In contrast, in endomorphism, "things are mapped, generated or replicated within themselves". An equation on a whiteboard is in the whiteboard, but may itself describe that same whiteboard; thus, the whiteboard is mapped to a part of itself.

Langan specifically focuses on "conspansive endomorphism", in which "syntactic objects are injectively mapped into their own hological interiors from their own syntactic boundaries." I am not sure exactly what this means; my guess is that it means that linguistic objects ("syntactic objects") are mapped to the interior of what they describe (what is within their "syntactic boundaries"); for example, an equation on a whiteboard might map to the interior of the whiteboard described by the equation.

Conspansion "shifts the emphasis from spacetime geometry to descriptive containment, and from constructive to filtrative processing", where physical equations are an example of filtration processing, as they describe by placing constraints on their subject matter.

In a conspansive perspective on physics, "Nothing moves or expands 'through' space; space is state, and each relocation of an object is just a move from one level of perfect stasis to another." In Conway's Game of Life, for example, each state is a grid of cell values; "future" states follow from "previous" states, but each particular state is static.

A Minkowski diagram is a multidimensional graph showing events on a timeline, where time is one of the axes. Interactions between events represent objects, e.g. if in one event a ball is kicked and in another the ball hits the wall, the ball connects these events. This is similar to "resource logics" such as variants of linear logic, in which the "events" correspond to ways of transforming propositions to other propositions, and "objects" correspond to the propositions themselves.

In a quantum context, events include interactions between particles, and objects include particles. Particles don't themselves have a consistent place or time, as they move in both space and time; events, however, occur at a particular place and time. Due to speed of light limits, future events can only follow from past events that are in their past lightcone. This leads to a discrete, combinatorial, rhizomic view of physics, in which events proceed from combinations of other events, and more complex events are built from simpler earlier events. Accordingly, "spacetime evolves linguistically rather than geometrodynamically".

From a given event, there is a "circle" of possible places where future events could arise by a given time, based on the speed of light. "Time arises strictly as an ordinal relationship among circles rather than within circles themselves." Langan argues that, by reframing spacetime and events this way, "Conspansion thus affords a certain amount of relief from problems associated with so-called 'quantum nonlocality'." Locality is achieved by restricting which events can interact with other events based on those events' positions and times, and the position and time of the future interactive event. (I don't understand the specific application to quantum nonlocality.)

Properties of events, including time and place, are governed by the laws of physics. Somewhat perplexingly, Langan states: "Since the event potentials and object potentials coincide, potential instantiations of law can be said to reside 'inside' the objects, and can thus be regarded as functions of their internal rules or 'object syntaxes'." My interpretation is that objects restrict what events those objects can be part of, so they are therefore carriers of physical law. I am not really sure how this is supposed to, endomorphically, place all physical law "inside" objects; it is unclear how the earliest objects function to lawfully restrict future ones. Perhaps the first object in the universe contains our universe's original base physical laws, and all future objects inherit at least some of these, such that these laws continue to be applied to all events in the universe?

Langan contrasts the conspansive picture presented with the more conventional spacetime/state view: "Thus, conspansive duality relates two complementary views of the universe, one based on the external (relative) states of a set of objects, and one based on the internal structures and dynamics of objects considered as language processors. The former, which depicts the universe as it is usually understood in physics and cosmology, is called ERSU, short for Expanding Rubber Sheet Universe, while the latter is called USRE (ERSU spelled backwards), short for Universe as a Self-Representational Entity."

Langan claims conspansive duality "is the only escape from an infinite ectomorphic 'tower of turtles'": without endomorphism, all objects must be mapped to a space, which can't itself be "placed" anywhere without risking infinite regress. (Though, as I said, it seems there would have to be some sort of original object to carry laws governing future objects, and it's unclear where this would come from.)

He also says that "At the same time, conspansion gives the quantum wave function of objects a new home: inside the conspanding objects themselves." I am not really sure how to interpret this; the wave function correlates different objects/particles, so it's unclear how to place the wave function in particular objects.

The Extended Superposition Principle

"In quantum mechanics, the principle of superposition of dynamical states asserts that the possible dynamical states of a quantized system, like waves in general, can be linearly superposed, and that each dynamical state can thus be represented by a vector belonging to an abstract vector space": in general, wave functions can be "added" to each other, with the probabilities (square amplitudes) re-normalizing to sum to 1.

Langan seeks to explain wave function collapse without resort to fundamental randomness (as in the Copenhagen interpretation). Under many worlds, the randomness of the Born rule is fundamentally anthropic, as the uncertainty over one's future observations is explained by uncertainty over "where" one is in the wave function.

Physical Markovianism is a kind of physical locality property where events only interact with adjacent events. Conspansion ("extended superposition") allows events to interact non-locally, as long as the future events are in the light cones of the past events. Langan claims that "the Extended Superposition Principle enables coherent cross-temporal telic feedback".

Telons are "utile state-syntax relationships... telic attractors capable of guiding cosmic and biological evolution", somewhat similar to decision-theoretic agents maximizing their own measure. The non-locality of conspansion makes room for teleology: "In extending the superposition concept to include nontrivial higher-order relationships, the Extended Superposition Principle opens the door to meaning and design." Since teleology claims that a whole system is "designed" according to some objective, there must be nonlocal dependencies; similarly, in a Bayesian network, conditioning on the value of a late variable can increase dependencies among earlier variables.

Supertautology

Truth can be conceptualized as inclusion in a domain: something is real if it is part of the domain of reality. A problem for science is that truth can't always be determined empirically, e.g. some objects are too far away to observe.

Langan claims that "Truth is ultimately a mathematical concept...truth is synonymous with logical tautology". It's unclear how to integrate empirical observations and memory into such a view.

Langan seeks to start with logic and find "rules or principles under which truth is heritable", yielding a "supertautological theory". He claims that the following can be mathematically deduced: "nomological covariance, the invariance of the rate of global self-processing (c-invariance), and the internally-apparent accelerating expansion of the system."

Reduction and Extension

In reduction, "The conceptual components of a theory are reduced to more fundamental component"; in extension, the theory is "extended by the emergence of new and more general relationships among [fundamental components]." These are dual to each other.

"The CTMU reduces reality to self-transducing information and ultimately to telesis, using the closed, reflexive syntactic structure of the former as a template for reality theory." Scientific explanations need to explain phenomena; it is possible to ask "five whys", so that scientific explanations can themselves be explained. It is unclear how this chain could bottom out except with a self-explanatory theory.

While biologists try to reduce life to physics, physics isn't self-explanatory. Langan claims that "to explain organic phenomena using natural selection, one needs an explanation for natural selection, including the 'natural selection' of the laws of physics and the universe as a whole."

"Syndiffeonic regression" is "The process of reducing distinctions to the homogeneous syntactic media that support them". This consists of looking at different rules and finding a medium in which they are expressed (e.g. mathematical language). The process involves "unisection", which is "a general form of reduction which implies that all properties realized within a medium are properties of the medium itself".

The Principle of Infocognitive Monism

Although information is often conceptualized as raw bits, information is self-processing because it comes with structure; a natural language sentence has grammar, as do computer programs, whose languages come with automated parsers and checkers.
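
For instance (a minimal sketch of my own, using Python's built-in parser as the checker), a string of symbols either satisfies the grammar or is rejected by it:

```python
# Sketch: the same bytes viewed as structured, self-checkable information.
# Python's own parser enforces the language's grammar.
import ast

well_formed = "x = 1 + 2"
ill_formed = "x = + 1 2"

for source in (well_formed, ill_formed):
    try:
        ast.parse(source)
        print(repr(source), "parses")
    except SyntaxError:
        print(repr(source), "is rejected by the grammar")
```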

Fields that engineer information assume that "the existence of senders, receivers, messages, channels and transmissive media is already conveniently given"; e.g. computer science assumes the existence of a Turing-complete computer. This leaves unclear how these information-processing elements are embedded (e.g. in matter).

"SCSPL" stands for "self configuring self processing language", which has some things in common with a self-modifying interpreter.

Telic Reducibility and Telic Recursion

"Telic recursion is a fundamental process that tends to maximize a cosmic self-selection parameter, generalized utility, over a set of possible syntax-state relationships in light of the selfconfigurative freedom of the universe": it is a teleological selection mechanism on infocognition, under which structures that achieve higher "generalized utility" are more likely to exist. This is perhaps a kind of self-ratification condition, where structures that can explain their own origins are more likely to exist.

It is unclear how to explain physical laws, which themselves explain other physical phenomena. Objects and laws are defined in terms of each other, e.g. mass is a property of objects and is measured due to the laws relating mass to measurable quantities. Due to this, Langan argues that "the active medium of cross-definition possesses logical primacy over laws and arguments alike and is thus pre-informational and pre-nomological in nature...i.e., telic. Telesis... is the primordial active medium from which laws and their arguments and parameters emerge by mutual refinement or telic recursion".

It is unclear how to imagine a "pre-informational" entity. One comparison point is language: we find ourselves speaking English, and referring to other languages within English, but this didn't have to be the case; the language could have been different. Perhaps "pre-informational" refers to a kind of generality beyond the generality allowing selection of different natural languages?

Telesis even comes before spacetime; there are mental states in which spacetime is poorly defined, and mathematicians and physicists have refined their notion of spacetime over, well, time. (Langan therefore disagrees with Kant, who considers spacetime a priori).

Langan contrasts two stages of telic recursion: "Telic recursion occurs in two stages, primary and secondary (global and local). In the primary stage, universal (distributed) laws are formed in juxtaposition with the initial distribution of matter and energy, while the secondary stage consists of material and geometric state-transitions expressed in terms of the primary stage."

It makes sense for physical laws to be determined along with initial state: among other things, states are constrained by laws, and state configurations are more or less likely depending on the laws.

It sounds like the secondary stage consists of, roughly, dynamical system or MDP-like state transitions. However, Langan goes on to say that "secondary transitions are derived from the initial state by rules of syntax, including the laws of physics, plus telic recursion". These views are explicitly contrasted: "The CTMU, on the other hand [in contrast to deterministic computational and continuum models of reality], is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity."

Telic recursion provides "reality with a 'self-simulative scratchpad' on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes"; one can imagine different agent-like telors "planning out" the universe between them with a shared workspace. Since telic recursion includes the subject matter of anthropics, CTMU implies that anthropics applies after the universe's creation, not just before. Langan claims that a telon acts by "coordinating events in such a way as to bring about its own emergence (subject to various more or less subtle restrictions involving available freedom, noise and competitive interference from other telons)"; the notion of "competitive interference" is perhaps similar to Darwinian competition, in which organisms are more likely to exist if they can bring similar organisms about in competition with each other.

The Telic Principle

The Telic Principle states: "the universe configures itself according to the requirement that it self-select from a background of undifferentiated ontological potential or telesis...The Telic Principle is responsible for converting potential to actuality in such a way as to maximize a universal self-selection parameter, generalized utility."

In science, teleology has fallen out of favor, being replaced with the anthropic principle. Anthropics is a case of teleological selection, in which the present determines the past, at least subjectively (the requirement that life exist in the universe determines the universe's initial conditions).
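
A minimal Bayesian sketch of this (my own; the "constant" and the observer-probability are invented stand-ins) shows how conditioning on the existence of observers shifts the posterior over initial conditions:

```python
# Sketch: anthropic conditioning as ordinary Bayesian conditioning on
# "observers exist", which updates beliefs about the universe's initial conditions.
import random

random.seed(0)
universes = []
for _ in range(100_000):
    constant = random.uniform(0.0, 1.0)          # a stand-in "initial condition"
    observers = random.random() < constant ** 4  # life is much likelier at high values
    universes.append((constant, observers))

prior_mean = sum(c for c, o in universes) / len(universes)
posterior = [c for c, o in universes if o]
posterior_mean = sum(posterior) / len(posterior)
print(round(prior_mean, 3), round(posterior_mean, 3))  # posterior mean is pulled upward
```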

The Weak Anthropic Principle, which states that we must find ourselves in a universe that has observers, fails to explain why there is a multiverse from which our universe is "selected" according to the presence of observers. The multiverse view can be contrasted with a fine-tuning view, in which only a single universe is possible, one that has been "designed" so as to be likely to contain intelligent life.

The Strong Anthropic Principle, on the other hand, states that only universes with intelligent life "actually exist". This makes reality non-objective in some sense, and implies that the present can determine the past. Anthropics, as usually formulated, lacks a model of the self-causal loop by which such mutual determination would be possible.

We find ourselves in a self-consistent structure (e.g. our mathematical and physical notation), but it could have been otherwise, since we could use a different language or mathematical notation, or find ourselves in a universe with different laws. It would therefore be an error to claim, by circular reasoning, that our consistent structures are universal.

Langan claims that "Unfortunately, not even valid tautologies are embraced by the prevailing school of scientific philosophy, falsificationism", as tautologies are unfalsifiable. I think Popper would say that tautologies and deductive reasoning are necessary for falsificationist science (in fact, the main motivation for falsificationism is to remove the need for Humean induction).

For anthropic arguments to work, there must be some universal principles: "If the universe is really circular enough to support some form of 'anthropic' argument, its circularity must be defined and built into its structure in a logical and therefore universal and necessary way. The Telic principle simply asserts that this is the case; the most fundamental imperative of reality is such as to force on it a supertautological, conspansive structure." I think this is basically correct in that anthropic reasoning requires forms of reasoning that go beyond the reasoning one would use to reason about a universe "from outside"; the laws of the universe must be discoverable from inside and be consistent with such discovery.

The anthropic selection of the universe happens from "UBT": "Thus, the universe 'selects itself' from unbound telesis or UBT, a realm of zero information and unlimited ontological potential, by means of telic recursion, whereby infocognitive syntax and its informational content are cross-refined through telic (syntax-state) feedback over the entire range of potential syntax-state relationships, up to and including all of spacetime and reality in general." Trivially, the a priori from which the universe is anthropically selected must not contain information specifying the universe, as this process is what selects this information. UBT could be compared to the Spinozan god, a substance which all specific entities (including the empirical physical universe) are modes of, or to the Tegmark IV multiverse. Telic recursion, then, must select the immanent empirical experience of this universe out of the general UBT possibility space.

The Telic Principle implies some forms of "retrocausality": "In particular, the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely separated events, lets the universe 'retrodict' itself through meaningful cross-temporal feedback." An empirical observation of a given universe may be more likely not just because of its present and past, but because of its future, e.g. observations that are more likely to be remembered may be considered more likely (and will be more likely in the empirical remembered past).
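
One way to make this concrete (my illustration, not the paper's; the categories and retention rates are invented): if some observations are more likely to be remembered, the remembered past over-represents them relative to what actually happened.

```python
# Sketch: the remembered past over-represents observations that are more
# likely to be retained, even when the underlying events are unbiased.
import random

random.seed(0)
events = [random.choice(["mundane", "striking"]) for _ in range(100_000)]
retention = {"mundane": 0.1, "striking": 0.9}   # striking events are remembered more often
remembered = [e for e in events if random.random() < retention[e]]

actual_striking = events.count("striking") / len(events)
remembered_striking = remembered.count("striking") / len(remembered)
print(round(actual_striking, 3), round(remembered_striking, 3))  # ~0.5 vs ~0.9
```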

Maximization of generalized utility is a kind of economic principle: the self tries to exist more (to maximize its own measure). Evolution implies something like this behavior in the limit, although not before the limit. This framework can't represent all possible preferences, although in practice many preferences are explained by it anyway.

Conclusion

This is as far as I got in the summary and review. The remaining sections describe general computer science background such as the Chomsky hierarchy, and SCSPL, a model of reality as a self-processing language.

Overall, I think Langan raises important questions that conventional analytic philosophy has trouble with, such as what more general principle underlies anthropics, and how to integrate cognition into a physics-informed worldview without eliminating it. He presents a number of concepts, such as syndiffeonesis, that are useful in themselves.

The theory is incredibly ambitious, but from my perspective, it didn't deliver on that ambition. This is partially because the document was hard to understand, but I'm not convinced I'd think CTMU delivers if I fully understood it. It's an alternative ontology, conceiving of reality as a self-processing language, which avoids some problems of more mainstream theories, but has problems of its own, and seems quite underspecified in the document despite the use of formal notation. In particular, I doubt that conspansion solves quantum locality problems as Langan suggests; conceiving of the wave function as embedded in conspanding objects seems to neglect correlations between the objects implied by the wave function, and the appeal to teleology to explain the correlations seems hand-wavey.

A naive interpretation of CTMU would suggest time travel is possible through telesis, though I doubt Langan would endorse strong implications of this. I've written before on anthropics and time travel; universes that don't basically factor into a causal graph tend to strongly diverge from causality, e.g. in optimizing for making lots of time turners pass the same message back without ever passing a different message back. Anthropics shows that, subjectively, there is at least some divergence from a causal universe model, but it's important to explain why this divergence is fairly bounded, and we don't see evidence of strong time travel, e.g. hypercomputation.

Despite my criticisms of the document, it raises a number of important questions for mainstream scientific philosophy, and further study of these questions and their possible solutions, with more explication of how the theory "adds up to normality" (e.g. in the case of causality), might be fruitful. Overall, I found it worthwhile to read the document even though I didn't understand or agree with all of it.

37 comments

Comments sorted by top scores.

comment by C Langan (c-langan) · 2024-04-04T00:39:33.782Z · LW(p) · GW(p)

I appreciate these intelligent attempts to engage with the CTMU. Thank you.

However, there may be a bit of confusion regarding what the CTMU is. While it is not without important physical ramifications, they are not its raison d'etre. It is not about the prediction of specific physical effects; that's empirical induction, which proves nothing about anything. It's about finding a consistent overall description of reality, mankind, and the relationship between them...something on which mankind can rely for purposes of understanding, survival, and evolution. It explains how a universe can generate itself in order to experience itself through its sensor-controllers (us), and rather than relying on murky or nebulous pseudo-concepts, it does so in the clearest possible way. (One needs a feel for philosophical reasoning and a capacity for abstraction.) 

Although its sufficiency for this purpose may not be immediately evident in the introductory 2002 paper reviewed here, it is hands-down the best theory for this purpose. It answers important questions that other theories haven't even asked. To say "I see nothing here worth my time or attention!" amounts to saying "I don’t care about the overall structure of reality, what I am, or what my proper place in reality might be. I like solving my own little problems and getting a paycheck! Now show me something I can use!" To which I can only respond, that's not the wavelength I'm on, because it's not the wavelength of greatest utility to mankind. Mankind doesn't need another techie billionaire - they're a dime a dozen, and I see nothing impressive coming out of them. I have no doubt whatsoever that I could vastly outperform any of them.

I see a few derisive comments to the effect that the CTMU fails to pass the "grammar test" for formal programming languages. (1) The CTMU is not a formal system, but a Metaformal System. That it's not a standard formal system should have been obvious from reading the 2002 paper. (2) The CTMU is not (primarily) a computational system. It's a "protocomputational system". Protocomputation is a pre-mechanical analogue of computation - this is absolutely necessary for a generative system in the CTMU sense - and so much for "well-ordering". (3) A generative grammar is simply a nonterminal substitution system that produces terminal expressions by an adaptively ordered sequence of substitutions; everything else is open. I've explained how this works in several papers. When you get around to those, maybe I'll comment some more. 

I also see comments to the effect that given competent and significant work, all I have to do is publish it in some "reputable academic journal". This, of course, is nonsense. Academic journals tend to be run by the same libelous acadummy nincompoops who have been falsely labeling me an "Intelligent Design Creationist" all these years while sneaking around like rats in the woodwork and running me down behind my back (which I have on good authority). Pardon my language, but I wouldn't publish last week’s grocery list in one of their circle-jerk outhouse rags if they paid me. I simply wouldn't feel right about giving them yet another opportunity to lie, call me names, and plagiarize me at their convenience. Most of them are vermin, and they can piss off.

The CTMU is not conjectural, but a lock. So as much as I'd like to humbly efface myself in an outpouring of false modesty, I'll merely point out that arguing with the CTMU amounts to undermining one's own argumentation, whatever it may be. People have been trying to get over on the CTMU for the last 35 or so years, and not one has ever gotten to first base. This was not an accident. If you think you see a mistake or critical inadequacy, the mistake and the inadequacy are almost certainly yours.

As far as "formal mathematics" is concerned, the entire CTMU is mathematical to the core. If it isn't so in the "formal" sense, then your "formal rules" (i.e., your axiomatization and/or rules of substitution) are inadequate to characterize the theory. (Here's a hint: the CTMU evolves by generating new axioms called "telons". The system is innately Godelian. I began publishing on this theory in the same year that Roger Penrose published "The Emperor's New Mind", at which time he hadn't said anything about undecidability that I'd missed. Since then, the gap has only grown wider.)

Thanks for your attention…and again, for your intelligent comments. 

comment by Scott Garrabrant · 2024-03-27T22:39:20.644Z · LW(p) · GW(p)

I think Chris Langan and the CTMU are very interesting, and I think there is an interesting and important challenge for LW readers to figure out how (and whether) to learn from Chris. Here are some things I think are true about Chris (and about me) and relevant to this challenge. (I do not feel ready to talk about the object level CTMU here, I am mostly just talking about Chris Langan.)

  1. Chris has a legitimate claim of being approximately the smartest man alive according to IQ tests.
  2. Chris wrote papers/books that make up a bunch of words that are defined circularly, and are difficult to follow. It is easy to mistake him for a complete crackpot.
  3. Chris claims to have proven the existence of God.
  4. Chris has been something-sort-of-like-canceled for a long time. (In the way that seems predictable when "World's Smartest Man Proves Existence of God.")
  5. Chris has some followers that I think don't really understand him. (In the way that seems predictable when "World's Smartest Man Proves Existence of God.")
  6. Chris acts socially in a very nonstandard way that seems like a natural consequence of having much higher IQ than anyone else he has ever met. In particular, I think this manifests in part as an extreme lack of humility.
  7. Chris is actually very pleasant to talk to if (like me) it does not bother you that he acts like he is much smarter than you.
  8. I personally think the proof of the existence of God is kind of boring. It reads to me as kind of like "I am going to define God to be everything. Notice how this meets a bunch of the criteria people normally attribute to God. In the CTMU, the universe is mind-like. Notice how this meets a bunch more criteria people normally attribute to God."
  9. While the proof of the existence of God feels kind of mundane to me, Chris is the kind of person who chooses to interpret it as a proof of the existence of God. Further, he also has other more concrete supernatural-like and conspiracy-theory-like beliefs, that I expect most people here would want to bet against.
  10. I find the CTMU in general interesting, (but I do not claim to understand it).
  11. I have noticed many thoughts that come naturally to me that do not seem to come naturally to other people (e.g. about time or identity), where it appears to me that Chris Langan just gets it (as in he is independently generating it all).
  12. For years, I have partially depended on a proxy when judging other people (e.g. for recommending funding) that is something like "Do I, Scott, like where my own thoughts go in contact with the other person?" Chris Langan is approximately at the top according to this proxy.
  13. I believe I and others here probably have a lot to learn from Chris, and arguments of the form "Chris confidently believes false thing X," are not really a crux for me about this.
  14. IQ is not the real think-oomph (and I think Chris agrees), but Chris is very smart, and one should be wary of clever arguers, especially when trying to learn from someone with much higher IQ than you.
  15. I feel like I am spending (a small amount of) social credit in this comment, in that when I imagine a typical LWer thinking "oh, Scott semi-endorses Chris, maybe I should look into Chris," I imagine the most likely result is that they will reach the conclusion that Chris is a crackpot, and that Scott's semi-endorsements should be trusted less.
Replies from: zhukeepa, David Udell
comment by zhukeepa · 2024-04-03T00:50:52.576Z · LW(p) · GW(p)

In particular, I think this manifests in part as an extreme lack of humility.

I just want to note that, based on my personal interactions with Chris, I experience Chris's "extreme lack of humility" similarly to how I experience Eliezer's "extreme lack of humility": 

  1. in both cases, I think they have plausibly calibrated beliefs about having identified certain philosophical questions that are of crucial importance to the future of humanity, that most of the world is not taking seriously,[1] leading them to feel a particular flavor of frustration that people often interpret as an extreme lack of humility
  2. in both cases, they are in some senses incredibly humble in their pursuit of truth, doing their utmost to be extremely honest with themselves about where they're confused
  1. ^

    It feels worth noting that Chris Langan has written about Newcomb's paradox in 1989, and that his resolution involves thinking in terms of being in a simulation, similarly to what Andrew Critch has written about.

Replies from: Scott Garrabrant, mardukofbabylon
comment by Scott Garrabrant · 2024-04-03T02:17:58.740Z · LW(p) · GW(p)

I agree with this.

comment by YimbyGeorge (mardukofbabylon) · 2024-04-17T10:07:07.028Z · LW(p) · GW(p)

Thanks, was looking for that link to his resolution of Newcomb's paradox.

Too funny! "You are "possessed" by Newcomb's Demon, and whatever self-interest remains to you will make you take the black box only. (Q.E.D.)"

comment by David Udell · 2024-03-29T01:00:47.963Z · LW(p) · GW(p)

I believe I and others here probably have a lot to learn from Chris, and arguments of the form "Chris confidently believes false thing X," are not really a crux for me about this.

Would you kindly explain this? Because you think some of his world-models independently throw out great predictions, even if other models of his are dead wrong?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2024-03-29T01:36:00.297Z · LW(p) · GW(p)

More like illuminating ontologies than great predictions, but yeah.

comment by Wei Dai (Wei_Dai) · 2024-03-28T07:02:30.695Z · LW(p) · GW(p)

While reading this, I got a flash-forward of what my life (our lives) may be like in a few years, i.e., desperately trying to understand and evaluate complex philosophical constructs presented to us by superintelligent AI, which may or may not be actually competent at philosophy.

Replies from: Capybasilisk
comment by Capybasilisk · 2024-03-28T22:51:31.867Z · LW(p) · GW(p)

Luckily we can train the AIs to give us answers optimized to sound plausible to humans.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2024-03-29T00:42:54.181Z · LW(p) · GW(p)

I'm guessing you're not being serious, but just in case you are, or in case someone misinterprets you now or in the future, I think we probably do not want to train AIs to give us answers optimized to sound plausible to humans, since that would make it even harder to determine whether or not the AI is actually competent at philosophy. (Not totally sure, as I'm confused about the nature of philosophy and philosophical reasoning, but I think we definitely don't want to do that in our current epistemic state, i.e., unless we had some really good arguments that says it's actually a good idea.)

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-03-29T17:42:38.961Z · LW(p) · GW(p)

How else will you train your AI?

Here are some other options which IMO reduce to a slight variation on the same thing or are unlikely to work:

  • Train your AI on predicting/imitating a huge amount of human output and then prompt/finetune the model to imitate human philosophy and hope this works. This is a reasonable baseline, but I expect it to clearly fail to produce sufficiently useful answers without further optimization. I also think it's de facto optimizing for plausibility to some extent due to properties of the human answer distribution.
  • Train your AI to give answers which sound extremely plausible (aka extremely likely to be right) in cases where humans are confident in the answers and then hope for generalization.
  • Train your AIs to give answers which pass various consistency checks. This reduces back to a particular notion of plausibility.
  • Actually align your AI in some deep and true sense and ensure it has reasonably good introspective access. Then, just ask it questions. This is pretty unlikely to be technically feasible IMO, at least for the first very powerful AIs.
  • You can do something like train it with RL in an environment where doing good philosophy is instrumentally useful and then hope it becomes competent via this mechanism. This doesn't solve the elicitation problem, but could in principle ensure the AI is actually capable. Further, I have no idea what such an environment would look like if any exists. (There are clear blockers to using this sort of approach to evaluate alignment work without some insane level of simulation, I think similar issues apply with philosophy.)

Ultimately, everything is just doing some sort of optimization for something like "how good do you think it is" (aka plausibility). For instance, I do this while thinking of ideas. So, I don't really think this is avoidable at some level. You might be able to avoid gaps in abilities between the entity optimizing and the entity judging (as is typically the case in my brain) and this solves some of the core challenges TBC.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2024-04-01T22:45:03.418Z · LW(p) · GW(p)

It seems that humans, starting from a philosophically confused state, are liable to find multiple incompatible philosophies highly plausible in a path-dependent way, see for example analytic vs continental philosophy vs non-Western philosophies. I think this means if we train an AI to optimize directly for plausibility, there's little assurance that we actually end up with philosophical truth.

A better plan is to train the AI in some way that does not optimize directly for plausibility, have some independent reason to think that the AI will be philosophically competent, and then use plausibility only as a test to detect errors in this process. I've written [LW · GW] in the past that ideally we would first solve metaphilosophy so that we can design the AI and the training process with a good understanding of the nature of philosophy and philosophical reasoning in mind, but failing that, I think some of the ideas in your list are still better than directly optimizing for plausibility.

You can do something like train it with RL in an environment where doing good philosophy is instrumentally useful and then hope it becomes competent via this mechanism.

This is an interesting idea. If it was otherwise feasible / safe / a good idea, we could perhaps train AI in a variety of RL environments, see which ones produce AIs that end up doing something like philosophy, and then see if we can detect any patterns or otherwise use the results to think about next steps.

comment by justinpombrio · 2024-03-29T14:46:07.149Z · LW(p) · GW(p)

tldr; a spot check calls bullshit on this.

I know a bunch about formal languages (PhD in programming languages), so I did a spot check on the "grammar" described on page 45. It's described as a "generative grammar", though instead of words (sequences of symbols) it produces "L_O spacial relationships". Since he uses these phrases to describe his "grammar", and they have their standard meaning because he listed their standard definition earlier in the section, he is pretty clearly claiming to be making something akin to a formal grammar.

My spot check is then: is the thing defined here more-or-less a grammar, in the following sense?

  1. There's a clearly defined thing called a grammar, and there can be more than one of them.
  2. Each grammar can be used to generate something (apparently an L_O) according to clearly defined derivation rules that depend only on the grammar itself.

If you don't have a thing plus a way to derive stuff from that thing, you don't have anything resembling a grammar.

My spot check says:

  1. There's certainly a thing called a grammar. It's a four-tuple, whose parts closely mimic those of a standard grammar, but using his constructs for all the basic parts.
  2. There's no definition of how to derive an "L_O spacial relationship" given a grammar. Just some vague references to using "telic recursion".

I'd categorize this section as "not even wrong"; it isn't doing anything formal enough to have a mistake in it.


Another fishy aspect of this section is how he makes a point of various things coinciding, and how that's very different from the standard definitions. But it's compatible with the standard definitions! E.g. the alphabet of a language is typically a finite set of symbols that have no additional structure, but there's no reason you couldn't define a language whose symbols were e.g. grammars over that very language. The definition of a language just says that its symbols form a set. (Perhaps you'd run into issues with making the sets well-ordered, but if so he's running headlong into the same issues.)


I'm really not seeing any value in this guy's writing. Could someone who got something out of it share a couple of specific insights they got from it?

Replies from: zhukeepa
comment by zhukeepa · 2024-04-03T01:03:04.913Z · LW(p) · GW(p)

I'd categorize this section as "not even wrong"; it isn't doing anything formal enough to have a mistake in it.

I think it's an attempt to gesture at something formal within the framework of the CTMU that I think you can only really understand if you grok enough of Chris's preliminary setup. (See also the first part of my comment here [LW(p) · GW(p)].)

(Perhaps you'd run into issues with making the sets well-ordered, but if so he's running headlong into the same issues.)

A big part of Chris's preliminary setup is around how to sidestep the issues around making the sets well-ordered. What I've picked up in my conversations with Chris is that part of his solution involves mutually recursively defining objects, relations, and processes, in such a way that they all end up being "bottomless fractals" that cannot be fully understood from the perspective of any existing formal frameworks, like set theory. (Insofar as it's valid for me to make analogies between the CTMU and ZFC, I would say that these "bottomless fractals" violate the axiom of foundation, because they have downward infinite membership chains.)

I'm really not seeing any value in this guy's writing. Could someone who got something out of it share a couple specific insights that got from it?

I think Chris's work is most valuable to engage with for people who have independently explored philosophical directions similar to the ones Chris has explored; I don't recommend for most people to attempt to decipher Chris's work. 

I'm confused why you're asking about specific insights people have gotten when Jessica has included a number of insights she's gotten in her post (e.g. "He presents a number of concepts, such as syndiffeonesis, that are useful in themselves."). 

Replies from: justinpombrio
comment by justinpombrio · 2024-04-03T14:48:11.019Z · LW(p) · GW(p)

"gesture at something formal" -- not in the way of the "grammar" it isn't. I've seen rough mathematics and proof sketches, especially around formal grammars. This isn't that, and it isn't trying to be. There isn't even an attempt at a rough definition for which things the grammar derives.

I think Chris’s work is most valuable to engage with for people who have independently explored philosophical directions similar to the ones Chris has explored

A big part of Chris’s preliminary setup is around how to sidestep the issues around making the sets well-ordered.

Nonsense! If Chris has an alternative to well-ordering, that's of general mathematical interest! He would make a splash simply writing that up formally on its own, without dragging the rest of his framework along with it.

Except, I can already predict you're going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It's a little suspicious.

because someone else I’d funded to review Chris’s work

If you're going to fund someone to do something, it should be to formalize Chris's work. That would not only serve as a BS check, it would make it vastly more approachable.

I’m confused why you’re asking about specific insights people have gotten when Jessica has included a number of insights she’s gotten in her post

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

Replies from: zhukeepa, zhukeepa
comment by zhukeepa · 2024-04-03T19:59:57.801Z · LW(p) · GW(p)

Except, I can already predict you're going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It's a little suspicious.

False! :P I think no part of his framework can be completely understood without the whole, but I think the big pictures of some core ideas can be understood in relative isolation. (Like syndiffeonesis, for example.) I think this is plausibly true for his alternatives to well-ordering as well. 

If you're going to fund someone to do something, it should be to formalize Chris's work. That would not only serve as a BS check, it would make it vastly more approachable.

I'm very on board with formalizing Chris's work, both to serve as a BS check and to make it more approachable. I think formalizing it in full will be a pretty nontrivial undertaking, but formalizing isolated components feels tractable, and is in fact where I'm currently directing a lot of my time and funding. 

"gesture at something formal" -- not in the way of the "grammar" it isn't. I've seen rough mathematics and proof sketches, especially around formal grammars. This isn't that, and it isn't trying to be.

[...]

Nonsense! If Chris has an alternative to well-ordering, that's of general mathematical interest! He would make a splash simply writing that up formally on its own, without dragging the rest of his framework along with it.

My claim was specifically around whether it would be worth people's time to attempt to decipher Chris's written work, not whether there's value in Chris's work that's of general mathematical interest. If I succeed at producing formal artifacts inspired by Chris's work, written in a language that is far more approachable for general academic audiences, I would recommend for people to check those out. 

That said, I am very sympathetic to the question "If Chris has such good ideas that he claims he's formalized, why hasn't he written them down formally -- or at least gestured at them formally -- in a way that most modern mathematicians or scientists can recognize? Wouldn't that clearly be in his self-interest? Isn't it pretty suspicious that he hasn't done that?" 

My current understanding is that he believes that his current written work should be sufficient for modern mathematicians and scientists to understand his core ideas, and insofar as they reject his ideas, it's because of some combination of them not being intelligent and open-minded enough, which he can't do much about. I think his model is... not exactly false, but is also definitely not how I would choose to characterize most smart people who are skeptical of Chris. 

To understand why Chris thinks this way, it's important to remember that he had never been acculturated into the norms of the modern intellectual elite -- he grew up in the midwest, without much affluence; he had a physically abusive stepfather he kicked out of his home by lifting weights; he was expelled from college for bureaucratic reasons, which pretty much ended his relationship with academia (IIRC); he mostly worked blue-collar jobs throughout his adult life; AND he may actually have been smarter than almost anybody he'd ever met or heard of. (Try picturing what von Neumann may have been like if he'd had the opposite of a prestigious and affluent background, and had gotten spurned by most of the intellectuals he'd talked to.) Among other things, Chris hasn't had very many intellectual peers who could gently inform him that many portions of his written work that he considers totally obvious and straightforward are actually not at all obvious for a majority of his intended audience. 

On the flip side, I think this means there's a lot of low-hanging fruit in translating Chris's work into something more digestible by the modern intellectual elite. 

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

Gotcha! I'm happy to do that in a followup comment. 

Replies from: justinpombrio
comment by justinpombrio · 2024-04-03T20:46:00.910Z · LW(p) · GW(p)

I think formalizing it in full will be a pretty nontrivial undertaking, but formalizing isolated components feels tractable, and is in fact where I’m currently directing a lot of my time and funding.

Great. Yes, I think that's the thing to do. Start small! I (and presumably others) would update a lot from a new piece of actual formal mathematics from Chris's work. Even if that work was, by itself, not very impressive.

(I would also want to check that that math had something to do with his earlier writings.)

My current understanding is that he believes that his current written work should be sufficient for modern mathematicians and scientists to understand his core ideas

Uh oh. The "formal grammar" that I checked used formal language, but was not even close to giving a precise definition. So Chris either (i) doesn't realize that you need to be precise to communicate with mathematicians, or (ii) doesn't understand how to be precise.

Please be prepared for the possibility that Chris is very smart and creative, and that he's had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection of ideas than anything mathematical (despite using terms from mathematics). Litany of Tarski and all that.

Replies from: zhukeepa
comment by zhukeepa · 2024-04-03T21:43:02.755Z · LW(p) · GW(p)

Great. Yes, I think that's the thing to do. Start small! I (and presumably others) would update a lot from a new piece of actual formal mathematics from Chris's work. Even if that work was, by itself, not very impressive.

(I would also want to check that that math had something to do with his earlier writings.)

I think we're on exactly the same page here. 

Please be prepared for the possibility that Chris is very smart and creative, and that he's had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection of ideas than anything mathematical (despite using terms from mathematics). Litany of Tarski and all that.

That's certainly been a live hypothesis in my mind as well, that I don't think can be ruled out before I personally see (or produce) a piece of formal math (that most mathematicians would consider formal, lol) that captures the core ideas of the CTMU. 

So Chris either (i) doesn't realize that you need to be precise to communicate with mathematicians, or (ii) doesn't understand how to be precise.

While I agree that there isn't very much explicit and precise mathematical formalism in the CTMU papers themselves, my best guess is that (iii) Chris does unambiguously gesture at a precise structure he has in mind, assuming a sufficiently thorough understanding of the background assumptions in his document (which I think is a false assumption for most mathematicians reading this document). By analogy, it seems plausible to me that Hegel was gesturing at something quite precise in some of his philosophical works, that only got mathematized nearly 200 years later by category theorists. (I don't understand any Hegel myself, so take this with a grain of salt.) 

comment by zhukeepa · 2024-04-19T15:24:47.462Z · LW(p) · GW(p)

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

I finally wrote one up! It ballooned into a whole LessWrong post.  [LW · GW]

comment by Mitchell_Porter · 2024-03-29T21:58:03.355Z · LW(p) · GW(p)

Many people outside of academic philosophy have written up some kind of philosophical system or theory of everything (e.g. see vixra and philpapers). And many of those works would, I think, sustain at least this amount of analysis. 

So the meta-question is, what makes such a work worth reading? Many such works boil down to a list of the author's opinions on a smorgasbord of topics, with none of the individual opinions actually being original. 

Does Langan have any ideas that have not appeared before? 

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2024-03-29T22:10:21.073Z · LW(p) · GW(p)

I paid attention to this mainly because other people wanted me to, but the high IQ thing also draws some attention. I've seen ideas like "theory of cognitive processes should be integrated into philosophy of science" elsewhere (and have advocated such ideas myself), "syndiffeonesis" seems like an original term (although some versions of it appear in type theory), "conspansion" seems pretty Deleuzian, UBT is Spinozan, "telic recursion" is maybe original but highly underspecified... I think what I found useful about it is that it had a lot of these ideas, at least some of which are good, and different takes on/explanations of them than I've found elsewhere even when the ideas themselves aren't original.

Replies from: zhukeepa
comment by zhukeepa · 2024-04-03T00:49:25.428Z · LW(p) · GW(p)

I've spent 40+ hours talking with Chris directly, and for me, a huge part of the value also comes from seeing how Chris synthesizes all these ideas into what appears to be a coherent framework. 

comment by Richard_Kennaway · 2024-03-28T19:47:58.253Z · LW(p) · GW(p)

Exploring this on the web, I turned up a couple of related Substacks: Chris Langan's Ultimate Reality and TELEOLOGIC: CTMU Teleologic Living. The latter isn't just Chris Langan; a Dr Gina Langan is also involved. A lot of it requires a paid subscription, which for me would come lower in priority than all the definitely worthwhile blogs I also don't feel like paying for.

Warning: there's a lot of conspiracy stuff there as well (Covid, "Global Occupation Government", etc.).

Perhaps this 4-hour interview on "IQ, Free Will, Psychedelics, CTMU, & God" may give some further sense of his thinking.

Googling "CTMU Core Affirmations" turns up a rich vein of ... something, including the CTMU Radio YouTube channel.

comment by zhukeepa · 2024-04-03T00:57:47.656Z · LW(p) · GW(p)

Thanks a lot for posting this, Jessica! A few comments: 

It's an alternative ontology, conceiving of reality as a self-processing language, which avoids some problems of more mainstream theories, but has problems of its own, and seems quite underspecified in the document despite the use of formal notation. 

I think this is a reasonable take. My own current best guess is that the contents of the document uniquely specifies a precise theory, but that it's very hard to understand what's being specified without grokking the details of all the arguments he's using to pin down the CTMU. I partly believe this because of my conversations with Chris, and I partly believe this because someone else I'd funded to review Chris's work (who had extensive prior familiarity with the kinds of ideas and arguments Chris employs) managed to make sense of most of the CTMU (including the portions using formal notation) based on Chris's written work alone, in a way that Chris has vetted over the course of numerous three-way Zoom calls. 

In particular, I doubt that conspansion solves quantum locality problems as Langan suggests; conceiving of the wave function as embedded in conspanding objects seems to neglect correlations between the objects implied by the wave function, and the appeal to teleology to explain the correlations seems hand-wavey. 

I'm actually not sure which quantum locality problems Chris is referring to, but I don't think the thing Chris means by "embedding the wave function in conspanding objects" runs into the problems you're describing. Insofar as one object is correlated with others via quantum entanglement, I think those other objects would occupy the same circle -- from the subtext of Diagram 11 on page 28: "The result is a Venn diagram in which circles represent objects and events, or (n>1)-ary interactive relationships of objects. That is, each circle depicts the “entangled quantum wavefunctions” of the objects which interacted with each other to generate it."

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2024-04-03T22:20:49.572Z · LW(p) · GW(p)

Regarding quantum, I'd missed the bottom text. It seems if I only read the main text, the obvious interpretation is that points are events and the circles restrict which other events they can interact with. He says "At the same time, conspansion gives the quantum wave function of objects a new home: inside the conspanding objects themselves" which implies the wave function is somehow located in the objects.

From the diagram text, it seems he is instead saying that each circle represents entangled wavefunctions of some subset of objects that generated the circle. I still don't see how to get quantum non-locality from this. The wave function can be represented as a complex valued function on configuration space; how could it be factored into a number of entanglements that only involve a small number of objects? In probability theory you can represent a probability measure as a factor graph, where each factor only involves a limited subset of variables, but (a) not all distributions can be efficiently factored this way, (b) generalizing this to quantum wave functions is additionally complicated due to how wave functions differ from probability distributions.
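
(For concreteness, a small sketch of the classical case I have in mind, with arbitrary made-up factors: a joint distribution over three binary variables written as a normalized product of factors, each touching only two variables.)

```python
# Sketch: a probability distribution represented as a product of local factors,
# each involving only a small subset of the variables (a toy factor graph).
import itertools

def f_ab(a, b):  # factor linking A and B
    return 2.0 if a == b else 1.0

def f_bc(b, c):  # factor linking B and C
    return 3.0 if b == c else 1.0

unnormalized = {
    (a, b, c): f_ab(a, b) * f_bc(b, c)
    for a, b, c in itertools.product([0, 1], repeat=3)
}
z = sum(unnormalized.values())
joint = {assignment: weight / z for assignment, weight in unnormalized.items()}
print(joint[(0, 0, 0)], joint[(0, 1, 1)])  # 0.25, 0.125
```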

Replies from: zhukeepa
comment by zhukeepa · 2024-04-04T16:54:59.435Z · LW(p) · GW(p)

It seems if I only read the main text, the obvious interpretation is that points are events and the circles restrict which other events they can interact with.

This seems right to me, as far as I can tell, with the caveat that "restrict" (/ "filter") and "construct" are two sides of the same coin, as per constructive-filtrative duality. 

From the diagram text, it seems he is instead saying that each circle represents entangled wavefunctions of some subset of objects that generated the circle.

I think each circle represents the entangled wavefunctions of all of the objects that generated the circle, not just some subset. 

Relatedly, you talk about "the" wave function in a way that connotes a single universal wave function, like in many-worlds. I'm not sure if this is what you're intending, but it seems plausible that the way you're imagining things is different from how my model of Chris is imagining things, which is as follows: if there are N systems that are all separable from one another, we could write a universal wave function for these N systems that we could factorize as ψ_1 ⊗ ψ_2 ⊗ ... ⊗ ψ_N, and there would be N inner expansion domains (/ "circles"), one for each ψ_i, and we can think of each ψ_i as being "located within" each of the circles. 

comment by James Camacho (james-camacho) · 2024-10-06T03:30:35.856Z · LW(p) · GW(p)

"The boundary of a boundary is zero"

 

I think this is mostly arbitrary.

So, in the 20th century Russell's paradox came along and forced mathematicians into creating constructive theories. For example, in ZFC set theory, you begin with the empty set {}, and build out all sets with a tower of lower-level sets. Maybe the natural numbers become {}, {{}}, {{{}}}, etc. Using different axioms you might get a type theory; in fact, any programming language is basically a formal logic. The basic building blocks like the empty set, or the builtin types are called atoms.

In algebraic geometry, the atom is a simplex—lines in one dimension, triangles in two dimensions, tetrahedrons in three dimensions, and so on. I think they generally use an axiom of infinity, so each simplex is infinitely small (convenient when you have smooth curves like circles), but they need to be defined at the lowest level. This includes how you define simplices from lower-dimensional simplices! And this is where the boundary comes in.

Say you have a triangle (2-simplex) [A, B, C]. Naively, we could define its boundary as the sum of its edges:

$\partial [A, B, C] = [A, B] + [B, C] + [A, C]$

However, if we stuck two of them together (say [A, B, C] and [A, C, D]), the shared edge [A, C] wouldn't disappear from the boundary:

$\partial ([A, B, C] + [A, C, D]) = [A, B] + [B, C] + 2[A, C] + [C, D] + [A, D]$

This is why they usually alternate sign, so

$\partial [A, B, C] = [B, C] - [A, C] + [A, B]$

Then, since

$[A, C] = -[C, A]$

you could also write it like

$\partial [A, B, C] = [A, B] + [B, C] + [C, A]$

It's essentially a directed loop around the triangle (the analogy breaks when you try higher dimensions, unfortunately). Now, the famous quote "the boundary of a boundary is zero" is relatively trivial to prove. Let's remove just the two indices $A_i, A_j$ from the simplex $[A_1, A_2, \dots, A_i, \dots, A_j, \dots, A_n]$. If we remove $A_i$ first, we'd get

$(-1)^{i+1}(-1)^{j}[A_1, \dots, \hat{A_i}, \dots, \hat{A_j}, \dots, A_n]$

while removing $A_j$ first gives

$(-1)^{j+1}(-1)^{i+1}[A_1, \dots, \hat{A_i}, \dots, \hat{A_j}, \dots, A_n]$

The first is $-1$ times the second, so everything will zero out. However, it's only zero because we decided edges should cancel out along shared boundaries. We can choose a different system where they add together, which leads to the permanent as a measure of volume instead of the determinant. Or, one that uses a much more complex relationship (re: immanent).

I'm certainly not an expert here, but it seems like fermions (e.g. electrons) exchange via the determinant, bosons (e.g. mass/gravity) use the permanent, and more exotic particles (e.g. anyons) use the immanent. So, when people base their speculations on the "boundary of a boundary" being a fundamental piece of reality, it bothers me.

Replies from: jessica.liu.taylor, sharmake-farah, james-camacho
comment by jessicata (jessica.liu.taylor) · 2024-10-06T20:22:54.194Z · LW(p) · GW(p)

Thanks, hadn't realized how this related to algebraic geometry. Reminds me of semi-simplicial type theory.

comment by Noosphere89 (sharmake-farah) · 2024-10-06T14:51:06.820Z · LW(p) · GW(p)

A note on Russell's paradox is that the problem with the Russell set isn't that it's nonconstructive, but rather the problem is that we allowed too much freedom in asserting for every property, there is a set of things that satisfy the property, and the conventional way it's solved is by instead dropping the axiom of unrestricted comprehension, and adding the axiom of specification as well as a couple of other axioms to ensure that we have the sets we need.

Even without Russell's paradox, you can still prove things nonconstructively.

comment by James Camacho (james-camacho) · 2024-12-08T04:29:45.019Z · LW(p) · GW(p)

I did some more thinking, and realized particles are the irreps of the Poincaré group. I wrote up some more here, though this isn't complete yet:

https://www.lesswrong.com/posts/LpcEstrPpPkygzkqd/fractals-to-quasiparticles [LW · GW]

comment by romeostevensit · 2024-03-27T22:01:36.796Z · LW(p) · GW(p)

Thoughts:

Interesting asymmetry: languages don't constrain parsers much (maybe a bit, very broadly conceived), but a parser does constrain language, or which sequences it can derive meaning from. Unless the parser can extend/modify itself?

Langan seems heavily influenced by Quine, which I think is a good place to start, as that seems to be about where philosophical progress petered out. In particular, Quine's assertion about scientific theories creating ontological commitments to the building blocks they are made from 'really existing' to which Langan's response seems to be 'okay, let's build a theory out of tautologies then.' This rhymes with Kant's approach, and then Langan goes farther by trying to really get at what 'a priori' as a construct is really about.

I'm not quite sure how this squares with Quine's indeterminacy. That any particular data is evidence not only for the hypothesis you posed (which corresponds to some of Langan's talk of binary yes-no questions as a conception of quantum mechanics) but also for a whole family of hypotheses, most of which you don't know about, that define all the other universes that the data you observed is consistent with.

comment by Mateusz Bagiński (mateusz-baginski) · 2024-08-28T13:41:09.080Z · LW(p) · GW(p)

I am not totally sure why he considers discrete models to be unable to describe initial states or state-transition programming.

AFAIU, he considers them inadequate because they rely on an external interpreter, whereas the model of reality should be self-interpreting because there is nothing outside of reality to interpret it.

Wheeler suggests some principles for constructing a satisfactory explanation. The first is that "The boundary of a boundary is zero": this is an algebraic topology theorem showing that, when taking a 3d shape, and then taking its 2d boundary, the boundary of the 2d boundary is nothing, when constructing the boundaries in a consistent fashion that produces cancellation; this may somehow be a metaphor for ex nihilo creation (but I'm not sure how).

See this as an operation that takes a shape and produces its boundary. It goes 3D shape -> 2D shape -> nothing. If you reverse the arrows you get nothing -> 2D shape -> 3D shape. (Of course, it's not quite right because (IIUC) all 2D shapes have boundary zero but I guess it's just meant as a rough analogy.)

He notes a close relationship between logic, cognition, and perception: for example, "X | !X" when applied to perception states that something and its absence can't both be perceived at once

This usage of logical operators is confusing. In the context of perception, he seems to want to talk about NAND: you never perceive both something and its absence but you may also not perceive either. 

(note that "X | !X" is equivalent to "!(X & !X)" in classical but not intuitionistic logic)

Intuitionistic logic doesn't allow X & !X either.[1] It allows !(X & !X).
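As a quick illustration of the classical side of this (added, not from the thread), take P = "X is perceived" and Q = "the absence of X is perceived"; the "or" reading and the NAND reading come apart in the "neither" and "both" rows:

```python
from itertools import product

# P = "X is perceived", Q = "the absence of X is perceived".
print("P      Q      P|Q    !(P&Q)")
for P, Q in product([False, True], repeat=2):
    or_reading = P or Q            # something or its absence is perceived
    nand_reading = not (P and Q)   # not both something and its absence perceived
    print(f"{P!s:6} {Q!s:6} {or_reading!s:6} {nand_reading!s:6}")
# The readings differ at P=False, Q=False (NAND permits perceiving neither)
# and at P=True, Q=True (NAND forbids perceiving both).
```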

Langan contrasts between spatial duality principles ("one transposing spatial relations and objects") and temporal duality principles ("one transposing objects or spatial relations with mappings, functions, operations or processes"). This is now beyond my own understanding.

It's probably something like: if you have a spatial relationship between two objects X and Y, you can view it as an object with X and Y as endpoints. Temporally, if X causes Y, then you can see it as a function/process that, upon taking X, produces Y.
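A loose sketch of that guess in code (added for illustration, not Langan's formalism): the same link between X and Y written once as an object with endpoints and once as a process:

```python
from dataclasses import dataclass

# Spatial reading: the relation is reified as an object with X and Y as endpoints.
@dataclass
class Relation:
    left: str
    right: str

link = Relation(left="X", right="Y")

# Temporal reading: the same link viewed as a process that takes X and yields Y.
def process(state: str) -> str:
    return "Y" if state == "X" else state

assert process(link.left) == link.right
```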


The most confusing/unsatisfying thing for me about CTMU (to the extent that I've engaged with it so far) is that it doesn't clarify what "language" is. It points ostensively at examples: formal languages, natural languages, science, perception/cognition, which apparently share some similarities, but what are those similarities?

  1. ^

    Though paraconsistent logic does.

comment by YimbyGeorge (mardukofbabylon) · 2024-03-28T09:08:34.023Z · LW(p) · GW(p)

Falsifiable predictions?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2024-03-28T15:55:50.228Z · LW(p) · GW(p)

I don't see any. He even says his approach “leaves the current picture of reality virtually intact”. In Popper's terms this would be metaphysics, not science, which is part of why I'm skeptical of the claimed applications to quantum mechanics and so on. Note that, while Popper is often interpreted as saying metaphysics is meaningless, he contradicts this.

Quoting Popper:

Language analysts believe that there are no genuine philosophical problems, or that the problems of philosophy, if any, are problems of linguistic usage, or of the meaning of words. I, however, believe that there is at least one philosophical problem in which all thinking men are interested. It is the problem of cosmology: the problem of understanding the world—including ourselves, and our knowledge, as part of the world. All science is cosmology, I believe, and for me the interest of philosophy, no less than of science, lies solely in the contributions which it has made to it.

...

I have tried to show that the most important of the traditional problems of epistemology—those connected with the growth of knowledge—transcend the two standard methods of linguistic analysis and require the analysis of scientific knowledge. But the last thing I wish to do, however, is to advocate another dogma. Even the analysis of science—the ‘philosophy of science’—is threatening to become a fashion, a specialism. Yet philosophers should not be specialists. For myself, I am interested in science and in philosophy only because I want to learn something about the riddle of the world in which we live, and the riddle of man’s knowledge of that world. And I believe that only a revival of interest in these riddles can save the sciences and philosophy from narrow specialization and from an obscurantist faith in the expert’s special skill, and in his personal knowledge and authority; a faith that so well fits our ‘post-rationalist’ and ‘post-critical’ age, proudly dedicated to the destruction of the tradition of rational philosophy, and of rational thought itself.

...

Positivists usually interpret the problem of demarcation in a naturalistic way; they interpret it as if it were a problem of natural science. Instead of taking it as their task to propose a suitable convention, they believe they have to discover a difference, existing in the nature of things, as it were, between empirical science on the one hand and metaphysics on the other. They are constantly trying to prove that metaphysics by its very nature is nothing but nonsensical twaddle—‘sophistry and illusion’, as Hume says, which we should ‘commit to the flames’. If by the words ‘nonsensical’ or ‘meaningless’ we wish to express no more, by definition, than ‘not belonging to empirical science’, then the characterization of metaphysics as meaningless nonsense would be trivial; for metaphysics has usually been defined as non-empirical. But of course, the positivists believe they can say much more about metaphysics than that some of its statements are non-empirical. The words ‘meaningless’ or ‘nonsensical’ convey, and are meant to convey, a derogatory evaluation; and there is no doubt that what the positivists really want to achieve is not so much a successful demarcation as the final overthrow and the annihilation of metaphysics. However this may be, we find that each time the positivists tried to say more clearly what ‘meaningful’ meant, the attempt led to the same result—to a definition of ‘meaningful sentence’ (in contradistinction to ‘meaningless pseudo-sentence’) which simply reiterated the criterion of demarcation of their inductive logic.

...

In contrast to these anti-metaphysical stratagems—anti-metaphysical in intention, that is—my business, as I see it, is not to bring about the overthrow of metaphysics. It is, rather, to formulate a suitable characterization of empirical science, or to define the concepts ‘empirical science’ and ‘metaphysics’ in such a way that we shall be able to say of a given system of statements whether or not its closer study is the concern of empirical science.

comment by jabowery · 2024-05-15T21:56:20.789Z · LW(p) · GW(p)

To what degree can the paper "Approval-directed agency and the decision theory of Newcomb-like problems" be expressed in the CTMU's mathematical metaphysics?

comment by Alex K. Chen (parrot) (alex-k-chen) · 2024-04-21T15:31:34.107Z · LW(p) · GW(p)

Related "As stated, one of the main things I make-believe is true is the overlighting intelligence with which I align myself. I speculate that I am in a co-creative relationship with an intelligence and will infinitely superior to my own. I observe that I exist within energetic patterns that flow like currents. I observe that when I act in alignment with these subtle energetic currents, all goes well, desires manifest, direction is clear, ease and smoothness are natural. I observe that I have developed a high degree of sensitivity to this energy, and that I’m able to make micro-corrections before any significant non-smoothness occurs.""

https://cosmos.art/cosmic-bulletin/2020/marco-mattei-cosmopsychism-and-the-philosophy-of-hope

roon once said "we are all a giant god dream"

comment by Alex K. Chen (parrot) (alex-k-chen) · 2024-03-28T22:05:50.441Z · LW(p) · GW(p)

I view a part of this as "maximizing the probability that the world enables 'God's mind' to faithfully model reality [1] and operate at its best across all timescales". At minimum this means intelligence enhancement, human-brain symbiosis, microplastics/pollution reduction, a reduction in the rate of brain aging, and reducing default mode noise (e.g. tFUS, loosening up all tied knots).

The sooner we can achieve a harmonious global workspace, the better (because memory, and our ability to hold the most faithful/error-minimizing representation, will decay). There is a precipice, a period of danger in which our minds are vulnerable to non-globally-coherent/self-deceptive thoughts that could run their own incentives to self-destruct, but if we can get over this precipice, then the universe becomes more likely to generate futures with our faithful values and thoughts.

Some trade-offs involve difficult calculations with no clear answers (e.g. learning increases DNA error rates: https://twitter.com/gaurav_ven/status/1773415984931459160?t=8TChCcEfRzH60z0W1bCClQ&s=19); others include the "urgency vs. verifiability" tradeoff and the accel/decel debate.

But there are still numerous Pareto-efficient improvements, and the sooner we make them (like semaglutide, canagliflozin, microplastic/pollution reduction, pain reduction, factoring out historic debt, QRI stuff), the higher the chances of ultimate alignment of "God's thought". It's interesting that the god of formal verification, davidad, is also concerned about microplastics.

Possibly relevant people:

Sam Altman has this to say:

https://archive.ph/G7VVt#selection-1607.0-1887.9

book says ""As stated, one of the main things I make-believe is true is the overlighting intelligence with which I align myself. I speculate that I am in a co-creative relationship with an intelligence and will infinitely superior to my own. I observe that I exist within energetic patterns that flow like currents. I observe that when I act in alignment with these subtle energetic currents, all goes well, desires manifest, direction is clear, ease and smoothness are natural. I observe that I have developed a high degree of sensitivity to this energy, and that I’m able to make micro-corrections before any significant non-smoothness occurs.""

Bobby Azarian has a wonderful related book, "Romance of Reality": https://www.informationphilosopher.com/solutions/scientists/layzer/

Maybe slightly related: https://twitter.com/shw0rma/status/1771212311753048135?t=qZx3U2PyFxiVCk8NBOjWqg&s=19

https://x.com/VictorTaelin?t=mPe_Orak_SG3X9f91aIWjw&s=09

https://twitter.com/AndyAyrey/status/1773428441498685569?t=sCGMUhlSH2e7M8sEPJu6cg&s=19

https://liberaugmen.com/#shock-level-3

Sid Mani! Reducing noise: https://twitter.com/karpathy/status/1766509149297189274

[1] on some timescale, the best way to predict the future is to build it