Do we have the right kind of math for roles, goals and meaning?

post by mrcbarbier · 2022-10-22T21:28:04.935Z · LW · GW · 5 comments

Contents

  Motivation: A mental black hole
  1) Two basic directions of scientific explanation
  2) Recursing down or up
  3) The missing method
  Conclusion: Why new math
    Follow-up posts include: 
    Broader context:


Preamble on LessWrong relevance: Agentic goals and values, which are of some interest [? · GW] here at LW, are a special case of a broader class of objects that have interested linguists, biologists, and social scientists for a while. This class of objects should really have a common basic mathematical language. It does not, by historical accident. As a consequence, low-hanging fruit and basic confusions abound: research relies on verbal intuitions, gets hung up on domain-specific peculiarities, and misses fundamental generalizations across systems and fields. I suspect the potential for progress ranges from Shannon-level to Newton-level, and unfortunately, no Shannons or higher are currently to be found working on it.

This is a broad introduction to give context for an upcoming sequence of increasingly technical posts. You can also try to skip ahead to What should a telic science look like [LW · GW] for a different take on how to introduce the project.

Motivation: A mental black hole

The way mathematical models were introduced into the biological, cognitive and social sciences should be recognized as a great redirection: it spawned many fascinating new questions, but did not make it any easier to answer the old ones.

The old questions all have to do with something like function, role, goal, purpose or meaning: what's the point of this organ, this gene, this word, this institution? 

We have been made to believe that these questions are either so trivial as to admit simple verbal answers, or unscientific until converted into other questions that are the proper domain of science (such as "how is this implemented?" or "how could this evolve?"). This mindset is such a strong attractor that it has become a kind of intellectual black hole: all our attempts to move away lead back to it, and we forget our original intent along the way.

Everything I will say next is thus both exceedingly simple and exceedingly hard: researchers who try to address this seriously tend either to go crackpot or to fall back into the black hole. It's hard to imagine what success even looks like when it has never happened.

1) Two basic directions of scientific explanation


The behavior of many systems can be explained from at least two directions: what's happening inside them, and what's influencing them from the outside.

Everything that we want to call function, role, goal, purpose or meaning -- a cluster that I will label telos because why not [LW(p) · GW(p)] -- is some variation on explaining a system by selection from the outside.

Pitfall #1: Telos disappears when you look at a system in isolation. If your explanation still works when you remove the object from its context, then you are asking and answering a non-telic type of question.  

Take a book out of any context: you can still ask about its "information content" in the sense of Shannon's theory - how many characters it contains, how many bits those characters could encode, and so on. But you cannot ask about its meaning. Meaning is not a property of the book in itself. The fact that a certain sequence of characters means the story of Hamlet to you is, at the very least, an interaction between the text and you. The same signal could mean many different things to many different lifeforms. Conversely, the same meaning could be carried by many different supports - a physical book, an email, acting out the story, etc.
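
To make the contrast concrete, here is a minimal illustrative sketch (in Python; not from the original post - the excerpt string and function name are placeholders) of the kind of context-free question Shannon's framework does let you answer about a text in isolation:

```python
# A context-free, "non-telic" question: how many bits could this text carry?
# Nothing in the calculation depends on who (if anyone) is reading the text.
import math
from collections import Counter

def empirical_entropy_bits_per_char(text: str) -> float:
    """Shannon entropy (bits per character) of the text's empirical character distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

excerpt = "To be, or not to be, that is the question."  # stand-in for a full book
rate = empirical_entropy_bits_per_char(excerpt)
print(f"{len(excerpt)} characters, ~{rate:.2f} bits/char, ~{rate * len(excerpt):.0f} bits total")
# The same numbers come out whatever the text means to you, to a dog, or to no one at all:
# nothing here points at the role the text plays in any context.
```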

In brief, there is a many-to-many relationship between what a thing is in itself, and what its role is in various contexts. Not every thing can play every role (a short book cannot convey a long meaning), but each thing has potential for a multiplicity of roles, and each role can be filled by many things. 

Confusing the thing-in-itself and the role-in-context is how systems biology and many other fields keep failing to say anything interesting about telos. A certain biological pattern is, in itself, a feedback or feedforward loop, a controller, a switch, a memory, etc., irrespective of context, so these labels are not and can never be functions/teloi.
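
As a standard illustration (a textbook gene-regulation motif, not an example from the original post): negative autoregulation, where a protein represses its own production, can be written as a single context-free equation. The equation tells you the pattern is a feedback with a stable set point - a statement about the thing in itself - but says nothing about what the gene is for:

```latex
% Negative autoregulation: protein concentration x represses its own production.
% \beta: maximal production rate, \gamma: degradation rate, n: Hill coefficient.
\[ \frac{dx}{dt} = \frac{\beta}{1 + x^{n}} - \gamma x \]
```

Whether this feedback exists to buffer noise, to speed up response times, or for no reason at all is a question about the cell and its environment, not about the equation.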

2) Recursing down or up

Each of the two approaches above becomes a scientific method when we apply it recursively until we hit something we can treat as an axiomatic starting point:

Darwin's idea is a fairly typical instance of telism: when we don't understand an organ or a behavior, we go up to the organism, or even further up to the lineage or ecosystem, until we find a simple purpose like reproduction or persistence. Think also of strategy games: to explain why a chess player selected a move, we go up to the simple overall goal of winning the game.

We can then go back down to explain the big parts within the whole, and the small parts within the big: the goals of each phase of the game, of each tactical idea within a phase, and finally the complex purpose of a move with respect to all these nested contexts.

Which approach works best depends on whether the nearest/strongest source of simplicity and predictability lies toward the inside or the outside. For reductionism, larger systems are typically more complex; for telism, smaller systems are typically more complex: any component of your heart should ensure that your heart functions, but should also avoid killing you in any other way (e.g. by making byproducts that are toxic to your liver). Constraints can pile up - or rather, pile down - until the atom, like the individual in social systems, is possibly the most complicated thing to explain, since it can play many roles in many nested contexts.

Various systems that appear complex from a reductionist point of view are likely to be better understood through telism (or historicism [LW · GW]) - as our gut has been telling us for centuries regarding questions of biology, sociology, or linguistics.

3) The missing method

Everything I've called telos can be given some interpretation in well-established formal frameworks such as optimization, variational calculus, statistical mechanics, etc. What is missing is a general method telling us how to use any of these languages to answer the right type of questions.

This is where every field to date has failed to produce interesting formal theory:

i) Generality: For reductionism, the formalism of dynamical equations allows me to directly compare, say, the propagation of sound waves and the spread of diseases. For telism, no formalism tells me how the role of a word in a sentence compares to the role of an institution in society, or of a neuron in the brain. In particular, nothing tells me what should be universal (like the Navier-Stokes equation for fluids) and what should be system-specific (like its coefficients, which depend on the fluid).
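
As a concrete reminder of what that shared formalism buys (standard textbook equations, not taken from the post): sound propagation and epidemic spread are both written in the one language of differential equations, and only the system-specific coefficients change:

```latex
% Sound: the wave equation for pressure deviation u(x,t), with sound speed c.
\[ \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2} \]

% Disease: the SIR model, with transmission rate \beta and recovery rate \gamma.
\[ \frac{dS}{dt} = -\beta S I, \qquad
   \frac{dI}{dt} = \beta S I - \gamma I, \qquad
   \frac{dR}{dt} = \gamma I \]
```

Nothing analogous lets us line up the role of a word next to the role of an institution and read off what is universal and what is system-specific.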

ii) Autonomy: Reductionism provides a complete description of the universe that does not ever need to invoke purpose (it may be impractical or even pragmatically impossible to compute, but it is still valid in principle). Likewise, a true telic science would be able to describe purposes and functions for anything in the universe without referring to any of the objects of reductionism.

Pitfall #2: Starting toward telism and then deviating back toward reductionism.
Optimization theory and evolutionary theory put a lot of emphasis on the constraints bearing on a system, but then always end up asking fundamentally reductionist questions about that system -- how is optimization or evolution implemented in components, when are they successfully achieved by certain dynamical rules, and so on.

Even the select few who talk unashamedly about multi-level selection or top-down causation end up inexorably caught in the trap of "how does this fit into reductionism?" rather than "how do we talk about this on its own terms?" as soon as they start doing math.

Conclusion: Why new math

In this post, I argued that

- There really is a broad class of non-reductionist "telic" questions for which we simply do not have the right type of formalization/math/data structures. 

- The reason this hasn't been solved yet is that it is terribly hard to think about for more than a few minutes without being drawn back to the wrong type of questions and answers, and trapped in irrelevant technical mazes. The more of a mathy/physicsy background one has, the worse it gets.

Let's discuss these statements.

I feel it is rather easy to observe that we are lacking a general and self-sufficient theory for explaining any object by its role in ever-broader contexts, under ever-wider selection rules. As far as I know, only linguistics [LW · GW] has ever attempted to base a whole formalism on this idea, and that formalism still falls short of capturing the essence of linguistic function.

It may seem more audacious to claim that we do not even have the right math.

Many scientists do ponder related questions and use math (optimization, information theory, dynamics, statistical mechanics). Sometimes, they get interesting results. Yet the math conspicuously does not help with the core of the question: it comes in post hoc, raising and solving a whole other set of issues. The scientist must do all the heavy lifting in her brain.

A relevant, if slightly tangential, example: the mathematical machinery of game theory comes into play only once we've decided who the agents are and what set of actions and goals they are equipped with. Many social scientists, I believe, would argue that getting to this "starting point" is the entire purpose of their field, and that everything of value is lost if we bungle these assumptions. For them, cranking out the math of subsequent agent behavior is almost always missing the point. Of course, the same social scientists produce very little general theory themselves (or might even be offended at the prospect).

This whole issue was recognized in the 1950s and 60s, amid waves of cybernetics and "systems" thinking, and this recognition led to many ideas in engineering and approximately zero progress in basic science. Scientific fields that claim descent from that tradition, e.g. systems biology, simultaneously advertise the problem and fall back into the black hole, answering the "wrong" questions.

I feel that trying to answer "telic" questions with math born of reductionism (dynamical systems, game theory, probability/set theory...) is as tricky as trying to build physics on top of Euclidean geometry alone. It did work for Galileo, and kudos to him. But sooner or later, we need to invent something like differentials and integrals, coordinates and vector spaces - something really geared toward the kind of questions we want to answer. This will probably involve some metamathematics that is well above my pay grade.

In the meantime, many existing tools may turn out to be practical, like optimization or variational calculus or stat mech, but they will still tend to naturally point us in all the wrong directions. Trying to use them without getting lost will be my main purpose for the rest of this sequence.

For LW-relevant questions, such as agents and goals and values, my foremost message here will be that some deep traps might be avoided by keeping in mind a much broader class of issues, including some that are potentially much simpler and more likely to be solved on a reasonable time scale.

Follow-up posts include: 

Basic methodology/philosophy: What should a telic science look like [LW · GW]

Reference points from various fields: Telic intuitions across the sciences [LW · GW] (WIP)

Actually doing something (or starting to): Building a Rosetta stone for reductionism and telism. [LW · GW] (WIPier)

Broader context:

Historicism in the math-adjacent sciences [LW · GW]


 

5 comments

Comments sorted by top scores.

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2022-11-01T08:31:36.103Z · LW(p) · GW(p)

I agree that the mathematics of agency, goals and values must exist and mostly hasn't been found yet (though we have many parts, e.g. the von Neumann-Morgenstern utility theorem).

I am skeptical that this is purely a historical accident. It seems to me that this theory is sufficiently complex that it requires breakthroughs in several different areas. I don't think it could have been solved in the fifties. Most of the relevant math was simply not developed enough.

That doesn't mean progress cannot be sped up. With the threat of AI x-risk, this area of inquiry has seen renewed interest, and we've seen several breakthroughs in just the last few years. As an example, Critch's new work on Boundaries will probably play an integral part.

Replies from: mrcbarbier
comment by mrcbarbier · 2023-01-05T19:52:47.738Z · LW(p) · GW(p)

Sorry for the late reply! Do you mind sharing a ref for Critch's new work? I have tried to find something about boundaries but was unsuccessful.

As for the historical accident, I would situate it more around the 17th century, when the theory of mechanics was roughly as advanced as that of agency. I don't feel that goals and values require much more advanced math, only math as new as differential calculus was at the time. 

Though we now have many pieces that seem to aim in the right direction (variational calculus in general, John Baez and colleagues' investigations of blackboxing via category theory...), it seems more by chance than by concerted, literature-wide effort. But I do hope to build on these pieces.

Replies from: alexander-gietelink-oldenziel
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-01-06T21:30:28.049Z · LW(p) · GW(p)

Category theory was developed in the 1940s and 50s from considerations in algebraic topology. Algebraic topology was already an extremely technically sophisticated field in the 50s (and has by now reached literally incredible abstract heights).

I suppose one could imagine an alternate world where Galois invents category theory, but it seems apparent to me that a long period of development significantly divorced from direct applications (as calculus was) was needed for category theory to spring up and mature - indeed, it is still in its teenage rebellious phase in the grand scheme of things!

John Baez et al.'s work on blackboxing uses several ideas only developed recently, and though I am a fan of abstract mathematics in general and category theory in particular, I hasten to say this work is only a small part of a big and as yet mostly unfinished story. I'm optimistic that this might eventually lead to serious insights into understanding agency, but at the moment it seems we are still quite far.

A more directed search for agent-foundations math is needed, is happening (but we need much more!), and is likely to bear fruit in the short-to-medium term, but I suspect many of the ingredients are likely to be things that have been developed with very different motivations in mind.

Edit: see https://www.lesswrong.com/s/LWJsgNYE8wzv49yEc [? · GW] for Critch's new work on Boundaries

Replies from: mrcbarbier
comment by mrcbarbier · 2023-01-07T09:52:08.611Z · LW(p) · GW(p)

Thanks for your thoughts and for the link! I definitely agree that we are very far from practical category-inspired improvements at this stage; I simply wonder whether there isn't something fundamentally as simple and novel as differential equations waiting around the corner, toward which we are taking a very circuitous route through very deep metamathematics! (Baez's Rosetta Stone paper and work by Abramsky and Coecke on quantum logic have convinced me that we need something like "not being in a Cartesian category" to account for notions like context and meaning, but that quantum stuff is only one step removed from the most Cartesian classical logic/physics, and we probably need to go to the other extreme to find a different kind of simplicity.)

Replies from: alexander-gietelink-oldenziel
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-01-08T14:28:26.754Z · LW(p) · GW(p)

No problem!

Do you mean monoidal categories? I think that's the central concept in the Abramsky-Coecke work & the Baez Rosetta Stone paper.