The Teleological Mechanism
post by Gordon Seidoh Worley (gworley) · 2021-01-19T23:58:54.496Z · LW · GW · 6 comments
I just wrote this up as a comment [LW(p) · GW(p)], but I think it deserves to be a top level post because it's an important idea. Additionally, this formulation is crisp enough that folks should be able to usefully engage with it.
This seminal cybernetics essay lays out a way of thinking about the concept we might variously call care, concern, telos, or purpose. It's relevant both to thinking about goal-directed behavior [? · GW] in AI and other non-human systems and to thinking about why humans do things.
I reference this concept a lot, but I've not (yet) had a good reference post to link about it. Usually I default to pointing at something about Heidegger's Sorge (tr. "care" or "concern"), but Heidegger is notoriously hard to read and lots of people don't like him. Also there's not a detailed argument for why care is so important, so I find myself trying to make the case all the time. Hopefully this will put an end to that.
So in that essay, first, they consider systems that have observable behavior, i.e. systems that take inputs and produce outputs. Such systems can be either active, in that the system itself is the source of energy that produces the outputs, or passive, in that some outside source supplies the energy to power the mechanism. Compare an active plant or animal to something passive like a rock that only changes when heated by an outside source, though obviously whether or not something is active or passive depends a lot on where you draw the boundaries of its inside vs. its outside (e.g. is a plant passive because it gets its energy from the sun, or active because it uses stored energy to perform its behaviors?).
Active behavior is subdivided into two classes: purposeful and purposeless. They say that purposeful behavior is that which can be interpreted as directed to attaining a goal, while purposeless behavior cannot. They spend some time in the paper defending the idea of purposefulness and their vague definition of it. I'd instead propose we think of these terms differently: purposeful behavior is that which creates a reduction in entropy within the system and its outputs, while purposeless behavior does not. This doesn't quite line up with how they think about it, though, so I'm open to arguments that entropy is not the useful place to draw the line here.
They then go on to divide purposeful behavior into teleological and non-teleological behavior, by which they simply mean behavior that's the result of feedback (and they specify negative feedback) or not. In LessWrong terms, I'd say this is like the difference between optimizers [? · GW] ("fitness maximizers") and adaptation executors [? · GW].
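The negative-feedback notion of teleology can be made concrete with a toy thermostat. This is a minimal sketch in my own terms, not anything from the essay: the system senses the gap between the world's state and its goal state, then uses its own action to shrink that gap.

```python
# A sketch (names and numbers are illustrative, not from the essay) of
# teleological behavior as negative feedback: the system measures the error
# between the current state and its goal, then acts to oppose that error.

def thermostat_step(temperature: float, goal: float, gain: float = 0.5) -> float:
    """One negative-feedback step: push temperature toward the goal."""
    error = goal - temperature          # signed distance from the goal state
    return temperature + gain * error   # correction opposes the error

def run(temperature: float, goal: float, steps: int) -> float:
    """Iterate the feedback loop; the error shrinks geometrically."""
    for _ in range(steps):
        temperature = thermostat_step(temperature, goal)
    return temperature
```

On this picture, an adaptation executor would just emit a fixed output regardless of the error signal; it's the loop closing back on the world's state that makes the behavior teleological.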
They then go on to make a few additional distinctions that are not relevant to the present topic although do have some relevance to AI alignment relating to predictability of systems.
I'd say then that systems with active, purposeful, teleological behavior are the ones that "care", and the teleological mechanism is the aspect of the way the system functions that causes it to care. When we talk about a teleological system or being, we're talking about something that cares because it uses its own power to transform the world into some particular state it's aiming for.
Moving on: most "interesting" systems are teleological ones: humans, plants, many types of machines, bacteria, evolution, AI. All have something they care about, something by which they judge that one state of the world is better than another, and that creates an important difference between them and systems that lack this feature, like rocks and planets and water, which each fail to be active, purposeful, or teleological.
So when we concern ourselves with purposeful, caring systems, I think this is what we mean: that the system has a teleological mechanism.
6 comments
Comments sorted by top scores.
comment by neotoky01 · 2021-01-21T04:55:22.955Z · LW(p) · GW(p)
Fundamentally, from the perspective of physics, what is the difference between animate matter and inanimate matter? Living things/non-living things? At which point does a non-living thing become a living thing?
https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence
↑ comment by Gordon Seidoh Worley (gworley) · 2021-01-21T14:26:09.087Z · LW(p) · GW(p)
Well, the categories of this post suggest one way we might do it that's more satisfying than the naive way we draw the boundaries [LW · GW] of "life".
We could equate life with active systems.
We could equate life with active, purposeful systems.
We could equate life with active, purposeful, teleological systems.
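The three candidate definitions above are nested, each strictly narrower than the last. A toy sketch of them as predicates (the attribute names and example classifications are my own, purely illustrative):

```python
# Illustrative only: the three candidate definitions of "life" as nested
# predicates over a crude system description. Attribute names are mine.

from dataclasses import dataclass

@dataclass
class System:
    active: bool        # supplies its own energy for its behavior
    purposeful: bool    # behavior directed at a goal (or entropy-reducing)
    teleological: bool  # goal-directed via negative feedback

def life_v1(s: System) -> bool:  # life = active systems
    return s.active

def life_v2(s: System) -> bool:  # life = active, purposeful systems
    return s.active and s.purposeful

def life_v3(s: System) -> bool:  # life = active, purposeful, teleological systems
    return s.active and s.purposeful and s.teleological

# A rock fails every tier; a bacterium plausibly passes all three.
rock = System(active=False, purposeful=False, teleological=False)
bacterium = System(active=True, purposeful=True, teleological=True)
```

The interesting cases are the ones that pass some tiers but not others, since those are exactly where the candidate definitions disagree.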
The test would then be to see which one is most useful to us if we use that to mean "life". Are we happy with the kinds of things that end up in the category? Does it seem natural? Or are these categories cutting at something orthogonal to what we mean by "life", such that we would actually prefer to define it some other way?
I suspect the answer is that by "life" we mean something orthogonal to this classification system such that things we consider alive cut across the boundaries it draws.
↑ comment by neotoky01 · 2021-01-21T22:11:49.052Z · LW(p) · GW(p)
Again, what is the difference between living things and non-living things from a first-principles physics perspective? At which point do atoms and molecules become "living"? At which point do atoms and molecules acquire purpose?
I understand that we use words like living and purpose to denote large movements of atoms moving in complex and interesting ways. But at no point along the way does something acquire "livingness" or "purposeness" or "spirit" or "elan vital" or "emergence" (see Eliezer Yudkowsky on emergence). If we're going to try to understand the behaviour of organisms it's necessary to see that the behaviour of organisms is just trillions of atoms and molecules behaving in very interesting ways.
↑ comment by Gordon Seidoh Worley (gworley) · 2021-01-22T01:39:41.288Z · LW(p) · GW(p)
This is asking for quite a lot of an answer, and it also mostly seems beside the point of this post.
If the question is just about the terms considered, "active" would seem to clearly be about how much the energy in the system is being fed to it vs. being generated by it. "Purpose", as I propose, is about thermodynamics and whether entropy is increasing or decreasing. "Teleology" is about feedback, which I wrote about here [LW · GW].
But I'm not sure what your real objection is here since nothing I wrote contradicts the idea that the world is just made up of stuff and nothing is possessed of any essential nature. Yet there is in fact some point at which the world takes on enough local structure that we place it in some category, otherwise we wouldn't be able to use words at all.
So I'm not sure how to read your objection other than you don't like the word "purpose"?
↑ comment by neotoky01 · 2021-01-22T03:37:08.989Z · LW(p) · GW(p)
I think part of the difficulty I have in understanding your post is the distinction between active and passive systems, and it's why I linked the article on emergence. No matter which way I think about it, I can't disentangle a living (organic) system from its surrounding environment. It's why I posed the question of what distinguishes living systems and non-living systems; when I look deep enough I still see the same fundamental rules of physics applied in each instance.
Under this view, all matter, living and non-living, is indistinguishable, each piece running under the same laws of physics, and purposefulness and purposelessness do not appear anywhere in these systems. Living objects can be regarded as atomic components moving and interacting in very interesting ways. So I cannot distinguish between purposefulness and purposelessness. Colloquially, of course, I know what you mean. You are talking about the behaviour of living organisms; but on a deeper inspection I can't find purposefulness.
Is the argument you're making that there are localized areas of space that possess lower levels of entropy, and that living systems can be generalized as simply lowering the entropy in an area of space?
↑ comment by Gordon Seidoh Worley (gworley) · 2021-01-22T04:04:46.740Z · LW(p) · GW(p)
Is the argument you're making that there are localized areas of space that possess lower levels of entropy, and that living systems can be generalized as simply lowering the entropy in an area of space?
Yes, that's part of it. As to what is "local", that's a question of where we draw boundaries, and in fact this gets at an issue fundamental to all of this, which is we have to have some way to even draw these boundaries. That's a free variable that has to be fixed, ironically, by telos. [LW · GW]