The Purpose of Purpose

post by Gordon Seidoh Worley (gworley) · 2021-05-15T21:00:20.559Z
  1. I often make reference to the category I call variously purpose, care, concern, meaning, norms, or telos (I'll just use "purpose" from here on out to point to it).
    1. I've not really explained what I mean by it in great detail.
    2. That's a problem because it's the ground of reasoning.
      1. It's the thing that bridges the gap created by the problem of the criterion [LW · GW].
    3. It's also intimately tied up with what we value, since, depending on what you point at with "value", either purpose underlies values or the value-assessment mechanism creates purpose.
      1. I think understanding values is essential to addressing AI alignment [LW · GW].
    4. So I'm writing this post both so I can reference it and so I can work out more precisely what I mean.
  2. So what is purpose?
    1. Often a thing we are trying to achieve.
      1. That's not quite it, though, because that doesn't distinguish purpose from related concepts like desires and goals.
      2. A purpose also doesn't necessarily need an object to be achieved.
      3. For example, I might have the purpose of compassion, and that purpose has an existence beyond specific instances of having compassion for a particular being.
    2. Desires
      1. I think what people usually mean by desire, wants, needs, etc. is something less abstract and more concrete than purpose.
      2. Desires might be manifestations of purpose towards specific ends.
      3. For example, perhaps I desire cake, but my purpose is to stay alive, which instrumentally requires consuming calories so metabolic processes can serve the purpose of keeping me alive.
    3. Goals
      1. Sometimes we use "purpose" as a synonym for "goal".
      2. But I think there's a meaningfully different category we can point to with "goal" that we should differentiate from the one I'm pointing to with "purpose".
      3. A goal typically can be achieved or attained and held on to.
      4. There's something that can be measured and if it satisfies some criterion then the goal is met.
        1. Put another way, missions can be accomplished.
      5. A purpose may not have an end.
      6. For example, I can just keep on caring about staying alive so long as I'm alive, but at some point I'll satisfy (or not) my goal of living for 100 years.
    4. The other terms in the category also help point the way.
      1. Care/concern
        1. Heidegger talks about what I'm calling "purpose" here as Sorge.
        2. "Care" or "concern" are probably the best translations, although "Sorge" is cognate with English "sorrow" and "worry" is also a reasonable translation, so the word carries an extra nuance of loss, or the possibility of loss, that the English words lack.
          1. But I don't speak German so don't take my word for it.
        3. I'd argue that Sorge is exactly the category Yudkowsky points at in "Something to Protect [LW · GW]".
      2. Meaning
        1. Chapman has written a lot about meaning and meaningness.
        2. Arguably meaning is slightly higher up the philosophical stack because it's the thing produced when grounding words/symbols [LW · GW] (see also [LW · GW], [LW · GW]).
        3. But since the grounding problem [LW · GW] is an instance of the problem of the criterion [LW · GW], meaning in this light is an alternative perspective on purpose.
      3. Norms
        1. Normative behaviors/thoughts/etc. ("norms" for short) are the "ought" of morality, ethics, and aesthetics.
        2. They show up in moral philosophy and the study of metaethics [LW · GW], but also in other places like trying to infer human values from observation.
        3. The various "no free lunch [LW · GW]" theorems point to the need for norms in order to resolve enough uncertainty to know or do things.
        4. Perhaps unsurprisingly, this is just the problem of the criterion again in a not very thorough disguise, so norms are purposes seen through a particular lens [LW · GW].
      4. Telos
        1. Greek for "purpose".
        2. I like it because it's jargon in English, so you can define what you mean by it and people will hopefully be less confused because it doesn't already have much purchase in their minds.
          1. For that reason it's the word I've used a lot in the past [LW · GW] to talk about purpose.
          2. I've only decided not to use it here because I'm trying to favor accessibility over guarding against being misunderstood.
        3. It shows up most often in reference to "teleological reasoning", that is, thinking of the universe as if it has a purpose [LW · GW] rather than not.
          1. For example, thinking that the world is directed by God's plan is an example of teleological thinking.
            1. But so is thinking the universe tends towards justice/progress/etc., assuming you don't simply mean this as a historical statistical trend that may or may not extend into the future.
          2. Although as we'll see, maybe thinking of the universe as teleological is not so far-fetched after all, just that the purpose is much more mundane and less humanistic than expected.
  3. How might we state what purpose is in a more precise way?
    1. One option is to use cybernetic models [LW · GW].
    2. Broadly speaking, we can model things that are "agentic" as feedback processes.
      1. Quick refresher on feedback. For more details I recommend "Form and Feedback in Phenomenology [LW · GW]", which I wrote a while ago and which, along the way to explaining something different but related, gives a really nice, detailed explanation of how feedback works.
      2. A feedback system has several parts.
        1. A system.
        2. Its inputs and outputs.
        3. A sensor that reads the outputs.
        4. A controller that modifies or sends additional inputs to the system.
          1. In many feedback systems the sensor and controller are a single thing that serves two functions by the nature of its design.
          2. Simple examples: thermostats with bimetallic strips or mercury thermometers.
        5. A signal the sensor sends to the controller to tell it how to modify the inputs.
      3. Feedback happens when the sensor observes the outputs and sends a signal to the controller and the controller causes the system to do more or less of something, normally by regulating its input.
      4. Classic, simple examples of feedback: thermostats, steam engine governors, homeostasis (see the code sketch at the end of this section).
      5. Technical note: you should probably mostly be thinking about negative feedback when I talk about feedback here because, although I don't exclude positive feedback, positive feedback is generally embedded inside negative feedback systems since, if nothing else, we clearly don't live in a universe governed by runaway physical processes, but rather one governed by laws that naturally impose negative feedback and create homeostasis (as a last resort, the universe runs out of energy/stuff). That the world is this way is somewhat interesting, in that we probably don't get complex life in a world that isn't primarily governed by negative feedback, but that's beside the point.
    3. The interaction of the sensor, controller, and system creates a target being optimized for (as opposed to a target the system is adapted to hit on its own [? · GW] in a "feedforward" manner without feedback).
    4. My deep claim is that the thing I'm pointing at with "purpose" is in fact the same thing as a feedback optimizing for a target.
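
To make the claim concrete, here's a minimal sketch of the thermostat example as code (Python; all names and numbers are my own illustration, nothing standard). The "purpose", on this account, just is the target the sensor/controller/system interaction optimizes for:

```python
# A minimal thermostat-style negative feedback loop (illustrative values).
# The loop's "purpose", on the framing above, just is the target it
# optimizes for.

def run_thermostat(target=20.0, steps=50):
    temperature = 10.0                 # the system's output (room temperature)
    for _ in range(steps):
        reading = temperature          # sensor: observe the output
        signal = target - reading      # signal: deviation from the target
        heat = 0.5 * signal            # controller: adjust the system's input
        # system: responds to the heat input, minus heat loss to the outside
        temperature += heat - 0.1 * (temperature - 15.0)
    return temperature

print(run_thermostat())  # settles near (just under) the 20.0 target
```

Note how each part from the refresher above appears: delete the sensor and controller lines and the "purpose" disappears with them; the temperature just decays toward the outside temperature.
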
  4. What evidence is there that purpose and feedback are one and the same?
    1. When we look at the world and try to find things that look like they have a purpose, we always find something that can be modeled using feedback.
      1. Sometimes this requires stepping up a level to see it, though.
      2. Examples:
        1. A feedforward ML model seems to have a purpose even though it contains no feedback; the feedback is just slightly hidden: it happened during the training process (see the sketch after these examples).
        2. A tool, like a spoon or a spatula, is not itself a feedback system (at least at the level of abstraction we are using when we talk about "spoons" and "spatulas") but was created by a feedback loop involving a human optimizing matter to serve some purpose.
        3. Fundamental physics seems to work to create our universe thanks to a bunch of interactions [LW · GW] that keep everything from flying apart.
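
As promised above, a sketch of the first example (my own illustration, with made-up data): for a feedforward model, the feedback lives in the training loop, which compares outputs against targets and adjusts the weights; once training stops the loop is gone, but its target remains baked into the frozen weights.

```python
# The feedback is in the training loop: the error signal steers the weight
# toward the target relation. After training, inference is pure feedforward;
# the "purpose" persists only in the frozen weight.

def train(data, lr=0.05, epochs=100):
    w = 0.0                            # a one-weight "model"
    for _ in range(epochs):
        for x, y in data:
            prediction = w * x         # the model's output
            error = prediction - y     # sensor/signal: deviation from target
            w -= lr * error * x        # controller: gradient step on the weight
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # target relation: y = 2x
model = lambda x: w * x  # pure feedforward at inference time; no feedback left
print(round(w, 3), model(5.0))  # w is close to 2.0, so model(5.0) is close to 10.0
```
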
    2. Yes, this is basically the same claim as cybernetics.
      1. Look, I get it, cybernetics gets a bad rap for a lot of reasons.
      2. It was subject to a hype cycle and then became uncool.
      3. It was also an interdisciplinary field and got quickly colonized by people who used it to justify anything.
        1. Cf. how postmodernism [LW · GW] started out saying reasonable things like "words don't point to literal truth" and quickly got overrun by people saying "external reality is meaningless".
      4. Despite all that, I stand by the basic idea of cybernetics, and would argue that cybernetics was not wrong, just that it had to be carried on within the disciplines to which it applies (which is all of them).
    3. Even if you think my claim is overreaching, to the extent you think predictive processing [? · GW] is a good model of cognition, predictive processing looks like evidence that my claim is at least true of chordates (basically, animals with brain-like things roughly similar to ours).
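
As a toy illustration of that last point (a hypothetical sketch of my own, not a serious model of cognition): an agent that keeps updating its internal prediction to shrink prediction error is running a negative feedback loop, with the prediction error as the feedback signal.

```python
import random

# A predictive-processing flavored loop: the agent nudges its internal
# prediction toward whatever shrinks prediction error. Structurally this
# is negative feedback, with prediction error as the signal.

def perceive(world_mean=7.0, steps=500, lr=0.1):
    belief = 0.0                                     # internal prediction
    for _ in range(steps):
        observation = random.gauss(world_mean, 1.0)  # noisy sensory input
        prediction_error = observation - belief      # feedback signal
        belief += lr * prediction_error              # update to shrink error
    return belief

random.seed(0)
print(perceive())  # belief settles near the true mean of 7.0
```
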
  5. Why does this matter to me, a human?
    1. It's pretty well established that humans have purposes.
      1. In fact, the entire problem of teleological reasoning comes from extending the typical mind fallacy [? · GW] to the whole world, which is pretty strong evidence that this "purpose" thing runs so deep in human minds that it requires training to not project [? · GW] it onto our perceptions.
      2. But sometimes humans "lose their purpose" and have to find it again.
        1. In my model, they are dysregulated or malfunctioning feedback loops.
        2. Such people with lost purposes of course still have some purpose because they aren't dead, but they may no longer have awareness of, identify with, or endorse what purpose they are still carrying out, perhaps autonomically.
    2. Humans struggle not to see purpose everywhere, i.e. not to engage in teleological thinking, because there's no other kind of thinking they could do: without purpose they wouldn't "think" so much as just do stuff.
      1. Try to imagine if it were otherwise, i.e. try to imagine "purpose zombies" analogous to p-zombies [? · GW] that lack phenomenal consciousness.
      2. Basically the same sorts of arguments apply here to why purpose zombies don't exist, so I refer you to the p-zombie literature.
      3. This is also my pitch for why the problem of the criterion matters: it's intimately tied up with how you think (because our thinking is grounded in purpose), and understanding that gives you the possibility of working with that reality rather than being subjected to it [LW · GW].
  6. Why does this matter to me, a person who might see the creation of transformative AI [? · GW]?
    1. AI alignment might be better framed as AI being aligned with human purposes rather than human values.
      1. Purpose might be a better way to address what Armstrong has explored as meta-preferences [LW · GW].
      2. Purposes tend not to drift [? · GW] or change as easily.
    2. TAI will likely be goal-directed [LW · GW], and purpose is the thing lying behind goals.
      1. See also "Why You Should Care About Goal-Directedness [? · GW]"
  7. Further reading by other authors
    1. On the nature of purpose [LW · GW]
    2. Where are intentions to be found? [LW · GW]

NB: I'm trying a new way of writing posts (although I've experimented [LW · GW] with similar formats [LW · GW] before, this time I'm thinking about making it more of a thing). Although I enjoy smithing words, it's probably not the best use of my time; it's showing off my intelligence along an axis probably irrelevant to how I want to engage people; and this format makes it easier for people short on time to skim and digest the core ideas. On the other hand, this isn't usually how I think things out, and I often discover my own confusion as I try to write more standard prose, so maybe I'll accidentally gloss over important things here (it feels like I've spent a lot less effort justifying my bare claims because in a list like this it doesn't feel like I have to). So independent of the content of the post, let me know what you think about the style.

3 comments

comment by Hopkins Stanley (Heideana) · 2021-05-15T23:18:10.231Z · LW(p) · GW(p)

I listened to a Zoom talk by Martin Burckhardt titled “Alien Logic: The Universal Machine and the Psyche of the Social Apparatus” last week. There were some acoustic issues in understanding him, and I think his main point was/is that we’re machines/computers ourselves. I’m still sorting out what he was saying, and was struck by his comment wondering why we, as a species, are so worried about controlling/mastering our creations, as in the concerns about AI becoming smarter than us.

comment by Gordon Seidoh Worley (gworley) · 2021-05-16T00:46:41.994Z · LW(p) · GW(p)

If you want to explore the idea that we're machines, I feel the classic book to recommend here is Gödel, Escher, Bach.

comment by Hopkins Stanley (Heideana) · 2021-05-16T13:58:29.521Z · LW(p) · GW(p)

Haha!  The one book I always say I’m going to read.  Much thanks!