Schematic Thinking: heuristic generalization using Korzybski's method

post by romeostevensit · 2019-10-14T19:29:14.672Z · LW · GW · 7 comments

Epistemic status: exploration of some of the intuitions involved in discussions behind this post [LW · GW] at MSFP. Could be considered part of an unstructured sequence titled 'towards fewer type errors in metaphilosophy'.

Summary: When moving up or down the ladder of abstraction, you're forced to make choices about restricting or relaxing the domain of relevance. Don't just reason about the valid range/scope; reason about your reasoning about the valid range/scope, and ask what sort of operations you'd need to be doing to make a different choice seem valid. This resolves many types of confusion and is a crux identifier for 2-place confusions.

Alfred Korzybski directs us to develop the faculty to be conscious of the act of abstracting [EA(p) · GW(p)]. This means that one has metacognitive awareness when one does things like engage in the substitution effect, reason analogically, shift the coarse-grainedness of an argument, use the 'to be' verb form, or shift from one Marr level to another mid-sentence. One of the most important skills that winds up developed as a result of such training is a much more immediate awareness of what Korzybski calls the multiordinality of words, which you will be familiar with if you have read A Human's Guide to Words [? · GW] (esp. 24 and 36 [LW · GW]) or are otherwise familiar with the Wittgensteinian shift in analytic philosophy (related: the Indeterminacy of Translation). In short, many words are underdetermined in their referents along more than one dimension, leading to communication problems both between people and internally (for an intuitive example, one can imagine people talking past each other in a discussion of causation when they are discussing different senses of Cause without realizing it).

I want to outline what one might call second order multiordinal words, or maybe schematic thinking. With multiordinal words, one is aware of all the values that a word could be referring to. With schematic thinking, one is also aware of all the words that could have occupied the space that word occupies. It's a bit like seeing everything as an already-filled-out Mad Libs and reconstructing the blank version.

This may sound needlessly abstract, but you're already familiar with a famous example. One of Charlie Munger's most famous heuristics is inversion. With inversion we can check various ways we might be confused by reversing the meaning of one part of a chain of reasoning and seeing how that affects things. Instead of forward chaining we backward chain; we prepend 'not' or 'doesn't' to various parts of a plan to construct premortems; we invert whatever just-so story a babbling philosopher tells and check whether it still makes sense, to see if their explanation proves too much [LW · GW].

I claim that inversion is a specific, actionable instance of schematic thinking. The generalization is that one doesn't restrict oneself to opposites, or to a single word at a time, though inversion remains an easy, simple way to break out of mental habit and see more than one possibility for any particular meaning structure.
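
To make the move concrete, here is a minimal sketch in Python (the slot lists and the example claim are my own toy choices, not anything canonical from Munger or Korzybski). Plain inversion falls out as the special case where a slot's candidate list contains only an opposite.

```python
# Toy sketch of schematic substitution: for each word that we recognize
# as a filled-in slot, emit variants of the claim with that slot refilled.
# The slot lists and the example claim are illustrative inventions.
SLOT_CANDIDATES = {
    "always": ["never", "sometimes", "rarely", "only at first"],
    "causes": ["prevents", "correlates with", "enables"],
}

def reschematize(claim: str) -> list[str]:
    """Return copies of the claim with exactly one slot refilled."""
    words = claim.split()
    variants = []
    for i, word in enumerate(words):
        for alt in SLOT_CANDIDATES.get(word, []):
            variants.append(" ".join(words[:i] + [alt] + words[i + 1:]))
    return variants

for variant in reschematize("competition always causes innovation"):
    print(variant)
# competition never causes innovation      <- plain inversion
# competition sometimes causes innovation
# ...
# competition always enables innovation
```

Reading the output side by side is the 'unfilled mad lib' view: the sentence frame stays fixed while each slot's alternatives become visible, and you can ask which filling the evidence actually supports.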

Let's take some examples of first order indeterminacy and apply this. To start with, you can do a simple inversion of each and see what happens.

First example of first order indeterminacy: universal quantifiers

"all, always, every, never, everyone, no one, no body, none" etc

We already recognize that perverse generalizations of this form cause us problems that can often be repaired by getting specific [LW · GW]. The additional question schematic thinking has us ask is: among the choices I can make, what influences me to make this one? Are those good reasons? What if I inverted that choice (all -> none, etc.), or made a different one?

Second example of first order indeterminacy: modal operators

Confusion of possibility and necessity: "should, should not, must, must not, have to, need to, it is necessary," etc.

The additional question we ask here, as we convert 'shoulds' to 'coulds' and 'musts' to 'mays', is: what sorts of mental moves are we making as we do this?

Third example of first order indeterminacy: unspecified verbs

"they are too trusting, that was rude, we will benefit from that, I tried really hard"

The additional question we ask as we get more specific about what happened is: why are we choosing this level of coarse-grainedness? After all, depending on the context, someone could accuse us of being too specific, or of not being specific enough. We have intuitions about when those accusations are reasonable. How does that work?

Conclusion:

This might seem a bit awkward and unnecessary. The concrete benefit it has brought me is that it gives me a starting point when I am reading or listening to a line of reasoning that strikes me as off in some way, but I can't quite put my finger on how. By seeing many of the distinctions being made to construct the argument as arbitrary, as one point in a space of possible distinctions, I can start rephrasing the argument in a way that makes more sense to me. I then have a much better chance of making substantive critiques/cruxing (or, alternatively, becoming convinced) rather than just arguing over misunderstandings the whole time. I've found many philosophical arguments hinge on pulling a switcheroo at some key juncture. I think many people intuitively pick up on this, and that this is why they dismiss many philosophical arguments; I think they are usually correct to do so.

7 comments

comment by Matt Goldenberg (mr-hire) · 2019-10-16T00:50:14.273Z · LW(p) · GW(p)

I'd love a few concrete examples of the mad libs process that you've used.

Replies from: romeostevensit, romeostevensit
comment by romeostevensit · 2019-10-16T01:44:12.586Z · LW(p) · GW(p)

The most common is when you encounter a proposed ontology for a space and you try to reconstruct the unparameterized space that the ontology is trying to parameterize, e.g. the ITN framework from EA. With normal mad libs you just need to identify the verbs and nouns. That can be a good place to start, just to warm up for a philosophical argument (you can also use a thesaurus to permute the argument and see what happens, like tabooing words but non-specific), but ultimately you're looking for any proposed structure and trying to guess at the type that would satisfy that space if it were a blank to be filled in. So for each of Importance, Tractability, Neglectedness, you might ask: what kinds of things in reality are those trying to carve out? How else might you capture some of the same things? Other examples where this comes up are the many ontologies proposed in Superintelligence (e.g. speed, collective, and quality forms of superintelligence) or the slide from Christiano's talk at EAG: https://imgur.com/a/GI9g3FI
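
As a hypothetical sketch of that move in Python (the question strings are my own glosses of the ITN axes, not an official formulation):

```python
# Treat an ontology as a set of filled-in answers, then recover the open
# questions ("blanks") each axis was answering. The question strings are
# my own glosses, not an official ITN definition.
ITN = {
    "Importance":    "how much better does the world get if we solve this?",
    "Tractability":  "how much progress do we get per unit of effort?",
    "Neglectedness": "how many resources are already aimed at this?",
}

def unparameterize(ontology: dict[str, str]) -> list[str]:
    """Return the open questions the named axes were answers to; each one
    is a blank you can brainstorm alternative parameterizations for."""
    return list(ontology.values())

for question in unparameterize(ITN):
    print(question)
```

The point is not the trivial code but the direction of travel: from named axes back to the questions they answer, where other carvings of the space become visible.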

You can also ask: if it turns out this ontology is correct, what has to be true about the world for that to be so? Or conversely, which distinctions/choices are overdetermined and seem to 'fall out of' the choice of structure, and which ones seem arbitrary? Or do they seem arbitrary, but you have a hard time thinking of what else could go there? (Promising to ponder!)

Authors are usually trying to be evocative with the naming, to point at the distinction they want to make, but if you've written stuff you know this is hard and you're often not happy with your best effort at naming, just like variable naming in programming. (Actually, this suggests another interesting frame: think of words as variables instead of fixed references. Human languages aren't type safe.)
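
To render that parenthetical as a toy illustration (my own example, not from the comment thread):

```python
# "Words as variables, not fixed references": the same name rebinds to
# referents of entirely different types with no complaint from the language.
# (A static checker like mypy would flag these rebindings; English has no
# such checker, so arguments can change a word's type mid-stream.)
cause = "constant conjunction"              # Cause as a regularity (a string)
cause = {"mechanism": "gears and levers"}   # Cause as a mechanism (a dict)
cause = lambda event: f"nearest counterfactual to {event}"  # Cause as a function

print(cause("the match lighting"))  # only the final binding survives
```

This is the earlier point about different senses of Cause, restated: nothing in natural language raises a type error when a word silently rebinds partway through an argument.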

A higher order example, to show that it doesn't need to be applied only at the level of words: consider the entire LessWrong website and rationalsphere as a blank. What sort of constraint is being satisfied there? There are probably a few dimensions; one interesting one (to me) is that there is surprisingly little scholarly work documenting *methods* in philosophy and phenomenology, so there's lots of low-hanging fruit to talk about.

I think something like this is what led Rawls to be able to characterize reflective equilibrium. (It might also have been related to thinking about fixed points in math.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-10-16T15:01:06.432Z · LW(p) · GW(p)

"So for each of Importance, Tractability, Neglectedness, you might ask: what kinds of things in reality are those trying to carve out? How else might you capture some of the same things?"

Ok, can we take this example further? What specific things in reality ARE those trying to carve out? What does your thought process look like to find those things? Then, when you do find those things, what do you do with them and what specific insights does that help with?

Replies from: romeostevensit
comment by romeostevensit · 2019-10-24T23:36:56.717Z · LW(p) · GW(p)

I was thinking about why it wouldn't be easy to answer this without writing a long response, and I realized it's because the concept hinges a lot on something I haven't written up yet about types of uncertainty [LW · GW].

So, a simpler example for now, until I post that. Consider Bostrom's ontology of types of superintelligence: speed, collective, quality. If we want more flexibility in thinking about this area, we can return to the questions that this ontology is an answer to ("what different kinds of superintelligence might exist?", "how might you differentiate between two superintelligences?") and treat these instead as brainstorming cues. With brainstorming you want to optimize for quantity of answers rather than quality, then do categorization afterwards. You might also try to figure out more forms of the question that the ontology might be an answer to.

The relation back to types of uncertainty is that you can ask about the questions and answers: what kind of uncertainty do we want to reduce by answering this question?

comment by romeostevensit · 2019-10-16T02:34:10.245Z · LW(p) · GW(p)

Another example would be a class of criticisms made of things like CFAR and Leverage. I see this class of criticisms as something like 'failing to consider that if you don't see people doing X in the world, that is evidence for the hypothesis that no one has tried X, but it is also evidence for the hypothesis that people who have tried X haven't become successful enough that you ever hear about them.'

(Also, this whole post would probably be simplified with a few diagrams.)

comment by Connor_Flexman · 2019-10-15T22:33:20.551Z · LW(p) · GW(p)

Really like this explanation, especially the third example and conclusion.

I feel like a similar mental move helps me understand and work with all sorts of not-yet-operationalized arguments in my head (or that other people make). If I think people are "too X", and I then think about what else I could have said there, it helps me triangulate what I actually mean. I think this is much faster and more resilient to ladder-of-abstraction mistakes (as you mention) than many operationalization techniques, like trying to put numbers on things.

I think my personal mental move is less like being aware of all the things I could have said, and more like being aware that the thing I was saying was a stand-in meant to imply lots of specific things that are implausible to articulate in their own form.

Replies from: romeostevensit
comment by romeostevensit · 2019-10-15T23:15:40.437Z · LW(p) · GW(p)

Or that the cloud of things all together feels plausible but each individual thing is implausible on its own. Related to bad arguments that have more moving parts than people have working memory slots, so debates go in circles.