Pattern-botching: when you forget you understand

post by MalcolmOcean (malcolmocean) · 2015-06-15T22:58:34.954Z

Contents

  Examples of pattern-botching
    Calmness and pretending to be a zen master
    Personality Types
    False aversions
  Taking the training wheels off of your model
    Rationality as a way of thinking
    Effective Altruism
    Creating a new (platform for) culture
  Conscious cargo-culting

It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story:

Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got in a tiny fight about something, and in a not-actually-desperate attempt to placate her, I semi-jokingly offered: “I’ll go vegetarian!”

“I don’t care,” she said with a sneer.

…and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

 

(I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can help you evade the hindsight bias of the answer seeming obvious.)

 

(Got one?)

 

Here's my take: I pattern-matched a bunch of actual preferences she had with a general "healthy-eating" cluster, and then I went and pulled out something random that felt vaguely associated. It's telling, I think, that I don't even explicitly believe that vegetarianism is healthy. But to my pattern-matcher, they go together nicely.

I'm going to call this pattern-botching.† Pattern-botching is when you pattern-match a thing "X" as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

†Maybe this already has a name, but I've read a lot of stuff and it feels like a distinct concept to me.

Examples of pattern-botching

So, that's pattern-botching, in a nutshell. Now, examples! We'll start with some simple ones.

Calmness and pretending to be a zen master

In my Againstness Training video, past!me tries a bunch of things to calm down. In the pursuit of "calm", I tried a handful of surface behaviours: things that looked vaguely calm, like pretending to be a zen master.

None of these are the desired state. The desired state is present and authentic, and from it you can project well while speaking assertively.

But that would require actually being in a different state, which to my brain at the time seemed hard. So my brain constructed a pattern around the target state, asked "what's easy and looks vaguely like this?", and generated those surface behaviours. Not as a list, of course! That would be too easy. It generated each one individually as a plausible course of action, which I then tried, and which Val then called me out on.

Personality Types

I'm quite gregarious, extraverted, and generally unflappable by noise and social situations. Many people I know describe themselves as HSPs (Highly Sensitive Persons) or as very introverted, or as "not having a lot of spoons". These concepts are related—or perhaps not related, but at least correlated—but they're not the same. And even if these three terms did all mean the same thing, individual people would still vary in their needs and preferences.

Just this past week, I found myself talking with an HSP friend L, and noting that I didn't really know what her needs were. Like I knew that she was easily startled by loud noises and often found them painful, and that she found motion in her periphery distracting. But beyond that... yeah. So I told her this, in the context of a more general conversation about her HSPness, and I said that I'd like to learn more about her needs.

L responded positively, and suggested we talk about it at some point. I said, "Sure," then added, "though it would be helpful for me to know just this one thing: how would you feel about me asking you about a specific need in the middle of an interaction we're having?"

"I would love that!" she said.

"Great! Then I suspect our future interactions will go more smoothly," I responded. I realized what had happened was that I had conflated L's HSPness with... something else. I'm not exactly sure what, but a preference for indirect communication, perhaps? I have another friend, who is also sometimes short on spoons, who I model as finding that kind of question stressful because it would kind of put them on the spot.

I've only just recently been realizing this, so I suspect that I'm still doing a ton of this pattern-botching with people in ways I haven't specifically noticed.

Of course, having clusters makes it easier to have heuristics about what people will do, without knowing them too well. A loose cluster is better than nothing. I think the issue is when we do know the person well, but we're still relying on this cluster-based model of them. It's telling that I was not actually surprised when L said that she would like it if I asked about her needs. On some level I kind of already knew it. But my botched pattern was making me doubt what I knew.

False aversions

CFAR teaches a technique called "Aversion Factoring", in which you try to break down the reasons why you don't do something, and then consider each reason. In some cases, the reasons are sound reasons, so you decide not to try to force yourself to do the thing. If not, then you want to make the reasons go away. There are three types of reasons, with different approaches.

One is for when you have a legitimate issue, and you have to redesign your plan to avert that issue. The second is where the thing you're averse to is real but isn't actually bad, and you can kind of ignore it, or maybe use exposure therapy to get yourself more comfortable with it. The third is... when the outcome would be an issue, but it's not actually a necessary outcome of the thing. As in, it's a fear that's vaguely associated with the thing at hand, but the thing you're afraid of isn't real.

All of these share a structural similarity with pattern-botching, but the third one in particular is a great example. The aversion is generated from a property that the thing you're averse to doesn't actually have. Unlike with a miscalibrated aversion (#2 above), it's usually pretty obvious under careful inspection that the fear itself is based on a botched model of the thing you're averse to.

Taking the training wheels off of your model

One other place this structure shows up is in the difference between what something looks like when you're learning it versus what it looks like once you've learned it. Many people learn to ride a bike while actually riding a four-wheeled vehicle: training wheels. I don't think anyone makes the mistake of thinking that the ultimate bike will have training wheels, but in other contexts it's much less obvious.

The remaining three examples look at how pattern-botching shows up in learning contexts, where people implicitly forget that they're only partway there.

Rationality as a way of thinking

CFAR runs 4-day rationality workshops, which are currently split evenly between specific techniques and how to approach things in general. Consider what kinds of behaviours spring to mind when someone encounters a problem and asks themselves: "what would be a rational approach to this problem?" Chances are, what springs to mind is the set of specific, visible techniques (the training wheels) rather than the underlying way of thinking.

In the case of a bike, we see hundreds of people biking around without training wheels, and so that becomes the obvious example from which we generalize the pattern of "bike". In other learning contexts, though, most people—including, sometimes, the people at the leading edge—are still in the early learning phases, so the training wheels are the rule, not the exception.

So people start thinking that the figurative bikes are supposed to have training wheels.

Incidentally, this can also be the grounds for strawman arguments where detractors of the thing say, "Look at these bikes [with training wheels]! How are you supposed to get anywhere on them?!"

Effective Altruism

We potentially see a similar effect with topics like Effective Altruism. It's a movement that is still in its infancy, which means that nobody has it all figured out. So when trying to answer "How do I be an effective altruist?" our pattern-matchers might pull up a bunch of examples of things that EA-identified people have been commonly observed to do.

...and this generated list might be helpful for various things, but be wary of thinking that it represents what Effective Altruism is. It's possible—it's almost inevitable—that we don't actually know what the most effective interventions are yet. We will potentially never actually know, but we can expect that in the future we will generally know more than at present. Which means that the current sampling of good EA behaviours likely does not actually even cluster around the ultimate set of behaviours we might expect.

Creating a new (platform for) culture

At my intentional community in Waterloo, we're building a new culture. But that's actually a by-product: our goal isn't to build this particular culture but to build a platform on which many cultures can be built. It's like how, as a company, you don't just want to be building the product but rather the company itself, or "the machine that builds the product," as Foursquare founder Dennis Crowley puts it.

What I started to notice, though, is that we had started to confuse the particular, transitional culture we have at our house with either (a) the particular target culture we're aiming for, or (b) the more abstract range of cultures that will be constructable on our platform.

So from a training wheels perspective, we might totally eradicate words like "should". I did this! It was really helpful. But once I had removed the word from my idiolect, it became unhelpful to keep treating it as a touchy word. Then I heard my mentor use it, and I remembered that the point of removing the word wasn't to never use it again, but to train my brain to think without a particular structure that "should" represented.

This shows up on much larger scales too. Val from CFAR was talking about a particular kind of fierceness, "hellfire", that he sees as fundamental and important, and he noted that it seemed to be incompatible with the kind of culture my group is building. I initially agreed with him, which was kind of dissonant for my brain, but then I realized that hellfire was only incompatible with our training culture, not the entire set of cultures that could ultimately be built on our platform. That is, engaging with hellfire would potentially interfere with the learning process, but it's not ultimately proscribed by our culture platform.

Conscious cargo-culting

I think it might be helpful to repeat the definition:

Pattern-botching is when you pattern-match a thing "X" as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

It's kind of like if you were doing a cargo-cult, except you knew how airplanes worked.

(Cross-posted from malcolmocean.com)

18 comments


comment by Shmi (shminux) · 2015-06-16T01:03:53.374Z

I love the catchy name "pattern botching". And it seems to describe rather well maybe 50% of all casual interactions. Cargo culting and correspondence bias seem to be examples of this general miscommunication. Also, this writeup seems good enough for Main.

Replies from: Lumifer
comment by Lumifer · 2015-06-16T02:11:41.990Z

Cargo cult is different -- it's not about (mis)communication at all, but rather about the inability (or unwillingness) to distinguish between outward, surface phenomena on the one hand and internal mechanisms and causal chains on the other.

But yeah, "pattern botching" is an excellent name.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2015-06-16T06:15:44.112Z

Glad you guys like the name. I spent quite a while and tried out some other ones before that one stuck.

I think that classical cargo-culting is indeed quite different from pattern-botching, but when you have people who know better doing something cargo-cult-like, then that's likely an instance of pattern-botching.

Replies from: Lumifer
comment by Lumifer · 2015-06-16T14:32:03.610Z

when you have people who know better doing something cargo-cult-like

Like Feynman's description of cargo-cult science? I suspect it's mostly the result of a particular set of incentives which are structured to reward the cargo-cult visible manifestations and ignore actual work.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2015-06-16T15:46:36.503Z

Yeah, I was thinking about that. I think lots of people (the students, for example) don't actually know better. But among those who do, yeah, I would say that the incentives are the cause of the object-level behaviour where they're not doing real science, and the pattern-botching is the mental process where they don't notice that they're not doing real science.

comment by MalcolmOcean (malcolmocean) · 2015-06-16T06:19:15.496Z

Ooh, I'm also noticing that this seems to be connected with this List of Nuances. Like, the nuances identify lots of dimensions along which you can pattern-match (and therefore pattern-botch).

comment by [deleted] · 2015-06-16T06:09:17.458Z

Stereotyping

The only distinction is the part about why you're forgetting, but I think that aspect of pattern-botching is speculative and I'm not sure why it's important enough that you have to come up with a whole new term.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2015-06-16T06:12:46.318Z

Yeah, stereotyping applies to the bits that are about people, but doesn't really apply to the other examples though I think.

Replies from: None
comment by [deleted] · 2015-06-16T06:36:37.729Z

The level above stereotyping is referred to as a schema. Failing to update a schema is called disconfirmation bias.

comment by Username · 2015-06-27T22:41:19.705Z

So from a training wheels perspective, we might totally eradicate words like "should". I did this! It was really helpful. But once I had removed the word from my idiolect, it became unhelpful to keep treating it as a touchy word. Then I heard my mentor use it, and I remembered that the point of removing the word wasn't to never use it again, but to train my brain to think without a particular structure that "should" represented.

I tried this as well. I spent a year not using words like "should" and "fault". I then went back to using them when I started my full-time job because those words are too useful, and I regressed, at least partially, to my old way of thinking.

Old patterns come back if you're not careful. The training wheel metaphor is likely counterproductive here.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2015-07-02T20:14:32.066Z

Ahh, I think that the context shift was probably a huge issue. I mostly hang out with people who also eschew "should"—either my intentional community or people who read blog posts like these ones of Nate's:

"Should" considered harmful
Your "shoulds" are not a duty
Not because you "should"

I think the training wheel metaphor isn't perfect... it's maybe more like this bicycle.

comment by 27chaos · 2015-06-18T18:03:38.993Z

Send this to main!

comment by eternal_neophyte · 2015-06-16T01:40:20.762Z

†Maybe this already has a name, but I've read a lot of stuff and it feels like a distinct concept to me.

"Misclassification"?

Edit: changed from "category error" which is in fact something completely different.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2015-06-16T06:13:25.778Z

Hmmm. Yeah. Although I guess maybe the issue is that system 1 has misclassified, whereas system 2 is fine...

Replies from: eternal_neophyte
comment by eternal_neophyte · 2015-06-16T06:38:55.928Z

Sounds like a failure of the representativeness heuristic then, which isn't quite so nice a phrase as "pattern botching".

comment by ChristianKl · 2015-06-16T08:38:20.532Z

…and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

(I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can help you evade the hindsight bias of the answer seeming obvious.)

Maybe you tried to strawman her argument?

Replies from: JenniferRM, malcolmocean
comment by JenniferRM · 2015-06-17T17:22:25.957Z

Plausibly, something like pattern botching is "where straw man tactics come from" or at least is causally related?

System 1 is pretty good at leaping to conclusions based on sparse data (often to a conclusion that implies a known solution). That is sort of what system 1 is for. But if it does so improperly (and the error is detected) it is likely to be named as an error. I think Malcolm is calling out the subset of such errors where not only is system 1 making a detectable error, but system 2 can trivially patch the problem.

A "straw man argument" is mostly referring to error in the course of debate where a weak argument is presumed by one's interlocutor. Why do they happen?

Maybe sometimes you're genuinely ignorant about your interlocutor's position and are wrongly assuming it is a dumb position. People normally argue "X, Y, Z" (where Z is a faulty conclusion) but you know Q which suggests (P & ~Z). So someone might say "X", and you say "but ~Z because Q!" and they say "of course Q, and also Y and P, where did Z come from, I haven't even said Z". And then you either admit to your mistaken assumption or get defensive and start accusing them of secretly believing Z.

The initial jump there might have had Bayesian justification because "X, Y, Z" could be a very common verbal trope. The general "guessing what people are about to say" process probably also tends to make most conversations more efficient. However, it wouldn't be pattern botching until you get defensive and insist on sticking to your system 1 prediction even after you have better knowledge.

"Sticking with system 1 when system 2 knows better" (ie pattern botching) might itself have a variety of situation dependent causes.

Maybe one of the causes is involved in the specific process of sticking with theories about one's interlocutor that are false and easy to defeat? Like maybe it is a social instinct, or maybe it works often enough in real discourse environments that it gets intermittent positive reinforcement?

If pattern botching happens at random (including sometimes in arguments about what the other person is trying to say), then it would be reasonable to treat pattern botching as a possible root cause.

If situation-dependent factors cause pattern botching on people's arguments a lot more than in other subject areas, it would be reasonable to say that pattern botching is not the root cause. In that case, the key thing to focus on is probably the situation-dependent causal factors. However, pattern botching might still be an intermediate cause, and the root causes might be very hard to fix while pattern botching can maybe be treated easily, and so attributing the cause to pattern botching might be functional from a pragmatic diagnostic perspective, depending on what your repair tools are like.

Personally, my suspicion is that dealing with people and ideas makes pattern botching much more likely.

comment by MalcolmOcean (malcolmocean) · 2015-06-16T15:47:47.275Z

I don't agree, but I'm gonna give you +1 for actually doing the thing :)