[Conversation Log] Compartmentalization

post by AdeleneDawner · 2011-07-30T00:51:06.630Z

(7:40:37 PM) handoflixue: Had an odd thought recently, and am trying to see if I understand the idea of compartmentalization.
(7:41:08 PM) handoflixue: I've always acted in a way, whereupon if I'm playing WOW, I roleplay an elf. If I'm at church, I roleplay a unitarian. If I'm on LessWrong, I roleplay a rationalist.
(7:41:31 PM) handoflixue: And for the most part, these are three separate boxes. My elf is not a rationalist nor a unitarian, and I don't apply the Litany of Tarski to church.
(7:41:49 PM) handoflixue: And I realized I'm *assuming* this is what people mean by compartmentalizing.
(7:42:11 PM) handoflixue: But I also had some *really* interesting assumptions about what people meant by religion and spiritual and such, so it's probably smart to step back and check ^^
(7:43:45 PM) Adelene: I'm actually not sure what's usually meant by the concept (which I don't actually use), but that's not the guess I came up with when you first asked, and I think mine works a little better.
(7:44:50 PM) handoflixue: Then I am glad I asked! :)
(7:45:24 PM) Adelene: My guess is something along the lines of this: Compartmentalizing is when one has several models of how the world works, which predict different things about the same situations, and uses arbitrary, social, or emotional methods rather than logical methods to decide which model to use where.
(7:46:54 PM) handoflixue: Ahhhh
(7:47:05 PM) handoflixue: So it's not having different models, it's being alogical about choosing a method?
(7:47:14 PM) Adelene: That's my guess, yes.
(7:47:37 PM) Adelene: I do think that it's specifically not just about having different behavioral habits in different situations.
(7:48:00 PM) Adelene: (Which is what I think you mean by 'roleplay as'.)
(7:49:21 PM) handoflixue: It's not *exactly* different situations, though. That's just a convenient reference point, and the process that usually develops new modes. I can be an elf on LessWrong, or a rationalist WOW player, too.
(7:49:53 PM) Adelene: Also, with regards to the models model, some models don't seem to be reliable at all from a logical standpoint, so it's fairly safe to assume that someone who uses such a model in any situation is compartmentalizing.
(7:50:34 PM) handoflixue: But the goddess really does talk to me during rites >.>;
(7:51:16 PM) Adelene: ...okay, maybe that's not the best wording of that concept.
(7:51:33 PM) handoflixue: It's a concept I tend to have trouble with, too, I'll admit
(7:51:36 PM) handoflixue: I... mmm.
(7:51:56 PM) handoflixue: Eh :)
(7:52:18 PM) Adelene: I'm trying to get at a more 'mainstream christianity model' type thing, with that - most Christians I've known don't actually expect any kind of feedback at all from God.
(7:53:00 PM) Adelene: Whereas your model at least seems to make some useful predictions about your mindstates in response to certain stimuli.
(7:53:20 PM) handoflixue: .. but that would be stupid >.>
(7:53:26 PM) Adelene: eh?
(7:53:50 PM) handoflixue: If they don't ... get anything out of it, it would be stupid to do it o.o
(7:54:11 PM) Adelene: Oh, Christians? They get social stuff out of it.
(7:54:35 PM) handoflixue: *nods* So... it's beneficial.
(7:54:46 PM) Adelene: But still compartment-ey.
(7:55:10 PM) Adelene: I listed 'social' in the reasons one might use an illogical model on purpose. :)
(7:55:25 PM) handoflixue: Hmmmm.
(7:56:05 PM) handoflixue: I wish I knew actual Christians I could ask about this ^^;
(7:56:22 PM) Adelene: They're not hard to find, I hear. ^.-
(7:56:27 PM) handoflixue: ... huh
(7:56:42 PM) handoflixue: Good point.
(7:57:12 PM) Adelene: Possibly of interest: I worked in a Roman Catholic nursing home - with actual nuns! - for four years.
(7:57:25 PM) handoflixue: Ooh, that is useful :)
(7:57:38 PM) handoflixue: I'd rather bug someone who doesn't seem to object to my true motives :)
(7:58:00 PM) Adelene: Not that I talked to the nuns much, but there were some definite opportunities for information-gathering.
(7:58:27 PM) handoflixue: Mostly, mmm...
(7:58:34 PM) handoflixue: http://lesswrong.com/lw/1mh/that_magical_click/ Have you read this article?
(7:58:52 PM) Adelene: Not recently, but I remember the gist of it.
(7:59:05 PM) handoflixue: I'm trying to understand the idea of a mind that doesn't click, and I'm trying to understand the idea of how compartmentalizing would somehow *block* that.
(7:59:15 PM) handoflixue: I dunno, the way normal people think baffles me
(7:59:28 PM) Adelene: *nodnods*
(7:59:30 PM) handoflixue: I assumed everyone was playing a really weird game until, um, a few months ago >.>
(7:59:58 PM) Adelene: heh
(8:00:29 PM) Adelene: *ponders not-clicking and compartmentalization*
(8:00:54 PM) handoflixue: It's sort of... all the models I have of people make sense.
(8:00:58 PM) handoflixue: They have to make sense.
(8:01:22 PM) handoflixue: I can understand "Person A is Christian because it benefits them, and the cost of transitioning to a different state is unaffordably high, even if being Atheist would be a net gain"
(8:01:49 PM) Adelene: That's seriously a simplification.
(8:02:00 PM) handoflixue: I'm sure it is ^^
(8:02:47 PM) handoflixue: But that's a model I can understand, because it makes sense. And I can flesh it out in complex ways, such as adding the social penalty that goes into thinking about defecting, and the ick-field around defecting, and such. But it still models out about that way.
(8:02:58 PM) Adelene: Relevantly, they don't know what the cost of transition actually would be, and they don't know what the benefit would be.
(8:04:51 PM) handoflixue: Mmmm... really?
(8:05:03 PM) handoflixue: I think most people can at least roughly approximate the cost-of-transition
(8:05:19 PM) handoflixue: ("Oh, but I'd lose all my friends! I wouldn't know WHAT to believe anymore")
(8:05:20 PM) Adelene: And also I think most people know on some level that making a transition like that is not really voluntary in any sense once one starts considering it - it happens on a pre-conscious level, and it either does or doesn't without the conscious mind having much say in it (though it can try to deny that the change has happened). So they avoid thinking about it at all unless they have a really good reason to.
(8:05:57 PM) handoflixue: There may be ways for them to mitigate that cost, that they're unaware of ("make friends with an atheist programmers group", "read the metaethics sequence"), but ... that's just ignorance and that makes sense ^^
(8:06:21 PM) Adelene: And what would the cost of those cost-mitigation things be?
(8:07:02 PM) handoflixue: Varies based on whether the person already knows an atheist programmers group I suppose? ^^
(8:07:26 PM) Adelene: Yep. And most people don't, and don't know what it would cost to find and join one.
(8:07:40 PM) handoflixue: The point was more "They can't escape because of the cost, and while there are ways to buy-down that cost, people are usually ignor...
(8:07:41 PM) handoflixue: Ahhhh
(8:07:42 PM) handoflixue: Okay
(8:07:44 PM) handoflixue: Gotcha
(8:07:49 PM) handoflixue: Usually ignorant because *they aren't looking*
(8:08:01 PM) handoflixue: They're not laying down escape routes
(8:08:24 PM) Adelene: And why would they, when they're not planning on escaping?
(8:09:28 PM) handoflixue: Because it's just rational to seek to optimize your life, and you'd have to be stupid to think you're living an optimum life?
(8:10:13 PM) Adelene: uhhhh.... no, most people don't think like that, basically at all.
(8:10:30 PM) handoflixue: Yeah, I know. I just don't quite understand why not >.>
(8:10:54 PM) handoflixue: *ponders*
(8:11:02 PM) handoflixue: So compartmentalization is sorta... not thinking about things?
(8:11:18 PM) Adelene: That's at least a major symptom, yeah.
(8:11:37 PM) handoflixue: Compartmentalization is when model A is never used in situation X
(8:12:17 PM) handoflixue: And, often, when model A is only used in situation Y
(8:12:22 PM) Adelene: And not because model A is specifically designed for situations of type Y, yes.
(8:12:39 PM) handoflixue: I'd rephrase that to "and not because model A is useless for X"
(8:13:06 PM) Adelene: mmm...
(8:13:08 PM) handoflixue: Quantum physics isn't designed as an argument for cryonics, but eliezer uses it that way.
(8:13:14 PM) Adelene: hold on a sec.
(8:13:16 PM) handoflixue: Kay
(8:16:01 PM) Adelene: The Christian model claims to be useful in lots of situations where it's observably not. For example, a given person's Christian model might say that if they pray, they'll have a miraculous recovery from a disease. Their mainstream-society-memes model, on the other hand, says that going to see a doctor and getting treatment is the way to go. The Christian model is *observably* basically useless in that situation, but I'd still call that compartmentalization if they went with the mainstream-society-memes model but still claimed to primarily follow the Christian one.
(8:16:46 PM) handoflixue: Hmmm, interesting.
(8:16:51 PM) handoflixue: I always just called that "lying" >.>
(8:17:05 PM) handoflixue: (At least, if I'm understanding you right: They do X, claim it's for Y reason, and it's very obviously for Z)
(8:17:27 PM) handoflixue: (Lying-to-self quite possibly, but I still call that lying)
(8:18:00 PM) Adelene: No, no - in my narrative, they never claim that going to a doctor is the Christian thing to do - they just never bring Christianity up in that context.
(8:19:15 PM) handoflixue: Ahhh
(8:19:24 PM) handoflixue: So they're being Selectively Christian?
(8:19:27 PM) Adelene: Yup.
(8:19:37 PM) handoflixue: But I play an elf, and an elf doesn't invest in cryonics.
(8:20:09 PM) handoflixue: So it seems like that's just... having two *different* modes.
(8:20:40 PM) Adelene: I don't think that's intrinsically a problem. The question is how you pick between them.
(8:22:08 PM) handoflixue: Our example Christian seems to be picking sensibly, though.
(8:22:11 PM) Adelene: In the contexts that you consider 'elfy', cryonics might actually not make sense. Or it might be replaced by something else - I bet your elf would snap up an amulet of ha-ha-you-can't-kill-me, fr'ex.
(8:22:26 PM) handoflixue: Heeeh :)
(8:28:51 PM) Adelene: About the Christian example - yes, in that particular case they chose the model for logical reasons - the mainstream model is the logical one because it works, at least reasonably well. It's implied that the person will use the Christian model at least sometimes, though. Say for example they wind up making poor financial decisions because 'God will provide', or something.
(8:29:48 PM) handoflixue: Heh ^^;
(8:29:55 PM) handoflixue: Okay, yeah, that one I'm guilty of >.>
(8:30:05 PM) handoflixue: (In my defense, it keeps *working*)
(8:30:10 PM) Adelene: (I appear to be out of my depth, now. Like I said, this isn't a concept I use. I haven't thought about it much.)
(8:30:22 PM) handoflixue: It's been helpful to define a model for me.
(8:30:33 PM) Adelene: ^^
(8:30:50 PM) handoflixue: The idea that the mistake is not in having separate models, but in the application or lack thereof.
(8:31:07 PM) handoflixue: Sort of like how I don't use quantum mechanics to do my taxes.
(8:31:14 PM) handoflixue: Useful model, wrong situation, not compartmentalization.
(8:31:28 PM) Adelene: *nods*
(8:32:09 PM) handoflixue: So, hmmmm.
(8:32:18 PM) handoflixue: One thing I've noticed in life is that having multiple models is useful
(8:32:32 PM) handoflixue: And one thing I've noticed with a lot of "rationalists" is that they seem not to follow that principle.
(8:33:15 PM) handoflixue: Does that make sense
(8:33:24 PM) Adelene: *nods*
(8:34:13 PM) Adelene: That actually feels related.
(8:35:03 PM) Adelene: People want to think they know how things work, so when they find a tool that's reasonably useful they tend to put more faith in it than it deserves.
(8:35:39 PM) Adelene: Getting burned a couple times seems to break that habit, but sufficiently smart people can avoid that lesson for a surprisingly long time.
(8:35:55 PM) Adelene: Well, sufficiently smart, sufficiently privileged people.
(8:37:15 PM) handoflixue: Heeeh, *nods*
(8:37:18 PM) handoflixue: I seem to ... I dunno
(8:37:24 PM) handoflixue: I grew up on the multi-model mindset.
(8:37:41 PM) handoflixue: It's... a very odd sort of difficult to try and comprehend that other people didn't...
(8:37:47 PM) Adelene: *nods*
(8:38:47 PM) Adelene: A lot of people just avoid things where their preferred model doesn't work altogether. I don't think many LWers are badly guilty of that, but I do suspect that most LWers were raised by people who are.
(8:39:16 PM) handoflixue: Mmmmm...
(8:39:38 PM) handoflixue: I tend to get the feeling that the community-consensus has trouble understanding "but this model genuinely WORKS for a person in this situation"
(8:39:58 PM) handoflixue: With some degree of... just not understanding that ideas are resources too, and they're rather privileged there and in other ways.
(8:40:16 PM) Adelene: That is an interesting way of putting it and I like it.
(8:40:31 PM) handoflixue: Yaaay :)
(8:40:40 PM) Adelene: ^.^
(8:41:01 PM) Adelene: Hmm
(8:41:18 PM) Adelene: It occurs to me that compartmentalization might in a sense be a social form of one-boxing.
(8:41:41 PM) handoflixue: Heh! Go on :)
(8:42:01 PM) Adelene: "For signaling reasons, I follow model X in situation-class Y, even when the results are sub-optimal."
(8:42:59 PM) handoflixue: Hmmmm.
(8:43:36 PM) handoflixue: Going back to previous, though, I think compartmentalization requires some degree of not being *aware* that you're doing it.
(8:43:47 PM) Adelene: Humans are good at that.
(8:43:48 PM) handoflixue: So... what you said, exactly, but on a subconscious level
(8:43:53 PM) Adelene: *nodnods*
(8:44:00 PM) Adelene: I meant subconsciously.

20 comments

comment by rwallace · 2011-07-30T05:39:06.091Z

The most compact explanation of compartmentalization I've come up with is:

  1. You are shown a convincing argument for the proposition that paraquat weedkiller is an effective health tonic.

  2. You believe the argument, you want to be healthy, and you can't think of any logical reason you shouldn't act on your belief.

  3. Nonetheless, you fail to actually drink any paraquat weedkiller.

Sure, there was in fact a flaw somewhere in the argument. But evolution wasn't able to give us brains that are infinitely good at spotting flaws in arguments. The best it could come up with was brains that do not in fact think, say and do all the things they logically should based on their declaratively held beliefs.

Replies from: handoflixue, TheOtherDave
comment by handoflixue · 2011-07-30T07:35:12.270Z

That one strikes me mostly as being a subconscious standard of proof that says "it requires X level of confidence to accept an argument as convincing, but X+n confidence before I am so convinced as to do something that I intuitively expect to kill me."

It also strikes me as eminently sensible, and thus doesn't do well at illustrating compartmentalisation as a bad thing :)

Replies from: orthonormal
comment by orthonormal · 2011-08-02T13:52:38.047Z

It was explicitly written to show the evolutionary benefit of compartmentalization, not the problems it causes. Heuristics endure because (in the ancestral environment) they worked better on average than their alternatives.

comment by TheOtherDave · 2011-07-30T11:56:12.018Z

While not disagreeing with the main thrust here, thinking of compartmentalization as something explicitly selected for may be misleading.

Not everything in an evolved system is explicitly selected for, after all. And it seems at least equally plausible to me that compartmentalization is a side-effect of a brain that incorporates innumerable mechanisms, each of which evolved independently, for deriving cognitive outputs (including beliefs, assertions, behaviors, etc.) from inputs.

That is, if each mechanism is of selection-value to its host, and the combination of them is at least as valuable as the sum of them individually, then a genome that happens to combine those mechanisms is selected for. If the additional work to reconcile those mechanisms (in the sense we're discussing here) either isn't cost-effective, or just didn't happen to get generated in the course of random mutation, then a genome that reconciles them doesn't get selected for.

comment by CronoDAS · 2011-07-30T07:55:56.083Z

One reduction of "compartmentalization" is "failure to notice, or act on, inconsistent beliefs and/or desires".

For example, in G.E.B., Douglas Hofstadter describes a situation in which he found himself bored while driving, so he attempted to turn on his broken radio. Another time, he and his wife, traveling in Paris, decided that they wanted a hotel room on the opposite side of the building from the American embassy because they were concerned about terrorism, but when asked if they wanted a room with a better view of the embassy gardens, they accepted it - and only later realized what they had ended up doing.

Hofstadter goes on to state that the question "Do my beliefs imply a contradiction?" is NP-hard even in the restricted case in which all your beliefs can be expressed as statements in propositional logic; as you add more beliefs to a system, the number of ways to potentially deduce a contradiction goes up exponentially - and, therefore, some form of compartmentalization is mathematically inevitable, because nothing can do the 2^zillion computations necessary to check all the beliefs in a human's head for consistency.
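
To make the combinatorics concrete, here's a minimal brute-force sketch (my own toy encoding, not anything from G.E.B. or the post): each belief is a Boolean function over n propositions, and the fully general way to check joint consistency is to try all 2^n truth assignments.

```python
from itertools import product

def consistent(beliefs, n_vars):
    """Brute-force satisfiability check: True if some truth assignment
    over n_vars propositions makes every belief come out true."""
    for assignment in product([True, False], repeat=n_vars):  # 2^n assignments
        if all(belief(assignment) for belief in beliefs):
            return True
    return False

# Toy beliefs over propositions p0, p1, p2:
#   "p0", "p0 implies p1", "p1 implies p2"
beliefs = [
    lambda a: a[0],
    lambda a: (not a[0]) or a[1],
    lambda a: (not a[1]) or a[2],
]
print(consistent(beliefs, 3))  # True: p0 = p1 = p2 = True works

# Add "not p2"; the contradiction only shows up when all four
# beliefs are checked together.
beliefs.append(lambda a: not a[2])
print(consistent(beliefs, 3))  # False
```

With a few hundred independent propositions instead of three, the loop would need more iterations than anyone gets to run, which is the sense in which perfect consistency-checking is off the table.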

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-07-30T08:32:05.289Z

Of course you get to choose which beliefs go into your head to begin with. You can avoid contradictions by only putting in beliefs that are logically implied by ones you already have, for example.

Replies from: Vaniver, lucidfox
comment by Vaniver · 2011-07-30T16:43:48.597Z

Of course you get to choose which beliefs go into your head to begin with.

What? Were you ever a child?

comment by lucidfox · 2011-07-30T10:10:31.752Z

If you only accept beliefs that are implied by your existing ones, you'll never believe anything new. And as such, you'll stop updating your beliefs.

Replies from: shokwave, Oscar_Cunningham
comment by shokwave · 2011-07-30T14:49:24.798Z

Not necessarily. If you slowly develop towards logical omniscience, you'll only accept beliefs implied by your existing ones, but you will believe some new things. You will update your beliefs on new implications from current beliefs rather than evidence, sure, but that's not such a weird concept - it ran strongly through the schools of analytic philosophy for a long time.
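
A tiny sketch of what that looks like (my own illustrative encoding, with made-up proposition names): starting from nothing but currently held beliefs and implications among them, forward chaining still produces beliefs that are new to the believer, with no new evidence involved.

```python
def deductive_closure(facts, rules):
    """Forward chaining over Horn-style rules (premises -> conclusion):
    keep adding implied beliefs until nothing new is produced."""
    beliefs = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)  # a belief not held before
                changed = True
    return beliefs

facts = {"socrates_is_a_man"}
rules = [
    (("socrates_is_a_man",), "socrates_is_mortal"),
    (("socrates_is_mortal",), "socrates_will_die"),
]
print(deductive_closure(facts, rules))
# {'socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die'}
```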

comment by Oscar_Cunningham · 2011-07-31T14:32:35.905Z

Right, which is why I said "for example". My point is simply that there are many fewer contradictions between our beliefs than CronoDAS' comment would suggest, since our beliefs are somewhat formed by processes that make them coherent.

comment by Nisan · 2011-07-30T07:03:42.267Z

User:AdeleneDawner talks like User:MBlume.

Replies from: shokwave, AdeleneDawner, Alicorn
comment by shokwave · 2011-07-30T14:49:53.953Z

Nisan talks like Clippy.

comment by AdeleneDawner · 2011-07-30T07:22:07.459Z

That probably needs a "when talking to handoflixue" qualifier for maximum accuracy, but I'll take it as a compliment in any case. ^.^

comment by Alicorn · 2011-07-30T17:40:45.554Z

I'm not seeing it. What prompts you to spot this resemblance?

Replies from: Nisan
comment by Nisan · 2011-07-30T19:35:17.127Z

Just *nodnods* and ^^. Actually, handoflixue uses the latter idiom as well.

comment by lucidfox · 2011-07-30T04:32:48.465Z

whereupon if I'm playing WOW, I roleplay an elf. <...> If I'm on LessWrong, I roleplay a rationalist.

Or you can roleplay a rationalist elf in WoW. :)

A long time ago, back before I quit WoW, I roleplayed an atheist draenei who refused to believe in the night elf goddess Elune. The catch here is that we players know she actually exists in the setting, because Blizzard told us so, but the characters would have no way of verifying this since she never appeared in the world in person. From my character's point of view, the magical powers that priests of Elune attributed to their goddess were actually (unknown to them) given to them by other, non-personified sources of power, the kind followed by other priests in the setting.

Replies from: handoflixue
comment by handoflixue · 2011-07-30T06:49:34.787Z

Heeeh, very cute! My first foray into fantasy had a God of Atheists, who derived power from non-belief in other Gods :)

comment by Dreaded_Anomaly · 2011-07-30T03:07:55.359Z

In the interest of not reinventing the wheel, I'll just link to Taking Ideas Seriously, which is mostly about "failure to compartmentalize" but in service of that ends up illustrating compartmentalization fairly well, too.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T10:32:16.808Z

I think I did a pretty terrible job of writing that post, even if the spirit is roughly correct. Anna's followup posts were much better written. Here's the most relevant one.

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-07-30T17:41:10.100Z

I hadn't seen that, thanks!