Compartmentalization as a passive phenomenon

post by Kaj_Sotala · 2010-03-26T13:51:08.199Z · LW · GW · Legacy · 72 comments

We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's "clicking", was due to a "failure to compartmentalize". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:

If a pen is dropped on a moon, will it:
A) Float away
B) Float where it is
C) Fall to the surface of the moon

Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.

A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of them got the question correct, but what was interesting was that those who got it wrong were asked a follow-up question: "You've seen films of the APOLLO astronauts walking around on the Moon, why didn't they fall off?" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, "Because they were wearing heavy boots".

While these articles described totally unscientific surveys, it doesn't seem to me like this would be the result of an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, but quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point in that? Rather, it seems to indicate that my "compartmentalization" was simply a lack of a connection, and that such connections are much harder to draw than we might assume.

The world is a complicated place. One of the reasons we don't have AI yet is that we haven't found very many reliable cross-domain reasoning rules. Reasoning algorithms in general are quickly subject to a combinatorial explosion: the reasoning system might know which potential inferences are valid ones, but not which ones are meaningful in any useful sense. Most current-day AI systems need to be more or less fine-tuned or rebuilt entirely when they're made to reason in a domain they weren't originally built for.

For humans, it can be even worse than that. Many of the basic tenets in a variety of fields are counter-intuitive, or are intuitive but have counter-intuitive consequences. The universe isn't actually fully arbitrary, but for somebody who doesn't know how all the rules add up, it might as well be. Think of all the times when somebody has tried to reason using surface analogies, mistaking them for deep causes; or dismissed a deep cause, mistaking it for a surface analogy. Somebody might present us with a connection between two domains, but we have no sure way of testing the validity of that connection.

Much of our reasoning, I suspect, is actually pattern recognition. We initially have no idea of the connection between X and Y, but then we see X and Y occur frequently together, and we begin to think of the connection as an "obvious" one. For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible to me, because I haven't had enough exposure to formal physics to hammer in the right intuition.

I suspect that often when we say "(s)he's compartmentalizing!", we're operating in a domain that's more familiar to us, and thus it feels like an active attempt to keep things separate must be the cause. After all, how could they not see it, were they not actively keeping it compartmentalized?

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible. If you don't know which cross-domain rules and reasoning patterns are valid, then building up a separate set of rules for each domain is the safe approach. Discarding as much of your previous knowledge as possible when learning about a new thing is slow, but it at least guarantees that you're not polluted by existing incorrect information. Build your theories primarily on evidence found from a single domain, and they will be true within that domain. While there can certainly also be situations calling for an active process of compartmentalization, that might only happen in a minority of the cases.

72 comments

Comments sorted by top scores.

comment by CronoDAS · 2010-03-26T19:46:47.782Z · LW(p) · GW(p)

GEB has a section on this.

In order to not compartmentalize, you need to test whether your beliefs are all consistent with each other. If your beliefs are all statements in propositional logic, consistency checking becomes the Boolean Satisfiability Problem, which is NP-complete. If your beliefs are statements in first-order predicate logic, consistency checking becomes outright undecidable, which is even worse than NP-complete.

Not compartmentalizing isn't just difficult, it's basically impossible.
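
[Editor's note: to make the blow-up concrete, here is a minimal sketch (mine, not CronoDAS's) of what exhaustive consistency checking means in the propositional case: try every truth assignment and see whether any satisfies all beliefs at once. The toy beliefs are hypothetical placeholders.]

    from itertools import product

    def consistent(beliefs, variables):
        """Brute-force consistency check: do the beliefs admit any model?

        Each belief is a function from an assignment (a dict of bools) to bool.
        Runtime is O(2^n) in the number of variables -- the combinatorial
        explosion that makes exhaustive checking intractable.
        """
        return any(
            all(belief(dict(zip(variables, values))) for belief in beliefs)
            for values in product([False, True], repeat=len(variables))
        )

    # Toy beliefs over two propositions p ("pens fall on the moon") and
    # a ("astronauts stayed on the moon"):
    beliefs = [
        lambda m: m["p"],                   # p
        lambda m: (not m["a"]) or m["p"],   # a implies p
        lambda m: m["a"],                   # a
    ]
    print(consistent(beliefs, ["p", "a"]))                           # True: a model exists
    print(consistent(beliefs + [lambda m: not m["p"]], ["p", "a"]))  # False: adding "not p" breaks it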

Replies from: Strange7, BenAlbahari, wedrifid, RobinZ, BenAlbahari
comment by Strange7 · 2010-03-26T19:56:58.673Z · LW(p) · GW(p)

Reminds me of the opening paragraph of The Call of Cthulhu.

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

comment by BenAlbahari · 2010-03-29T10:23:26.236Z · LW(p) · GW(p)

Glenn Beck:

When I sobered up I started looking at all of the things that I believed in and decided to take everything out and only put the things back in me that I knew to be true... and then I would put it back in and then I would look at all the other things that I found were true and then I would match them and if one of them didn't fit with the other then one of them had to be wrong. (source)

P.S. The trick is to use bubble sort.

comment by wedrifid · 2010-03-27T03:34:40.775Z · LW(p) · GW(p)

It took me several seconds to guess that GEB refers to Gödel, Escher, Bach.

Replies from: CronoDAS
comment by CronoDAS · 2010-03-27T05:14:09.206Z · LW(p) · GW(p)

Sorry about that!

comment by RobinZ · 2010-03-26T20:18:20.843Z · LW(p) · GW(p)

I agree, save that I think Academian's proposal should be applied and "compartmentalizing" replaced with "clustering". "Compartmentalization" is a more useful term when restricted to describing the failure mode.

comment by BenAlbahari · 2010-03-27T00:55:49.089Z · LW(p) · GW(p)

Could I express what you said as:

A person is in the predicament of:

1) having a large number of beliefs
2) the mathematically impossible challenge of validating those beliefs for consistency

Therefore:

3) It is impossible to not compartmentalize

This leads to a few questions:

  • Is it still valuable to reduce, albeit not eliminate, compartmentalization?
  • Is there a fast method to rank how impactful a belief is to my belief system, in order to predict whether an expensive consistency check is worthwhile?
  • Is it possible to arrive at a (mathematically tractable) small core set of maximum-impact beliefs that are consistent? (the goal of extreme rationality?)
  • Does probabilistic reasoning change how we answer these questions?
Replies from: bogus, CronoDAS
comment by bogus · 2010-03-27T01:32:37.549Z · LW(p) · GW(p)

Does probabilistic reasoning change how we answer these questions?

Edwin Jaynes discusses "lattice" theories of probability where propositions are not universally comparable in appendix A of Probability Theory: The Logic of Science. Following Jaynes's account, probability theory would correspond to a uniformly dense lattice, whereas a lattice with very sparse structure and a few dense regions would correspond to compartmentalized beliefs.

comment by CronoDAS · 2010-03-29T04:34:39.770Z · LW(p) · GW(p)

Yes, that's basically right.

As for those questions, I don't know the answers either.

Replies from: BenAlbahari
comment by BenAlbahari · 2010-03-29T06:09:32.791Z · LW(p) · GW(p)

Rationalism is faith to you then?

[EDIT: An explanation is below that I should have provided in this comment; obviously when I made the comment I assumed people could read my mind; I apologize for my transparency bias]

Replies from: CronoDAS
comment by CronoDAS · 2010-03-29T06:21:48.872Z · LW(p) · GW(p)

I'm not sure what you mean...

Replies from: BenAlbahari
comment by BenAlbahari · 2010-03-29T07:31:47.270Z · LW(p) · GW(p)

Is it still valuable to reduce, albeit not eliminate, compartmentalization?

Compartmentalization is an enemy of rationalism. If we are going to say that rationalism is worthwhile, we must also say that reducing compartmentalization is worthwhile. But that argument only scratches the surface of the problem you eloquently pointed out.

Is there a fast method to rank how impactful a belief is to my belief system...

Mathematically, we have a mountain of beliefs that need processing with something better than brute force. We have to be able to quickly identify how impactful beliefs are to our belief system, and focus our rational efforts on those beliefs. (Otherwise we're wasting our time processing only a tiny randomly chosen part of the mountain.)

Is it possible to arrive at a (mathematically tractable) small core set of maximum-impact beliefs that are consistent? (the goal of extreme rationality?)

Rationality, if it's actually useful, should provide us with at least a small set of consistent and maximally impactful beliefs. We have not escaped compartmentalization of all our beliefs, but at least we have chosen the most impactful compartment within which we have consistency.

Does probabilistic reasoning change how we answer these questions?

Finally, if we can't perfectly process our mountain of beliefs, then at least we can imperfectly process that mountain. Hence the need for probabilistic reasoning.

To summarize, I want to be able to answer "yes" to all of these questions, to justify the endeavor of rationalism. The problem is that, like you, my answer for each is "I don't know". For this reason, I accept my rationalism is just faith, or perhaps less pejoratively, intuition (though we're talking rationality here, right?).

comment by reaver121 · 2010-03-26T15:41:23.163Z · LW(p) · GW(p)

If I understand you correctly, you are saying that most people are not knowledgeable enough about the different domains in question to make (or judge) any cross-domain connections. This seems plausible.

I can think, however, of another argument that supports this but also clarifies why on Less Wrong we think that people actively compartmentalize instead of failing to make the connection: selection bias. Most people on this site are scientists, programmers or in other technical professions. It seems that most are also consequentialists. Not surprisingly, both these facts point to people who enjoy following a chain of logic all the way to the end.

So, we tend to learn a field until we know its basic principles. For example, if you learn about gravity, you can learn just enough to calculate the falling speed of an object in a gravitational field, or you can learn about the bending of space-time by mass. It seems rather obvious to me that the second method encourages cross-domain connections. If you don't know the basic underlying principles of the domains, you can't make connections.

I also see this all the time when I teach someone how to use computers. Some people build an internal model of how a computer & programs conceptually work and are then able to use most basic programs. Others learn by memorizing each step, looking at each program as a domain on its own instead of generalizing across all programs.

Replies from: Academian
comment by Academian · 2010-03-26T20:52:20.251Z · LW(p) · GW(p)

One of the reasons I'm in favor of axiomatization in mathematics is that it prevents compartmentalization and maintains a language (set theory) for cross-domain connections. It doesn't have to be about completeness.

So yeah, thumbs up for foundations-encourage-connections... they are connections :)

Replies from: wnoise
comment by wnoise · 2010-03-26T20:58:09.511Z · LW(p) · GW(p)

I basically agree, but I'd advocate category theory as a much better base language than set theory.

comment by Scott Alexander (Yvain) · 2010-03-27T12:29:50.302Z · LW(p) · GW(p)

I wonder if there'd be a difference between the survey as written (asking what a pen would do on the moon, and then offering a chance to change the answer based on Apollo astronauts) vs. a survey in which someone asked "Given that the Apollo astronauts walked on the moon, what do you think would have happened if they'd dropped a pen?"

The first method makes someone commit to a false theory, and then gives them information that challenges the theory. People could passively try to fit the astronaut datum into their current working theory, or they could actively view it as an outside attack on their position which they had to defend against. Maybe if the students had given people the information about the astronauts first, the respondents would have applied the cross-domain knowledge more successfully.

But I totally sympathize with you about the occasional virtues of compartmentalization. The worst field I've ever found for this is health and medicine. You learn that some vitamin is an antioxidant, then you learn that some disease is caused by oxidation, you make the natural assumption that the vitamin would help cure the disease, and then a study comes out saying there's no relationship at all.

comment by pjeby · 2010-03-27T00:14:50.137Z · LW(p) · GW(p)

But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

Look at it this way: what evolutionary pressure exists for NOT compartmentalizing?

From evolution's standpoint, if two of your beliefs really need to operate at the same time, then the stimuli will be present in your environment at close enough to the same time to get them both activated, and that's good enough for passive consistency checking. For active consistency checking, we have simple input filters for rejecting stuff that conflicts with important signaling beliefs and whatnot.

OTOH, there's no evolutionary pressure for something that sifts through your entire brain contents, generating arbitrary scenarios where two pieces of information might conflict or produce some startlingly new and useful idea.

Replies from: wedrifid
comment by wedrifid · 2010-03-27T03:31:36.704Z · LW(p) · GW(p)

For active consistency checking, we have simple input filters for rejecting stuff that conflicts with important signaling beliefs and whatnot.

And, as the situation demands, not rejecting stuff even though it conflicts with important signalling beliefs.

comment by sketerpot · 2010-03-28T19:32:53.993Z · LW(p) · GW(p)

I think part of the problem with the moon question was that it suggested two wrong answers first. How would you have answered the question if it was just "If a pen is dropped on the moon, what will happen? Explain in one sentence."

I would have shrugged and said "It will fall down, slowly." But when I saw "float away" and "float where it is", those ideas wormed their way into my head for a few seconds before I could reject them. Just suggesting those ideas managed to mess me up, and I'm someone whose mental model of motion in space is so strong that I damn near cried with joy when I watched Planetes and saw people maneuvering in zero gravity exactly the way they're supposed to. (And it turns out I'm not the only one to have this exact same reaction. Weird.)

So, I'm thinking that the wrong multiple-choice answers are responsible for a lot of the confusion, the same way most people wouldn't interpret bumps in the night as angry ghosts unless they hear that the house is haunted.

comment by jimrandomh · 2010-03-26T15:47:53.926Z · LW(p) · GW(p)

Yes, building mental connections between domains requires well-populated maps for both of them, plus significant extra processing. It's more properly treated as a skill which needs development than a cognitive defect. In the pen-on-the-moon example, knowing that astronauts can walk around is not enough to infer that a pen will fall; you also have to know that gravity is multiplicative rather than a threshold effect. And it certainly doesn't help that most people's knowledge of non-Earth gravity comes entirely from television, where, since zero-gravity filming is impractical, the writers invariably come up with some sort of confusing phlebotinum (most commonly magnetic boots) to make them behave more like regular-gravity environments.
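
[Editor's note: a sketch of the "multiplicative, not threshold" point (the masses below are my own round numbers): in F = GMm/r^2 the object's mass m cancels out of the acceleration, so a pen and a booted astronaut fall identically.]

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    M_MOON = 7.342e22   # mass of the Moon, kg
    R_MOON = 1.7371e6   # mean radius of the Moon, m

    def surface_acceleration(m_object):
        """a = F/m = (G * M * m / r^2) / m -- the object's mass cancels."""
        force = G * M_MOON * m_object / R_MOON**2
        return force / m_object

    print(surface_acceleration(0.01))   # 10 g pen:                 ~1.62 m/s^2
    print(surface_acceleration(150.0))  # suited, booted astronaut: ~1.62 m/s^2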

Replies from: None
comment by [deleted] · 2010-03-27T07:30:21.647Z · LW(p) · GW(p)

And it certainly doesn't help that most people's knowledge of non-Earth gravity comes entirely from television, where, since zero-gravity filming is impractical, the writers invariably come up with some sort of confusing phlebotinum (most commonly magnetic boots) to make them behave more like regular-gravity environments.

I think you're on to something. I was wondering why the "heavy boots" people singled out the boots. Why not say "heavy suits", or that the astronauts themselves were heavier than pens? Didn't 2001: A Space Odyssey start the first zero-gravity scene with a floating pen and a flight attendant walking up the wall?

comment by JamesAndrix · 2010-03-26T15:34:24.795Z · LW(p) · GW(p)

I read the question as asking about THE Moon, not "a moon". The question as written has no certain answer. If a moon is rotating fast enough, a pen held a few feet above its surface will be at orbital velocity. Above this level it will float away. The astronaut might also float away, unless he were wearing heavy boots.

Replies from: None, wedrifid
comment by [deleted] · 2010-03-27T06:02:55.678Z · LW(p) · GW(p)

Pens and heavy boots always do the same thing in any gravitational field, unless they modify it somehow, like by moving the moon. Acceleration due to gravity does not depend on the mass of the accelerated object.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-27T06:35:19.713Z · LW(p) · GW(p)

If the moon is small and spinning quickly, a space elevator only needs to be a few feet tall. In this admittedly contrived scenario, the boots will anchor the astronaut because they are going around it more slowly. The pen will float because it is actually in orbit.

To land on this moon you would achieve orbit, and then put your feet down.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-27T23:19:11.031Z · LW(p) · GW(p)

I don't think you'd be landing at all, in any meaningful sense. Any moon massive enough to make walking possible at all is going to be large enough that an extra meter or so at the surface will have a negligible difference in gravitational force, so we're talking about a body spinning so fast that its equatorial rotational velocity is approximately orbital velocity (and probably about 50% of escape velocity). So for most practical purposes, the boots would be in orbit as well, along with most of the moon's surface.

Of course, since the centrifugal force at the equator due to rotation would almost exactly counteract weight due to gravity, the only way the thing could hold itself together would be tensile strength; it wouldn't take much for it to slowly tear itself apart.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-28T07:28:27.444Z · LW(p) · GW(p)

Hmm, I suppose it's too much handwaving to say it's only a few meters wide and super dense.

Replies from: Jordan
comment by Jordan · 2010-03-28T08:46:11.791Z · LW(p) · GW(p)

My rough calculation says that the density would need to be about a million times greater than Earth's, around 10^10 kg/m^3. This is too low to be a neutron star, but too high to be anything else I think. It may very well be impossible in this universe.

That's assuming uniform density though. Of course you could just have a micro black hole with a hard 1-meter-diameter shell encasing it. How you keep the shell centered is ... trickier.

Replies from: SoullessAutomaton, Baughn
comment by SoullessAutomaton · 2010-03-28T15:33:02.756Z · LW(p) · GW(p)

Similarly, my quick calculation, given an escape velocity high enough to walk and an object 10 meters in diameter, was about 7 * 10^9 kg/m^3. That's roughly the density of electron-degenerate matter; I'm pretty sure nothing will hold together at that density without substantial outside pressure, and since we're excluding gravitational compression here I don't think that's likely.
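
[Editor's note: a sketch of that arithmetic, assuming "high enough to walk" means an escape velocity of roughly 10 m/s -- my reading, not the commenter's stated input.]

    import math

    G = 6.674e-11  # m^3 kg^-1 s^-2

    def required_density(v_escape, radius):
        """Density a uniform sphere needs for a given surface escape velocity.

        From v_esc^2 = 2GM/r and M = (4/3) * pi * r^3 * rho:
            rho = 3 * v_esc^2 / (8 * pi * G * r^2)
        """
        return 3 * v_escape**2 / (8 * math.pi * G * radius**2)

    # 10 m/s escape velocity on a sphere 10 meters in diameter:
    print(f"{required_density(10.0, 5.0):.1e} kg/m^3")  # ~7.2e9, matching the figure above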

Keeping a shell positioned would be easy; just put an electric charge on both it and the black hole. Spinning the shell fast enough might be awkward from an engineering standpoint, though.

Replies from: wnoise
comment by wnoise · 2010-03-28T17:39:00.133Z · LW(p) · GW(p)

Keeping a shell positioned would be easy; just put an electric charge on both it and the black hole.

This won't work for spherical shells and uniformly distributed charge for the same reason that a spherical shell has no net gravitational force on anything inside it. You'll need active counterbalancing.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-28T19:10:40.705Z · LW(p) · GW(p)

Ah, true, I didn't think of that, or rather didn't think to generalize the gravitational case.

Amusingly, that makes a nice demonstration of the topic of the post, thus bringing us full circle.

comment by Baughn · 2010-03-28T13:26:28.800Z · LW(p) · GW(p)

Would it be possible to keep the black hole charged (use an electron gun), then manipulate electric fields to keep it centered? I don't know enough physics to tell.

Replies from: wnoise
comment by wnoise · 2010-03-28T17:41:28.902Z · LW(p) · GW(p)

Yes, this could work.

comment by wedrifid · 2010-03-27T03:15:24.889Z · LW(p) · GW(p)

If a moon is rotating fast enough, a pen held a few feet above its surface will be at orbital velocity.

Well, even more technically, 'may be at orbital velocity, depending on where on the moon the astronaut is standing'.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-27T06:37:29.865Z · LW(p) · GW(p)

Pesky mountains.

Replies from: wnoise
comment by wnoise · 2010-03-27T06:38:56.658Z · LW(p) · GW(p)

That and varying latitude.

comment by Hook · 2010-03-26T16:08:05.800Z · LW(p) · GW(p)

Someone posted a while back that only a third of adults are capable of abstract reasoning. I've had some trouble figuring out exactly what it means to go through life without abstract reasoning. The "heavy boots" response is a good example.

Without abstract reasoning, it's not possible to form the kind of theories that would let you connect the behavior of a pen and an astronaut in a gravitational field. I agree that this is an example of lack of ability, not compartmentalization. Of course, scientists are capable of abstract reasoning, so it's still possible to accuse them of compartmentalizing even after considering the survey results.

Replies from: RobinZ
comment by RobinZ · 2010-03-26T19:13:14.899Z · LW(p) · GW(p)

I instantly distrusted the assertion (it falls in the general class of "other people are idiots" theories, which are always more popular among the Internet geek crowd than they should be), and went to the linked article:

The Piagetians used what they called a clinical interview to determine which reasoning schemes a child had mastered. They posed questions of the children and then asked about how they arrived at their answers. As mentioned above, the elementary reasoning schemes (classification, etc) were what were being used.

Because each clinical interview took two or three hours, it was only possible to get data for a small number of children. Some psychologists decided to try to create a simple pencil and paper version which could then be administered to many children and thereby obtain data about broad classes of children.

This already suggests that the data should be noisy. I can think of at least two problems:

  1. The test only determines, at best, what methods the individual used to solve this particular problem - and, at worst, determines what methods the individual claims to have used to solve the problem.

  2. The accuracy of the test may be greatly reduced by the paper-and-pencil administration thereof. Any confusion which occurs by either the evaluators or takers will obscure the data.

Replies from: Hook
comment by Hook · 2010-03-26T20:34:03.440Z · LW(p) · GW(p)

The 32% number does seem low to me. Even if the number is more like two thirds of adults are capable of abstract reasoning, that still leaves enough people to explain the pen on the moon result.

Is compartmentalization applying concrete (and possibly incorrect?) reasoning to an area where the person making the accusation of compartmentalization thinks abstract reasoning should be used?

comment by byrnema · 2010-03-26T15:14:07.698Z · LW(p) · GW(p)

For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible to me, because I haven't had enough exposure to formal physics to hammer in the right intuition.

Absolutely. Another piece of the puzzle required to understand whether the pen 'obviously' falls or not is, 'what kind of atmosphere does the moon have'? What fraction of people know that there is no atmosphere on the surface of the moon? (Do I really know this?? I think I just remember being told this, and despite being told, I'm not certain there's absolutely no atmosphere on the moon.)

Without detailed information about the atmosphere, you really don't know. On Earth, the pen floats in water, but doesn't float in air.

(And then you have the added problem that there's a high chance people will first recall the image of the flag blowing on the moon, which is unfortunate for physics.)

Replies from: bentarm, RobinZ
comment by bentarm · 2010-03-26T15:20:43.533Z · LW(p) · GW(p)

On Earth, the pen floats in water, but doesn't float in air.

This is surely also true on the moon? The relative densities of the pen and the fluid you put it in don't change depending on the gravitational field they're in.

Replies from: pengvado, byrnema
comment by pengvado · 2010-03-27T06:21:50.091Z · LW(p) · GW(p)

Gravity affects pressure affects density. To a first approximation, gases have density directly proportional to their pressure, and liquids and solids don't compress very much.

With air/water/pen the conclusion doesn't change. But an example where it does:
A nitrogen atmosphere at STP has a density of 1251 g/m^3.
A helium balloon at STP has a density of 179 g/m^3. The balloon floats.
Then reduce Earth's gravity by a factor of 10, and hold temperature constant.
The atmospheric pressure reduces by a factor of 10, so its density goes to 125 g/m^3.
But the helium can't expand likewise (assume the balloon is perfectly inelastic), so it's still 179 g/m^3. The balloon sinks.
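
[Editor's note: those densities follow from the ideal gas law, rho = PM/(RT); a quick sketch checking them, assuming ideal-gas behavior and STP as 0 degrees C, 1 atm.]

    R = 8.314         # gas constant, J mol^-1 K^-1
    T = 273.15        # K, STP temperature
    P_ATM = 101325.0  # Pa, STP pressure

    def gas_density(pressure, molar_mass):
        """rho = P * M / (R * T), from the ideal gas law PV = nRT."""
        return pressure * molar_mass / (R * T)

    print(gas_density(P_ATM, 0.028))       # nitrogen: ~1.25 kg/m^3, i.e. ~1250 g/m^3
    print(gas_density(P_ATM, 0.004))       # helium:   ~0.179 kg/m^3
    print(gas_density(P_ATM / 10, 0.028))  # nitrogen at a tenth the pressure: ~0.125 kg/m^3
    # The rigid balloon keeps its ~0.179 kg/m^3 density, so it now sinks.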

comment by byrnema · 2010-03-26T15:26:00.627Z · LW(p) · GW(p)

Hmm. I actually don't know the relationship between gravity and buoyancy -- a moment with Google and I'd know, but in the meantime I'm in the position of relating to all those people who answered incorrectly.

comment by RobinZ · 2010-03-26T15:26:56.850Z · LW(p) · GW(p)

Another piece of the puzzle required to understand whether the pen 'obviously' falls or not is, 'what kind of atmosphere does the moon have'?

Another unobvious fact is that the force that holds up a floating object is also tied to weight - specifically, the weight of the atmosphere or liquid. Even if the atmosphere on the Moon were precisely as dense as the Earth's (it is not), the pen and the air would be lighter in the same proportion, and the pen would still fall.

Edit: i.e. what bentarm said.

comment by Morendil · 2010-03-26T15:05:47.790Z · LW(p) · GW(p)

Quite convincing, thanks. I'll want to think about it more, but perhaps it would be a good idea to toss the word out the window for its active connotations.

ISTM, though, that there is a knack for cross-domain generalization (and cross-domain mangling) of insights, that people have this knack in varying degrees, and that this knack is an important component of what we call "intelligence", in the sense that if we could figure out what this knack consists of we'd have solved a good chunk of AI. Isn't this a major reason why Hofstadter, for instance, has focused so sharply on analogy-making, fluid analogies, and so on?

(This is perhaps a clue to one thing that has been puzzling me, given Eliezer's interest in AI, namely the predominance of topics such as decision theory on this blog, and the near total absence of discussion around topics such as creativity or analogy-making.)

Replies from: Academian, ciphergoth, komponisto
comment by Academian · 2010-03-26T15:37:58.136Z · LW(p) · GW(p)

I think what's sometimes called a "compartment" would be better called a "cluster". Learning consists of forming connections, which can naturally form distinct clusters without "barriers" causally separating them. The solution is then to simply connect the clusters (realize that the moon landing videos are relevant).

But certainly at times people erect intentional barriers to prevent connections from forming (a lawyer effortfully trying not to connect his own morals to the case), and then I would use the term "compartment". Identifying the distinction between clusters and compartments could be a useful diagnostic goal.

comment by Paul Crowley (ciphergoth) · 2010-03-28T00:48:27.602Z · LW(p) · GW(p)

(This is perhaps a clue to one thing that has been puzzling me, given Eliezer's interest in AI, namely the predominance of topics such as decision theory on this blog, and the near total absence of discussion around topics such as creativity or analogy-making.)

I'd assumed that was because the focus was not on how to build an AGI but on how you define its goals.

comment by komponisto · 2010-03-26T17:54:15.589Z · LW(p) · GW(p)

perhaps it would be a good idea to toss the word out the window for its active connotations.

Why? It's still just as much of a flaw if it's a passive phenomenon.

To make an analogy with some literal overlap, some people are creationists because they don't know any science, and others are creationists despite knowing science. Should we avoid using the term "creationist" for the first group? I think not.

Compartmentalization is still compartmentalization, whether it's the result of specifically motivated cognition, or just an intellectual deficiency such as a failure to abstract.

(In fact, I'd venture that motivated thought sometimes keeps people from improving their intellectual skills, just as religiously-motivated creationists may deliberately avoid learning science.)

This is perhaps a clue to one thing that has been puzzling me, given Eliezer's interest in AI, namely the predominance of topics such as decision theory on this blog, and the near total absence of discussion around topics such as creativity or analogy-making

Honestly, I think this is mainly just a result of the personalities of the folks who happen to be posting. Creativity and analogy-making were often discussed in Eliezer's OB sequences; posts by Yvain and Alicorn also seem to have this flavor.

Replies from: Morendil
comment by Morendil · 2010-03-26T18:09:42.716Z · LW(p) · GW(p)

Creativity and analogy-making were often discussed in Eliezer's OB sequences

I would appreciate, if you can think of any examples offhand, if you'd point me to them. I'll have another look-see later to check on my (possibly mistaken) impression. Just not today, I'm ODing on LW as it is. Is it just me or has the pace of top-level posting been particularly hectic lately?

Replies from: thomblake, komponisto
comment by thomblake · 2010-03-26T18:13:50.170Z · LW(p) · GW(p)

Is it just me or has the pace of top-level posting been particularly hectic lately

It is not just you

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-03-26T19:09:21.618Z · LW(p) · GW(p)

I considered delaying this post for a few days until the general pace of posting had died down a bit, but then I'm bad at delaying the posting of anything I've written.

comment by komponisto · 2010-03-26T19:46:21.196Z · LW(p) · GW(p)

I would appreciate, if you can think of any examples offhand, if you'd point me to them.

Creativity.

Analogy-making.

Replies from: Morendil
comment by Morendil · 2010-03-26T20:31:06.718Z · LW(p) · GW(p)

The second link isn't really about analogy-making as topic within AI, it's more about "analogy as flawed human thinking". (And Kaj's post reminds us precisely that given the role played by analogy in cognition, it may not fully deserve the bad rap Eliezer has given it.)

The first is partly about AI creativity (and also quite a bit about the flawed human thinking of AI researchers). It is the only one tagged "creativity"; and my reading of the Sequences has left me with an impression that the promise in the final sentence was left unfulfilled when I came to the end. I could rattle off a list of things I've learned from the Sequences, at various levels of understanding; they'd cover a variety of topics but creativity would be ranked quite low.

I mean, CopyCat comes up once in search results. If the topic of analogy within AI was discussed much here, I'd expect it to be referenced more often.

Replies from: komponisto, thomblake
comment by komponisto · 2010-03-26T21:34:15.683Z · LW(p) · GW(p)

I didn't interpret your comment as expressing an expectation that there would be more discussion about analogical reasoning or creativity as a topic within AI; keep in mind, after all, that LW is not a blog about AI -- its topic is human rationality. (There is, naturally, a fair amount of incidental discussion of AI, because Eliezer happens to be an AI researcher and that's his "angle".) In this context, I therefore interpreted your remark as "given Eliezer's interest in AI, a subject which requires an understanding of the phenomena of analogies and creativity, I'm surprised there isn't more discussion of these phenomena."

I'll use this opportunity to state my feeling that, as interesting as AI is, human rationality is a distinct topic, and it's important to keep LW from becoming "about" AI (or any other particular interest that happens to be shared by a significant number of participants). Rationality is for everyone, whether you're part of the "AI crowd" or not.

(I realize that someone is probably going to post a reply to the effect that, given the stakes of the Singularity, rational thought clearly compels us to drop everything and basically think about nothing except AI. But...come on, folks -- not even Eliezer thinks about nothing else.)

Replies from: Morendil, wnoise
comment by Morendil · 2010-03-26T21:45:49.922Z · LW(p) · GW(p)

Sorry I wasn't clearer first time around. Yes, rationality is a distinct topic; but it has some overlap with AI, inasmuch as learning how to think better is served by understanding more of how we can think at all. The discussions around decision theory clearly belong to that overlapping area; Eliezer makes no bones about needing a decision theory for FAI research. Analogy in the Hofstadterian sense seems underappreciated here by comparison. To my way of thinking it belongs in the overlap too, as Kaj's post seems to me to hint strongly.

comment by wnoise · 2010-03-26T21:40:37.337Z · LW(p) · GW(p)

Eliezer doesn't want to publish any useful information on producing AI, because he knows that that will raise the probability (extremely marginally) of some jackass causing an unFriendly foom.

Replies from: Jack
comment by Jack · 2010-03-26T21:45:21.009Z · LW(p) · GW(p)

Eliezer doesn't want to publish any useful information on producing AI, because he knows that that will raise the probability (extremely marginally) of some jackass causing an unFriendly foom.

It seems like it would also raise the probability (extremely marginally) of Eliezer missing something crucial causing an unFriendly foom.

comment by thomblake · 2010-03-26T20:39:41.908Z · LW(p) · GW(p)

I mean, CopyCat comes up once in search results. If the topic of analogy within AI was discussed much here, I'd expect it to be referenced more often.

Remember, there is a long tradition here, especially for EY, of usually not referring to any scholarly research.

comment by Nanani · 2010-03-29T01:18:34.941Z · LW(p) · GW(p)

Nitpick: "If a pen is dropped on A moon"
It doesn't specify Earth's moon. If a pen were dropped on, say, Deimos, it might very well appear to do B) for a long moment ;) (Deimos is Mars' outermost moon and too small to retain a round shape. Its gravity is only 0.00256 m/s^2 and escape velocity is only 5.6 m/s. That means you could run off it.)
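
[Editor's note: those figures check out against Deimos's published mass and mean radius; a rough sketch, with approximate inputs, and Deimos is far from spherical.]

    import math

    G = 6.674e-11       # m^3 kg^-1 s^-2
    M_DEIMOS = 1.48e15  # kg, approximate
    R_DEIMOS = 6.2e3    # m, approximate mean radius

    g = G * M_DEIMOS / R_DEIMOS**2                  # surface gravity
    v_esc = math.sqrt(2 * G * M_DEIMOS / R_DEIMOS)  # escape velocity

    print(f"g = {g:.5f} m/s^2, v_esc = {v_esc:.1f} m/s")  # ~0.00257 m/s^2 and ~5.6 m/s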

On the other hand, the word "dropped" effectively gives the game away. Things dropped go DOWN, not up, and they don't float in place. Would be better to say "released".

And now, back to our story...

comment by Vladimir_Nesov · 2010-03-27T21:04:25.943Z · LW(p) · GW(p)

I don't believe it was assumed that compartmentalization is something you actually "do" (make effort towards achieving). Making this explicit is welcome, but assuming the opposite to be the default view seems to be an error.

comment by wedrifid · 2010-03-27T03:28:44.506Z · LW(p) · GW(p)

I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected.

Many people (probably more people) make the same mistake when asked 'which falls faster, the 1 kg weight or the 20 kg weight?'. I guess this illustrates why compartmentalization is useful. False beliefs that don't matter in one field can kill you in another.

Replies from: None
comment by [deleted] · 2010-03-27T22:03:40.087Z · LW(p) · GW(p)

The example you use is in my opinion not a failure of compartmentalization but of communication.

Humans will without fail, due to possessing sufficiently optimised time-saving heuristics, always assume when talking to a nonthreatening, nondescript and polite stranger like yourself that you are a regular person (the kind they normally interact with) talking about a situation that fits their usual frame of reference (taking place on a planetary surface, reasonable temperature range, normal g, one atm of pressure, oxygen present enabling combustion etc.) except when you explicitly state otherwise.

Taking two weights of different mass (all else being equal) and dropping them will not result in "neither falling faster". To see why, consider the equation for terminal velocity (buoyancy neglected): Vt = sqrt(2mg / (rho * A * Cd)), where rho is the density of the medium, A the projected area of the object, and Cd its drag coefficient.
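
[Editor's note: plugging illustrative numbers into that formula -- the 10 cm sphere and its drag coefficient are made-up but plausible parameters -- shows the effect: with shape and size held fixed, terminal velocity grows with the square root of mass.]

    import math

    RHO_AIR = 1.2  # kg/m^3, near sea level

    def terminal_velocity(mass, area, drag_coeff, g=9.81):
        """Vt = sqrt(2 * m * g / (rho * A * Cd)), buoyancy neglected."""
        return math.sqrt(2 * mass * g / (RHO_AIR * area * drag_coeff))

    area = math.pi * 0.05**2  # projected area of a 10 cm sphere
    print(terminal_velocity(1.0, area, 0.47))   # ~67 m/s
    print(terminal_velocity(20.0, area, 0.47))  # ~298 m/s -- sqrt(20) times faster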

They of course won't think about it this way, and even if they did, they would note that over a "normal" drop distance the two fall times t1 and t2 come out about the same, even if not exactly equal.

The rather cringeworthy approximation comes when they unintentionally assume sloppiness of communication on your part (we leave out all but the most important factors when asking short questions) and take you to mean that a few other things besides mass are unequal too (since the everyday objects they handle that differ radically in mass are rarely if ever identical in shape or volume).

The reason it is cringeworthy is not that it's a bad assumption to make in their social circle, but that their social circle is such that they don't have enough interactions like this to categorize the question under "sciencey stuff" in their head!

PS: I just realized you may have mistyped and meant the old "What is heavier, 10 kg of straw or 10 kg of iron?", which illustrates the point you're trying to make a bit better (I actually got the wrong answer when speaking my mind right away at the tender age of 7, and realized my error a second too late to avoid my schoolmate's laughter). But even this is either a failure of communication or just ignorance of the concept of density.

Replies from: wedrifid, wnoise
comment by wedrifid · 2010-03-28T13:32:17.804Z · LW(p) · GW(p)

PS: I just realized you may have mistyped and meant the old "What is heavier, 10 kg of straw or 10 kg of iron?", which illustrates the point you're trying to make a bit better

No. I meant what I wrote. The thing with the straw and/or feathers is just word play, a communication problem. I am talking about an actual misunderstanding of the nature of physics.

I have seen people (science teacher types) ask the question by holding out a rock and a scrunched up piece of paper and asking which will hit the ground first when dropped. There is no sophistry - the universe doesn't do 'trick questions'. Buoyancy, friction and drag are all obviously dwarfed here by experimental error. People get the answer wrong. They expect to see the rock hit the ground noticeably earlier. Even more significantly, they are surprised when both fall at about the same speed. In fact, sometimes they go as far as to accuse the demonstrator of playing some sort of trick and insist on performing the experiment themselves.

The same kind of intuitive (mis)understanding of gravity would lead people to also guess wrong about things like what would happen on the moon.

comment by wnoise · 2010-03-27T23:48:42.036Z · LW(p) · GW(p)

Even better is the question "what weighs more, a pound of feathers, or a pound of gold?"

Gur zrgny vf yvtugre -- vg'f zrnfherq va gebl cbhaqf, juvpu unir gjryir bhaprf gb gur cbhaq engure guna fvkgrra, naq n gebl bhapr vf nccebkvzngryl gur fnzr na nibveqhcbvf bhapr.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-28T00:10:44.842Z · LW(p) · GW(p)

Feathers have lower density, so the same mass occupies greater volume, experiences greater buoyancy and weighs less.
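
[Editor's note: a rough sketch of the size of that effect; the feather-pile density is a ballpark guess of mine.]

    RHO_AIR = 1.2        # kg/m^3
    RHO_FEATHERS = 50.0  # kg/m^3, loosely packed -- a rough assumption
    RHO_GOLD = 19300.0   # kg/m^3

    def apparent_mass(mass, density):
        """What a scale reads in air: true mass minus the mass of displaced air."""
        return mass * (1 - RHO_AIR / density)

    print(apparent_mass(1.0, RHO_FEATHERS))  # ~0.976 kg -- buoyancy takes off ~2.4%
    print(apparent_mass(1.0, RHO_GOLD))      # ~0.99994 kg -- essentially nothing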

Replies from: None
comment by [deleted] · 2010-03-28T13:01:18.378Z · LW(p) · GW(p)

Edit: I just realized a bit of bias on my part. I probably wouldn't have commented if you had used the SI unit for mass [kg], even though that is just as often used in non-scientific contexts to mean "what the scale shows" as pounds are.

I completely misread what you actually wrote and just took the "what weighs more, a pound of feathers, or a pound of gold" of the previous commenter into account.

You explicitly refer to mass, so sorry if you read the unedited comment.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-28T13:15:24.511Z · LW(p) · GW(p)

We have an ambiguity between whether the weight-measure refers to mass or to what the scales show. For two objects (gold and feathers) it is stated that one of these properties is the same, and the question is about the other property. From the context, we can't obviously disambiguate one way or the other. In such situations, assumptions are usually made to make the problem statement meaningful.

comment by sk · 2010-03-28T09:55:45.343Z · LW(p) · GW(p)

I fail to understand how compartmentalization explains this. I got the answer right the first time. And I suspect most people who got it wrong did so because of the (unwarranted) assumptions they were making - meaning if they had just looked at the question and nothing else, and if they understood basic gravity, they would've got it right. But when you also try to imagine some hypothetical forces on the surface of the moon, or relate it to zero-gravity images seen on TV, and you visualize all these before you visualize the question, you'd probably get it wrong.

comment by jhuffman · 2010-03-27T15:18:05.935Z · LW(p) · GW(p)

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains.

I'm not sure about this. In your examples here, people are in fact completely lacking any full understanding of gravitation and/or (I suppose) knowledge of the masses of notable celestial objects in our solar system.

Now, I have to admit that the correct answer wasn't obvious to me at first.

I up-voted just for you admitting this in your example, but let's talk about this. You knew the moon had gravity, but not much. Then you remembered it's enough for astronauts to walk on, so most likely enough to attract a pen as well.

Do you understand that the same thing would happen if you were standing on an asteroid? Yes, you can stand on an asteroid - I wouldn't recommend walking or any movement at all without a tether, but as long as you don't move you will stay on its surface. In this case, gravity would not be enough for you to walk even with "heavy boots".

But if you just release the pen (don't throw it or toss it at all, please!) it will still fall. Every object with mass has gravity, and any two objects will be attracted even if it's by a relatively weak force. Yes, other celestial masses will exert influence, but as long as this asteroid is not on a collision course with another body, we can be reasonably sure that 3-4 feet from its surface, its gravity will be greater than any other body's.

If you don't understand gravitation, you can't really expect to answer the question correctly. As for the people who can't answer it correctly: lots of people didn't really care much for physics when they studied it, and so while they may have known this at one time (to pass a test) it did not really get integrated into their working knowledge. It may have been possible to dredge up the correct answer by asking the right question to trigger the right memory, but the fact is that they really just don't have this knowledge, even if the information is in their brain.

Replies from: wedrifid
comment by wedrifid · 2010-03-27T17:00:05.027Z · LW(p) · GW(p)

I'm not sure about this. In your examples here, people are in fact completely lacking any full understanding of gravitation

And even without a full understanding of gravitation and the nitty gritty of what causes it, it would suffice to know 'gravity is basically acceleration'.

comment by Clara (she/they) · 2024-05-29T23:40:58.850Z · LW(p) · GW(p)

The way I solved the pen on the moon question is that I remembered the famous demonstration one of the Apollo astronauts did with a feather and hammer on the moon, and didn't think there should be a meaningful difference between those objects and a pen. I could've worked out the physics, but pattern-recognition was faster and easier. 

comment by TrevinPeterson · 2010-03-28T02:28:55.758Z · LW(p) · GW(p)

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible.

If a person's knowledge is highly compartmentalized, and consists of these three facts:

  1. A human being walked across the moon.

  2. There are small rocks on the surface.

  3. The moon is a planetary body.

without any educational background, would they choose the right answer?

I believe that there is a high probability that basic intuition would lead to an accurate answer.

So what went wrong in your case? I don't think that you can attribute it to a failure of compartmentalization. It wasn't that you didn't make connections to your prior knowledge; the problem was that you made too many, and that you hadn't organized your priors into a confidence hierarchy.

Confusion occurs when tenuous connections are made and lead to an over-analysis of the question. You differ from the person in the hypothetical because you had prior knowledge of the forces involved. Connections are only helpful when they are made from strong foundational knowledge to new applications. When you make many connections from a condition of uncertainty to a new problem, your intuition fails. It results in the assignment of a low confidence level to each of many connections, while ignoring basic observations or truisms.

It seems you were confident in the areas of physics most applicable in this situation, enumerating the atmosphere, gravity, and mass as the most influential. You attempted to remember how these forces interacted and recognized that they had dimensions to them that you had forgotten. The connections caused your intuition to be replaced by humbleness, and the go-to answer was a balanced combination of forces. Thus the pen would float.

It is clear that you came to the problem with much more information than my hypothetical person, armed with three foundational facts. You too had those three facts; if that was all that was in your moon compartment, the intuition would have been clearer.

(I apologize for the presumptions I made in referring to your thought process. This is a situation in which we find ourselves frequently.)

comment by Alfred · 2010-10-09T21:30:09.084Z · LW(p) · GW(p)

Your "wrong but not obviously and completely wrong" line made me think that the "obviously and completely" part is what makes people who are well-versed in a subject demand that everyone should know [knowledge from subject] when they hear someone express obvious-and-complete ignorance or obvious-and-complete wrongness in/of said subject. I've witnessed this a few times, and usually the thought process is something like "wow, it's unfathomable that someone should express such ignorance of something that is so obvious to me. There should clearly be a class to make sure this doesn't happen." After reading what you wrote about compartmentalized knowledge and connected knowledge, this type of situation makes much more sense.