In praise of fake frameworks
post by Valentine · 2017-07-11T02:12:32.017Z · LW · GW · Legacy · 15 comments
Related to: Bucket errors, Categorizing Has Consequences, Fallacies of Compression
Followup to: Gears in Understanding
I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.
I think this is an important skill. There are obvious pitfalls, but I think the advantages are more than worth it. In fact, I think the "pitfalls" can even sometimes be epistemically useful.
Here I want to share why. This is for two reasons:
- I think fake framework use is a wonderful skill. I want it represented more in rationality in practice. Or, I want to know where I'm missing something, and Less Wrong is a great place for that.
- I'm building toward something. This is actually a continuation of Gears in Understanding, although I imagine it won't be at all clear here how. I need a suite of tools in order to describe something. Talking about fake frameworks is a good way to demo tool #2.
With that, let's get started.
There are two kinds of people: extroverts and introverts.
…sort of.
I mean, as I look around, it certainly looks like there's a difference between outgoing social butterflies and quiet types who mostly stay at home. Maybe it's more like a continuum rather than a binary thing. But if so, I find myself wondering if it's bimodal with rough "extrovert" and "introvert" clusters anyway.
But then I look at long lists of differences between extroverts and introverts, and I worry. What exactly do these terms mean? Is it just about how talkative and loud people are? If so, are the labels sneaking in connotations about where people "get energy" from and how action-oriented they are?
Well, it turns out that a bunch of those traits are correlated. The intuition is, in fact, picking up on something true in the world.
But.
That doesn't mean the intuition is correct.
It looks like maybe extraversion isn't bimodal. I can justify that after the fact: the Big Five verified extraversion as a correlational cluster of traits and defines introversion as "low extraversion", and a Gaussian distribution seems like a more sensible prior than a bimodal one. But I didn't think of that ahead of time. If I hadn't thought to look, I might have thought the Big Five had verified the bimodal intuition because "these traits are correlated" and "the correlation has two separable empirical clusters" were compressed into one "bucket" in my mind.
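A toy simulation can show how easy that compression error is to make. This is my own illustrative sketch, not anything from the Big Five research: it draws "extraversion scores" from a single unimodal Gaussian, and splitting at the median still produces two well-separated "types", even though a histogram of the same data has only one peak.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical "extraversion scores": one unimodal Gaussian, no clusters.
scores = [random.gauss(0, 1) for _ in range(10_000)]

# Splitting at the median manufactures two "types" anyway...
cut = sorted(scores)[len(scores) // 2]
introverts = [s for s in scores if s < cut]
extroverts = [s for s in scores if s >= cut]
print(mean(introverts), mean(extroverts))  # two well-separated group means

# ...but a histogram of the raw scores has a single central peak, not two.
def modal_bin_center(xs, width=0.5):
    counts = {}
    for x in xs:
        b = int(x // width)  # index of the bin containing x
        counts[b] = counts.get(b, 0) + 1
    best = max(counts, key=counts.get)
    return best * width + width / 2

print(modal_bin_center(scores))  # near 0: one mode, not two
```

The point of the sketch: "these traits are correlated" licenses the labels, but the labels alone don't tell you whether the underlying distribution has two empirical clusters.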
What other parts of the intuition are suspect? What else wants to sneak in under the banner of "verified"?
That's hard to know. The usual use of the term "extrovert" isn't a sharp reference to clear traits. It's more like a fuzzy cluster of impressions that loosely splat over things like Type A personality and being the "life of the party".
So we're left with a choice.
We can ignore the fuzzy intuition and just use the concepts that come from the research. OCEAN tells us what extraversion is as it exists in the world. If we want to know what other traits correlate with extraversion, we can measure that trait and a bunch of others and look. We can feed massive amounts of data to machine learning systems and let them magically tell us correlations. No guesswork required.
That seems safe.
But.
If we'd done that as a species, there would be no OCEAN. Researchers thought to develop the Big Five because of folk intuitions about personality traits.
Also, not everything has been researched, and it’s tricky to find everything that has been researched.
The whole approach is too slow. It doesn’t work as a general epistemic solution.
But we clearly can't just trust the intuition. It's predictably wrong somewhere. It makes some false things seem obviously true. And we don't get to know which seemingly true ideas are wrong ahead of time.
So instead I suggest this:
Assume the intuition is wrong. It's fake. And then use it anyway. Let yourself wonder about and kind of believe in what makes sense to you about introverts and extroverts. Just do it in a mental sandbox of "This is all fake and made up."
You know more about people than you're conscious of. Doing this sandboxing lets you flesh out Gears for extroversion and introversion with more of your mind.
It also keeps you honest. You're already privileging hypotheses. This lets you own up to it and notice where you’re making implicit assumptions.
And maybe some of those "privileged" hypotheses are just correct. That's worth noticing when it's true. Maybe more extraverted people really do wear more decorative clothing. If that's right, then maybe you should have let your intuition influence your guesses from the start.
Now consider a roadmap. In practice, while using one, it's sensible to think of roads as basic somehow. Or rather, when using these maps, roads are basic.
Yes, you can reflect on it and remember that roads are made of atoms. But two points:
- That's pretty useless. That doesn't help you get from point A to point B in a new city.
- The roadmap would work even if the roads weren't made of atoms. Like roads in a video game world.
This means it's pretty silly to try to give an intensional definition of "road" in this context. If you met someone who'd never used a roadmap before, you'd point at a road near you and point at the matching part of the map and say "That thing is this line."
I think this suggests a natural way to define "ontology". I say an ontology is a set of "basic" things that you use to build a map (together with rules for how you can combine them into a map). Something is "ontologically basic" if it's an element of the ontology you're using.
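The definition above can be rendered as a tiny data structure. This is my own toy encoding, not anything from the post: an ontology is just a set of basic things plus some rules for combining them, and "ontologically basic" is membership in that set.

```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    """A set of 'basic' things plus rules for combining them into maps."""
    basics: set
    rules: dict = field(default_factory=dict)

    def is_basic(self, thing: str) -> bool:
        # "Ontologically basic" just means: an element of this ontology.
        return thing in self.basics

# A roadmap ontology: roads and intersections are basic; atoms are not.
roadmap = Ontology(
    basics={"road", "intersection"},
    rules={"connects": ("road", "intersection", "intersection")},
)

print(roadmap.is_basic("road"))  # True: basic *in this ontology*
print(roadmap.is_basic("atom"))  # False: roadmaps can't see atoms
```

Note that basicness is relative to the ontology: the same "road" is not basic in a physics ontology, which is the whole point.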
Some other examples:
- In Euclidean geometry, the undefined terms "point", "line", and "plane" are the ontologically basic things, and the postulates are the rules for how to combine them. We create territories this ontology can map when extensionally defining the undefined terms: we pretend a blackboard is a "plane", a bar of chalk dust dragged across it is a "line", and a fat dot of chalk on it is a "point". I think a lot of people talk about this backwards, like the drawings are maps helping them explore the territory of Euclidean geometry. I think they're confused about what "real" means. The drawings help us notice what territories that ontology can make maps of.
- I think classical mechanics has mass, position, and time as ontologically basic. Newton's Laws of Motion give the rest of the ontology. That's a rich enough map-building set that it can describe most movement we encounter pretty well. It falls short when modeling near-light speeds though.
- OCEAN's ontology has five "personality spectra" as basic: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. These five emerged from data. This contrasts with Myers-Briggs' four (Introversion/Extraversion, Sensing/iNtuiting, Thinking/Feeling, and Perceiving/Judging), which came from thinking about Carl Jung's theories. They both have the same ontological structure though: personality type is defined by some small number of intervals with some set of behavioral traits clustered at each end of each interval.
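That shared ontological structure can be made concrete. In this sketch (the numeric encoding and the `make_profile` helper are my own illustrative choices, not part of either framework), a personality is a point in a small number of labeled intervals, and only the axis lists differ between OCEAN and Myers-Briggs:

```python
# Axis names are from the post; scoring in [0, 1] is my own toy convention.
OCEAN_AXES = ["Openness", "Conscientiousness", "Extraversion",
              "Agreeableness", "Neuroticism"]
MBTI_AXES = ["Introversion/Extraversion", "Sensing/iNtuiting",
             "Thinking/Feeling", "Perceiving/Judging"]

def make_profile(axes, scores):
    """A personality 'map': one score in [0, 1] per interval."""
    assert len(axes) == len(scores)
    assert all(0.0 <= s <= 1.0 for s in scores)
    return dict(zip(axes, scores))

# The same hypothetical person, mapped by two different ontologies.
alice_ocean = make_profile(OCEAN_AXES, [0.8, 0.4, 0.9, 0.6, 0.2])
alice_mbti = make_profile(MBTI_AXES, [0.9, 0.7, 0.3, 0.5])
print(alice_ocean["Extraversion"])  # 0.9
```

Both maps use the same map-building kit; they differ only in which intervals they take as basic and where those intervals came from (data versus Jung).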
Ontologies make things seem real. Roads are real, right? But if time itself isn't real, or if there's only one electron in the whole universe, then what does it mean to say that a road is real?
I think people get confused like this when they switch ontologies without noticing. First roads are basic. Then we're talking about ontologies for physics, and none of those take roads as basic. One of them even challenges the idea of a thing at a higher order than an electron. Each time we switch ontologies that we connect to our experience, "real" takes on a new meaning for us.
But aren't roads really real? I mean, I walk down one every day to get to work…
…which shows how pervasive this illusion is.
If I can't switch to an ontology that doesn't have roads as basic while looking at what I was calling a "road", then I'm pretty stuck. I can't understand reductionism. I can't see with fresh eyes. I'm just rehearsing what I "know".
Likewise with people who are really into (say) Myers-Briggs. If the types in Myers-Briggs look real to you, and you can't shake that sense, then you’re stuck. It’ll seem meaningful to figure out how each person fits in that framework as though the framework is objectively true. That will make it hard for you to notice things about personality that that ontology doesn’t capture well.
…and it will highlight things that that ontology does capture well.
I learned a lot about just how different other minds can be by studying the Enneagram. It suggested Gears for people I hadn’t considered before. It helped me see my dad’s sternness as affection. I mostly don’t use that system anymore; it’s unclear whether its nine types map at all to natural clusters of people. But I still see Dad more clearly because I used the system for a while.
Switching which ontology is active feels like changing what I believe is real about what I’m experiencing. This means if I want ontological flexibility, I have to take my experience of “real” lightly. I can’t clutch too tightly to the sense that what’s “obviously real” to me right now is objectively true. And I have to be able to see something new as “obviously real”.
Like with the road. Roads are real. But I can set that aside and see it as molecules in mechanical and chemical interactions. Or as quarks in a timeless wiggling quantum soup. Or as a dreamlike projection of my pattern-matching mind.
This is easier to do if I think of these “real” roads as fake. And molecules as fake. And quarks as fake. If I remember that I’m talking about map-generators, which feels on the inside like talking about the territory.
Numbers are ontologically basic in elementary arithmetic.
But it turns out that if you take sets as basic instead, you can derive elementary numbers. Set theory is a richer ontology than that of elementary arithmetic: everything you can map with numbers, you can map at the same resolution with sets. But you can do more with sets.
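One standard way to do this derivation is the von Neumann construction, sketched here with Python's `frozenset` standing in for "set" (the construction is textbook set theory; the code is my own illustration): zero is the empty set, and each successor is the union of a number with the singleton containing it.

```python
# Von Neumann ordinals: 0 = {}, n + 1 = n ∪ {n}.
zero = frozenset()

def succ(n: frozenset) -> frozenset:
    return n | frozenset({n})

one = succ(zero)    # {∅}
two = succ(one)     # {∅, {∅}}
three = succ(two)   # {∅, {∅}, {∅, {∅}}}

# The set built for n has exactly n elements, so arithmetic facts
# fall out of set facts:
print(len(three))   # 3
print(two < three)  # True: "<" on these numbers is proper-subset
```

Everything the number ontology maps, this set ontology maps at the same resolution; the comparison operator even comes along for free.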
Reductionism promises that all ontologies can become one. More formally: Given any finite set of ontologies that fit experience, there’s some super-ontology that fits experience and is at least as rich as every ontology in your initial set.
That’s different from saying that you know how to find it.
Quarks are real, right? Really really real? And everything real comes from quarks and how they interact. We just use labels like “evolution” and “desire” for things that are a pain to derive from the quark level.
But it would feel this way if you were wrong, too. That’s what it feels like to wear an ontology.
I suspect it’s a type error to think of an ontology as correct or wrong. Ontologies are toolkits for building maps. It makes sense to ask whether it carves reality at its joints, but that’s different. That’s looking at fit. Something weird happens to your epistemology when you start asking whether quarks are real independent of ontology.
Maybe in the secret noumenal universe, there are truly basic things. I don’t know how I can ever know about them without maps though.
Which makes me want to whack my ontologies with a sledgehammer.
If I’m only ever willing to try on ontologies that I can tell fall within a known richer super-ontology (e.g., physics), then anything that super-ontology doesn’t easily map becomes hard for me to notice.
This isn’t a challenge to reductionism. Or to physics.
It’s a challenge to assuming you already know the answer.
Fifteen years ago I learned how to “extend ki” in aikido. Ki was part of my teachers’ ontology. That didn’t make sense to my physics brain, but I went with it anyway. This gave me access to strange powers that took me over a decade to understand within my physics ontology.
I think it twists the definition of “rational” to say that I should have rejected their teachings as wrongheaded.
But it would have been bad if I had believed in ki and physics and reductionism without being confused. Eventually the ontologies needed to reconcile.
And they did. Eventually I learned enough about body mechanics and how brains model movement to understand why "moving with ki flow" worked.
But in the meantime, I still learned how to do aikido.
“Ontological flexibility” is a mouthful. I don’t like the phrase. Too many syllables.
So instead I talk about fake frameworks.
There’s a skill to trying on a crazy perspective, actually believing it while you use it, and never taking it seriously. Then you can learn whether your judgment of “crazy” is right. And you can extract value from the good parts.
There’s an open question about how to wear obviously wrong ontologies without hurting your belief system. I don’t have a better answer than “Try to sandbox.” It seems to work for me. And it’s not something I always did: I used to adopt and cling to every ontology I tried on. It made the world seem very mysterious. I don’t think I do that anymore, so I think this is a learnable skill.
And if this is the wrong skill somehow, I’d like to know what to use instead.
15 comments
comment by TheAncientGeek · 2017-07-11T09:39:09.573Z · LW(p) · GW(p)
“Ontological flexibility” is a mouthful. I don’t like the phrase. Too many syllables.
Robert Anton Wilson called it Guerilla Ontology.
comment by RomeoStevens · 2017-07-12T18:21:00.859Z · LW(p) · GW(p)
You can play with this right now and simultaneously dissolve some negative judgements. Think about the function of psychics/fortune tellers in poor communities. What do you think is going on there phenomenologically when you turn off your epistemic rigor goggles? Also try it with prayer. What might you conclude about prayer if you were a detached alien? Confession is a pretty interesting one too. What game theoretic purpose might it be serving in a community of 150 people? I've found these types of exercises pretty valuable. Especially the less condescending I manage to be.
comment by imoatama · 2018-07-10T13:36:49.662Z · LW(p) · GW(p)
I think this may actually be my favourite post on LessWrong. It enabled me to reconcile much of David Chapman's writing, which I've always enjoyed and gotten a lot out of, with my take-aways from the sequences and my overall perspective on rationality.
comment by Kaj_Sotala · 2017-07-12T10:49:27.830Z · LW(p) · GW(p)
Nice. I had a kind-of related thought before, of world-models as tools, where you think of concepts and world-models as literally being tools that work for some situations and not others, rather than necessarily trying to develop any single coherent world-model (though of course if you do have a broader more coherent world-model, then that's a tool with its uses too).
It's been a while since I read it, but IIRC Model Zero is a book that takes a similar perspective.
comment by turchin · 2017-07-13T12:35:08.355Z · LW(p) · GW(p)
Maybe try Bayesian approach - where you have several ontologies with different probability weights? In that case, you could work with each ontology as if it is real, but take its predictions with high discount, and also use the predictions to update relative weights of the ontologies.
comment by ChristianKl · 2017-07-15T13:49:33.704Z · LW(p) · GW(p)
Different ontologies are useful in different contexts. If you model airplanes with Newton's laws you don't get a benefit by applying some probability to them and another probability to the formula of general relativity.
comment by turchin · 2017-07-15T13:58:24.460Z · LW(p) · GW(p)
Surely, if you know which ontologies are true and which context you're currently in.
For example, one could give 50 per cent probability to living in the real world and 50 per cent to living in a computer simulation.
comment by ChristianKl · 2017-07-15T14:34:22.866Z · LW(p) · GW(p)
I'm not sure what you mean by "which ontologies are true". If you take the example of Valentine's ki, that ontology allowed Valentine, for a long time, to do things that he couldn't do without it. Putting probabilities on ki being the true ontology misses the point.
comment by turchin · 2017-07-15T15:35:17.350Z · LW(p) · GW(p)
If he assigned 50 per cent probability to the ki ontology, he could still do everything the ki ontology requires, but (a) divide the expected gains by 2, and (b) update the probability of the ki ontology depending on whether it works, and also based on the explanatory power of other ontologies for the same set of evidence.
comment by entirelyuseless · 2017-07-11T14:11:59.806Z · LW(p) · GW(p)
"I suspect it’s a type error to think of an ontology as correct or wrong."
Indeed. I mentioned that recently.
"This isn’t a challenge to reductionism."
Try harder, and you can make it into a pretty good one.
comment by RomeoStevens · 2017-07-12T18:22:21.548Z · LW(p) · GW(p)
Lossy compression isn't telos free though.
comment by casebash · 2017-07-11T09:24:52.840Z · LW(p) · GW(p)
I wrote a post on a similar idea recently - self-conscious ideologies (http://lesswrong.com/r/discussion/lw/p6s/selfconscious_ideology/) - but I think you did a much better job of explaining the concept. I'm really glad that you did this because I consider it to be very important!
comment by TAG · 2020-07-24T11:50:10.242Z · LW(p) · GW(p)
Reductionism promises that all ontologies can become one. More formally: Given any finite set of ontologies that fit experience, there’s some super-ontology that fits experience and is at least as rich as every ontology in your initial set
Reductionism doesn't work like category theory, where everything relates to everything else, without any thing being special or fundamental.
It's not just the promise that all maps can be related together, somehow, it's the expectation that the resulting superstructure will be an inverted pyramid with fundamental physics at the bottom.
It doesn't have to work; it can be stymied in various ways, and one of the ways it can be stymied is if one cannot establish truth beyond usefulness.