I need some help debugging my approach to informal models and reasoning
post by Arkanj3l · 2013-10-30T22:10:45.071Z · LW · GW · Legacy · 2 comments
I'm having trouble understanding the process I should use when I am considering new models as they might apply to old data, like memories. This comes up primarily when reasoning with qualitative models, like those that come out of developmental psychology, business, or military strategy. These models can be either normative or descriptive, but the big trait they all seem to share is that they were conceptualized with reference to the inside view more than the outside view: they were based on either memories or intuition, so they carry a lot of implicit internal structure, or a lot of bullshit. Re-framing my own experiences as a way of finding out whether these models are useful is thus reliant on System One more than System Two. Unfortunately, now we're in the realm of bias.
My concrete examples of models I am evaluating are (a) the "Principles" document (as discussed here), where I am trying to digest the information it contains and figure out which situations it might apply in; (b) Alfred Adler's "individual psychology" as presented on The Rawness, which also expands on the ideas; and (c) the mighty OODA loop.
When I brought up the OODA loop at a meetup with the Vancouver Rationalists, I ended up making some mistakes about the "theories" from which it was derived, though I did add the idea of "clout" to my mental toolkit. The experience also makes me wary that my instinctive approach to learning qualitative models like this might have other weaknesses.
I asked at another meetup, "What is the best way to internalize advice from books?" and someone suggested thinking about concrete situations where the idea might have been useful.
As a strategy for evaluating the truth of a model, I can see this backfiring. Because System One is relied on in both model structuring and model evaluation, hindsight bias is likely to be an issue, as is a form of the Forer effect. I could then make erroneous judgements about how effectively the model will predict an outcome, and use the model in ineffective ways (ironically, this is brought up by the author of The Rawness). In most cases I believe this is better than nothing, but I don't think it's good enough either. It does seem possible to be mindful of the actual conceptual points and just wait for relevance, but the reason we reflect is so that we are primed to see certain patterns again when they come up, so that doesn't seem like enough either.
As a way of evaluating model usefulness, I can see this going two ways. On one hand, many long-standing problems exist because of mental ruts and benefit from re-framing the issue in light of new information. When I read books I often experience a linkage between statements the book makes and goals I have, or situations I want to make sense of (similar to Josh Kaufman and his use of McDowell's Reading Grid). On the other hand, this experience has little to do with the model being correct.
Here are three questions I have, although more will likely come up:
- What are the most common mistakes humans make when figuring out if a qualitative model applies to their experiences or not?
- How can they be worked around, removed, or compensated for?
- Can we make statements about when "informal" models (i.e. not specified in formal language or not mappable to mathematical descriptions other than in structures like semantic webs) are generally useful to have and when they generally fail?
- etc.
2 comments
comment by passive_fist · 2013-10-31T03:55:31.342Z · LW(p) · GW(p)
Maybe an information-theoretic viewpoint would be useful.
The problem with such qualitative models is that they don't communicate a well-defined algorithm but instead define a distribution over a set of algorithms, with the exact distribution dependent on the reader's model of the world (as you have been seeing). It's not a limitation of you or of the model per se; it's more a limitation of language. Language allows us to communicate complex concepts compactly, assuming that the party we are communicating with has a model of the world similar to ours. This comes at a price: the more complicated the concept, the more it has to be made vague during transmission, and the more sensitive it becomes to small differences between world models.
I'd guess the most common mistake people make when trying to internalize qualitative models is: they don't ask about them! Language is interactive, and when something is vague we seek clarification. It's what you should do, and do much more often than you're probably currently doing. Ultimately, you only have a finite amount of information to go on, and a linear increase in information results in an exponential decrease of your probability space. Every qualitative model comes with hidden 'baggage', which is the creator's world model. Without knowing what this model is, you can either guess it or ask about it, and asking gives you far more information.
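To make the "linear information, exponential narrowing" point concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each clarifying question yields one fully informative yes/no answer (one bit) and that the candidate interpretations start out equally likely; the starting number of interpretations is made up.

```python
# Minimal sketch: each fully informative yes/no answer (one bit) halves the
# space of remaining interpretations, so the space shrinks exponentially in
# the number of answers. The numbers are illustrative, not from the comment.

def remaining_interpretations(initial_size, bits_of_information):
    """Size of the hypothesis space after receiving the given number of bits."""
    return initial_size / (2 ** bits_of_information)

initial = 1024  # hypothetical number of ways a vague qualitative claim could be meant

for bits in range(0, 11):
    left = remaining_interpretations(initial, bits)
    print(f"{bits:2d} bits of clarification -> {left:6.0f} interpretations left")
```

Running it shows 1024 interpretations collapsing to 1 after ten answers, which is the sense in which asking (rather than guessing) buys you exponentially fast narrowing of the creator's hidden world model.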
Replies from: Arkanj3l
↑ comment by Arkanj3l · 2013-12-31T20:16:39.238Z · LW(p) · GW(p)
just ask
It's difficult when the creators are dead, or otherwise inaccessible (like busy hedge fundies). The next best thing is students who were mentored under the creator of the paradigm and are considered experts, but then the same check has to be applied to them to see whether the ideas can be discussed. Overall I like the approach; it might still be possible to find journals, biographies, or interviews with the originator of the viewpoint, as these are likely to contain some form of inquiry.