Teaching Introspection

post by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-01T01:10:34.491Z · LW · GW · Legacy · 31 comments

As Yvain pointed out in his recent post The Limits of Introspection, we humans are not naturally good at inferring our own cognitive processes. We resort to telling plausible-sounding stories about ourselves, and we aren’t very accurate.

I was reminded of this recently while teaching a swimming lesson. (You'll understand later why this reminded me.) A recurring problem that I’ve noticed with both children and adults is that it isn’t obvious to them what their bodies are doing. Feet go in strange directions, hands fail to lift above the water, and they literally can’t describe what it feels like. It’s pretty much impossible for a novice swimmer to watch the instructor demonstrate front crawl and then imitate it perfectly; muscular control isn’t that precise. That’s why there are swimming instructors: because it’s very, very hard to learn swimming (or dance, or soccer, or a martial art) by reading a book, even if that book has illustrated diagrams. Two friends reading the book together and watching each other’s attempts in the pool would probably do better, but that’s still a case, metaphorically, of the blind leading the blind. Most sports have instructors and coaches who are, relatively speaking, experts. (I competed at the regional level in swimming for something like five years and trained five to seven times a week the whole time, which pretty much qualifies me to teach eight-year-olds. An Olympic coach would need a much higher level of mastery.)

The most basic thing a coach provides that the two friends practicing together don’t have is relevant feedback. I watch a young swimmer demonstrating her front crawl, and I can immediately chunk my observations into “what’s done properly” and “what’s done wrong” and translate the latter category into “things to change.” And the easiest way to learn perfect front crawl isn’t to do it over and over again with tiny changes, but to practice exaggerated and simplified “drills” that teach particular fragments of muscle memory. Faced with a given stroke problem, I can look over a list of about eight different front crawl drills to find the one best suited for fixing it. To place some objective measure on the improvements, I can time my swimmers or count their strokes per length. The coaches of more elite swimmers have even fancier tools at their disposal: videotaping, fins and hand paddles, and the flume, basically a wind tunnel in the water. (I wish I had one of these in my basement!) All to provide better feedback: even Olympic-level swimmers don’t automatically know what their bodies are doing wrong or what needs to be fixed. (I’m assuming this is true of sports other than swimming, too.)
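To make “objective measure” concrete, here is a minimal sketch (in Python, with invented numbers) of the kind of bookkeeping I mean: a rough per-length efficiency score combining time and stroke count (swimmers sometimes call this a “swolf” score), compared across practices.

```python
# A minimal sketch, not real coaching software: track time and stroke count per
# length so that improvement shows up as a number instead of a guess. The
# "swolf" score (seconds plus strokes for one length) is a common rough
# efficiency measure; all the numbers below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Length:
    seconds: float  # time to swim one length
    strokes: int    # stroke count for that length

    @property
    def swolf(self) -> float:
        return self.seconds + self.strokes


def average_swolf(lengths: list) -> float:
    return sum(l.swolf for l in lengths) / len(lengths)


# Two practice sessions a week apart (made-up numbers).
week_1 = [Length(22.5, 20), Length(23.1, 21), Length(23.4, 22)]
week_2 = [Length(21.8, 18), Length(22.0, 19), Length(22.3, 19)]

print(f"week 1 average swolf: {average_swolf(week_1):.1f}")
print(f"week 2 average swolf: {average_swolf(week_2):.1f}")
```

A falling score means fewer strokes, less time per length, or both; it’s the kind of feedback a lone swimmer can’t easily get by feel.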

Granted, human muscles do start out under some voluntary control. A baby learns how to walk with no instruction, only the feedback of trial and error. (And of seeing adults walk? I seem to remember reading that some feral children crawl on hands and knees, and seem to prefer this method to walking.) But even apparently involuntary skills can be learned, with the help of creative technology. With biofeedback, people can control their blood pressure and anxiety levels and apparently various other processes. The parallel should be obvious here. Introspection, like physical coordination, is only imperfectly under conscious control…but there is some control. That’s what consciousness is: self-awareness. Most people are aware that they have emotions, and that they make decisions because of their emotions, e.g. “I didn’t mean it, I just did it because I was angry!” Likewise, most people are aware of their likes and dislikes. It’s only a small leap to recognize that these kinds of preferences are malleable facts about the state of the brain, not immutable facts about the outside world. People do succeed in wrestling with their uncooperative minds, fighting akrasia and making deliberate and reasoned decisions.

Nevertheless, most people aren’t even at the same level, metaphorically speaking, as a non-swimmer trying to learn from diagrams in a book. The literature on cognitive biases and Alicorn's sequence on luminosity are a start on the ‘book of introspection’, and some of the Less Wrong groups that meet in person are trying to help each other master these skills. The various schools of meditation are arguably about teaching introspection, and clinical psychology could be seen the same way. Is it possible to go further? Olympic coaches have probably maxed out how fast an unmodified human can swim (your technique can't be any better than perfect), but I would like to think that we haven’t come anywhere near the limits of how well a completely unmodified human brain can understand itself. As far as I know, most traditions of meditation are just that: traditions, often ancient, that don’t accommodate recent discoveries about the brain and about thought processes. And psychology is limited by the focus on fixing ‘problems’ and returning patients to ‘normal.’ (And if you are ‘normal’, you don’t need a psychologist!) But everyone is affected equally by our apparently-innate inability to notice what our brains are really up to, and normal isn't a very ambitious standard.

What does a cognitive bias feel like? I can’t look back on my actions and say “yeah, I’m pretty sure I said Tide was my favourite detergent because I was still thinking about oceans and moons.” Or at least, I can’t do that automatically. But if a scientist can predict that participants in an experiment will choose Tide when thinking about oceans and moons, then I can predict that about myself, too, and look back on all my decisions, trying to infer what factors were present at the time that could have primed my choice. It’s still a guess, but it’s an informed, useful one. And with practice, with an expert instructor to point out what you’re doing right and what you’re doing wrong, maybe a given cognitive bias does feel like something recognizable. Maybe the hidden secrets of your thought processes would become transparent and obvious. The next problem is finding sufficiently advanced instructors, and teaching exercises for them to use. The repetitive and level-based nature of video games would make them ideal as “thinking drills,” training “neural memory” instead of “muscle memory.”
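As a toy illustration of what the bookkeeping for such a drill might look like, here is a minimal Python sketch of a decision log: each entry records a choice, the contextual factors that might have primed it, whether it felt influenced at the time, and whether it looks influenced on later review. The field names and example entries are all made up.

```python
# A hypothetical "thinking drill" log, purely illustrative: compare how often a
# decision *felt* uninfluenced with how often later review suggests it was
# primed by context. All entries below are invented examples.

from dataclasses import dataclass


@dataclass
class DecisionEntry:
    decision: str           # what was chosen
    suspected_primes: list  # contextual factors present at the time
    felt_influenced: bool   # did those factors feel relevant in the moment?
    judged_influenced: bool # on later reflection, do they look relevant?


log = [
    DecisionEntry("bought Tide", ["ocean documentary earlier"], False, True),
    DecisionEntry("skipped the gym", ["rain", "late meeting"], True, True),
    DecisionEntry("upvoted a post", ["author I already like"], False, True),
]

felt = sum(e.felt_influenced for e in log)
judged = sum(e.judged_influenced for e in log)
print(f"felt influenced in {felt}/{len(log)} decisions;"
      f" judged influenced in {judged}/{len(log)} on review")
```

The interesting number is the gap between the two counts; a persistent gap is exactly the sort of thing an instructor, or a well-designed drill, could train you to shrink.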

I don't know enough to guess at the specifics of what this kind of school might look like, but I would definitely take lessons in introspection if they were available…I can’t really see a downside. Finding out that my decisions were due more often to random factors unconnected to the Great Story That Is My Life might be unflattering, but it's equally awful whether I know about it or not, and knowing gives me a chance to fix those decisions that might otherwise turn out damagingly irrational. Anyone, or any group of people, willing to take on the task of becoming expert instructors in this field would hugely help those of us who have trouble learning procedural skills from books.

31 comments

comment by Kaj_Sotala · 2011-08-01T07:14:54.231Z · LW(p) · GW(p)

The problem with teaching introspection is that you may only be teaching the subject to experience what you expect them to experience. Psychologists in the late 1800s and early 1900s tried to develop protocols where experimental subjects were taught how to report the contents of their consciousness, but this was not very successful. The results of a lab using a specific introspection protocol could not be easily replicated in a lab using another protocol, IIRC. Even having the subjects simply report their experience with no interpretation at all is difficult:

It was never wholly true that introspection was photographic and not elaborated by inferences or meanings. Reference to typical introspective researches from Titchener's laboratory establishes this point (28, 58, 25, 64, 59, 16, 31). There was too much dependence upon retrospection. It could take twenty minutes to describe the conscious content of a second and a half and at the end of that period the observer was cudgeling his brain to recall what had actually happened more than a thousand seconds ago, relying, of course, on inference. At the Yale meeting of the APA in 1913, J. W. Baird with great enthusiasm arranged for a public demonstration of introspection with the trained observers from his laboratory at Clark, but the performance was not impressive. Introspection with inference and meaning left out as much as possible becomes a dull taxonomic account of sensory events which, since they suggest almost no functional value for the organism, are peculiarly uninteresting to the American scientific temper.

Classical introspection, it seems to me, went out of style after Titchener's death (1927) because it had demonstrated no functional use and therefore seemed dull, and also because it was unreliable. Laboratory atmosphere crept into the descriptions, and it was not possible to verify, from one laboratory to another, the introspective accounts of the consciousnesses of action, feeling, choice, and judgment. It is not surprising, therefore, that Külpe, Watson and Wertheimer, all within a decade (1904-1913), reacted vigorously against the constraints of this idealistic but rigid pedantry.

On the other hand, modified introspective methods have been making somewhat of a comeback recently:

In no period, however, were introspective methods entirely abandoned by psychologists, and in the last few decades, they have begun to make something of a comeback, especially with the rise of the interdisciplinary field of “consciousness studies” (see, e.g., Jack and Roepstorff, eds., 2003, 2004). Ericsson and Simon (1984/1993; to be discussed further in Section 4.2.3 below) have advocated the use of “think-aloud protocols” and immediately retrospective reports in the study of problem solving. Other researchers have emphasized introspective methods in the study of imagery (Marks 1985; Kosslyn, Reisberg, and Behrmann 2006) and emotion (Lambie and Marcel 2002; Barrett et al. 2007).

Beeper methodologies have been developed to facilitate immediate retrospection, especially by Hurlburt (1990; Hurlburt and Heavey 2006; Hurlburt and Schwitzgebel 2007) and Csikszentmihalyi (Larson and Csikszentmihalyi 1983; Hektner, Schmidt, and Csikszentmihalyi 2007). Traditional immediately retrospective methods required the introspective observer in the laboratory somehow to intentionally refrain from introspecting the target experience as it occurs, arguably a difficult task. Hurlburt and Csikszentmihalyi, in contrast, give participants beepers to wear during ordinary, everyday activity. The beepers are timed to sound only at long intervals, surprising participants and triggering an immediately retrospective assessment of their “inner experience”, emotion, or thoughts in the moment before the beep.

Introspective or subjective reports of conscious experience have also played an important role in the search for the “neural correlates of consciousness” (as reviewed in Rees and Frith 2007; see also Varela 1996). One paradigm is for researchers to present ambiguous sensory stimuli, holding them constant over an extended period, noting what neural changes correlate with changes in subjective reports of experience. For example, in “binocular rivalry” methods, two different images (e.g., a face and a house) are presented, one to each eye. Participants typically say that only one image is visible at a time, with the visible image switching every few seconds. In “early” visual areas—that is, brain areas relatively early in the flow of visual processing (such as V1)—researchers tend to find evidence that both images are being processed and changes in neural activity tend not to be temporally coupled with reported changes in visual experience. In contrast, in areas further downstream—such as frontal and parietal areas—changes in neural activity do appear to be temporally associated with reported changes in conscious experience (Lumer, Friston, and Rees 1998; Tong et al. 1998; Kreiman, Fried, and Koch 2002; Moutoussis and Zeki 2002; Tong, Meng, and Blake 2006; though see Polonsky et al. 2000), and so also, perhaps, are large-scale changes in neural synchronization or oscillation (Tononi et al. 1998; though see Kamphuisen, Bauer, and van Ee 2008). Another version of the ambiguous sensory stimuli paradigm involves presenting the subject with an ambiguous figure such as the Rubin faces-vase figure. Using this paradigm, researchers have found neuronal changes both in early visual areas and in later areas, as well as changes in widespread neuronal synchrony, that correspond temporally with subjective reports of flipping between one way and another of seeing the ambiguous figure (Kleinschmidt et al. 1998; Rodriguez et al. 1999; Ilg et al. 2008; Parkkonen et al. 2008). In masking paradigms, stimuli are briefly presented then followed by a “mask”. On some trials, subjects report seeing the stimuli, while on others they don't. In trials in which the subject reports that the stimulus was visually experienced, researchers have tended to find higher levels of activity through at least some of the downstream visual pathways as well as spontaneous electrical oscillations near 40 Hz (Dehaene et al. 2001; Summerfield, Jack, and Burgess 2002; Del Cul, Baillet, and Dehaene 2007; Quiroga et al. 2008). However, the proper interpretation of these results remains contentious (Noë and Thompson 2004; Overgaard, Sandberg, and Jensen 2008; Tononi and Koch 2008).
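The beeper protocol described a couple of paragraphs up is simple enough to sketch in code. This is only an illustration of the idea, not anything from the cited studies; the number of samples, the interval lengths, and the log file name are arbitrary choices.

```python
# Illustrative experience-sampling "beeper": wait an unpredictable, fairly long
# interval, then prompt for an immediately retrospective report of whatever was
# going on just before the prompt, and append it to a plain-text log.

import random
import time
from datetime import datetime


def run_sampler(n_samples: int = 3,
                min_minutes: float = 20.0,
                max_minutes: float = 90.0,
                logfile: str = "experience_samples.txt") -> None:
    for _ in range(n_samples):
        # Long, randomized delay so the prompt comes as a surprise.
        time.sleep(random.uniform(min_minutes, max_minutes) * 60)
        print("\a*** Beep! Describe your inner experience just before this prompt. ***")
        report = input("> ")
        with open(logfile, "a") as f:
            f.write(f"{datetime.now().isoformat()}\t{report}\n")


if __name__ == "__main__":
    run_sampler()
```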

Replies from: Curiouskid
comment by Curiouskid · 2011-10-30T13:50:53.578Z · LW(p) · GW(p)

Interesting studies. I've noticed that something like the Uncertainty Principle (really, an observer effect) applies to the use of "think-aloud protocols" and immediately retrospective reports in the study of problem solving: verbalizing your thinking as you go isn't really the same as normal introspection.

comment by Will_Newsome · 2011-08-01T06:01:13.507Z · LW(p) · GW(p)

Introspection considered questionable, reflection considered absolutely mandatory.

Sometimes you can elicit the difference by first asking yourself why you believe/do something, then asking yourself how you came to believe/do something. "Why didn't I upvote this article? Well because it doesn't have qualities X, Y, Z and J. Oh, how did I come to not upvote this article? Well I never really actually considered that course of action, so it's not like I ever had the option to, now that I think about it." Notice the difference between moral justification/reasons and explanatory justification/reasons. Everything has reasons, but many reasons pretend to be the "true" reasons when they're not. When you downvote a LW comment or post ask what caused you to downvote it and try to answer honestly, rather than introspecting on why you downvoted it. They can be the same question but it's surprising how often they're not. Likewise for all negative social judgments: ask the self-reflective question, How did I come to be so contemptuous of this person/idea/group?, not the other-focused Why is this person/idea/group so contemptible? It's really important that you frame such questions the right way if you want your reflection to not accidentally spit out pleasant-sounding "introspection". Doing this regularly makes rationalization or cloaked signalling look obvious, both from yourself and others, and serves as a basis for still further reflection.

Replies from: Will_Newsome, Swimmer963
comment by Will_Newsome · 2011-08-01T06:29:57.644Z · LW(p) · GW(p)

It's for this reason---(is it it? yes, I think it is)---that I find "What caused you to believe what you believe?" to be a much better fundamental question of rationality than the moral-justification-priming "Why do you believe what you believe?". Same with "What caused you to work on what you're working on?" rather than "Why are you working on what you're working on?" (or variations thereof), at least on a timescale greater than months.

Anyway, all of this falls before the milestone that is Cached Selves, which used to be the second or third highest upvoted post on LW but has slipped since then. I can't help but think some of Less Wrong's purposes must've been lost---but maybe it's just a matter of taste.

Replies from: Wilka
comment by Wilka · 2011-08-02T11:39:00.875Z · LW(p) · GW(p)

This reminds me of http://lesswrong.com/lw/vk/back_up_and_ask_whether_not_why/ but I prefer the phrasing "What caused you to X" over "Should I X" and it feels like an easier question to get into the habit of asking.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-01T20:57:34.728Z · LW(p) · GW(p)

Introspection considered questionable, reflection considered absolutely mandatory.

To me "reflection" means just thinking about something you already know, and "introspection" adds the act of digging around in your mind for something you don't yet know. But it's happened to me before that the complex concepts in my head, tied to a given word, are different than the concepts that that word encodes for other people. Is this another of those times?

comment by pjeby · 2011-08-01T19:43:44.541Z · LW(p) · GW(p)

And the easiest way to learn perfect front crawl isn’t to do it over and over again with tiny changes, but to practice exaggerated and simplified “drills” that teach particular fragments of muscle memory.

This reminds me of something that happened in my early years of teaching mind hacking: I noticed that some people were way better at applying the techniques than others, and then began discovering that it was a function of lower-level introspection skills I didn't yet know how to teach. (For example, some people were just better at "shutting up and listening" or not adding interpretations onto their experiences.)

Faced with a given stroke problem, I can look over a list of about eight different front crawl drills to find the one best suited for fixing it. To place some objective measure on the improvements, I can time my swimmers or count their strokes per length.

I certainly wish I had technology that specific: what I have now are mostly mnemonics, rules of thumb, and individual coaching feedback. Objective measures are particularly hard to come by, though I suppose I have a couple of them.

Replies from: jsalvatier, NancyLebovitz, Swimmer963
comment by jsalvatier · 2011-08-01T19:56:58.122Z · LW(p) · GW(p)

care to list them?

Replies from: pjeby
comment by pjeby · 2011-08-02T16:08:41.045Z · LW(p) · GW(p)

The objective measures? Primarily, the change in response to a cue thought, and the experience of surprise. A person who isn't surprised at least sometimes by their introspection isn't obtaining any new information, and a person whose behavior doesn't change in ways that surprise them hasn't changed their spontaneous behavior. A lack of change in autonomous responses to a stimulus is likewise an indication that no actual self-modification has occurred.

comment by NancyLebovitz · 2011-08-02T15:39:54.163Z · LW(p) · GW(p)

Gendlin's Focusing (taking time to feel what comes to mind, then finding satisfying words to describe it) strikes me as quite powerful.

Replies from: pjeby
comment by pjeby · 2011-08-02T16:25:57.580Z · LW(p) · GW(p)

Gendlin's Focusing (taking time to feel what comes to mind, then finding satisfying words to describe it) strikes me as quite powerful.

Interesting. That's the "Litany of Gendlin" Gendlin, isn't it? The part of his Wikipedia page about levels of knowing sounds a heck of a lot like some of what I've been teaching.

The wikipedia description of "focusing" sounds like a subset of what I teach with respect to RMI, since it describes only a "felt" sense, and SEEDs usually involve more than just a feeling. Still, I'll agree with the part that describes there being a "something" that people do that is externally observable. I can certainly tell by a person's voice tone, and I've learned to do it from word choices as well, so that I can read what someone emails me or posts on a forum and tell whether they're doing it or not.

I'm definitely going to check out some of his work, as it sounds like there's overlap and perhaps he's found some things I missed. It also always helps to have other people's books I can recommend, instead of having to figure out how to write them myself. ;-)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-08-02T18:32:52.641Z · LW(p) · GW(p)

Gendlin's Thinking at the Edge takes Focusing into cognitive work.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-03T01:26:27.403Z · LW(p) · GW(p)

and then began discovering that it was a function of lower-level introspection skills I didn't yet know how to teach.

Have you experimented or played around with ways of teaching these lower-level skills?

Replies from: pjeby
comment by pjeby · 2011-08-03T03:58:28.745Z · LW(p) · GW(p)

Have you experimented or played around with ways of teaching these lower-level skills?

Yes. As a practical matter, it's more like teaching people what to stop doing than what to do - i.e. to stop talking over their experience and speculating about it. Some people are worse about doing that than others; you have to stop them a lot before they "get it".

More recently, I've been teaching people my SEED mnemonic, and it seems to help them realize what they're supposed to be paying attention to, but I don't have any real empirical data on that. I'd have to get a bunch of untrained people and test how quickly they were able to stop abstracting experiences, having split them into a control and experimental group... and then I'd still have no way to blind myself, unless somebody else taught them about SEEDs.

comment by apophenia · 2011-08-01T17:18:10.507Z · LW(p) · GW(p)

Meditation seemed useful to me. Other forms of "introspection" (cognitive biases, direct querying of "what would I do in situation X" in my brain, psychology) were more like "extrospection"--I'd infer my thoughts by my behavior. Meditation seemed to have a shorter inferential distance. I don't have a good non-introspective reason to believe this, although it did seem to get me over procrastination for the first time in weeks, and helped me graduate. I'll find out whether this continues to hold true as I resume meditation.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-01T20:53:05.853Z · LW(p) · GW(p)

I like the word "extrospection." Learning better "extrospection" is probably just as useful in its own way as learning better introspection, and you're right that it uses different skills. It would be neat if it were possible to somehow link the two.

comment by handoflixue · 2011-08-02T17:57:22.373Z · LW(p) · GW(p)

Hmmm, oddly, I mostly taught myself DDR. I had a friend who taught me the basics when she introduced me, and I've watched someone better than me two or three times (but mostly just gone "wow, that was impressive"). I've noticed I can often improve by just practicing, but sometimes I realize something specific is tripping me up. I'll go through a song that evokes that a few times, study it, and eventually identify my mistake. Then I'll do some easier songs to practice the corrected muscle memory, and finally test myself on the song that was tripping me up. I wouldn't claim to be amazing at DDR, and it might just be that it's somehow a different skill set, but I actually seem to do pretty well at learning it.

The only trained physical exercise I've done before DDR is swimming, which I was taught at a young enough age to have almost no memories, so I wouldn't assume I have a "generalized Learn Sports" module. (I remember exactly how the instructor described the side stroke, because it immediately clicked with me and I never had any trouble with it once it was explained)

comment by handoflixue · 2011-08-02T18:04:26.551Z · LW(p) · GW(p)

While I really like this post, I down-voted it because it doesn't seem to offer any conclusions. It's really wonderfully written, but I don't think LessWrong needs more wonderfully written descriptions of the problem :)

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-03T01:25:20.744Z · LW(p) · GW(p)

You're probably right. I guess I was hoping someone else who knows more than I do would have a solution.

Replies from: handoflixue
comment by handoflixue · 2011-08-03T03:14:38.680Z · LW(p) · GW(p)

Alas, the question of "how do we teach ___" seems to be a fairly big open question around here. I've been interested in it myself as well, but haven't come up with a ton of concrete ideas yet.

comment by lessdazed · 2011-08-01T07:57:45.469Z · LW(p) · GW(p)

“I didn’t mean it, I just did it because I was angry!”

One illustrative example of cause and effect is the relationship between genotype and phenotype within an environment. To say an aspect of an animal is "just because" of an individual feature of its genotype or environment (or the absence of a feature) is either untrue or assumes, for the counterfactual, that an endless list of other features and absences is held constant.

"I did it partially because I was angry, were I not angry, I wouldn't have done it," doesn't have the same ring to it though, does it?

If ever it feels like something is just because of something else, that is the first sign that introspection has failed!

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-01T20:55:20.782Z · LW(p) · GW(p)

"I did it because I was angry" is still a more accurate explanation of cause and effect than "I did it because X is an idiot and deserved it" or "I did it because it's obvious that was what I should do, anyone could see that." You're right that "I did it partially because I was angry, were I not angry, I wouldn't have done it" is an even better illustration of cause and effect, but I was using the example to refer to more naive people who aren't very introspective, but who can't avoid doing it a bit.

comment by mtraven · 2011-08-01T02:54:20.660Z · LW(p) · GW(p)

I think you miss the point of the linked article, which is not that we are "not very good" at introspection, but that introspection is literally impossible. We don't have any better access to our own brain processes than we do to a random person's. We don't have little instruments hooked up to our internal mental mechanisms telling us what's going on. I fear that people who think they do are somewhat fooling themselves.

That doesn't mean we can't have models of ourselves, or think about how the brain works, or notice patterns of mental behavior and make up better explanations for them, and get better at that. But I think calling it introspection is misleading and begs the question, as it conjures up images of a magic eye that can be turned inward. We don't have those.

Replies from: rwallace, lessdazed, Swimmer963, jimmy
comment by rwallace · 2011-08-01T09:28:35.701Z · LW(p) · GW(p)

That's not entirely true. The various kinds of rationalization, for example, each have their own distinct feeling once I learned to recognize when I was doing them. I suspect the analogy to biofeedback training is a good one; I would guess the experience of e.g. learning how to control your blood pressure, is a similar sort of thing.

comment by lessdazed · 2011-08-01T03:19:15.776Z · LW(p) · GW(p)

We don't have any better access to our own brain processes than we do to a random person's

The point of the linked article is that when we naively assume we are good at introspection, we fail at it. For example, "When presented with the idea of cognitive dissonance, they once again agreed it was an interesting idea that probably affected some of the other subjects but of course not them."

That only weakly implies "We don't have any better access to our own brain processes than we do to a random person's." We not only don't know how well trained people can do; we don't even know how well untrained people who would agree they are subject to biases would do!

If you define introspection as magically perfectly accurate self-knowledge gleaned without thinking, or even training, that is idiosyncratic.

Replies from: mtraven
comment by mtraven · 2011-08-01T17:46:55.963Z · LW(p) · GW(p)

I exaggerated a bit. The points I was trying to make: we can only weakly introspect; the term "introspection" is misleading (I think "reflection", mentioned by another commenter, is better); we are in a strong sense strangers to ourselves, and our apparent ability to introspect is misleading.

I am only a dabbler in meditation and Buddhism, but I think an actual Buddhist would NOT characterize meditation as introspection. The point of it is not to have a self more aware of itself, but to reveal the illusory nature of the self (I'm sure that is a drastic oversimplification, at best).

Replies from: lessdazed, mtraven
comment by lessdazed · 2011-08-01T19:59:10.304Z · LW(p) · GW(p)

I agree that "reflection" is the best term for what people can do. It does make sense to associate the strongest term, "introspection", with the strongest belief, the naive one.

comment by mtraven · 2011-08-01T17:59:44.162Z · LW(p) · GW(p)

After posting that I felt even more unsure about my assertion about Buddhism and introspection than I had indicated, so I did some Googling... here's some support from an actual Buddhist, though I'm guessing there is a wide variety of opinion on this question.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-01T20:59:33.941Z · LW(p) · GW(p)

but that introspection is literally impossible.

To me, it seemed like the article said only that we were unexpectedly bad at introspection when actually trying it in practice, not that it was impossible for anyone ever to do any kind of introspection.

comment by jimmy · 2011-08-01T06:47:35.412Z · LW(p) · GW(p)

We can't open the box and see what is inside directly, but we do have more info than we do about other people. We have partial access to the outputs of different parts of the brain.

We can simulate how we'd respond in circumstances in addition to the circumstances we actually find ourselves in. Of course, we can think we're simulating what we'd actually do but actually simulate what we think we should do, but that's a self deception problem and not a problem fundamental to introspection.

For example, I can ask someone "Why did you buy that car?" and they can answer the first thing that comes to mind (which may be wrong, and may be selected because it makes them sound good), or they can think "hmm, would I have felt the urge to buy the car if it was not blue? No? I guess color was important"

comment by Curiouskid · 2011-10-30T13:33:57.261Z · LW(p) · GW(p)

In swimming, you can point to Michael Phelps and say "try to imitate him as closely as possible". There is a "right" way to swim. However, rationality isn't this way. There is no zero-sum rationality game. It is constantly improving. And the only way it can improve is by self-experimentation in rationality. Ultimately, I think that Schopenhauer said it best:

Reading is merely a surrogate for thinking for yourself; it means letting someone else direct your thoughts. Many books, moreover, serve merely to show how many ways there are of being wrong, and how far astray you yourself would go if you followed their guidance. You should read only when your own thoughts dry up, which will of course happen frequently enough even to the best heads; but to banish your own thoughts so as to take up a book is a sin against the holy ghost; it is like deserting untrammeled nature to look at a herbarium or engravings of landscapes.

Ultimately, if the field of rationality is to advance, then people must be doing self-experimentation (introspection) to advance it.

I do think that you can learn from other people who are "further along the path" than you. For example: I've learned on my own that when I'm depressed, I think the same irrational thoughts each time. However, it wasn't until I memorized the appropriate responses (as opposed to synthesizing them again each time the thought came up) that I could immediately recognize those thought patterns as irrational. I could have learned this from somebody else, but considering how much self-help is tailored to a non-rational audience, I think it's just better to develop your own methods. That said, I think the rationality boot camps sound interesting (as the advice is tailored to people like us).