Becoming Unusually Truth-Oriented

post by abramdemski · 2020-01-03T01:27:06.677Z · score: 99 (39 votes) · LW · GW · 15 comments

Contents

  Memory
    Tip of the Tongue
    Remembering Dreams
    Remembering Events
    False Memories
    Gendlin's Focusing
    Remembering Ideas
  Truth-Oriented Thinking
    Developing Ideas
    Inner Sim
    Motivated Cognition
    Correcting Yourself
    Explaining Things to Others
    Gears Thinking
    Understanding Others

This is a post on "the basics" -- the simplest moment-to-moment attitudes one can take to orient toward truth, without any special calculations such as Fermi estimates or remembering priors to avoid base-rate neglect. At the same time, it's something almost everyone can fruitfully work on (I suspect), including myself.

Somewhat similar to track-back meditation [LW · GW].

Memory

Tip of the Tongue

The central claim here is that there's a special art associated with what you do when something is "on the tip of your tongue" and you can't quite remember it. Most people have the skill to some extent, but it can be sharpened to a fine point.

Improved memory helps you become truth-oriented in a fact-oriented, detail-oriented sense. It works against inaccuracy. It also works against misspeaking, and thus propagating falsehoods.

Remembering Dreams

I first explicitly noticed the effectiveness of this technique for remembering dreams. When I wake up, I often have only one significant memory from my dreams. However, when I focus on the memory, explicitly naming each detail I can recall, and gently waiting for more, I can often unfold the memory into far, far more than I initially thought I could remember.

Sometimes I don't even remember any images from the dream at all, but have a vague sense of the dream (excitement, peace, more complicated emotions). I can still sometimes recall much more if I explicitly describe the left-over feeling to myself in as much detail as possible, and sit with it patiently waiting for more.

Think of it as forming a better relationship with your memory. It's easier to wait patiently when you've had several experiences where it's paid off. Explicitly processing details of what you've remembered lets your memory know you're interested, helping to keep it engaged in searching for more (and, potentially, training it to retain more).

Eventually, if you're better calibrated, you won't have to wait 5 minutes trying fruitlessly if you really don't think you will remember. But in order to be well-calibrated about that, you have to try it sometimes.

You might be worried about confabulation. I'll talk more about that later.

Remembering Events

My claim is that this technique generalizes to any memory. Dreams might be a good practice case, especially if you don't have too many cognitively demanding distractions in the morning.

But you can try the same thing with anything. Someone I knew with especially good memory told me that he thought this was most of his skill; he might have started out with slightly above-average memory, but at some point he started taking pride in his reputation for good memory. This prompted him to put effort into it, rehearsing memories much more than he otherwise would. People would then remark on his good memory, further reinforcing the behavior.

Conversations, and interactions with people generally, might make a good practice case. Many people already re-visit conversations mentally over and over (perhaps thinking of things they wish they'd said). You can treat these the same way as dreams, trying to recall as much detail as you can each time you think of them.

Of course, rehearsing certain memories again and again might not be a good thing. Watch whether you're worsening any mental problems such as depression. It may be good to couple this practice with staring into regrets and other emotionally balancing techniques, so that rehearsing memories is useful rather than intensifying emotional damage from those memories.

False Memories

Some studies about memory may give you pause.

Unfortunately, forgetting is also a thing, so preserving memories by avoiding rehearsal doesn't seem to be an option. Rehearsal is necessary for sharper memory.

Still, false memories seem like a significant concern. Memories just seem real. If false memories are really common and easy to create, what are we supposed to do about that?

I think the situation isn't really hopeless. I think most false memories are more like mistaken inferences. I might be sure I put my keys in my pants pocket, where I always put them. But then I might eventually recall that I put them somewhere else yesterday. What seemed like a memory was actually an inference.

As long as you're aware of these issues, I would expect that gently tugging on memories to recall more details would improve things rather than lead to more confabulation.

In my opinion, the critical turning point should be: if you are good enough at working with your memories that you have started to see through some of your own false memories, then you can start to become more confident in your judgements of which memories are real or false.

I could be wrong, of course. This is a critical question in how good/important the overall practice is.

Gendlin's Focusing

There's an obvious similarity between what I'm describing and Gendlin's Focusing [LW · GW]. I similarly gently interact with a "felt sense" and try to name it, and iterate the process to get more detail. However, the "felt sense" is not especially located in my body the way it's described in Gendlin's focusing. It's possible that body sensations are actually involved at a subconscious level.

In any case, you may find the "gentle tugging" kind of stance useful for untangling emotions, not just recalling memories. Also, learning Focusing might help with memory and the other things I'm describing in this post?

The connection to Focusing also supports the idea that you can distinguish between true memories and confabulations by checking the degree of "fit" -- you have a felt sense, then you describe the thing explicitly (which gives you a better "handle" for it), then you ask yourself whether the name "resonates" with the felt sense. This is like a reality check (a theme I'll return to in the inner-sim section). But of course this isn't particularly reassuring unless you already believe that Focusing is uncovering (as opposed to confabulating) information.

Remembering Ideas

I tend to place a high value on remembering ideas. A forgotten idea is like a little death. I generally prefer the conversation norm of pausing if someone has forgotten an idea, possibly for a significant amount of time, so they can try and recover it. Ideas are important.

This habit gave me a lot of practice with tip-of-the-tongue type recollection and the "gentle tugging" technique. Practicing this stuff seems quite important for being able to do it when you need it. So I think giving yourself significant time to try and remember forgotten ideas is quite valuable if only as practice.

I think a similar sort of mental motion is involved in developing ideas, as well. Let's move on from the memory section...

Truth-Oriented Thinking

Developing Ideas

When you have an idea, you start with a kind of "pointer" -- a felt sense which says that there should be a thing in a particular direction. You can unpack the pointer by explicitly naming things about it, checking for "fit" with the felt sense. The more you name, the easier it is to pull more details out.

Sometimes it turns out that the idea really doesn't make any sense at all; the things with the best "fit" don't actually do anything good when you explicitly spell them out. Then the felt sense changes.

To me, it feels like the felt sense traces out natural "pathways" across a "landscape" which you're exploring. An idea might be a pointer which leads to a dead end, but there's still "really a path there" -- you had it, which must mean that it was a natural thought to have in some sense. I take interest not just in what's true, but what the natural development of certain ideas is. This kind of attitude helps you explore alternative pathways.

Gendlin describes his notion of Focusing as involved in scientific research. It's not just about emotions. I think I'm describing the same thing here.

Inner Sim

CFAR teaches a class on "inner sim", the intuitive expectations you have. When you try to balance one object on top of another, you have an intuition about whether it will fall. If someone tells you something, you might have an intuition about whether they're lying. You can't necessarily unpack these intuitions very well. Nor are they perfectly accurate. But they are quite useful.

The surprising thing is that it seems many people don't naturally make use of their inner sims as much as they could. Let's say you're at work, and you come up with a plan for completing a project within a week. The words "planning fallacy" might come to mind, but let's set that aside and ask a different question -- does your inner sim really expect the project to be done in a week? This kind of question can give useful information surprisingly often. And if your inner sim doesn't think the plan will work, you can ask yourself why it expects the plan to fail.

So, once you've developed an idea via the methodology in the previous section, another thing you can do is ask your inner sim about the idea. Is it true? Is it real? Can it work? What do you actually expect?

Using gentle tugging for idea development is just as good for creating fact or fiction, so you have to add this kind of reality check.

Also, communicating with the inner sim can be a lot like communicating with memory. You can gently sit with the question "what do I actually expect?" and see what comes up. And you similarly want to try and explicitly name what comes up; each detail of your expectations which you explicitly name can help pull more out.

Motivated Cognition

Just like we worried about false memories, we might worry about motivated cognition. Does asking your inner sim really provide a truth check? Does following your felt sense create a bias in what ideas you develop?

In my experience, if I'm caught up in motivated cognition, it is literally harder to remember things which go against what I'm saying -- it seems like I just don't remember them. But the same memory techniques which I've mentioned do help. I might not want to say the contrary facts once I recall them, but I can at least consciously decide that.

Similarly, I think the inner-sim checks are indeed useful in combating motivated cognition. Is it true? Is it real? What do I actually expect? What do I actually think? Giving yourself a little pause to sit with these questions can make you change your mind during an argument in a number of seconds (in my experience).

Correcting Yourself

For any of this to work during a conversation, I think you have to up your willingness to correct yourself. The thing is, if you notice during a conversation that you gave a false account of events, then there's going to be some consistency bias making you favor the version you've already said, and maybe some cognitive dissonance around not thinking of yourself as someone who gives false accounts.

It's not too uncommon for me to describe something with a nice "narrative logic" to it, and then remember some facts which don't fit the narrative. These additional facts may not even improve the other person's understanding of the situation -- the narrative is optimized to explain things in an understandable way, whereas the corrected detail isn't. But nowadays I try to mention them "for my own sanity" even if it doesn't make the conversation better.

If I don't do this, memory checks and reality checks will often feel counterproductive in conversations. If I'm unwilling to abandon my narrative verbally, then correcting it internally is a wasted motion which just generates a narrative/fact divide which I then have to track.

Explaining Things to Others

Just as explicitly naming things within your own head can help you pull detail out, once you think you understand something, explaining it to someone else can help pull a whole lot more detail out. This is probably true for memory, too.

It's not even necessarily about the interaction with the other person. Just trying to write something for someone else (and then never sharing it) can be similarly useful, whether it's a specific audience or a broad one. The need to bridge the inferential gap makes many more details feel relevant, which didn't feel relevant when you were explaining it to yourself.

Naturally, communicating an idea to another person is also great for uncovering problems.

This goes back to the reason why the overall technique I'm discussing works at all. Explicitly naming details of a memory helps to unpack it because what you know you know is different than what you know. You have a kind of mental illusion that you're remembering a whole conversation, but you're not really fitting all those details in short-term memory, which means you're not successfully pulling on all the associations. Similarly, you might think you understand something, but be unable to really explain all the details.

Gears Thinking

Gears-level thinking [LW · GW] is like unpacking an idea with exceptionally high standards about whether you really understand it. I mentioned that explaining things to others is helpful because you "pull on" details which you wouldn't ordinarily pull on, since you think you understand them. Gears thinking doesn't literally pull on "everything", but it pulls on a lot more.

I'm afraid that someone will read that and kind of nod along without getting it. I'm not talking about just generally having higher standards. I'm talking about the moment-to-moment experience of thinking. I'm saying there's a mental stance you can take where you "stop being lazy about your thinking" -- you don't re-check really solid things like 1+1=2, but you aren't satisfied with a thought until you've really gotten all the details in a significant sense.

The question you ask isn't whether something is true; the question you ask is exactly why it's true. No matter how confident you are that, say, a theorem you're using holds, you want the proof. You're trying to see all the pieces and how they fit together.

It's like pulling out a moth-eaten map and looking at the holes, trying to fill them in. Maybe you can't fill them in right away; maybe you have to make a voyage across the sea. It's hard. But you want those details; you want the map to be complete, not just "good enough".

Understanding Others

There's a closely related mental stance which I call "ask all the questions". You might think, from the kind of Focusing-like habits I've been describing, that you have to turn within to get the answers. But your focusing object can also be outside of you.

You can orient this toward typical social small-talk. What cognitive habits lead someone to ask questions like "what school did you go to" or "do you have any siblings"? You could have a mental list of standard questions you ask people in social settings. But a different way, which I think is more efficient, is to focus on your "picture" of the person (sort of mentally rehearsing it) and asking questions to fill in the gaps.

Something which surprised me when I tried this attitude on was how self-centred it felt. You're still looking at your map for holes. And, you're kind of dominating the conversation, in terms of steering. But, you can bring in the gentle/patient attitude I keep talking about.

You can do the same for topics other than small talk. Maybe you are trying to understand how someone thinks about X. What many people do is focus mainly on their own picture of X, and let what the other person says kind of land in that map, focusing questions on problems. And that's useful. But you can also focus on your map of their map. (This might start out being a copy of your map, since you might assume that they mostly think about X like you and just have some different details. But the cognitive operation is already different; you bring your attention to the places least likely to be the same as for you.)

Again I want to emphasize that I'm talking about a moment-to-moment stance. Not occasionally thinking "what's my map of their map?" during a conversation. Focusing on it primarily, letting it drive most of your questions.

This can be a good way of absorbing technical subjects from people.

15 comments

Comments sorted by top scores.

comment by Raemon · 2020-01-03T21:28:18.194Z · score: 18 (6 votes) · LW(p) · GW(p)

Pedagogical note: I initially bounced off this in part because I felt a bit wary that the "remembering dreams" point didn't really distinguish "remembering dreams" vs "confabulating dreams after the fact." You discuss related issues in the False Memories section, but I'd have personally benefited from having a sentence or so flagging that at the time.

(I have some sense, from my personal epistemic vantage point, of "almost everything in this post is stuff I'm familiar with and endorse, but the dream thing is something I don't have experience with and a priori assigned nontrivial probability to being wrong", so it felt like a weird thing to open with. But I think if I were new to the site, most of the rest of the post would be in roughly the same reference class, so I'm not sure the distinction is relevant.)

comment by Davidmanheim · 2020-01-27T06:49:11.652Z · score: 4 (2 votes) · LW(p) · GW(p)

On rereading, I really liked the second portion of Abram's post, but I strongly second this comment, and think it does not go far enough -- on two fronts.

First, my understanding is that there is a disagreement in the literature: dreams aren't simply things that get recalled, but it's unclear exactly how much is post-facto confabulation, and how much is your brain inferring details that weren't imagined at the time. Given that, I think that recalling dreams is a misleading example, and regardless of which interpretation is correct, it would be a worrying model for recall.

Second, I agree with the personal epistemic issue, since I am largely aphantasic, and rarely "remember" dreams. That said, the reason I reacted strongly against the opening passage is a combination of my personal inside-view inability to relate, and my understanding that the example is misleading, especially compared to how useful the remainder of the post is. That means that I was turned off from the post early on, and had to reread the second half to reevaluate whether I was unfairly dismissive. I decided that I was, but it was a high bar, and if I hadn't known that the author was usually really insightful, I wouldn't have done so.

comment by abramdemski · 2020-01-30T18:12:34.845Z · score: 19 (5 votes) · LW(p) · GW(p)

It sounds like both of you are people who don't have experience with remembering dreams, so an opening which for me seemed very relatable didn't land. Raemon flags his comment as 'pedagogical note' and David calls recalling dreams a 'misleading example'.

But is there more to it than a starting example that didn't connect? David brings up research which ambiguously suggests there's a lot of confabulation around dreams. (I'm interested in references.) I had an in-person conversation with someone who read my post and thought the confabulation problem was more broadly damning.

My inside view is that if this were a very serious problem, I'd kind of be screwed. I'm having difficulty taking the position very seriously, because this is such a basic mental move. Of course at some level I'm saying "try doing more of this" and the question is "does doing more of this make things worse?" -- in the world where that's the case, we don't want to practice this mental motion or encourage it.

Objectively, dreams are kind of a worst-case scenario, since it isn't possible to check the reality. Subjectively, though, dreams seem to me like a really good case: I don't always know what I made up later vs what really occurred in the dream, but I (subjectively) know when I don't know. I often catch myself adding details, and can either tease out what was made up vs what was really there, or conclude that I can't do so.

I initially wrote this post with the idea of starting a sequence on "rationality as a practice", IE, trying to dig into things like this which are moment-to-moment habits of thought which one can work to improve no matter one's current level of skill. Now my feeling is that this sort of thing is bottlenecked on empirical evidence. I would like to know whether the stuff I propose actually increases or decreases confabulation.

comment by Raemon · 2020-02-06T20:35:50.372Z · score: 6 (3 votes) · LW(p) · GW(p)

I had a lot of uncertainty about where this fell in the range from "broadly damning" to "confusing and didn't land." I hadn't thought about it very much. I have enough uncertainty about whether it might be actively misleading and epistemically harmful that I wouldn't endorse this as an intro-to-truth-post until someone I trusted had looked at / thought about it more carefully. (I think it's fine as a "random ideas from Abram in 2020" post, but my impression is you had aspirations towards it serving as a good self-contained-intro-to-rationality)

I don't really know how to actually test if the dream thing works. It seems like it could work fine, or could be pure confabulation. But because it's a domain that I think is particularly illegible, I currently think that yeah, it's pretty important to either figure out how to actually check if it works, or remove the dream thing and rework the post pretty significantly. (i.e., that's the sort of thing I'd expect to want before voting highly on it in the 2020 Review)

comment by abramdemski · 2020-02-07T22:59:04.193Z · score: 2 (1 votes) · LW(p) · GW(p)
(I think it's fine as a "random ideas from Abram in 2020" post, but my impression is you had aspirations towards it serving as a good self-contained-intro-to-rationality)

Ah. Yeah, I guess I conceived of this as pretty solidly somewhere between those extremes, but the title could be misleading towards the second.

I don't think of it as a collection of random ideas interesting to me right now. I do think of it as a coherent thing. But I certainly don't intend it to be an introduction to rationality or even the subtopic of truth-orientedness in rationality.

comment by Kaj_Sotala · 2020-02-18T10:12:50.025Z · score: 5 (2 votes) · LW(p) · GW(p)

FWIW, when I have done similar practice on real-life memories rather than dreams, I have sometimes checked my recollection of past events with other people who were there, and they have agreed with my account. Of course they could be influenced by my recollection, but I have sometimes recalled details which I have reason to believe that they would otherwise remember much better than me. For example, a friend showed me an episode of a TV series that she had seen several times before, but which I had not. The next day I used this kind of a technique to bring up details about the plot which I didn't remember initially, and she confirmed that I remembered them correctly.

So if the technique seems to provide accurate recall rather than confabulation in a non-dream context, it would seem like a reasonable default guess that it would provide accurate recall in a dream context as well.

comment by Davidmanheim · 2020-02-06T18:49:46.952Z · score: 2 (1 votes) · LW(p) · GW(p)

I like the idea of such a sequence, I'm just less sure that the first half of this post belongs, because of the uncertainty about whether it's training yourself to invent rather than recall details. As you said, they are a worst-case scenario.

On the other hand, while I think it would be valuable to answer the empirical question, here there is significant uncertainty about the impact, and I suspect it would require reasonably large-N, properly conducted randomized trials rather than personal insight.

comment by abramdemski · 2020-02-07T23:02:43.820Z · score: 4 (2 votes) · LW(p) · GW(p)

The way I currently see it, the second half of the post is more like an assortment of things, which are all tied together by the fact that they elaborate the basic mental movement in the first half of the post. So a post which was just the second half doesn't seem especially coherent to me.

comment by Kaj_Sotala · 2020-02-18T10:15:35.687Z · score: 3 (1 votes) · LW(p) · GW(p)

Could you just take the description of the technique and discuss it in the context of recalling non-dream-related memories? As you note yourself, exactly the same steps seem to work for e.g. recalling events from the previous day.

comment by Davidmanheim · 2020-02-09T07:40:02.400Z · score: 2 (1 votes) · LW(p) · GW(p)

That's fine as an issue about writing this as just the second half, but my point was that the ideas in the second half do seem useful, and worth figuring out a good way to present these ideas as a "rationality as a practice" sequence.

That is, most of them AREN'T ambiguous or bottle-necked by evidence in the same way. There is an argument that they can help, and little reason I see to worry that they are harmful, so they are worth having more people try. Once that is done, and they report their subjective impressions to see what works for people, others can consider if and how it can be validated with more rigorous evidence.

comment by abramdemski · 2020-02-14T22:56:44.373Z · score: 8 (2 votes) · LW(p) · GW(p)

I should have phrased my previous comment as a question -- what do you see as valuable about the second half without the first half?

Maybe I can mostly answer that question for myself, though.

  • "Developing Ideas" is related to the first half, but it's in an inventive frame -- so confabulation is much less of a concern. (But we might similarly doubt it and ask for empirical support.)
  • "Inner Sim" has separate validation presumably (although I haven't looked into this).
  • The motivated cognition section?
  • "Correcting Yourself" has a pretty obvious story about why it should be useful.
  • Explaining things to others is very generally observed to be useful, and the connection I make to the first half could be seen as spurious or at least not particularly important.
  • The question of how to do gears thinking more and better is just pretty important all around. But I think my particular remarks are not any more empirically validated than the memory stuff I mentioned.
  • Understanding others -- same as gears. Important, but not a lot to back up my remarks.

I think what I'm going to do is post a question [LW · GW] about what could/should go into such a sequence.

comment by romeostevensit · 2020-01-03T17:50:16.888Z · score: 14 (5 votes) · LW(p) · GW(p)

I've found conversations with significant pauses to easily be 10x the quality of other conversations.

comment by Jan_Kulveit · 2020-02-18T11:32:22.068Z · score: 7 (3 votes) · LW(p) · GW(p)

My best guess gears-level model of what's going on here:

  • the "predictive processing engine" has a quite rich model of the world / people / histories / ... whatever
  • a somewhat special domain into which it is "predicting" is thoughts / concepts / language / "the voice in your head" (somewhat overlapping with "S2")
  • with words "on the tip of your tongue", the PP system is trying to find a structure in the "thinking/verbal" domain which fits the "PP" structure (many people have a pretty specific sense of prediction error when they are missing the right word, which drops when they find it / the word "fits")
  • generally, directing attention toward such an interface can greatly increase its throughput/precision

And so here are some caveats:

  • This isn't as directly grounded in reality as it may seem
  • The nature of PP is such that model adjustment will happen on both sides (e.g. if I'm looking at cloud shapes in the sky and some cloud starts resonating with the concept/word Stegosaurus, my perception will change all the way down toward noticing plate-resembling parts of the cloud, etc.)
  • In particular, with probing in more detail, the PP machinery will generally be able to generate more details; in the case of memories, as you note, the problem is that they are mostly the output of the generative world model inside your head, not of the external world; if your generative world model is precise enough and your attention was focused on something while experiencing it, the recall could be quite reliable
  • The relation of the language/concept space with reality is somewhat complicated... notice that in the cloud example above, the concept of Stegosaurus is the result of a pretty impressive and big cultural computation which happened almost entirely outside of your head

So... while I generally like most of the specific advice, I don't think truth-oriented thinking is a good label. In my view, the necessary ingredient for truth orientation that's missing here is strong links between anything happening inside the brain and "the rest of reality".

comment by habryka (habryka4) · 2020-01-20T23:58:22.181Z · score: 6 (4 votes) · LW(p) · GW(p)

Promoted to curated: I think this post ties together a large variety of ideas and concepts I think are quite important, and does so in a very practical manner that I think is broadly undersupplied. I do think I was a bit confused about what the goal of the post was, and would have benefitted from a bit more context-setting at the beginning.

comment by Bucky · 2020-01-04T21:01:49.954Z · score: 2 (1 votes) · LW(p) · GW(p)

What seemed like a memory was actually an inference.

This is a great phrasing of something I notice happening to me.

My best solution so far is to notice that this might be a problem and admit it (even when I feel confident I’m right). This weakens the link between my social standing and whether the memory is correct, which lowers my bias towards remembering self-advantageously.