Deep and obvious points in the gap between your thoughts and your pictures of thought

post by KatjaGrace · 2024-02-23T07:30:07.461Z · LW · GW · 6 comments

Some ideas feel either deep or extremely obvious. You’ve heard some trite truism your whole life, then one day an epiphany lands and you try to save it with words, and you realize the description is that truism. And then you go out and try to tell others what you saw, and you can’t reach past their bored nodding. Or even you yourself, looking back, wonder why you wrote such tired drivel with such excitement.

When this happens, I wonder if it’s because the thing is true in your model of how to think, but not in how you actually think.

For instance, “when you think about the future, the thing you are dealing with is your own imaginary image of the future, not the future itself”.

On the one hand: of course. You think I’m five and don’t know broadly how thinking works? You think I was mistakenly modeling my mind as doing time-traveling and also enclosing the entire universe within itself? No I wasn’t, and I don’t need your insight.

But on the other hand one does habitually think of the hazy region one conjures connected to the present as ‘the future’, not as ‘my image of the future’, so when this advice is applied to one’s thinking—when the future one has relied on and cowered before is seen to evaporate in a puff of realizing you were overly drawn into a fiction—it can feel like a revelation, because it really is news to how you think, just not to how you think a rational agent thinks.

6 comments

comment by Gordon Seidoh Worley (gworley) · 2024-02-23T16:02:43.073Z · LW(p) · GW(p)

Despite some problems with the dual process model, I think of this as an S1/S2 thing.

It's relatively easy to get an insight into S2. All it takes is a valid argument that convinces you. It's much harder to get an insight into S1, because that requires a bunch of beliefs to change such that the insight becomes an obvious facet of the world rather than a linguistically specified claim.

We might also think of this in terms of GOFAI. Tokens in a Lisp program aren't grounded to reality by default. A program can say bananas are yellow but that doesn't really mean anything until all the terms are grounded. So, to extend the analogy, what's happening when an insight finally clicks is that the words are now grounded in experience and in some way made real, whereas before they were just words that you could understand abstractly but weren't part of your lived experience. You couldn't embody the insight yet.
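
To make that concrete, here's a minimal sketch of what ungrounded tokens look like (in Python rather than Lisp, with invented names): the program can store and retrieve the claim that bananas are yellow, but nothing ties "banana" or "yellow" to anything outside the program.

```python
# Toy illustration of ungrounded symbols; all names here are invented.
# The program can store and retrieve the claim "bananas are yellow",
# but nothing connects "banana" or "yellow" to anything outside it,
# so the claim doesn't mean anything to the system.

knowledge_base = set()

def assert_fact(subject, predicate, value):
    """Record a symbolic claim as a bare triple of strings."""
    knowledge_base.add((subject, predicate, value))

def query(subject, predicate):
    """Look up stored values; this is string matching, not understanding."""
    return {v for (s, p, v) in knowledge_base if s == subject and p == predicate}

assert_fact("banana", "color", "yellow")
print(query("banana", "color"))  # {'yellow'} -- retrieved, but never grounded
```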

For what it's worth, this is a big part of what drew me to Buddhist practice. I had plenty of great ideas and advice, but no great methods for making those things real. I needed some practices, like meditation, that would help me ground the things that were beyond my ability to embody just by reading and thinking about them.

Replies from: cubefox
comment by cubefox · 2024-02-25T01:57:12.902Z · LW(p) · GW(p)

Something like the dual process model applied to me in early 2020. My "rational self" (system 2) judged it likely that the novel coronavirus was no longer containable at that point, and that we would get a catastrophic global pandemic, like the Spanish flu. Mainly because of a chart I saw on Twitter that compared 2003 SARS case number growth with nCov case number growth. The number of confirmed cases was still very small, but it was increasing exponentially. Yet my gut feeling (system 1) was still judging a global pandemic as unlikely. After all, something like that had never happened in my lifetime, and getting double digits of new infections per day didn't yet seem worrying in the grand scheme of things. Exponential growth isn't intuitive. Moreover, most people, including rationalists on Twitter, were still talking about other stuff. Only some time later did my gut feeling "catch up", and the realization hit like a hammer. I think it's important not to forget how early 2020 felt.
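
A toy calculation, with made-up numbers rather than the actual 2020 figures, shows how quickly "double digits per day" stops being small once growth is exponential:

```python
# Made-up numbers, not the actual 2020 data: start from 50 new cases per
# day and let that grow 25% per day (a doubling roughly every three days).
daily_new = 50.0
growth = 1.25

for week in range(1, 9):
    for _ in range(7):
        daily_new *= growth
    print(f"after week {week}: ~{int(daily_new):,} new cases per day")
# The first weeks still look unremarkable; the later ones do not.
```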

Or another example: I currently think (system 2) that a devastating AI catastrophe will occur with some significant probability. But my gut feeling (system 1) still says that everything will surely work differently from how the doomers expect and that we will look naive in hindsight, just as a few years ago nobody expected LLMs to produce oracle AI that basically passes the Turing test, until shortly before it happened.

Those are examples of system 1 thinking: the situation still looks fairly normal, so it will stay normal.

comment by romeostevensit · 2024-02-23T07:57:50.876Z · LW(p) · GW(p)

Another hypothesis: the moment of compression feels amazing, because you need to deeply understand something about a phenomenon to compress it. The zip file feels mundane and doesn't include the insight of building new frontiers in your compression library.

Replies from: StartAtTheEnd, Leviad
comment by StartAtTheEnd · 2024-02-23T16:39:56.419Z · LW(p) · GW(p)

This seems true. The Eureka feeling is pretty good, to the point that some people get slightly addicted to looking for insights (hence the concept "insight porn"). Even if you figure out something amazing, this feeling tends to fade away, even though the value of the discovery remains the same.

But I think this is a different idea than "knowing something vs internalizing it", and that the difficulty of communicating wisdom is yet another idea.

I think that wisdom maps to words just fine, but in a reductive way, such that the words don't map back to the wisdom. The words can be thought of as a hash of the wisdom. So it's recognizable to you, but to those who have never had the insight, the words are like a pointer (a programming term) leading to nothing.
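
To stretch the analogy with a toy sketch (Python, with an invented example sentence): a hash is easy to check against the thing you already hold, but there is no road back from the hash to the thing itself.

```python
import hashlib

def hash_of(text: str) -> str:
    """A short digest standing in for 'the words' that summarize an insight."""
    return hashlib.sha256(text.encode()).hexdigest()[:12]

wisdom = "the future I worry about is my own image of the future"
words = hash_of(wisdom)

# If you already hold the wisdom, the words are instantly recognizable:
print(hash_of(wisdom) == words)  # True

# But given only `words`, there is no way to reconstruct `wisdom`:
# the mapping is one-way, like a pointer whose target you never had.
print(words)
```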

comment by noggin-scratcher · 2024-02-23T12:39:45.853Z · LW(p) · GW(p)

You’ve heard some trite truism your whole life, then one day an epiphany lands and you try to save it with words, and you realize the description is that truism

Reminds me of https://www.lesswrong.com/posts/k9dsbn8LZ6tTesDS3/sazen [LW · GW]