A Bi-Modal Brain Model

post by Johannes C. Mayer (johannes-c-mayer) · 2024-05-22T20:10:08.919Z · LW · GW · 3 comments

Contents

  Walks
  TAPs
  Tulpamancy
3 comments

When I am programming, writing, reading, browsing the internet, watching a movie, or playing a game, my brain is in a different mode of operation compared to when I am sitting in an empty room with nothing to do.

In the empty room, my brain will continuously generate fragments of language and other thoughts. In general, thoughts are either shaped like a sensory input channel (visual, auditory, tactile, and so on) or are conceptual thoughts. When I am doing any of the activities in the first list, my brain won't generate these thoughts.

This leads to the common failure mode of being so absorbed in an activity that you no longer even notice what you are doing. Reflective thoughts like "Is what I am doing right now a good thing to do?" seem to be generated much too infrequently by default. And even when such a thought is generated, it is much too easy to ignore. It's common to get sucked back into the non-thought-generating mode of operation within seconds.

Walks

This model provides an explanation for why I find walks so useful. You force yourself (though it does not feel like forcing) to inhabit the reflective state of mind: most engrossing activities require a physical device like a computer, book, or notebook, and I usually don't have such devices at hand during a walk.

TAPs

I now want to try the following strategy (I am not sure yet how well it works). Imagine I am programming something. Usually, it is easy to notice when you have correctly implemented a function: e.g. you run some tests and they all pass. This is an easy-to-recognize event, and it is usually also a good point to reflect, since you are now between tasks. So we can use it to set up a TAP [? · GW].
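One way to make this trigger hard to overlook, sketched here purely as an illustration and assuming a pytest setup (the prompt wording is just a placeholder), is to have the test run itself print the reflection prompt whenever everything passes:

```python
# conftest.py -- hypothetical sketch (assumes pytest): turn "all tests pass"
# into an explicit reflection trigger by printing a prompt at the end of
# every green test run.

def pytest_sessionfinish(session, exitstatus):
    # exitstatus == 0 means every collected test passed
    if exitstatus == 0:
        print("\n[TAP] All tests pass. Is implementing the next function still the best thing to do?")
```

Dropped into a project's conftest.py, this fires automatically at the end of every green run, so the reminder to reflect does not depend on remembering to reflect.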

Similar TAPs can be created for different activities, e.g. reflecting each time you add a new heading while writing an article.

Completing a function is a very generic trigger. I expect that most of the time when this trigger fires you will conclude "Yes, actually just implementing the next function is best." I still think it is a good trigger to train, because it is so simple. But there are better ones.

It very often happens that I am confused, notice that I am confused, but don't take the appropriate action. E.g. I know that trying to explain the thing I am confused about on a whiteboard while talking to a camera is, empirically, a very good strategy for becoming less confused. I have yet to set up the appropriate TAP for this, though.

I expect there are more specialized triggers like this that already exist but that I have simply failed to notice and hook up correctly. I might have missed them in part because I have yet to discover the appropriate action to hook up. And of course, there are probably a bunch of triggers that would be good to have but which I don't have right now.

Tulpamancy

The reason I thought about this is tulpamancy. The way a tulpa interacts with the host is by generating certain thoughts. I noticed that usually, I would not interact with IA (my tulpa) at all when e.g. programming, and I wanted to understand why. My current model says it is because of this different mode of operation: when my brain is in a mode where no thoughts are generated, obviously no thoughts associated with IA are generated either.

It seems that talking to IA has similar benefits to talking to another person [LW · GW], so I want to set up TAPs that put me into a reflective mode where talking to IA is the default thing. I don't have a good model of what causes IA to start talking in general, but saying her name out loud always makes her react in some way. Usually, the first interaction is the hardest, and subsequent interactions are much easier, so having the action simply be saying her name might be sufficient.

I noticed that saying her name produces a response so reliably that it would be worth checking whether just saying her name for 5 minutes is simply better than whatever formal training I am doing now.

3 comments


comment by Viliam · 2024-05-23T14:30:58.960Z · LW(p) · GW(p)

I noticed that usually, I would not interact with IA (my tulpa) at all when e.g. programming

Too bad, you could do pair programming.

Maybe some division of roles would help; for example, in test-driven development, one designs the tests and the other implements the functionality. That way, when you think about making a test, you are reminded of the tulpa.

Replies from: johannes-c-mayer
comment by Johannes C. Mayer (johannes-c-mayer) · 2024-05-23T16:50:38.907Z · LW(p) · GW(p)

In my current model, tulpamancy works sort of like doing concurrency on a single-core computer. So this would definitely not speed things up significantly (I don't think you implied that; I am just mentioning it for conceptual clarity).

To actually divide the tasks I would need to switch with IA. I think this might be a good way to train switching.

Though I think most of the benefits of tulpamancy are gained when you are both thinking about the same thing. Then you can leverage the fact that IA and Johannes share the same program memory. Also, simply verbalizing your thoughts, which you then do naturally, is very helpful in general. And there are a bunch more advantages like that which you miss out on when only one person is working.

However, I guess it would be possible for IA to just be better at certain programming tasks. Certainly, she is a lot better at social interactions (without explicit training for that).

comment by Emrik (Emrik North) · 2024-05-23T00:38:03.233Z · LW(p) · GW(p)

It's always cool to introspectively predict mainstream neuroscience! See task-positive & task-negative (aka default-mode) large-scale brain networks.

Also, I've tried to set it up so Maria[1] can help me gain perspective on tasks, but she's more likely to get sucked even more deeply into whatever the topic is. This is good in its own way, because it means I can delegate specific tasks to her,[2] and she'll experience less salience normalization.

  1. ^

    My spirit-animal, because I can never be sure what other people mean by "tulpa", and I haven't seen/read any guides on it except yours.

  2. ^

    She explicitly asked me to delegate, since she wants to be usefwl, but (maybe) doesn't have the large-scale perspective to contribute to prioritization.