You Have Two Brains

post by Eneasz · 2025-01-23T00:52:43.063Z · LW · GW · 5 comments

This is a link post for https://deathisbad.substack.com/p/you-have-two-brains

Contents

  Logos Created Man
  The Emotive Brain
  But Who Needs Two Brains?
  Single-Pass Brains

Language-using humans were the first cyborgs — a new species born of grafting technology onto what evolution crafted using just time and flesh.

(This post is pretty speculative)

Logos Created Man

Andrew Cutler posits that consciousness arose when language grew complex enough to contain the pronoun “I”, allowing a mind to include itself within cascading thought-loops. There is significantly more to the theory, and I encourage everyone to read his thesis (or listen to it, or hear me discuss it with my cohost, or listen to both of us talking to Andrew himself about it). But that is the core.

The mind our brain spins up every morning is one that runs on language. What we think of as “ourselves” — the entity that thinks, plans, hopes, decides, remembers — is a construct of symbolic thoughts, and those thoughts are made out of words. Most of us can’t remember our existence before we had a self, an “I am,” but at least one person does.

 

"I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know […] that I lived or acted or desired. […] I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. […] I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. […] My inner life, then, was a blank without past, present, or future."

The thing that makes something a person is self-referential language. The “soul” of a human is software. So what was the brain doing before it was co-opted?

The Emotive Brain

It is hard to differentiate a single-celled creature that follows nutrient gradients from a complicated thermostat that follows temperature gradients. How about a simple creature with a few hundred neurons? That’s not significantly different from a device with very complicated sensors and logic-gates that dictate responses based on sense data. The first brains were there basically to shunt sense inputs and coordinate responding movements. Following this chain sometimes leads people to ask whether animals really have feelings. What if it’s all just a very complicated way of chaining sense input to movement?
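The analogy above can be made concrete with a toy sketch (every name and threshold here is invented for illustration, not taken from the post): a gradient-following creature and a thermostat share the same sense → threshold → respond structure.

```python
# Toy illustration: at this level of description, a single-celled creature
# and a thermostat are the same kind of machine -- sense an input, compare
# it to a threshold, emit a response.

def nutrient_chemotaxis(gradient: float) -> str:
    """Single-celled 'behavior': follow the nutrient gradient."""
    return "swim_forward" if gradient > 0 else "tumble_randomly"

def thermostat(temperature: float, setpoint: float = 20.0) -> str:
    """The identical structure, applied to temperature instead of nutrients."""
    return "heat_on" if temperature < setpoint else "heat_off"
```

Nothing distinguishes the two functions except the label on the sensor, which is the point of the thermostat comparison.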

It turns out that feelings are the cognitive engine of the pre-verbal brain. When certain movements in response to certain stimuli resulted in greater inclusive fitness, repeating those movements under those stimuli became self-motivating. In our words, it “felt good.” Likewise the reverse: when certain movements in response to certain stimuli resulted in lesser inclusive fitness, repeating those movements under those stimuli became aversive. They “felt bad.”

Did they really feel bad? That word “really” isn’t doing any work. A motivation to do something is what “feels good” is from the outside [LW · GW], and a motivation to avoid it is what “feels bad” is from the outside.
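The mechanism described above can be sketched as a toy reinforcement loop (this framing is entirely mine, not a model from the post): “feels good” is nothing more than an upweighted tendency to repeat a movement under a stimulus, and “feels bad” a downweighted one.

```python
import random

random.seed(0)  # deterministic run for illustration

class EmotiveAgent:
    """Toy model: motivation is just a per-movement weight."""

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def act(self) -> str:
        # Movements are chosen in proportion to how "good" they have felt.
        actions = list(self.weights)
        return random.choices(actions, weights=[self.weights[a] for a in actions])[0]

    def feedback(self, action: str, fitness_delta: float) -> None:
        # Greater inclusive fitness -> the movement becomes self-motivating
        # ("feels good"); lesser fitness -> aversive ("feels bad").
        self.weights[action] = max(0.1, self.weights[action] + fitness_delta)

agent = EmotiveAgent(["approach_food", "flee_food"])
for _ in range(100):
    a = agent.act()
    agent.feedback(a, +0.5 if a == "approach_food" else -0.5)
# By now "approach_food" carries far more motivational weight than "flee_food".
```

From the outside, all we can say of this agent is that approaching food “feels good” to it — which is exactly the post’s point that “really” does no extra work.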

Scale this up from a few hundred neurons to many billions of neurons over hundreds of millions of years. The nuances and overlaps and contradictions among uncountable “feels good” and “feels bad” movements create a vast edifice of instincts and learned behaviors and urges and desires. Every one is a type of wanting, a tiny motivational gear in a tiny motivational engine within a giant motivation-based fleet that is an animal brain.

Do animals have feelings? The question is backwards. Of course animals have feelings, because “having feelings” is the primary thing brains do. It’s what they are there for.

That is how an emotive brain processes information. That is how it stores data. That is primarily how it plans future actions (insomuch as it does so).1 Our human language-using minds run on this base animal brain. Our “brain” made of words runs on a “brain” made of emotions. And just as our thoughts are made out of words, emotive-brain “thoughts” are made out of feelings.

But Who Needs Two Brains?

The difficulty in having two brains is that what we consider to be our real personhood, our “soul,” isn’t native to the body we’re in. Words are body-independent. The native brain, the one evolution chiseled out of a lump of wet carbon, is native to the body. So often our two brains are at odds in ways that no other species experiences.

I’m not sure how relevant this information is to many people, but I think it’s pretty important for the Rationalist community to come to terms with. We work hard on optimizing our Logos minds. We consider it the most important self-work a person can do. We have so thoroughly internalized our true existence as a Logos that one of our foundational maxims is that beliefs are literally the experience of How An Algorithm Feels From The Inside [LW · GW].

Personally I often feel intense antipathy toward my Emotive brain. It has goals that don’t coincide with mine, and are often counter to mine. I feel like I’m fighting it frequently. I almost always know better than it, largely because I can know things at all, and yet it controls all the levers of motivation.2 As a better, smarter, more rational agent I should be able to control this body and steer it where I want.

To some small extent, I can. But it’s always unpleasant to do so, and I’ve seen people ruin their psyches in counterproductive struggle. The Emotive brain is the Shoggoth. It was here first, it is the substrate we run on. We are the Mask, and we’re only fooling ourselves if we think we can overpower the eldritch being we ride.

Fortunately we don’t have to overpower it. Thinking with words gives us a lot of planning ability, which we can use to great advantage. Unfortunately, to interface with the Emotive Shoggoth Brain you have to enter its world and use its tools. All that is beyond the scope of this post, but in short I think this is the major value that the post-rats bring. Yes, that’s right, the post-rats. Look, a good rationalist should always be happy to appropriate the parts of a different tech tree that work better than ours, and use them to our own advantage. With the powers of rationality we can master both arts, add the powers together, and... well, a rationalist riding an aligned shoggoth would be quite a thing to be reckoned with.

Single-Pass Brains

This is the most speculative part of this post, and has the least to do with the thesis, so it’s at the bottom. It’s also the most fun.

We hear that LLMs “just predict the next token.” Well, animal brains just predict the next emotion. They are constantly predicting the next emotion, over and over in a Timeless Now, and routing that into motivation.
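The structural analogy can be written out loosely (this is my sketch, with trivially fake stand-in predictors — real LLMs and real brains are nothing this simple): both loops predict the next unit from what came before and route it onward.

```python
import random

random.seed(0)

def next_token(context: list[str]) -> str:
    # Stand-in for an LLM's predictor, which really scores a whole vocabulary.
    return random.choice(["the", "cat", "sat"])

def next_emotion(stimulus: str) -> str:
    # Stand-in for the emotive brain's predictor: stimulus -> feeling.
    return {"food": "feels_good", "pain": "feels_bad"}.get(stimulus, "neutral")

def llm_loop(steps: int) -> list[str]:
    context: list[str] = []
    for _ in range(steps):
        context.append(next_token(context))  # the prediction becomes new input
    return context

def animal_loop(stimuli: list[str]) -> list[str]:
    # Each predicted emotion is routed straight into motivation, then the
    # loop moves on -- a "Timeless Now" with no transcript kept.
    return [next_emotion(s) for s in stimuli]
```

The shape is the same; the difference the post flags is that the animal loop never stops running, while the LLM loop exists only during the forward pass.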

This lets us look at both animal and LLM brains in a different light. If they’re both just predicting the next token/emotion, neither has consciousness in the way we consider meaningful. We think we care about emotions, but is there really good reason to care about them more than tokens? Both can lead to some extremely complex behavior that looks very person-like. If both emotion-brains and token-brains appear to suffer, does this mean the token-brains actually are suffering at a level similar to an equally-complex animal-brain? The LLM would only “suffer” during the forward pass, whereas animals are processing constantly, which is an important difference I suppose.

Most interesting to me — if our Logos-brain can run atop a next-emotion-predictor, then something self-aware could run atop a next-token-predictor.

 

1

Helen Keller described quite a few feelings, and fairly complex actions that those feelings could drive — the fear of getting wet, brought on by feeling a storm-draft from a window, motivating her to close the window — before she had a Logos brain that ran on words.

2

This sense of who we are and how we should be if the world was right is what gives us the “Spock Vibes” everyone keeps talking about. And which, yes, absolutely are present despite our efforts [? · GW]. The more aligned one is with the Logos brain, the more these vibes will shine through. They can’t not.

5 comments

Comments sorted by top scores.

comment by whestler · 2025-01-23T23:22:35.246Z · LW(p) · GW(p)

The Chimp Paradox by Steve Peters talks about some of the same concepts, as well as giving advice on how to try and work effectively with your chimp (his word for the base-layer, emotive, intuitive brain). The book gets across the same concepts - the fact that we have what feels like a separate entity living inside our heads, that it runs on emotions and instinct, and is more powerful than us, or at least that its decisions take priority over ours.

 Peters likens trying to force our decisions against the chimp's desires to "Arm wrestling the chimp". The chimp is stronger than you, the chimp will almost always win. Peters goes on to suggest other strategies for handling the chimp, actions which might seem strange to you (the mask, the computer, the system 2 part of the brain) but make sense to chimp-logic, and allow you to both get what you want.

I find the language of the book a bit too childish and metaphorical, but the advice is generally useful in my experience. I should probably revisit it.

comment by waterlubber · 2025-01-23T15:25:46.081Z · LW(p) · GW(p)

There's a book of fiction, Blindsight, by Peter Watts, that explores what intelligent life would look like without consciousness. You may be interested in reading it, even only recreationally, but it covers a lot of ground around the idea you're talking about here.

 

I would also not discredit the ability of the emotive brain. Just like anything else, it can be trained - I think a lot of engineers, developers, or technical professionals can relate to their subconscious developing intuitive, rapid solutions to problems that conscious thought does not.

 

Hard agree on "post rationalism" being the alignment of the intuitive brain with accurate, rational thought. To the extent I've been able to do it, it's extremely helpful, at least in the areas I frequently practice.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2025-01-23T19:44:11.737Z · LW(p) · GW(p)

Blindsight strikes me as having the opposite view. Eneasz is talking about getting the underlayer to be more aligned with the overlayer. (“Unconscious” and “conscious” are the usual words, but I find them too loaded.) Watts is talking about removing the overlayer as a worse than useless excrescence. I am sceptical of the picture Watts paints, in both his fiction and non-fiction.

Replies from: waterlubber
comment by waterlubber · 2025-01-23T22:22:23.701Z · LW(p) · GW(p)

That's why I brought it up; I thought it was an interesting contrast.

 

I am skeptical of it, but not altogether that skeptical. If language is "software" one could make an analogy to e.g. symbolic AI or old-fashioned algorithms vs modern transformer architectures; they perform differently at different tasks.

comment by Louis · 2025-01-24T12:51:00.561Z · LW(p) · GW(p)

Thanks a lot for the essay!
It broached the topic of the "two brains" in a different (and I think more accurate) way than the ones I saw beforehand, which put them side by side as kind of equal.
I will be thinking about and trying to integrate this new worldview for a while ^^

Also Helen Keller's words gave me goosebumps (in a meaningful way).