Neuroscience things that confuse me right now

post by Steven Byrnes (steve2152) · 2021-07-26T21:01:19.109Z · LW · GW

Contents

  1. Layout of autonomic reactions / “assessments” in amygdala, mPFC, ACC, etc.
    1.1 What’s with S.M.?
    1.2 The lesions that cause pain asymbolia are in the wrong place
  2. A few things about the superior colliculus
    2.1 Connections from neocortex sensory processing to superior colliculus
    2.2 Connections from superior colliculus to neocortex sensory processing
    2.3 Learning in the superior colliculus
  3. Why are there (a few) dopamine receptors in primary visual cortex?
  4. Learning-from-scratch-ism in motor cortex
  5. Every brainstem-to-telencephalon neuromodulator signal besides dopamine and acetylcholine: what do they do?

(Quick poorly-written update, probably only of interest to neuroscientists.)

(Update 5 months later: For most of these things, I feel like I've graduated from "I'm confused" to "I think I understand this, but my ideas might be wrong, especially since I haven't written them up & solicited feedback". So I'm still happy for any thoughts!)

It’s not that I’m totally stumped on these; mostly these are things I haven’t looked into much yet. Still, I’d be very happy and grateful for any pointers and ideas.

Most of these are motivated by one or more of my longer-term neuroscience research interests: (1) “What is the “API” of the telencephalon learning algorithm?” [LW · GW] (relevant to AGI safety because maybe we’ll build similar learning algorithms and we’ll want to understand all our options for “steering” them towards trying to do the things we want them to try to do), (2) “How do social instincts work, in sufficient detail that we could write the code to implement them ourselves?” (relevant to AGI safety for a couple reasons discussed here [LW · GW]), (3) I’d also like to eventually have good answers to the “meta-problem of consciousness” and “meta-problem of suffering” [EA · GW], but maybe that’s getting ahead of myself.

1. Layout of autonomic reactions / “assessments” in amygdala, mPFC, ACC, etc.

In Big Picture of Phasic Dopamine [LW · GW], I talked about these areas under “Dopamine category #3: supervised learning”. In the shorter follow-up A Model of Decision-making in the Brain [LW · GW] I talked about them as “Step 2”, the step where possible plans are "assessed" in dozens of genetically-hardcoded categories like "If I do this plan, would it be a good idea to raise my cortisol levels?".
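
To make that concrete, here is a toy sketch of how I picture "Step 2" (my own illustrative framing, not code from the linked posts; the channel names and the linear scoring rule are made up):

```python
# Toy sketch of "Step 2" plan assessment (illustrative only; channel names and the
# linear scorer are made up). Each candidate plan is scored along a fixed,
# genetically-specified list of assessment channels, and each score would drive the
# corresponding autonomic / visceral output.
ASSESSMENT_CHANNELS = ["raise_cortisol", "raise_heart_rate", "salivate", "cringe"]  # hypothetical

def assess_plan(plan_features: dict[str, float],
                learned_weights: dict[str, dict[str, float]]) -> dict[str, float]:
    """Score one candidate plan on every hardcoded channel, using learned weights
    (the weights being what the supervised-learning signal would train)."""
    scores = {}
    for channel in ASSESSMENT_CHANNELS:
        w = learned_weights.get(channel, {})
        scores[channel] = sum(w.get(f, 0.0) * v for f, v in plan_features.items())
    return scores
```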

Anyway, at a zoomed-out level, I think I have a good story that explains a lot. At a zoomed-in level, I'm pretty unclear on exactly what's happening where.

What I have so far is:

1.1 What’s with S.M.?

As I mentioned here [LW · GW], S.M., a person supposedly missing her whole amygdala and nothing else, seems to have more-or-less lost the ability to have (and to understand in others) negative emotions, but not positive emotions. This seems to suggest that the amygdala triggers negatively-valenced autonomic outputs, and not positively-valenced ones. But my impression from other lines of evidence is that the amygdala can do both. So I'm confused by that.

1.2 The lesions that cause pain asymbolia are in the wrong place

I was reading a book about pain asymbolia (the ability to be intellectually aware of pain inputs, without caring about them or reacting to them). On a quick skim, I got the impression that this condition is caused by lesions of the insular cortex. Unless I’m confusing myself, that’s backwards from what I would have expected: I would have thought a lesion of the insular cortex should make a person intellectually unaware of the pain input (since the “primary interoceptive cortex” in the insula should presumably be what feeds that information into higher-level awareness / GNW [AF · GW]), but still motivated by and reacting to the pain input (since that comes from these assessment areas, like maybe ACC, working in loops with the hypothalamus / brainstem that ultimately feeds into dopamine-based motivation signals).

I can kinda come up with a story that hangs together, but it has a lot of implausible-to-me elements.

…But then I saw a later paper arguing that the early studies don't replicate, and that maybe pain asymbolia is not caused by insula lesions after all. But their evidence isn't that great either. Also, their proposed alternative lesion sites wouldn't make things any easier for me to explain.

Well anyway, I guess I’m hoping that things will clear up when I read more about the insula, survey the literature better, etc. But for now I'm confused.

2. A few things about the superior colliculus

We have two sensory-processing systems, one in the cortex and one in the brainstem. I have a nice little story about how they relate:

I think the brainstem one needs to take incoming sensory data and use it to answer a finite list of genetically-hardcoded questions like “Is there something here that looks like a spider? Is there something here that sounds like a human voice? Am I at imminent risk of falling from a great height? Etc. etc.” And it needs to do that from the moment of birth, using I guess something like hardcoded image classifiers etc.

By contrast, the cortex one is a learning algorithm. It needs to take incoming sensory data and put it into an open-ended predictive model. Whatever patterns are in the data, it needs to memorize them, and then go look for patterns in the patterns in the patterns, etc. Like any freshly-initialized learning algorithm, this system is completely useless at birth, but gets more and more useful as it accumulates learned knowledge, and it’s critical for taking intelligent actions in novel environments.
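
As a way of keeping the contrast straight, here's a toy sketch of the two systems (purely illustrative; the detector templates and the delta-rule predictor are stand-ins, not claims about the actual circuits):

```python
# Toy contrast between the two systems (purely illustrative; detector names and the
# delta-rule predictor are stand-ins, not claims about the actual circuits).
import numpy as np

class BrainstemDetectors:
    """Hardcoded and works from birth: answers a finite list of genetically-specified questions."""
    def __init__(self, templates: dict[str, np.ndarray]):
        self.templates = templates  # e.g. {"spider_like_shape": ..., "looming_object": ...}

    def answer(self, sensory_input: np.ndarray) -> dict[str, bool]:
        return {name: float(t @ sensory_input) > 1.0 for name, t in self.templates.items()}

class CortexPredictor:
    """Learning-from-scratch: useless at birth, gets better by predicting its own inputs."""
    def __init__(self, dim: int):
        self.W = np.zeros((dim, dim))  # no innate knowledge

    def step(self, prev_input: np.ndarray, new_input: np.ndarray, lr: float = 0.01) -> float:
        prediction = self.W @ prev_input
        error = new_input - prediction
        self.W += lr * np.outer(error, prev_input)  # simple delta-rule update on prediction error
        return float(np.mean(error ** 2))           # how surprised this step was
```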

Well anyway, that’s a neat story, but there are other things going on with the superior colliculus too, and I’m hazy on the details of what they are and why.

2.1 Connections from neocortex sensory processing to superior colliculus

Let’s just talk about the case of vision, although I believe there are analogs for auditory cortex, somatosensory cortex, etc.

As far as I understand, there are connections from primary visual cortex (V1) to the superior colliculus (SC), arranged topographically—i.e. the parts that analyze the same part of the visual field are wired together.

One theory is that these connections are cortical motor control (the superior colliculus is involved in moving the eyes / saccades, in addition to sensory processing). I heard the "motor control" theory from Jeff Hawkins (he didn't really defend it in the thing I read; he just asserted it). I think Hawkins likes that theory because it fits in neatly with “cortical uniformity”—every part of the cortex is a sensorimotor processing system, he says. A new paper from S. Murray Sherman and W. Martin Usrey also says that these connections are motor commands. I don't know who else thinks that; those are the only two places I've seen it.

I generally don’t like the “motor control” theory. For one thing, my understanding is that V1 is not set up with the cortico-basal ganglia-thalamo-cortical loops [LW · GW] that the brain uses for RL, and I normally think you need RL to learn motor control. For another thing, aren’t the frontal eye fields in charge of saccades?? (At least, in charge at the cortical level.) For yet another thing, it seems to me that “V1 cortical column #832” is not in a good position to know whether saccading to the corresponding part of the visual field is a good or bad idea. The decision of where and when to saccade needs to incorporate things like “what am I trying to do”, “what’s going on in general”, “what has high value-of-information”, etc.—information that I don’t think a particular V1 column would have.

The closest thing to the motor control theory that kinda makes sense to me is a “Confusing things are happening here” message. More specifically, each V1 column ought to “know” whether higher-level models keep issuing confident predictions about what’s gonna happen at that part of the visual field, only to have those predictions falsified over and over. When that happens, it could send a “Confusing things are happening here” message to SC.

Those messages would not be exactly a motor command per se, but the SC could reasonably act on the information by saccading to the confusing area. So then the messages wind up being more-or-less a motor command in effect.
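
Here's a toy version of what I have in mind (illustrative only; the running-tally mechanism and the threshold are made up):

```python
# Toy version of the "confusing things are happening here" idea (illustrative only;
# the running tally and threshold are made up). Each V1 column tracks how often
# confident top-down predictions about its patch of the visual field get falsified,
# and signals SC when that tally gets high; SC could then treat it as a saccade target.
class V1Column:
    def __init__(self, retinotopic_location, threshold: float = 5.0, decay: float = 0.9):
        self.location = retinotopic_location
        self.surprise = 0.0        # running tally of confident-but-wrong predictions
        self.threshold = threshold
        self.decay = decay

    def update(self, prediction_confidence: float, prediction_was_wrong: bool) -> bool:
        """Return True when a 'confusing things here' message should go to SC."""
        self.surprise *= self.decay
        if prediction_was_wrong:
            self.surprise += prediction_confidence  # confident misses count for more
        return self.surprise > self.threshold
```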

That's not bad, but I'm still not entirely happy about this theory. For one thing, it seems not to match which neocortical layer these messages are coming out of. Also, I think that "the saccade target that best resolves a confusion" is not necessarily "the saccade target where incorrect predictions keep happening", and my introspection tentatively says that I would tend to saccade to the former, not the latter, when they disagree.

So here's one more theory I was thinking about. There’s a thing where, if there’s a sudden flashing light, we immediately saccade to it, and maybe do other orienting reactions like moving our head and body (and maybe also releasing cortisol etc.). My impression is that it’s SC that decides this reaction is appropriate and orchestrates it.

But if we expect the flashing light, we’ll be less likely to orient to it.

So maybe the V1 → SC axons are saying: “Hey SC, there’s about to be motion in this particular part of the visual field. So if you see something there, it’s fine, chill out, we don’t have to orient to it.”
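
In toy form (again purely illustrative), that theory amounts to something like:

```python
# Toy form of the "expected motion, chill out" theory (purely illustrative): SC
# orients to a sudden visual transient only if cortex didn't flag that patch of the
# visual field as "motion expected here".
def sc_should_orient(transient_detected: bool, cortex_expected_motion_here: bool) -> bool:
    """Decide whether SC triggers the saccade / head-turn / cortisol-etc. orienting reaction."""
    return transient_detected and not cortex_expected_motion_here
```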

I don’t know which of those ideas (or something else entirely) is the real explanation, and haven’t looked into it too much.

2.2 Connections from superior colliculus to neocortex sensory processing

I think these exist too. Why?

I guess I always have my go-to cop-out answer of "They provide 'context' that the neocortical learning algorithm can exploit to make better predictions". But maybe there's something else going on.

2.3 Learning in the superior colliculus

Contrary to my neat theorizing, there does seem to be some learning that happens in SC. I mean, I guess there kinda has to be, insofar as SC has some role in orchestrating motor commands, and the body keeps changing as it grows. I’m just generally hazy on what is being learned and where the ground truth comes from. I'll return to this in a later section below.

3. Why are there (a few) dopamine receptors in primary visual cortex?

Dopamine receptors are stereotypically used for RL, although I happen to think they're used for supervised-learning too [LW · GW]. But (see here [LW · GW]), V1 doesn't seem to me to have use for either of those things. Predictive learning (augmented by top-down attention and variable learning rates) seems like the right tool for the visual-processing job, and I don’t see what could be missing.

Yet there are in fact dopamine receptors in V1, apparently. Very few of them! But some! That makes it even weirder, right?

This paper found that mice with no D2 receptors (anywhere, not just V1) had close-to-normal vision. The differences were small, and I presume indirect; in fact the D2-knockout mice had slightly sharper vision!

…So anyway, I'm at a loss, this doesn’t make any sense to me. I’m tempted to just shrug and say “there’s some process tangentially related to vision processing, and the circuits doing that thing happen to be intermingled with the normal V1 visual-processing circuits, and that’s what the dopamine is there for.” I’m not happy about this. :-/

As with everything else here, I haven't looked into it much.

4. Learning-from-scratch-ism in motor cortex

(For definition of “learning-from-scratch-ism” see here [LW · GW].)

I’ll start by saying that I really like Michael Graziano’s grand unified theory of motor cortex. He argues (e.g. here and in his book, and see here for someone arguing against) that the textbook division of motor-related cortex into “primary motor cortex”, “premotor cortex”, “supplementary motor area”, “frontal eye field”, “supplementary eye field”, etc. etc., is all kinda arbitrary and wrongheaded. Instead, all those areas are basically doing the same kind of thing in the same way, namely orchestrating different species-typical actions. If you think about mapping a discontinuous multi-dimensional space of species-typical actions onto a 2D piece of cortical sheet, you’re gonna get some sharp boundaries, and that’s where those textbook divisions come from.
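
The "sharp boundaries from squeezing a high-dimensional repertoire onto a 2D sheet" point can be illustrated with a toy self-organizing map (my own illustration, assuming numpy; Graziano's actual modeling work is more sophisticated than this):

```python
# Toy illustration (mine, not Graziano's actual model) of how mapping a
# high-dimensional repertoire of species-typical actions onto a 2D sheet creates
# sharp, arbitrary-looking boundaries: a minimal Kohonen-style self-organizing map.
import numpy as np

rng = np.random.default_rng(0)
actions = rng.normal(size=(500, 10))    # 500 species-typical actions in a 10-D abstract "action space"
sheet = rng.normal(size=(20, 20, 10))   # 20x20 "cortical sheet", each node has a preferred action

for t in range(20000):
    a = actions[rng.integers(len(actions))]
    dists = np.linalg.norm(sheet - a, axis=-1)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching node
    sigma = 3.0 * np.exp(-t / 5000)                           # shrinking neighborhood width
    ii, jj = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
    neighborhood = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    sheet += 0.1 * neighborhood[..., None] * (a - sheet)      # pull neighborhood toward the action

# Result: nearby nodes end up preferring similar actions, but with discrete "fault
# lines" wherever the 10-D repertoire can't be laid out smoothly in 2-D.
```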

Anyway, all that is kinda neat, but the part I’m confused about is how the motor cortex learns to do this. Like, what are the training signals, and how are those signals calculated?

One hint is that the midbrain can apparently also perform species-typical actions. I’m very unclear on the difference between the midbrain orchestrating a species-typical action and the cortex orchestrating (nominally) the same action. I doubt they’re redundant; that would be a big waste of space, compared to having a much smaller area of cortex that merely “presses go” on the midbrain motor programs. Or does motor cortex do a better job somehow? How do these two regions talk to each other? Does the cortex teach the midbrain? Does the midbrain teach the cortex? Does the midbrain “initialize” the cortex, and then the cortex improves itself by RL? Does the midbrain motor system learn, and if so, how does it get ground truth?

I don’t know, and I haven’t really looked into it, I’m just currently confused about what’s going on here.

And certainly I can’t feel good about advocating the truth of “learning-from-scratch-ism” if I’m not confident that the theory is compatible with everything we know about motor cortex.

5. Every brainstem-to-telencephalon neuromodulator signal besides dopamine and acetylcholine: what do they do?

I feel generally quite happy about my big-picture understanding of dopamine (see here [LW · GW]) and acetylcholine (see here [LW · GW]), even if I have a few confusions around the edges. But I haven’t gotten a chance to look at serotonin, norepinephrine, and so on, or at least not much. I’ve tried a little bit and nothing I read made any sense to me at all. So I remain confused.

5 comments


comment by MadHatter · 2021-07-29T21:49:54.756Z · LW(p) · GW(p)

Various thoughts:

  • It would make a lot of sense to me if norepinephrine acted as a Q-like signal for negative rewards. I don't have any neuroscience evidence for this, but it makes sense to me that negative rewards and positive rewards are very different for animals and would benefit from different approaches. I once ran some Q-learning experiments on the classic Taxi environment to see if I could make a satisficing agent (one that achieves a certain reward, less than the maximum achievable, and then rests). The agent responded by taking illegal actions that give highly negative rewards in the Taxi environment and hustling as hard as possible the rest of the time to achieve the specified reward. So I had to add a Q-function solely for negative rewards to get the desired behavior (see the sketch after this list). Given that actual animals need to rest in a way that RL agents don't have to in most environments, it makes sense to me that Q-learning on its own is not a good brain architecture.
  • Dopamine receptors in V1 kind of makes sense if you want to visually predict reward-like properties of objects in the environment. Like something could look tasty or not tasty, maybe.
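
For concreteness, here is a minimal sketch of the kind of dual-Q setup described in the first bullet (my reconstruction, not MadHatter's actual code; it assumes the gymnasium package's Taxi-v3 environment, and the particular satisficing rule and threshold are guesses):

```python
# Minimal sketch of the dual-Q satisficing idea (my reconstruction, not MadHatter's
# code). Q_pos learns from positive reward components, Q_neg from negative ones;
# once the episode's reward target is met, the agent only tries to avoid punishment.
# Assumes the gymnasium package and its Taxi-v3 environment.
import numpy as np
import gymnasium as gym

env = gym.make("Taxi-v3")
n_s, n_a = env.observation_space.n, env.action_space.n
Q_pos = np.zeros((n_s, n_a))  # expected discounted future positive reward
Q_neg = np.zeros((n_s, n_a))  # expected discounted future negative reward (values <= 0)
alpha, gamma, eps, target = 0.1, 0.99, 0.1, 5.0  # `target` = satisficing threshold (a guess)

for episode in range(5000):
    s, _ = env.reset()
    done, total = False, 0.0
    while not done:
        if np.random.rand() < eps:
            a = env.action_space.sample()            # explore
        elif total >= target:
            a = int(np.argmax(Q_neg[s]))             # satisficed: just avoid expected punishment
        else:
            a = int(np.argmax(Q_pos[s] + Q_neg[s]))  # still hustling: gains plus expected punishment
        s2, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        total += r
        Q_pos[s, a] += alpha * (max(r, 0.0) + gamma * Q_pos[s2].max() - Q_pos[s, a])
        Q_neg[s, a] += alpha * (min(r, 0.0) + gamma * Q_neg[s2].max() - Q_neg[s, a])
        s = s2
```
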
comment by Steven Byrnes (steve2152) · 2021-07-30T15:05:13.290Z · LW(p) · GW(p)

That's an interesting anecdote about the satisficing thing! I don't think it quite applies to animals, because I don't think animals are maximizing the sum of future rewards (see here [LW · GW]). Anyway, the system is already set up with separate channels throughout for good things happening vs. bad things happening (there's a thing I haven't written up but believe, where the striatum sends out cost and benefit estimates separately rather than just adding them up; also, in the "assessor" zone here [LW · GW] there are different channels, because good vs. bad things have different autonomic consequences, e.g. sympathetic vs. parasympathetic). Also, this says norepinephrine is slow-acting, which suggests that it doesn't implement a learning rule tied to particular thoughts, actions, and events.

But the article says it does affect learning rate, and arousal and whatnot. So maybe something like: NE and acetylcholine both signal "important things are happening now, let's increase learning rate, tune the dial towards fast reactions at the expense of energy efficiency, etc. etc.", but acetylcholine is fast and local ("important things are happening at this particular part of the visual field right now") and NE is slow and global ("I am in a generally important situation")? Dunno, just speculating based on one abstract.
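
If I had to turn that speculation into a toy formula (purely my own guess at formalizing the above, not established neuroscience), it'd be something like:

```python
# Purely speculative formalization of the above (not established neuroscience):
# acetylcholine as a fast, spatially-local learning-rate gain; norepinephrine as a
# slow, global one. Both just scale the base learning rate multiplicatively.
import numpy as np

def effective_learning_rate(base_lr: float,
                            ach_local_gain: np.ndarray,  # per-location gain, changes fast
                            ne_global_gain: float        # one scalar, drifts slowly with "arousal"
                            ) -> np.ndarray:
    """Per-location learning rate after neuromodulation, in this toy picture."""
    return base_lr * ach_local_gain * ne_global_gain
```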

comment by edoarad · 2021-08-02T19:24:28.787Z · LW(p) · GW(p)

Maybe the V1 dopamine receptors are simply useless evolutionary leftovers (perhaps that's easier from a developmental perspective).

comment by Steven Byrnes (steve2152) · 2021-08-03T17:34:05.188Z · LW(p) · GW(p)

LOL! The ultimate cop-out answer!!

Not that it's necessarily wrong. But I would be very surprised if that were the correct answer.

My vague impression is that there's an awful lot of genetic micro-management of cell types and receptors and so on for different areas of cortex. So "not expressing a receptor in a cortical area where it's unused" is (I assume) very easy evolutionarily, and these dopamine receptors are present in lots of mammal species, I think.

Also, "I'm confused about this" has a pretty high prior for me. I don't feel obligated to go looking very hard for ways for that not to be true. :-P

But thanks for the comment :)

comment by Mer -F (mer-f) · 2021-07-27T01:46:45.297Z · LW(p) · GW(p)

LW paradigm right here. Interesting too.