Dennett on the selfish neuron, etc.

post by NancyLebovitz · 2013-09-17T17:09:35.414Z · 12 comments

Dennett:

Mike Merzenich sutured a monkey's fingers together so that it didn't need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don't have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what's in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don't have a job? Well, they're out of work. They're unemployed, and if you're unemployed, you're not getting your neuromodulators. If you're not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you're going to be really out of work, and then you're going to die.

I hadn't thought about any of this; I thought the hard problem of brains was that dendrites grow, so that neurons aren't arranged in a static map. Apparently that is just one of the hard problems.

He also discusses the question of how much of culture is parasitic, the idea that philosophy has something valuable to offer about free will (I don't know what he has in mind there), the hard question of how people choose who to trust and why they're so bad at it (he thinks people choose their investment advisers more carefully than they choose their pastors; I suspect he's over-optimistic), and a detailed look at Preachers Who Are Not Believers. That last looks intriguing: part of the situation is that preachers have been taught it's very bad to shake someone else's faith, so there's an added layer of inhibition which keeps preachers doing their usual job even after they're no longer believers themselves.

12 comments


comment by Nornagest · 2013-09-17T19:42:20.556Z

I think you really have to think in terms of individual neurons as micro-agents, and ask what's in it for them?

You don't need goal-directed behavior to explain this; we're probably looking at a load-balancing process of some sort, but that doesn't mean that it's working in a way that's well modeled by agents acting on desires. Analogously, the branches of a red-black tree appear from the outside to even themselves out if one gets much longer than its sibling, a process that looks pretty goal-oriented, but all the meat of the algorithm goes into reflexively satisfying local constraints that have nothing obviously to do with branch length.
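
To make the analogy concrete, here is a minimal sketch (using the left-leaning red-black variant, chosen only because its insertion code is short): every fix-up rule below looks at one node and the colors of its immediate children, and nothing ever measures a branch, yet the tree's height stays logarithmic.

```python
RED, BLACK = True, False

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.color = RED            # new nodes enter red

def is_red(n):
    return n is not None and n.color == RED

def rotate_left(h):
    x = h.right
    h.right = x.left
    x.left = h
    x.color, h.color = h.color, RED
    return x

def rotate_right(h):
    x = h.left
    h.left = x.right
    x.right = h
    x.color, h.color = h.color, RED
    return x

def flip_colors(h):
    h.color, h.left.color, h.right.color = RED, BLACK, BLACK

def insert(h, key):
    if h is None:
        return Node(key)
    if key < h.key:
        h.left = insert(h.left, key)
    elif key > h.key:
        h.right = insert(h.right, key)
    # Local fix-up rules: each one looks only at this node and its children's colors.
    if is_red(h.right) and not is_red(h.left):
        h = rotate_left(h)
    if is_red(h.left) and is_red(h.left.left):
        h = rotate_right(h)
    if is_red(h.left) and is_red(h.right):
        flip_colors(h)
    return h

def put(root, key):
    root = insert(root, key)
    root.color = BLACK              # the root is always black
    return root

def height(n):
    return 0 if n is None else 1 + max(height(n.left), height(n.right))

root = None
for k in range(1023):               # sorted-order insertion: worst case for a plain BST
    root = put(root, k)
print(height(root))                 # at most 2*log2(1024) = 20, not 1023
```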

Truthfully, this reads as a "when all you have is a hammer" sort of thought process to me.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-19T22:30:41.179Z

It depends on how one uses the hammer. If you are a philosopher, there's nothing wrong with trying your hammer on every problem and seeing whether some useful insight comes out of it.

comment by Creutzer · 2013-09-17T17:26:53.731Z

I think you really have to think in terms of individual neurons as micro-agents, and ask what's in it for them?

And how does that sort of anthropomorphization explain anything?

Replies from: mwengler, NancyLebovitz
comment by mwengler · 2013-09-20T14:16:56.490Z

And how does that sort of anthropomorphization explain anything?

The answer to this is a major theme of Dennett's book "Intuition Pumps...". That isn't to say I fully understood the answer from the book, but I did get the impression that a large number of dynamic systems, especially living or computationally driven ones, are more effectively predicted by theories that use what they "want" than by theories that do without such abstractions.

comment by NancyLebovitz · 2013-09-17T18:17:53.699Z

If neurons sometimes aim for individual advantage that doesn't serve the brain/person, rather than cooperating reliably, it's worth understanding.

And if that's an important part of how brains operate, then you'd want to know about it if you're simulating brains.

Replies from: Creutzer, None
comment by Creutzer · 2013-09-17T19:23:59.597Z

I don't see anything in the quoted passage that suggests that individual neurons do something in their own interest to the detriment of the brain/person. But much more importantly, neurons don't aim for anything. They're not, you know, agents.

So this is why I'm objecting: the anthropomorphization is singularly unhelpful in understanding any of this, because whatever mechanism lies behind what's going on, it is very far from goal-directed intentional behavior.

Replies from: chaosmage
comment by chaosmage · 2013-09-18T11:45:04.631Z

But much more importantly, neurons don't aim for anything.

You don't know that.

We don't know how neurons work. There are huge networks of transcription processes going on every time a neuron fires, and much of that is uncharted. We don't know the minimum complexity required for goal-oriented behavior, and it could well be below the complexity of the processes going on in neurons.

Bacteria can distinguish between different nutrients available around them, and eat the more yummy ones first. Is that not goal-oriented behavior? Neurons are way more complicated than bacteria.

My personal speculation is that neurons have a very simple goal: they attempt to predict the input they're going to receive next and correlate it with their own output. If they arrive at a model that, based on their own output, predicts their input fairly reliably, they have achieved a measure of control over their neighboring neurons. Of course this means their neighboring neurons also experience more predictability - many things that know each other become one thing that knows itself. So this group of neurons begins to act in a somewhat coordinated fashion, and it does the only thing neurons know how to do, i.e. attempt to predict its surroundings based on its own output. Escalate this up to the macroscopic level, with neuroanatomical complications along the way, and you have goal-orientation in humans, which is the thing Dennett is really trying to explain.
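
A toy sketch of that speculation, in which everything (the made-up neighbourhood_response, the linear model, the delta rule) is an illustrative assumption rather than a claim about real neurons: a unit whose only drive is to predict its incoming signal from its own output ends up, as a side effect, with a model of how its neighbourhood reacts to it.

```python
import math
import random

random.seed(0)

# Hypothetical environment: how the neighbourhood reacts to this unit's
# output. The unit never sees this function, only the inputs that arrive.
def neighbourhood_response(output):
    return 0.8 * output + 0.3 + random.gauss(0, 0.05)

w, b = 0.0, 0.0   # the unit's internal model: expected_input = w * output + b
lr = 0.1          # learning rate for the delta rule

for t in range(500):
    output = math.sin(t / 7.0)               # whatever the unit happens to emit
    expected = w * output + b                # its prediction of the input to come
    actual = neighbourhood_response(output)  # what actually arrives
    error = actual - expected
    w += lr * error * output                 # delta rule: shrink the prediction error
    b += lr * error

print(round(w, 2), round(b, 2))  # close to the true 0.8 and 0.3: the unit has "learned" its neighbourhood
```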

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-09-18T12:18:55.731Z

I think choosing nutrients is plausible. I'm much more dubious about neurons trying to predict, but I might be underestimating the computational abilities of cells.

Replies from: chaosmage
comment by chaosmage · 2013-09-18T12:33:37.545Z

Okay. Now according to you, is choosing nutrients goal-oriented behavior?

comment by [deleted] · 2013-09-17T18:49:01.528Z

But why would we even suspect that? We expect that genes encode traits which increase inclusive genetic fitness because there's a known mechanism which eliminates genes that don't. So we have a situation in which we know that something (the occurrence of a gene) has distant effects without necessarily knowing the intervening causal chain. This is analogous to knowing somebody's goals without understanding their plan, so a careful application of anthropomorphism might help us understand.

I don't know if there's a mechanism that would make us expect neurons to behave in ways that give them lots of neuromodulators or whatever, even if it makes the entire system less effective.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-09-18T05:39:05.164Z

But why would we even suspect that?

Well, the following is highly speculative, but if many neurons die at an early age, and there is variation among individual neurons, then the neurons that survive to adulthood will be selected for actions that reinforce their own survival. That said, I'd be surprised if there's enough variation in neurons for anything like this to happen.

Replies from: timtyler
comment by timtyler · 2013-09-19T06:55:00.633Z

if many neurons die at an early age

Well known fact.

[...] and there is variation among individual neurons

Well known fact.

That said, I'd be surprised if there's enough variation in neurons for anything like this to happen.

Neurons do vary considerably. Natural selection over an individual's lifetime is one reason to expect neurons to act against the interests of the individual they belong to. However, selfish-neuron models are of limited use, because relatively little selection acts on neurons over an individual's lifespan. Probably the main thing they explain is some types of brain cancer.

Selfish memes and selfish synapses seem like more important cases of agent-based modeling doing useful work in the brain. Selfish memes actually do explain why brains sometimes come to oppose the interests of the genes that constructed them. In both cases there's a lot more selection going on than happens with neurons.