How do biological or spiking neural networks learn?

post by Dom Polsinelli (dom-polsinelli) · 2025-01-31T16:03:38.425Z · LW · GW · 1 comment

This is a question post.


I have read several papers on this and covered the standard material on STDP, BCM, other forms of Hebbian learning, and recently a paper on how local rules are equivalent to minimizing error between vectors encoded in populations. I have tried to implement these in my own code with some success, but not as much as I would like. My specific questions are:

  1. How does synaptic weight change as a function of local variables (the current synaptic weight, the average and/or time-dependent activity of the pre- and postsynaptic neurons) and perhaps some global reward signal? Some kinds of Hebbian learning have yielded moderate success in my own testing, but the classic exponential STDP rule has not, even when paired with various homeostatic mechanisms to keep the weights from running to extreme values.
  2. How does that learning rule change for different types of neurons? There are excitatory and inhibitory neurons, but from what I have read, neurons can also be integrators or resonators, and can be mono- or bistable (spiking once and returning to rest, or capable of being excited into persistent firing). If these properties are relevant, they should affect learning rules; if they aren't, that is interesting in itself, but you can't take it for granted.
  3. How do these rules work for different encoding schemes? There are at least temporal and rate coding, and probably others or mixtures of several, since biology is messy. I would expect learning rules for temporal and rate coding to be very different. As above, if they turn out to be the same that is fine, but I would like to see direct evidence of it.
  4. Do these learning rules effectively extend across layers in deep networks? I have seen many papers with only two or three layers in total, which is sufficient to learn interesting behavior but is clearly not the same as a real brain. In my own experiments, expanding an architecture that works well with one hidden layer and a given learning rule to one with many hidden layers but the same rule universally decreased performance, which casts doubt on the rule's biological plausibility as well as on my ability to write code.
  5. Is there an agreed-upon standard for which rules are best in terms of biological plausibility and/or training effectiveness? To my knowledge, backpropagation has no real competitor in traditional neural nets, but it seems like every paper I read uses a different learning rule for spiking neural networks.
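As a concrete reference point for question 1, here is a minimal pair-based trace formulation of the classic exponential STDP rule, with hard weight bounds standing in for a crude homeostatic mechanism. All parameter values (`a_plus`, `a_minus`, `tau`) are illustrative placeholders, not fitted to any biological data.

```python
import numpy as np

def stdp_step(w, pre_spike, post_spike, x_pre, x_post,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0,
              w_min=0.0, w_max=1.0):
    """One time step of pair-based exponential STDP for a single synapse.

    Each side keeps an eligibility trace that decays with time constant
    tau and is bumped by its own spikes. A post spike potentiates in
    proportion to the pre trace (pre-before-post); a pre spike depresses
    in proportion to the post trace (post-before-pre).
    """
    # Decay the traces, then add this step's spikes (0 or 1).
    x_pre = x_pre * np.exp(-dt / tau) + pre_spike
    x_post = x_post * np.exp(-dt / tau) + post_spike
    # Potentiation on post spikes, depression on pre spikes.
    dw = a_plus * x_pre * post_spike - a_minus * x_post * pre_spike
    # Hard clipping as a stand-in for a homeostatic bound.
    w = np.clip(w + dw, w_min, w_max)
    return w, x_pre, x_post
```

Running a pre spike followed one step later by a post spike increases the weight; the reversed order decreases it, reproducing the usual asymmetric STDP window.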

Thank you for any information on these. Information on spiking backpropagation is interesting and I would like to hear it, but learning about that is not my primary goal. Any advice on generally preparing for applications to computational neuroscience programs is also welcome. I feel extremely ignorant about the field, but also like I could make some contributions, given that my background is in physics and my current job is in systems neuroscience.

Answers

1 comment


comment by Hzn · 2025-02-01T23:54:56.675Z · LW(p) · GW(p)

For simplicity I'm assuming the activation functions are the step function h(x)=[x>0]…

For ‘backpropagation’, pretend the derivative of this step function is a positive constant A, with A=1 being the most obvious choice.
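Treating the step's derivative as a constant A is essentially a straight-through-style surrogate gradient. A minimal sketch of one such backward pass through a single step-activation layer, with all names and values illustrative rather than from the post:

```python
import numpy as np

def forward(W, x):
    # Pre-activation and step activation h(z) = [z > 0].
    z = W @ x
    h = (z > 0).astype(float)
    return z, h

def backward_surrogate(W, x, z, grad_h, A=1.0):
    """Backward pass pretending h'(z) = A everywhere.

    The true derivative of the step is 0 almost everywhere (and
    undefined at 0), so gradients would vanish; the surrogate
    constant A lets error signals flow to the weights.
    """
    grad_z = grad_h * A            # surrogate in place of h'(z)
    grad_W = np.outer(grad_z, x)   # dL/dW for a linear layer
    return grad_W
```

With the exact derivative the weight gradient would be identically zero; the surrogate constant is what makes multi-layer credit assignment possible at all with hard thresholds.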

I would also try reverse Hebbian learning, i.e. give the model random input & apply the rule in reverse.
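One way to read "apply the rule in reverse" is an anti-Hebbian update driven by random input: weaken whatever the random drive co-activates. The sketch below is just one possible interpretation; the learning rate `eta` and the step activation are assumptions, not anything specified above.

```python
import numpy as np

def anti_hebbian_step(W, x, eta=0.01):
    """Hebbian rule with flipped sign: co-active pre/post pairs
    are weakened instead of strengthened (an unlearning step,
    e.g. under random input)."""
    y = (W @ x > 0).astype(float)    # step activations, as above
    return W - eta * np.outer(y, x)  # flipped-sign Hebbian update
```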

“expanding an architecture that works well with one hidden layer and a given learning rule to an architecture with many hidden layers but the same rule universally decreased performance” -- personally I don't find this surprising.

NB for h only relative weights matter, eg h(5-x+y) = h(0.5-(x-y)/10), so weights going to extreme values effectively decreases the temperature, & L1 & L2 penalties may have odd effects.
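The identity above can be checked numerically: scaling all weights and the bias by the same positive factor leaves a hard-threshold unit's output unchanged. A quick check (the random inputs are arbitrary):

```python
import numpy as np

# Step function h(z) = [z > 0].
h = lambda z: (z > 0).astype(float)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = rng.normal(size=1000)

# h(5 - x + y) and h(0.5 - (x - y)/10) differ only by a positive
# scale factor of 10 on the argument, so the outputs agree everywhere.
a = h(5 - x + y)
b = h(0.5 - (x - y) / 10)
assert np.array_equal(a, b)
```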