Synthetic Neuroscience
post by hpcfung · 2025-03-25T17:45:05.916Z · LW · GW · 3 comments
Abstract: Neuroscience as engineering -- understanding how the brain works by building simplified, effective models of it, while requiring the model to behave as a real animal would.
Motivation
Suppose we want to build an AGI. The only working human-level general intelligence we know of is the human brain. So if the goal is to guarantee the creation of a successful AGI, then we should study the human brain.
For ethical and practical reasons we do not want to copy the brain exactly. Rather, the goal is to understand the brain well enough that we can construct an intelligence with analogous capabilities. This differs from computational neuroscience, where the goal of simulations is to better understand biological systems, especially to answer isolated, narrowly scoped research questions; and from neuromorphic computing, where performance supersedes biological accuracy.
A full brain simulation down to the molecular level would guarantee success, but it is impossibly expensive computationally. And even then the brain would remain a black box to us, so this is undesirable. Instead we look for effective models that at least exhibit the same qualitative behavior as the brain. For example, we may limit ourselves to modelling at the spike-train level and ignore details such as gene upregulation. (The level of detail needed is still unknown.) To reproduce the same qualitative behavior, a simulated mouse brain should be capable of the same things a real mouse can do: e.g. not just motor skills and vision in isolation, but complicated tasks like foraging and hunting, navigation, and learning.
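To make "modelling at the spike-train level" concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the standard simplest model at this level of abstraction. All parameter values are illustrative, not tuned to any real cell type:

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65e-3,
                 v_thresh=-50e-3, v_reset=-65e-3, r_m=1e7):
    """Minimal leaky integrate-and-fire neuron; returns spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by input current
        v += (dt / tau) * (v_rest - v + r_m * i_in)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset  # reset after emitting a spike
    return spikes

# 1 s of constant suprathreshold input (2 nA) produces a regular spike train
spikes = simulate_lif([2e-9] * 1000)
```

Everything below the spike train (ion channels, gene expression, neuromodulator chemistry) is deliberately discarded here; the open question in the post is exactly which of those discarded details must be added back.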
Outline
Our research program is as follows. Much like synthetic biology, we shall follow the philosophy, "What I cannot create, I do not understand." At each stage, we will test our effective model by measuring its performance in some environment and comparing that with the performance of a test animal in the same environment. E.g. we can compare the amount of time needed for an animal to learn to walk, behavior when placed in a new location, etc. If our effective model performs significantly worse, then we must have missed an important element when simplifying the true biological model, and we must go back, identify it, and incorporate it. In a sense we, the experimenters, are performing artificial selection to search for the model architecture. Instead of optimizing for some kind of fitness as in natural evolution, we create a list of tasks which our model must perform.
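The test/compare/refine loop above can be sketched in a few lines. All names here (`ToyModel`, `add_element`, the task list) are hypothetical stand-ins for illustration, not a proposed implementation:

```python
# Sketch of the artificial-selection loop: test against animal baselines,
# and when the model falls short, identify and restore the missing element.

def refine_model(model, tasks, animal_baseline, add_element, tolerance=0.8):
    """Iterate until the model reaches tolerance * animal score on every task."""
    while True:
        failed = [t for t in tasks
                  if model.score(t) < tolerance * animal_baseline[t]]
        if not failed:
            return model  # qualitative behavior reproduced
        # Inferior performance means an important biological element was
        # lost in simplification: identify it and incorporate it back.
        model = add_element(model, failed)

class ToyModel:
    """Stand-in model: succeeds at a task iff it contains that 'element'."""
    def __init__(self, elements):
        self.elements = set(elements)
    def score(self, task):
        return 1.0 if task in self.elements else 0.0

baseline = {"learn_to_walk": 1.0, "foraging": 1.0}
model = refine_model(
    ToyModel({"learn_to_walk"}),          # initial simplification misses foraging
    tasks=["learn_to_walk", "foraging"],
    animal_baseline=baseline,
    add_element=lambda m, failed: ToyModel(m.elements | set(failed)),
)
```

In the real program the expensive step is of course `add_element`: diagnosing *which* biological mechanism was lost is the scientific work; the loop only tells you that something was.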
If successful, we then demand more sophisticated behavior from the model, moving on to a more complex model organism. Eg when we move from a mouse model to a chimpanzee model, we add the requirement of tool use.
Note that the requirements themselves may also be analogous instead of exact. Eg we may choose to simplify or alter the anatomy of the simulated organism, or to simplify the hunting process.
Advantages of this approach
Biological intelligences significantly outperform traditional machine learning in out-of-distribution performance, few-shot learning, robustness against hallucination, resistance to catastrophic forgetting, etc. A particularly interesting research question would be to study the role of instincts and reflexes, which are built into the animal brain rather than learned. Introducing such inductive biases may be essential for increasing sample efficiency when learning motor tasks. Another research question is to understand how biological intelligence performs executive functions. The brain has memory and a persistent world model, so it normally does not hallucinate the way an LLM does. The brain may also possess a special architecture that grants it superior generalization capabilities.
Guaranteed progress is very alluring, unlike other approaches to AGI such as scaling LLMs or introducing ad hoc architectural changes. And since we're always testing our effective model on real-world tasks, we are not stuck in the intellectual quagmire of understanding the brain completely -- we only need to understand the important aspects. These tests will evaluate our level of understanding, at least as far as engineering intelligent systems is concerned. (E.g., as a completely made-up example, our approach may tell us nothing about how bipolar neurons behave in real life; only that we need 30% of our neurons to be bipolar to give us the right connectivity for a functioning neural network. The first question falls under traditional neuroscience.)
The fact that we are scaling up intelligence means that even before reaching human-level intelligence, we will have obtained viable, robust general intelligence suitable for applications. A dog-level intelligence can navigate complex terrain, infer human intentions, avoid danger, manipulate objects, etc. Modifications are likely needed, though, to ensure compliance or to enhance capabilities before deployment.
Remarks
This would be a long-term, resource-intensive research program. I am currently going through the literature to see if this vision is already being pursued, e.g. perhaps Nengo. I do not have formal training in neuroscience, so feedback is appreciated.
3 comments
Comments sorted by top scores.
comment by Dom Polsinelli (dom-polsinelli) · 2025-03-26T00:41:07.503Z · LW(p) · GW(p)
I think this is generally admirable in theory, at least in broad strokes, but way, way harder than you anticipate. The last project I worked on alone, I was trying to copy C. elegans chemotaxis with a biological neuron model and then have it remember where food was from a previous run and steer in that direction even if there was no food anywhere in its virtual arena, something real C. elegans has been observed doing. Even the first part was not a huge success, and because of that I put an indefinite pause on the second part. I would love to see you carry on the project or something similar; maybe you will have more success, especially if you abstract more. I'm happy to share code and talk more if you're interested. But at this time, it is my impression that we just don't understand individual neurons, synaptic weights, or learning rules well enough to take a good pass at it.
comment by hpcfung · 2025-03-26T07:05:21.915Z · LW(p) · GW(p)
Very interesting, thank you for letting me know. I kind of expected that this is where we are right now, I am still catching up with the literature.
So even though we have the complete C. elegans connectome, this is not enough? (As you said, we don't understand individual neurons, synaptic weight, or learning rules well enough.) A quick search seems to show that the relevant sensorimotor circuits have been studied before. Is it not possible to model these directly?
https://pmc.ncbi.nlm.nih.gov/articles/PMC4082684/
If not, perhaps starting with an organism that is even simpler than C. elegans would help.
comment by Dom Polsinelli (dom-polsinelli) · 2025-03-26T20:45:15.924Z · LW(p) · GW(p)
To my knowledge, the most recent C. elegans model was all the way back in 2009; it is this PhD thesis, which I admit I have not read in its entirety. I found it on the OpenWorm history page, which is looking rather sparse, unfortunately.
I was trying to go through everything they have, but again, I was very disillusioned after trying to fully replicate and expand this paper on chemotaxis. You can read more about what I did here on my personal site. It's pretty lengthy, so the TL;DR is that I tried to convert his highly idealized model back into explicit neuron models and it just didn't really work. Explicitly modeling C. elegans in any capacity would be a great project because there is so much published; you can copy others and fill in details or abstract as you wish. There is even an OpenWorm Slack, but I don't remember how to join, and it's relatively inactive.
That is more than enough stuff to keep you busy but if you want to hear me complain about learning rules read on.
I am really frustrated with learning rules for a couple of reasons. The biggest one is that researchers just don't seem to have very much follow-through on the obvious next steps. Either that, or I'm really bad at finding/reading papers. In any case, what I would love to work on/read about is a learning algorithm that:
- Uses only local information + maybe some global reward function (as in, it can't be some complicated error minimizer like backpropagation, people generally call this biologically plausible)
- Has experimental evidence that real neurons really learn like this
- Can do well on one shot learning tasks (fear/avoidance can be learned from single negative stimuli even in really simple animals)
- Performs well on general learning. As an example, I tried to recreate tuning curves with LIF neurons using the BCM rule + homeostasis; it was really easy to get a population of neurons to respond differently to horizontal vs. vertical sine waves, but if those sine waves had a phase shift it basically completely failed.
- Works with deep/complex recurrent architectures
From what I can tell, many papers address one or two of these but fail to capture everything. Maybe I'm being too greedy, but I feel like this list is pretty sensible for a minimum of whatever learning algorithms are at play in the brain.
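The first desideratum in the list, local pre/post activity gated by a global reward signal (often called a "three-factor" rule), can be sketched in a few lines. This toy example only illustrates that one criterion and makes no claim on the others:

```python
def reward_modulated_hebbian(w, pre, post, reward, lr=0.01):
    """Three-factor update: a global reward signal gates a local Hebbian term."""
    return [wi + lr * reward * xi * post for wi, xi in zip(w, pre)]

# Toy demo: a single linear unit learns to prefer pattern_a over pattern_b,
# using only its own pre/post activity plus a scalar reward broadcast to it.
pattern_a, pattern_b = [1.0, 0.0], [0.0, 1.0]
w = [0.1, 0.1]
for _ in range(100):
    for pattern, reward in [(pattern_a, 1.0), (pattern_b, -1.0)]:
        post = sum(wi * xi for wi, xi in zip(w, pattern))  # local activity only
        w = reward_modulated_hebbian(w, pattern, post, reward)
# The weight for pattern_a grows while the weight for pattern_b decays.
```

Unlike backpropagation, no per-synapse error signal is propagated here; the only non-local quantity is the scalar reward, which is the usual reason such rules are called biologically plausible.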
I am going to work on the project I outline here [LW · GW] but I would genuinely love to help you even if it's just bouncing ideas off me. Be warned, I also am not formally trained in a lot of neuroscience so take everything I say with a heap of salt.