Supervised learning in the brain, part 4: compression / filtering
post by Steven Byrnes (steve2152) · 2020-12-05T17:06:07.778Z · LW · GW
For those who haven't been following along: for reasons described here [LW · GW], I've been trying to get a big-picture understanding of brain modules outside what I call the “neocortex subsystem”. That led me to reading about the cerebellum and amygdala, both of which run supervised learning algorithms (ref, ref), although the amygdala seems to do other things too. So here I am, writing yet another blog post about why, at a big-picture level, it might be useful for a brain to have self-contained modules that do supervised learning.
So far I've blogged about three reasons that a brain benefits from supervised learning:
- First, as I discussed here [LW · GW], the neocortex tries to predict its upcoming inputs—what Yann LeCun calls “self-supervised learning”. This helps us understand the world (i.e., build good latent representations), and that predictive model then gets leveraged for planning, decision-making, etc., using reinforcement learning.
- Second, as I discussed here [LW · GW], there are times when you have a "ground truth" supervisory signal that arrives too late—e.g., a signal that you just got hit in the face and therefore should have flinched, or a signal that you're suddenly in grave danger and therefore should have been more nervous and alert before you got yourself into this bad situation. In cases like this, you can use a supervised learning system to synthesize a predictor signal that precedes the corresponding supervisory signal. Then in that same blog post [LW · GW], I drew a bunch of diagrams speculating on how such a system could be used to improve motor control (by putting a cerebellum circuit between the neocortex and the muscles), and even to help you think better and faster (by putting a cerebellum circuit in the middle of a neocortex-to-neocortex connection). And indeed, the cerebellum is involved in both motor control and language skills, among other things.
- Third, as I discussed here [LW · GW], if you look at the patterns of neuron activity in the neocortex when someone is expecting that they’re just about to taste salt (for example), then I claim those same patterns of neuron activity happen (albeit more weakly) when they’re imagining tasting salt, remembering tasting salt, watching someone else taste salt, etc. So you can train a supervised learning system with the ground-truth Taste-Of-Salt signal, and you’ll incidentally be able to reuse that same trained model to detect whether or not the neocortex is imagining / remembering / etc. tasting salt. I think this is a critical ingredient in many of our instincts, including social emotions [LW · GW], although this is still just a hypothesis that I’m not sure how to prove.
This post is about yet a fourth reason, which is sorta similar to the first one but can be implemented by a separate module outside the neocortex. I belatedly caught on to this when it was pointed out to me that the Dorsal Cochlear Nucleus (DCN) seems to have cerebellum-like circuitry, yet doesn't fit into any of the above three stories, at least not straightforwardly (to me). Instead, the DCN is one of several structures that sit between the cochlea—where sound information is first encoded as nerve impulses—and the auditory processing systems in the neocortex and midbrain. The consensus seems to be that it’s some sort of input preprocessor / filter, doing things like filtering out the sound of your jaw moving.
Can a supervised learning system really do that? How? Let’s put aside the neuroscience for a moment, and just think about it! Thinking about algorithms is fun.
Supervised learning for compression / filtering of input data
Background: The predictive coding compression algorithm. Long before predictive coding was a hot topic in computational neuroscience, it was a family of compression algorithms. The basic idea: the encoder (compressor) has a deterministic model that tries to predict each upcoming datapoint, and it stores only that model’s prediction errors. These errors tend to be small numbers, so they don’t take much space to store. Then the decoder (decompressor) runs the process in reverse, using the same deterministic model to reconstruct what the encoder was doing, each time adding the stored error back in to recover the original datapoint. As I’m describing it, this is a lossless compression algorithm, although there are lossy variants.
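To make the encoder/decoder symmetry concrete, here's a minimal sketch in Python. Nothing here is brain-specific; the `last_value` model ("predict that the next sample equals the previous one") is just a made-up stand-in for whatever learned deterministic model you like.

```python
def encode(signal, predictor):
    """Predictive-coding encoder: store only the prediction errors (residuals)."""
    history, residuals = [], []
    for x in signal:
        residuals.append(x - predictor(history))  # small if the predictor is good
        history.append(x)
    return residuals

def decode(residuals, predictor):
    """Decoder: rerun the same predictor, adding back each stored error."""
    history = []
    for r in residuals:
        history.append(predictor(history) + r)
    return history

# Toy deterministic model: "the next sample equals the previous one".
last_value = lambda history: history[-1] if history else 0

signal = [10, 11, 11, 12, 14, 14, 13]
residuals = encode(signal, last_value)          # [10, 1, 0, 1, 2, 0, -1] -- mostly small numbers
assert decode(residuals, last_value) == signal  # lossless round trip
```

The better the predictor, the closer the residuals hug zero, and the cheaper they are to store or transmit.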
Why might a brain want to compress input data? I guess that some brain systems have an easier time processing a trickle of high-entropy input data than a flood of low-entropy (i.e. redundant) input data. To take an extreme case, if there’s some new sensory data that’s 100% redundant with what that brain system has already figured out, then sending that data is a total waste of processing power. Right? Seems plausible, I guess.
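Here's a toy illustration of that redundancy point (made-up numbers, nothing physiological): a slowly drifting signal looks bulky if you treat each sample independently, but once a trivial predictive model strips out the redundancy, the leftover residual stream carries the same information in a fraction of a bit per sample.

```python
import math
import random
from collections import Counter

def empirical_entropy(xs):
    """Shannon entropy (bits per symbol) of the empirical distribution of xs."""
    counts = Counter(xs)
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)

# A bulky but highly redundant signal: a slowly rising value, sampled 10,000 times.
signal, x = [], 0
for _ in range(10_000):
    x += random.choice([0, 0, 0, 1])   # usually "same as the last sample"
    signal.append(x)

# The "filtered" version: just the changes, i.e. the prediction errors
# of a trivial "next = previous" model.
residuals = [b - a for a, b in zip([0] + signal, signal)]

print(empirical_entropy(signal))     # ~11 bits/sample for a coder that ignores the redundancy
print(empirical_entropy(residuals))  # ~0.8 bits/sample: same information, far fewer bits
```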
I’m calling it “compression”, but you can equally call it “filtering”—specifically, filtering out the redundant information.
Isn’t decoding the compressed data, umm, impossible?
In a predictive-coding compression algorithm, whatever system receives the compressed data needs to know the exact predictive model that was used for compression. Otherwise, how would it have any idea what the bits mean? Well, this poses a problem. Remember the example above: The DCN learns a predictive model of input auditory data by supervised learning, and passes on the errors to the neocortex. The predictive model is some complicated pattern of synapses in the DCN—and worse, it is changing over time. There’s no obvious way that the neocortex can have access to this predictive model. So how can it possibly know what the compressed bits mean?
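To see the problem concretely, here's what happens if we continue the toy sketch above but let the decoder guess at a different model than the one the encoder used (this reuses `encode`, `decode`, and `last_value` from that sketch; `mean_so_far` is just another made-up predictor):

```python
# A different deterministic model: "predict the running average so far".
mean_so_far = lambda history: round(sum(history) / len(history)) if history else 0

signal = [10, 11, 11, 12, 14, 14, 13]
residuals = encode(signal, last_value)    # encoded against one model...
garbled = decode(residuals, mean_so_far)  # ...decoded against a different one
print(garbled)                            # no longer equals the original signal
```

In standard predictive coding, a receiver can only directly invert the residuals if it shares the encoder's exact model, which is just what the neocortex seems to lack here.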
I don't have a particularly well-formed mathematical theory here, but I don't think this is an unsolvable problem. Consider the following intuitions:
- Let's say you overhear someone saying "it's going to be hotter than expected tomorrow". What does that mean? You don't know this person. Any temperature could be "hotter than expected" to her, because no matter how cold it is, maybe she was previously expecting an even colder temperature! ...Yet still, the information content is not zero!! Whatever temperature you were expecting before you overheard her, your best-guess temperature should be at least a bit higher after hearing her (see the numerical sketch after this list). So by the same token, if the DCN tells the neocortex "line 17 is more active than expected", that communicates something meaningful to the neocortex, even if the neocortex doesn't know exactly what the DCN was expecting.
- The goal of the neocortex is not to reconstruct the raw auditory data. The goal is to find the behavior-relevant higher-level semantic interpretation. Like, try putting on earmuffs. The raw sound data—the nerve impulses coming from the cochlea—change substantially. But that’s irrelevant. It’s a distraction from the information you need.
- Slide 4 here shows the prediction errors from a predictive-coding image-compression algorithm displayed as a picture; it's distorted but still easily recognizable. In other words, there were features that the prediction algorithm failed to predict, and seeing just those features is enough to figure out what's going on.
- I imagine that things like extended temporal structure, relations between different lines, cross-modal and contextual information, and so on, would not be predicted by the DCN, and would therefore be available to help the neocortex make sense of the incoming bits.
- I mentioned that the neocortex's job is made harder by the fact that the DCN’s prediction algorithm keeps changing, due to supervised learning. Well, I guess if that’s a problem, there's a straightforward solution: lower the learning rate on the DCN (i.e., reduce plasticity). If the DCN changes slowly enough, the neocortex should be able to keep up.
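As promised in the first bullet above, here's a quick numerical version of the "hotter than expected" intuition. This is a made-up Monte Carlo setup, not a model of anything in the brain: the speaker's expectation and the actual temperature are each drawn at random, you never see her expectation, and you condition only on the fact that the actual temperature turned out higher than she expected.

```python
import random
random.seed(0)

# Speaker's expectation and actual temperature, drawn independently (made-up numbers).
samples = [(random.gauss(15, 5), random.gauss(15, 5)) for _ in range(100_000)]

all_actuals  = [actual for _expected, actual in samples]
hotter_cases = [actual for expected, actual in samples if actual > expected]

print(sum(all_actuals) / len(all_actuals))    # ~15 degrees: your best guess with no information
print(sum(hotter_cases) / len(hotter_cases))  # ~17.8 degrees: noticeably higher, even though
                                              # you never learned what she was expecting
```

The "higher than expected" message moves your estimate in the right direction even though the baseline it was measured against is hidden from you; plausibly that's all the neocortex needs from the DCN's error signals.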
Conclusion
OK. This isn't a particularly deep or careful or neuroscience-grounded analysis, but the basic concept seems plausible enough to me!
So, going forward, when I look at the DCN, or anything else with cerebellum-like circuitry (including the cerebellum itself), one of my candidate stories for what the thing is doing will be “this thing is doing input data compression / filtering using predictive coding”.
By the way, I doubt anything in this post is terribly important for answering my burning questions [LW · GW] related to ensuring that possible future neocortex-like AGIs are safe and beneficial and leading us towards the awesome post-AGI utopia we're all hoping for, with a superintelligent Max Headroom personal assistant in every smart toaster and all that. Hmm, let me think. Maybe it would make interpretability marginally harder. Really, the main thing is that I'm chipping away at the number of things in the brain that are “unknown unknowns” to me.