Theories of Biological Inspiration
post by Eric Zhang (ChaseDanton) · 2023-05-25T13:07:10.972Z
Many influential AI techniques either explicitly draw inspiration from or are similar to mechanisms found in biology.
- Biological neurons and artificial neural networks
- The visual cortex and convolutional neural networks
- Attention in the brain and attention mechanisms such as those in Transformers
- Short-term memory and LSTMs
- Episodic memory and experience replay
- Curiosity and exploration techniques
- lots of others, probably
Some basic reasons why this might be:
- Because those answers are convergent. Any two good solutions to the same problem will end up looking alike, because there is essentially just one answer.
- Because we're stealing. Evolution did a whole lot of work for us and it's easier to steal than figure it out on our own. If we did try to do it on our own without stealing, we might find totally different answers, but that would take a whole lot more time and effort.
- Because we're romantics. We like looking for solutions that look to us like ourselves, that are "in our own image", as it were. We didn't gain much technical advantage from doing it this way and could've just as well done it a different way, but it was more appealing to us to do it this way.
- It's just marketing. These techniques aren't really all that similar to the biological mechanisms, but it gets more hype if you can draw a connection to something the brain is doing.
All of these play some role, and in differing degrees depending on the particular technique. The extent to which any of these explanations is generally the case may have strategically important implications for AI safety.
Insofar as 2 is more the case, progress in neuroscience may be something of a limiting factor on further AI progress. It may also point to techniques that more closely imitate brains (such as spiking neural networks, SNNs) as things to watch.
To the extent 1 or 2 are strongly the case, searching for a paradigm very different from deep learning which might be more interpretable may be hopeless (even more than it already is, let us say).
I'd be interested in hearing what people think about the extent to which these four stories (or others I haven't thought of) apply, either in general or to specific techniques.
3 comments
comment by Steven Byrnes (steve2152) · 2023-05-25T13:52:45.552Z
I have some related discussion in Section 2.1 here. I think if I were writing the list, I would remove the assumption of bad faith from 4, i.e. my four choices would be:
- They are similar because those answers are convergent.
- They are similar because we're stealing.
- They are similar because we're romantics.
- They are not in fact similar. Maybe it's just marketing, or maybe biology was a source of inspiration during the brainstorming process, or maybe somebody was trying to copy how they thought biology worked but their beliefs about how biology worked were incorrect, or whatever.
And then I think your particular examples are a mix of 1 & 2 & (my now-more-broad) 4.
comment by Charlie Steiner · 2023-05-25T15:32:24.782Z
I feel like it's 4 ~ 1 > 2 > 3. The example of CNNs seems like this, where the artificial neural networks and actual brains face similar constraints and wind up with superficially similar solutions, but when you look at all the tricks that CNNs use (especially weight-sharing, but also architecture choices, choice of optimizer, etc.) they're not actually very biology-like, and were developed based on abstract considerations more than biological ones.
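The weight-sharing point can be made concrete with a minimal sketch (in NumPy; the function name and toy filter are my own, not from the comment): a convolutional layer reuses one small set of weights at every input position, so its parameter count is independent of input size. A biological neuron, by contrast, has its own private synaptic weights, so there is no direct analogue of this trick.

```python
import numpy as np

def conv1d_shared(x, kernel):
    """1-D convolution: the SAME kernel weights are applied at every
    position. This weight-sharing is an engineering trick with no direct
    biological counterpart (each real neuron has its own synapses)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

x = np.arange(6.0)               # input signal: [0, 1, 2, 3, 4, 5]
kernel = np.array([1.0, -1.0])   # one shared 2-parameter filter
out = conv1d_shared(x, kernel)
print(out)  # [-1. -1. -1. -1. -1.] -- 5 output positions, still only 2 weights
```

A fully connected layer mapping the same 6 inputs to 5 outputs would need 30 weights; the shared kernel needs 2, regardless of how long the input grows.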
comment by Eric Zhang (ChaseDanton) · 2023-05-25T15:02:42.228Z
Do you have a granular take about which ones are relatively more explained by each point?