Takeoff speeds, the chimps analogy, and the Cultural Intelligence Hypothesis
post by NickGabs · 2022-12-02T19:14:59.825Z · LW · GW · 2 comments
In debates about AI takeoff speeds, the most common empirical example/analogy is that of the transition between chimpanzees and humans. For example, in this conversation between Paul Christiano and Eliezer Yudkowsky on takeoff speeds, the chimp analogy is discussed extensively.
Generally, I have seen the chimp example cited as evidence for a fast takeoff: human brains are only 3-4x bigger than chimp brains with few "algorithmic" differences, yet humans are much more generally intelligent than chimps. Thus, given that Moore's law and increased investment would probably let us increase the compute used in training an AI by a comparable factor in a relatively short period of time, if the relationship between compute and intelligence in AIs is similar to that in biological systems, a similarly fast takeoff may occur. Alternatively, if one does not focus on compute, the transition arguably suggests at a higher level of abstraction that, as Yudkowsky puts it, "There’s stuff in the underlying algorithmic space... where you move a bump and get a lump of capability out the other side."
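(To make the compute side of this concrete, here is a minimal back-of-the-envelope sketch. The 3-4x figure comes from the brain-size comparison above; the doubling times are purely illustrative assumptions of mine, not numbers from the takeoff literature.)

```python
import math

# Rough sketch: if training compute doubles roughly every couple of years from
# hardware alone, and faster once growing investment is included, how long does
# it take to cover the ~3-4x chimp-to-human brain-size gap?

brain_size_ratio = 3.5            # midpoint of the 3-4x figure cited above
hardware_doubling_years = 2.0     # Moore's-law-style doubling (illustrative assumption)
with_investment_doubling = 1.0    # hardware plus rising investment (illustrative assumption)

def years_to_scale(factor: float, doubling_time: float) -> float:
    """Years needed for compute to grow by `factor` at a given doubling time."""
    return math.log2(factor) * doubling_time

print(f"Hardware alone:        {years_to_scale(brain_size_ratio, hardware_doubling_years):.1f} years")
print(f"Hardware + investment: {years_to_scale(brain_size_ratio, with_investment_doubling):.1f} years")
```

On these toy numbers the gap closes in roughly two to four years, which is the sense in which the analogy is taken to support a fast takeoff.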
I claim that this is an incorrect inference from the chimps to humans transition, and that chimps provide only weak evidence for fast takeoff. This is because I believe that the specific explanation for why humans are so much smarter than chimps despite the relatively small differences between human brains and chimp brains is unlikely to reoccur in the context of AI development. In particular, I believe that the "cultural intelligence hypothesis" explains to a significant extent the origins of human intelligence.
According to the cultural intelligence hypothesis, humans are smarter than chimps largely because our cultural abilities, such as superior imitative learning and the ability to use language, allow us to draw on the knowledge and cognitive skills developed by past generations of humans. In other words, humans do not have vastly superior "general intelligence" to chimps; rather, we are much better than them in the particular domain of learning from other people, especially through our ability to use and understand language. Humans, especially adult humans, can then utilize the skills and knowledge we learn from others to perform better at a wider range of tasks (making us eventually better at ~everything than chimps), but the cultural intelligence hypothesis claims that it is primarily our cultural skills which lie at the root of this development of more general cognitive capabilities. I believe the hypothesis is true for the following reasons:
- Firstly, according to this widely cited paper, chimps and other primates greatly underperform human infants only at cultural skills such as imitative learning. Given that they are close to as good as human infants at working memory and understanding basic physical systems, this suggests that the major cognitive difference between humans and chimps lies in cultural domains. Moreover, without access to these cultural resources, humans are much less cognitively impressive; feral children are not known for their amazing cognitive powers.
- By far the largest basic behavioral difference between humans in the EEA (the environment of evolutionary adaptedness) and other animals seems to be our use of language. Given that language enables the transmission of complex cultural information, this further suggests to me that our cultural skills lie at the heart of our vastly superior intelligence.
- If intelligence is understood as optimization power over the actual world, humans have only become vastly more intelligent than chimps relatively recently, as our access to the cultural resources of contemporary science has opened up powerful parts of action and cognition space. Before this (and certainly before we gained the cultural resources associated with agriculture, metallurgy, etc.), humans were not dramatically better optimizers than chimps.
- Finally, something like the cultural intelligence hypothesis seems to be the dominant explanation of the origins of human intelligence in contemporary psychology. While psychology obviously has significant flaws as a field, it is not wordcel bullshit a la social theory, and so I think one ought to update significantly though not massively in the direction of the opinions of top psychology researchers.
The truth of the cultural intelligence hypothesis suggests to me that the chimps to humans discontinuity provides little evidence for fast takeoffs. As previously noted, when we understand intelligence as optimization power, it is only relatively recently that we have become much more intelligent than chimps. This suggests that cultural learning skills by themselves are not sufficient for powerful general intelligence, but that the "data" of human culture and others' speech is also necessary. While modern humans have access to massive amounts of such data, chimps can access ~0 of it, creating a massive difference in intelligence.
However, these dynamics will not be mirrored in the development of AI. Chimps’ lack of cultural learning/language use skills and their lack of access to cultural data reinforced each other: without cultural and linguistic data, there is little pressure to develop cultural learning and linguistic skills, and conversely, without pressure towards developing cultural learning and linguistic skills, the data is never produced. In the case of LLMs, however, the fact that massive amounts of cultural/linguistic data are available via the internet means that, unlike for chimps, there are pressures from the get-go to develop the skills to utilize this data. We can observe this in the increasing performance of language models, which already seem to exhibit an ability to build a model of the world from language that is inferior to that of humans but superior to that of chimps. Moreover, the cultural knowledge that LLMs access with their language skills will stay relatively constant throughout AI development, until AIs are contributing significantly to the growth of our body of knowledge about the world. Thus, rather than quickly being subjected to massively increased pressures to develop cultural and linguistic skills and then gaining access to an exponentially growing amount of cultural and linguistic data, LLMs will face roughly constant pressures to develop the ability to understand and utilize a comparatively fixed amount of data.
This argument fits with Paul Christiano's "changing selection pressures" argument. According to that argument, evolution is importantly disanalogous to AI development because, while evolution only recently began strongly selecting humans for intelligence, we have always selected and will continue to select AIs for intelligence. In particular, my version of this argument claims that the specific selection pressures which increased in the case of humans were pressures towards developing cultural learning skills and language use and understanding, which then allowed us to quickly build a base of cultural data enabling us to be powerful optimizers.
In conclusion, those who argue in favor of fast takeoffs must either reject the cultural intelligence hypothesis, or argue that even if it is true, we should expect a discontinuity in AI capabilities. For example, it could be that while LLMs are trained on language and thus, unlike chimps, have access to most of the "data" which enables modern humans to be intelligent, their ability to actually utilize this cultural knowledge will improve discontinuously. However, given the steady increase of LLM performance as well as a general prior towards continuity, this seems unlikely to me. There could also be other reasons to expect fast takeoffs; but if this argument is successful, they cannot rest on this key empirical example.
2 comments
comment by mruwnik · 2022-12-05T12:08:42.705Z · LW(p) · GW(p)
There was no transition from chimps to humans. There was a divergence from a common ancestor some time ago, after which both lines evolved equally (maybe, sort of, possibly) far along their own respective paths under different selection pressures. It's very likely that you know this - I just happen to be very sensitive to this phrasing, since it's one of the most often misunderstood aspects of evolution.
comment by mruwnik · 2022-12-05T12:02:00.634Z · LW(p) · GW(p)
This sounds a lot like the arguments around punctuated equilibrium and gradualism. It's a matter of perspective.
The important divergence point between humans and other hominids isn't intelligence per se (though that plays a large part), it's language along with abstract thought, counterfactuals and all that good stuff, which then allow for cultural transmission. They were fully formed and working for quite a while (like 100k years ago) before FOOMing around 10k years ago. And this wasn't triggered by greater intelligence, but by greater density (via agriculture).
If you only feed the NNs with literature, wikipedia, blog posts etc., then you could be right about the limitations. Thing is, though, the resulting AIs are only a very small subset of possible AIs. There are many other ways to provide data, any one of which could be the equivalent of agriculture.
Also, evolution is very limited in how it can introduce changes. AI development is not.