Humans, chimpanzees and other animals

post by gjm · 2023-05-30T23:53:08.295Z · LW · GW · 18 comments

[Epistemic status: underinformed musings; I am posting this not because I am sure of anything in it but because the point seems important and I don't recall seeing anyone else make it. Maybe that's because it's wrong.]

A common analogy for the relationship between postulated superhumanly-smart AIs and humans is the relationship between humans and chimpanzees. See, e.g., https://intelligence.org/2017/12/06/chollet/ where Eliezer Yudkowsky counters various arguments made by François Chollet by drawing this analogy.

It's pretty compelling. Humans are a bit like chimps but substantially smarter ... and, lo, humans control the future of life on earth (including, in particular, the future of chimpanzee life on earth) in a way chimps absolutely do not.

But wait. Humans coexist with chimps, and are smarter, and are utterly dominant over them: fair enough. But surely we want to do better than a sample size of 1. What other cases are there where animals of different intelligence levels coexist?

Well, for instance, chimps coexist with lions and leopards. Are chimp-leopard relations anything like human-chimp or human-leopard relations? So far as I can tell, no. Chimps don't appear to reshape their environment radically for their own purposes. When they encounter other animals such as lions and leopards they not infrequently get killed.

In general I'm not aware of any pattern of the form "smarter animals consistently face no threat from less-smart animals" or "smarter animals consistently wipe out less-smart animals that threaten them" or "the smartest type of animal in any area controls the whole local ecosystem".

(I am not an expert. I could be all wrong about this. I will be glad of any corrections.)

What all this suggests, at least to me, is that what's going on with humans and chimpanzees is not "smarter animal wins" but something more like "there is a qualitative difference between humans and other animals, such that animals on the smarter side of this divide win against ones on the less-smart side".

There might be a similar divide between us and our prospective AI overlords. I can think of various things that might turn out to have that effect. But so far as I can tell, it's a genuinely open question, and if there's some reason to be (say) 90% confident that there will be such a divide I haven't seen it.

18 comments

Comments sorted by top scores.

comment by Mart_Korz (Korz) · 2023-05-31T14:17:20.589Z · LW(p) · GW(p)

I would argue that a significant factor in the divide between humans and other animals is that we (now) accumulate knowledge over many generations/people. Humans 100,000 years ago might already have been somewhat notable compared to chimpanzees, but we only recently hit the threshold of really accumulating the knowledge/strategies/technology of millions of individuals.

If I had been raised by chimpanzees (assuming this works), my ability to influence the world would have been a lot lower even though I would still be human.

Given that LLMs are already massively beyond human ability at absorbing knowledge (I could not read even a fraction of all the texts that they now use in training), we have good reason to think that future AI will be beyond human abilities, too.

Beyond this, we have good reasons to think that AIs will be able to evade bottlenecks which humans cannot (compare Life 3.0 by Max Tegmark). AIs will not have to suffer from ageing and an intrinsic expiration date on all of their accumulated expertise, will in principle be able to read and change their own source code, and will have the scalability/flexibility advantages of software (one can just provide more GPUs to run the AI faster or with more copies).

Replies from: gjm
comment by gjm · 2023-05-31T14:41:43.223Z · LW(p) · GW(p)

Transmission, preservation and accumulation of knowledge (all enabled by language) are at the top of my list of guesses for the most important qualitative change from chimps to humans too.

It certainly seems very likely that AIs can be much better at this than us, but it's not obvious to me how big a difference that makes, compared with the difference between doing it at all (like us) and not (like chimps).

(I exaggerate slightly: chimps do do it a bit, because they teach one another things. But it does feel like more of a yes-versus-no difference than AIs versus us. Though that may be because I'm failing to see how transformative AIs' possible superiority in this area will be.)

A lot of the ways in which we hope/fear AIs may be radically better than us seem dependent on having AIs that are designed in a principled way rather than being huge bags of matrices doing no one knows quite what. That does seem like a thing that will happen eventually, but I suspect it won't happen until after there's something significantly smarter than us designing them. For the nearest things to AI we know how to make at the moment, for instance, even we can't in any useful sense read their source code. All our attempts to build AIs that work in comprehensible ways seem to produce results that lag far far behind the huge incomprehensible mudballs. (Maybe some sort of hybrid, with comprehensible GOFAI bits and incomprehensible mudballs working together, will turn out to be the best we can do. In that case, the system could at least read the source code for the comprehensible bits.)

Replies from: AnthonyC, Korz
comment by AnthonyC · 2023-06-01T00:12:16.965Z · LW(p) · GW(p)

I (not an expert) suspect there are a few key ways in which the human/AI difference is likely to be pretty large.

  1. Generation time. Training an expert human from scratch takes 20-30 years and no one is in a position to curate more than a small fraction of the training data and environment. AI could copy itself in potentially seconds to minutes. It could train up a new model for a specific purpose much faster, too, if it can't just paste new capabilities into itself directly. And there should be no equivalent of capabilities that can't be taught, only learned (the kinds of things humans need years-long apprenticeships to even have a chance to acquire imperfectly).
  2. Even with "huge bags of matrices doing no one knows quite what" there are still things we can learn, with the tools we have today, from e.g. analyzing weights. We can also (compared to a human mind) test more precisely how such AIs "would" act in different circumstances by just giving a prompt, even if we can't reliably predict behavior in advance. Even assuming this doesn't hold for future architectures and higher capability levels, an AI shouldn't have a human's problems predicting its own future behavior.
  3. Breadth. Being effectively a high-level expert in every field of knowledge at once should allow a massively higher level of ability to extract and propagate insight from new data. There should be no equivalent of results taking decades or generations to filter from researchers to other fields of academia to public policy to common knowledge among the public. With enough compute, there should be no equivalent of a result languishing for years before anyone recognizes its relevance to a new problem.
  4. Precise introspective and retrospective data. This is more about sensors and robotics than AI in some ways, but it would be much easier for me to learn new physical skills if I could see exactly what I was doing each time I attempted it, measure exactly what the result was, intuitively do statistics on the data, and similarly precisely control future behavior. I think this applies to any other kind of data feedback, too. If I had access to a precise playback of my entire life (or even just the text of all my conversations and a summary of my actions) at all times, I'd be better at a lot of things.
comment by Mart_Korz (Korz) · 2023-05-31T16:28:40.969Z · LW(p) · GW(p)

I agree a lot. I had the constant urge to add some disclaimer "this is mostly about possible AI and not about likely AI" while writing.

It certainly seems very likely that AIs can be much better at this than us, but it's not obvious to me how big a difference that makes, compared with the difference between doing it at all (like us) and not (like chimps).

The only obvious improvements I can think of are things where I feel that humans are below their own potential[1]. This does seem large to me, but by itself not quite enough to create a difference as large as chimps-to-humans. But I do think that adding possibilities like compute speed could be enough to make this a sufficiently large gap: if I imagine waking up for only one day every month while everyone else went on with their lives in the meantime, I would probably be completely overwhelmed by all the developments after only a few subjective 'days'. Of course, this is only a meaningful analogy if the whole world is filled with the fast AIs, but that seems very possible to me.

A lot of the ways in which we hope/fear AIs may be radically better than us seem dependent on having AIs that are designed in a principled way rather than being huge bags of matrices doing no one knows quite what. [...]

Yeah. I still think that the other considerations do have significance: they give AIs structural advantages compared to biological life/humans. Even if there never comes to be a process which can improve the architecture of AIs using a thorough understanding, mostly random exploration still creates an evolutionary pressure towards AIs becoming more capable. And this should be enough to take advantage of the other properties (even if a lot more slowly).


  1. My sense: in the right situation or mindset, people can be way more impressive than we are most of the time. I sometimes feel like a lot of this gap is caused by hard-wired mechanisms that conserve calories. This no longer makes much sense for most people in rich countries, but it is part of being human. ↩︎

Replies from: lsgos
comment by lsgos · 2023-05-31T19:52:30.656Z · LW(p) · GW(p)

If people are reading this thread and want to read this argument in more detail: the (excellent) book 'The Secret of Our Success' by Joseph Henrich (Slate Star Codex review/summary here: https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/) makes this argument in a very compelling way. There is a lot of support for the idea that the crucial 'rubicon' that separates chimps from people is cultural transmission, which enables the gradual evolution of strategies over periods longer than an individual lifetime, rather than any 'raw' problem-solving intelligence. In fact, according to Henrich there are many ways in which humans are actually worse than chimps on some measures of raw intelligence: chimps have better working memory and faster reactions for complex tasks in some cases, and they are better than people at finding Nash equilibria which require randomising your strategy. But humans are uniquely able to learn behaviours from demonstration and to form larger groups, which enables the gradual accumulation of 'cultural technology', which then allowed a runway of cultural-genetic co-evolution (e.g. food processing technology -> smaller stomachs and bigger brains -> even more culture -> bigger brains even more of an advantage, etc.).

It's hard to appreciate how much this kind of thing helps you think; for instance, most people can learn maths but few would have invented Arabic numerals by themselves. Similarly, having a large brain by itself is actually not super useful without the cultural superstructure: most people alive today would quickly die if dropped into the ancestral environment without the support of modern culture, unless they could learn from hunter-gatherers (see Henrich for many examples of this happening to European explorers!). For instance, I like to think I'm a pretty smart guy, but I have no idea how to make e.g. bronze or stone tools, and it's not obvious that my physics degree would help me figure it out! Henrich also makes the case for the importance of this with some slightly chilling examples of cultures that lost their ability to make complex technology (e.g. boats) when they fell below a critical population size and became isolated.

It's interesting to consider the implications for AI; I'm not very sure about this. On the one hand, LLMs clearly have a superhuman ability to memorise facts, but I'm not sure this means they can learn new tasks or information particularly easily. On the other, it seems likely that LLMs are taking pretty heavy advantage of the 'culture overhang' of the internet! I don't know if it really makes sense to think of their abilities here as strongly superhuman: if you magically had the compute and code to train GPT-n in 1950, it's not obvious you could have got it to do very much without the internet for it to absorb.

Replies from: jkraybill
comment by jkraybill · 2023-06-01T04:28:30.240Z · LW(p) · GW(p)

Haven't read that book, added to the top of my list, thanks for the reference!

But humans are uniquely able to learn behaviours from demonstration and to form larger groups, which enables the gradual accumulation of 'cultural technology', which then allowed a runway of cultural-genetic co-evolution (e.g. food processing technology -> smaller stomachs and bigger brains -> even more culture -> bigger brains even more of an advantage, etc.)

One thing I think about a lot is: are we sure this is unique, or did something else like luck or geography somehow play an important role in one (or a handful) of groups of sapiens happening to develop some strong (or "viral") positive-feedback cultural learning mechanisms that eventually dramatically outpaced other creatures? We know that other species can learn by demonstration, and pass down information from generation to generation, and we know that humans have big brains, but were some combination of timing / luck / climate / resources perhaps also a major factor?

If Homo sapiens are believed to have originated around 200,000 years ago, but only developed agricultural techniques around 12,000 years ago, the earliest known city 9,000 years ago, and only developed a modern-style writing system maybe 5,000 years ago, are we sure that those humans who lived for 90%+ of human "pre-history" without agriculture, large groups, and writing systems would look substantially more intelligent to us than chimpanzees? If our ancestral primates never branched off from chimps and bonobos, are we sure the earth wouldn't now (or some time in the past or future) be populated with chimpanzee city-equivalents and something that looked remotely like our definition of technology?

It's hard to appreciate how much this kind of thing helps you think

Strongly agree. It seems possible that a time-travelling scientist could go back to some point in time and conduct rigorous experiments that would show sapiens as not as "intelligent" as some other species at that point in time. It's easy to forget how recently human society looked a lot closer to animal society than it does to modern human society. I've seen tests that estimate the human IQ level of adult chimpanzees to be maybe in the 20-25 range, but we can't know how a prehistoric adult human would perform on the same tests. Like, if humans are so inherently smart and curious, why did it take us over 100,000 years to figure out how plants work? If someone developed an AI today that took 100,000 years to figure out how plants work, they'd be laughed at if they suggested it had "human-level" intelligence.

One of the problems with the current AI human-level intelligence debate that seems pretty fundamental is that many people, without even realising it, conflate concepts like "as intelligent as a [modern educated] human" with "as intelligent as humanity" or "as intelligent as a randomly selected Homo sapiens from history".

Replies from: lsgos
comment by lsgos · 2023-06-02T13:26:31.393Z · LW(p) · GW(p)

Well maybe you should read the book! I think that there are a few concrete points you can disagree on.

One thing I think about a lot is: are we sure this is unique, or did something else like luck or geography somehow play an important role in one (or a handful) of groups of sapiens happening to develop some strong (or "viral") positive-feedback cultural learning mechanisms that eventually dramatically outpaced other creatures?

I'm not an expert, but I'm not so sure that this is right; I think that anatomically modern humans already had significantly better abilities to learn and transmit culture than other animals, because anatomically modern humans generally need to extensively prepare their food (cooking, grinding etc.) in a culturally transmitted way. So by the time we get to sapiens we are already pretty strongly on this trajectory.

I think there's an element of luck: other animals do have cultural transmission (for example elephants and killer whales) but maybe aren't anatomically suited to discover fire and agriculture. Some quirks of group size likely also play a role. It's definitely a feedback loop though; once you are an animal with culture, then there is increased selection pressure to be better at culture, which creates more culture etc.

If Homo sapiens are believed to have originated around 200,000 years ago, but only developed agricultural techniques around 12,000 years ago, the earliest known city 9,000 years ago, and only developed a modern-style writing system maybe 5,000 years ago, are we sure that those humans who lived for 90%+ of human "pre-history" without agriculture, large groups, and writing systems would look substantially more intelligent to us than chimpanzees?

I'm gonna go with absolutely yes; see my above comment about anatomically modern humans and food prep. I think you are severely underestimating the sophistication of hunter-gatherer technology and culture!

The degree to which 'objective' measures of intelligence like IQ are culturally specific is an interesting question.

comment by Viliam · 2023-05-31T14:55:26.840Z · LW(p) · GW(p)

I think the divide could be "is potentially immortal and can make copies of themselves".

A weaker version of this ("potentially immortal and can make loyal servants") is a stereotypical vampire, if you excuse the fictional evidence [? · GW], and they seem pretty powerful, and their only weakness is that you can destroy their bodies -- which would be much more difficult if they could make copies that share the same goals and can update each other's memory.

*

The human superpower over animals is "can remember lots of things and cooperate with its own kind on a large scale, including learning/teaching across generations". Most other things are downstream of this.

Ants are also quite good at cooperation, and in the absence of humans they would be very impressive. If they had bigger brains, I could imagine them winning.

comment by tangerine · 2023-06-01T15:47:30.634Z · LW(p) · GW(p)

The difference between humans and chimpanzees is purely one of imitation. Humans have evolved to sustain cultural evolution, by imitating existing culture and expanding upon it during a lifetime. Chimpanzees don’t imitate reliably enough to sustain this process and so their individual knowledge gains are mostly lost at death, but the individual intelligence of a human and a chimpanzee is virtually the same. A feral human child, that is, a human without human culture, does not behave more intelligently than a chimpanzee.

The slow accumulation of culture is what separated humans from chimpanzees. For AI to be to humans what humans are to chimps is not really possible because you either accumulate culture (knowledge) or you don’t. The only remaining distinction is one of speed. AI could accumulate a culture of its own, faster than humans. But how fast?

comment by Dagon · 2023-05-31T17:58:27.117Z · LW(p) · GW(p)

This is still n=1 for intelligence (and specifically individual intelligence aggregated by culture into "power to change the environment") over the super-Dunbar organization threshold. Other examples of intelligence gaps are all small in absolute terms, which leaves other aspects of evolution and impact more important.

We just don't know where the next "threshold" is, where it's different enough that all other differences are irrelevant.  

comment by phelps-sg · 2023-05-31T16:46:37.125Z · LW(p) · GW(p)

When presenting claims that the cognitively superior agent wins, the AI safety community often makes analogies with 2-player zero-sum games such as Chess and Go, where the smartest and most ruthless players prevail. However, most real-world interactions are best modeled by repeated non-zero-sum games.

In an ecological context, Maynard Smith and Price introduced the Hawk-Dove game to try to explain the fact that, in nature, many animals facing contests over scarce resources such as mates or food engage in only limited conflict rather than wiping out rivals (Maynard Smith and Price, 1973); for example, a male stag will often yield to a rival without a fight.



When we model how hard-coded strategies for this game propagate via genetic reproduction in a large well-mixed population, it turns out that, for certain payoff structures, peaceful strategies ("Doves") co-exist with aggressive strategies ("Hawks") in a stable equilibrium (see https://sphelps.net/teaching/egt.html for illustrative numerical simulations). 
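As an illustrative sketch (not the simulations from the linked page), here is a small replicator-dynamics model using the standard Hawk-Dove payoff parameterisation, with resource value V and injury cost C; with V < C the population settles at a mixed equilibrium in which a fraction V/C play Hawk:

```python
import numpy as np

# Standard Hawk-Dove payoffs (row player): Hawk vs Hawk = (V - C) / 2,
# Hawk vs Dove = V, Dove vs Hawk = 0, Dove vs Dove = V / 2.
V, C = 2.0, 4.0  # illustrative values with V < C
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])  # rows/cols: Hawk, Dove

x = np.array([0.1, 0.9])  # initial population shares: 10% Hawks, 90% Doves
for _ in range(5000):
    fitness = A @ x                      # expected payoff of each strategy
    avg = x @ fitness                    # population-average payoff
    x = x + 0.01 * x * (fitness - avg)   # discrete-time replicator update

print(x)      # converges to roughly [0.5, 0.5]
print(V / C)  # predicted stable Hawk share: V / C = 0.5
```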

The Hawk-Dove game is also sometimes called "the Chicken Game" because we can imagine it also models a scenario in which two opposing drivers drive on a collision course and simultaneously choose whether to swerve or drive straight. Neither player wants to "look like a chicken" by swerving, but if both players drive straight they crash and die.

In the Chicken Game, cognitive superiority does not always equate to winning. For example, by pre-committing to driving straight and making this common knowledge, we can beat a rational opponent, whose best response is then to swerve. Similar arguments were put forward during the Cold War for removing rational deliberation from the decision to retaliate against a first strike by the enemy, by taking humans out of the loop and putting strategic weapons systems on hair-trigger automated alert: in the absence of pre-commitment, a threat to retaliate is not credible, since there is no a posteriori advantage to retaliation once the opponent actually strikes. Notice that this dynamic is the exact opposite of the power-seeking behaviour posited for reinforcement-learning agents, which seek to maximise expected utility by expanding their possible choices. With non-zero-sum games, in contrast, it can make sense to reduce one's choices. Moreover, the cognitive capacity required to enact policies over reduced choices is lower: stupid can beat smart.
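To make the "stupid can beat smart" point concrete, here is a hedged sketch with illustrative payoffs (the numbers are chosen only to satisfy the Chicken structure, not taken from any source): a rational best-responder facing a credibly committed "Straight" player is driven to swerve.

```python
# Chicken Game: each player chooses Swerve or Straight.
# Illustrative payoffs (row player, column player): both swerve = 0 each,
# swerving against Straight = -1 (you look like a chicken),
# Straight against Swerve = +1, both Straight = -10 (crash).
payoffs = {
    ("Swerve", "Swerve"): (0, 0),
    ("Swerve", "Straight"): (-1, 1),
    ("Straight", "Swerve"): (1, -1),
    ("Straight", "Straight"): (-10, -10),
}

def best_response(opponent_action):
    """Return the rational player's payoff-maximising reply to a known action."""
    return max(("Swerve", "Straight"),
               key=lambda a: payoffs[(a, opponent_action)][0])

# If one player credibly pre-commits to Straight, the rational opponent's
# best response is to swerve, handing the committed player the win.
print(best_response("Straight"))  # -> Swerve
print(best_response("Swerve"))    # -> Straight
```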

Although Maynard Smith and Price originally formulated the logic of animal conflict in terms of evolutionary adaptation, the same dynamic model can be used to model social learning by boundedly-rational agents (see Phelps and Wooldridge, 2013 for a review).  As regards the cognitive capacity of non-human animals to deliberate in Hawk-Dove interactions, see Morikawa et al. (2002).

References

Morikawa, T., Hanley, J.E. and Orbell, J., 2002. Cognitive requirements for hawk-dove games: A functional analysis for evolutionary design. Politics and the Life Sciences, 21(1), pp.3-12.

Phelps, S. and Wooldridge, M., 2013. Game theory and evolution. IEEE Intelligent Systems, 28(4), pp.76-81.

Maynard Smith, J. and Price, G.R., 1973. The logic of animal conflict. Nature, 246(5427), pp.15-18.

comment by Lalartu · 2023-05-31T14:19:13.725Z · LW(p) · GW(p)

There is a threshold where intelligence becomes much more useful, and that threshold is the ability to make better weapons than other animals have. In a world where this is not possible at all (for example, animals have Zerg-level natural weapons, and there is no metal to make guns), having human-level intelligence is just not worth the downsides of a big brain.

Replies from: sanxiyn
comment by sanxiyn · 2023-06-01T01:55:12.220Z · LW(p) · GW(p)

I think the bow and arrow is powerful enough; a gun is not necessary.

Replies from: Dagon, Lalartu
comment by Dagon · 2023-06-01T15:07:28.202Z · LW(p) · GW(p)

I'd argue that artificial shelter and food storage were the tipping point, with weapons downstream of that, both causally and in impact.

I do wonder how that applies to AI - it seems too direct to believe it'll out-compete us when it controls electricity production, but at a coarse level it IS all about efficiency and direction of energy.

comment by Lalartu · 2023-06-01T07:44:09.711Z · LW(p) · GW(p)

On Earth, yes. In a world where animals have projectile weapons, no.

comment by phelps-sg · 2023-06-03T08:37:55.217Z · LW(p) · GW(p)

From David Chapman's "Better without AI", section "Fear AI Power":

"The AI risks literature generally takes for granted that superintelligence will produce superpowers, but which powers and how this would work is rarely examined, and never in detail. One explanation given is that we are more intelligent than chimpanzees, and that is why we are more powerful, in ways chimpanzees cannot begin to imagine. Then, the reasoning goes, something more intelligent than us would be unimaginably more powerful again. But for hundreds of thousands of years humans were not more powerful than chimpanzees. Significantly empowering technologies only began to accumulate a few thousand years ago, apparently due to cultural evolution rather than increases in innate intelligence. The dramatic increases in human power beginning with the industrial revolution were almost certainly not due to increases in innate intelligence. What role intelligence plays in science and technology development is mainly unknown;"

comment by meijer1973 · 2023-05-31T12:06:52.946Z · LW(p) · GW(p)

Good point. As I understood it, humans have an OOM more parameters than chimps. But chimps also have an OOM over dogs. So not all OOMs are created equal (so I agree with your point).

I am very curious about the qualitative differences between humans and superintelligence. Are there qualitative emergent capabilities above human-level intelligence that we cannot imagine or predict at the moment?

Replies from: meijer1973
comment by meijer1973 · 2023-05-31T12:43:40.040Z · LW(p) · GW(p)

One difference, I suspect, could be generality over specialization in the cognitive domain. It is assumed that specialization is better, but this might only be true for the physical domain. In the cognitive domain, general reasoning skills might be more important. E.g., the specialized knowledge of a lawyer might look small from the perspective of an ASI.