Might humans not be the most intelligent animals?

post by Matthew Barnett (matthew-barnett) · 2019-12-23T21:50:05.422Z · LW · GW · 41 comments


The idea that humans are the most intelligent animals on Earth appears patently obvious to a lot of people. And to a large extent, I agree with this intuition. Humans clearly dominate the world in technological innovation, control, communication, and coordination.

However, I have recently encountered some evidence that the proposition is not actually true, or is at the very least non-obvious. The conundrum arises when we distinguish raw innovative capability from the ability to efficiently process culture. I'll explain the basic case here.

Robin Hanson has sometimes pointed to the accumulation of culture as the relevant separation between humans and other animals. Under this view, the reason why humans are able to dominate the world with technology has less to do with our raw innovation abilities, and more to do with the fact that we can efficiently process accumulated cultural information and build on it.

If the reason for our technological dominance is due to raw innovative ability, we might expect a more discontinuous jump in capabilities for AI, since there was a sharp change between "unable to innovate" and "able to innovate" during evolutionary history, which might have been due to some key architectural tweak. We might therefore expect that our AIs will experience the same jump after receiving the same tweak.

If the reason for our technological dominance is due to our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. This is because our AI systems can already process culture somewhat efficiently right now (see GPT-2) and there doesn't seem to be a hard separation between "being able to process culture inefficiently" and "able to process culture efficiently" other than the initial jump from not being able to do it at all, which we have already passed. Our current systems are therefore bottlenecked on some metric which is more continuous.

The evidence for the cultural accumulation view comes from a few lines of inquiry, such as the behavior of feral children raised outside of human society, who appear strikingly unintelligent, and the cases documented in The Secret of Our Success where accumulated culture trumped individual innovative capability.

Under the view that our dominance is due to cultural accumulation, we would expect that there are some animals that are more intelligent than humans in the sense of raw innovative ability. The reason is that it would be surprising a priori for us to be the most intelligent, unless we had reason to believe so anthropically. However, if culture is the reason why we dominate, then the anthropic argument here is weaker.

We do see some initial signs that humans might not be the most intelligent species on Earth in the innovative sense. For instance, although humans have the highest encephalization quotient, we don't have the most neurons, or even the most neurons in our forebrain. Chimpanzees may also have better working memory than humans on some tasks, though the evidence here is disputed.

If humans are the most intelligent in the sense of having the best raw innovative abilities, this hypothesis should be testable by administering a battery of cognitive tests to animals. However, this is made difficult by a number of factors.

First, most intelligence tests that humans take rely on their ability to process language, disqualifying other animals from the start.

Second, humans might be biased towards administering tests about things that we are good at, since we might not even be aware of the type of cognitive abilities we score poorly on. This may have the effect of proving that humans are superior in intelligence, but only on the limited subset of tests that we used.

Third, if we use current human intelligence tests (or anything like them), the following argument arises. Computers can already outperform humans at some tasks that intelligence tests measure, such as memory, but this doesn't mean that computers already have a better ability to innovate. We would need to test something that we felt confident accurately indicated innovative ability.

Since I'm mostly interested in this question because of its implications for AI takeoff speeds, I'd want to know what types of abilities are most useful for developing technology, and then check whether we can find those same abilities in animals, sans cultural accumulation modules.

41 comments

Comments sorted by top scores.

comment by Vaniver · 2019-12-23T22:28:22.751Z · LW(p) · GW(p)

I think the current record is suggestive that Neanderthals were more individually intelligent than the anatomically modern humans that displaced them, tho obviously anatomically modern human society was more capable than Neanderthal society (through a combination of superior communication ability, trade, and higher population density). [I think it's quite likely that Neanderthals were less intelligent than humans currently alive today.]

We also have some evidence that we've selectively destroyed the intelligence of other animals as part of domestication; wolves perform better than dogs on causal reasoning tasks (of the sort that would make them more obnoxious as pets; many people want a 'smart' dog and then discover how difficult it is to get them to take their medicine, or keep them out of the treat drawer, and so on). So even if most animals around today are 'dumb', it may be because 'we wanted it that way.'

When it comes to AI, tho, I think the right analogy is not "individual human vs. individual AI" but "human civilization (with computers)" vs. "AI civilization". It seems pretty clear to me that most of the things to worry about with AGI look like it getting the ability to do 'cultural accumulation', at least with regards to technological knowledge, and aren't related to "it has a much higher working memory!" or other sorts of individual intelligence tasks. In this lens, the superiority of artificial intelligence is mostly cultural superiority. (For example, this description of AI takeoff [LW(p) · GW(p)] hinges not on superior individual ability, but on enhanced 'cultural learning'.)

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-23T22:36:56.550Z · LW(p) · GW(p)
When it comes to AI, tho, I think the right analogy is not "individual human vs. individual AI" but "human civilization (with computers)" vs. "AI civilization". It seems pretty clear to me that most of the things to worry about with AGI look like it getting the ability to do 'cultural accumulation', at least with regards to technological knowledge, and aren't related to "it has a much higher working memory!" or other sorts of individual intelligence tasks. In this lens, the superiority of artificial intelligence is mostly cultural superiority.

I agree with this, but I'm mostly interested in this question because the abilities of AI culture depend quite heavily on the abilities of individual AI systems.

Replies from: ChristianKl
comment by ChristianKl · 2019-12-26T11:11:14.601Z · LW(p) · GW(p)

I don't see why this is the case for AI systems in a different way than it is for humans.

The fact that an AI can easily clone specific instances of itself makes it much faster for it to spread culture. We humans can't simply make 100,000 copies of Elon Musk, but if Elon were an AI, that would be easy.

comment by Bucky · 2019-12-24T07:51:50.162Z · LW(p) · GW(p)

One example is the chimpanzee, which may have better working memory

This isn’t what the linked paper says. It claims that

Chimps + practice > humans

but

Humans + practice >> chimps + practice

From the abstract:

There is no evidence for a superior or qualitatively different spatial memory system in chimpanzees.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-24T08:34:51.596Z · LW(p) · GW(p)

Good point. I don't think my thesis here rests on this fact, but thanks for pointing this error out.

Replies from: Bucky
comment by Bucky · 2019-12-24T14:35:58.240Z · LW(p) · GW(p)

Whilst I don’t think the thesis rests on it, it seemed like the strongest (and most surprising to me) evidence if it were true. It actually does provide some evidence that the gap isn’t as large as one might think.

You might want to edit the OP for anyone who doesn’t get round to reading the comments.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-24T22:09:42.602Z · LW(p) · GW(p)

The problem with it is that I'm finding many links that seem to argue that chimpanzees actually do have better memory, even compared to comparably trained humans (see this Wikipedia page, for instance). That one link says I'm wrong, but there's many that say I'm right and I'm not sure what the answer is. It's unfortunate that I linked to something that said I was wrong! Anyway, I'll edit the post so that it says that I'm not actually sure.

Replies from: Bucky
comment by Bucky · 2019-12-24T23:59:55.030Z · LW(p) · GW(p)

That’s really interesting. You can actually try one of the tasks that was used yourself.

Part of it seems to be that chimps are able to perform the task super fast - I can do the 9-number task on easy, on medium I can do it ok-ish and think if I kept practicing I’d be fairly consistent, but I don’t even have time to take all the numbers in at chimp speed.

I’m also not sure what to make of it. One possibility would be that chimps have an incredible short term memory (something like photographic) but that humans doing the same task have to rely on working memory. That would explain the speed at which they can take in all of the information.

comment by ESRogs · 2019-12-24T17:13:55.323Z · LW(p) · GW(p)
We do see some initial signs that humans might not be the most intelligent species on Earth in the innovative sense. For instance,
Although humans have the highest encephalization quotient, we don't have the most neurons, or even the most neurons in our forebrain.

I don't find the facts about number of neurons very suggestive of humans not being the most intelligent. On the contrary, when I look at the lists, it reinforces the idea that humans are smartest individually.

For total neuron count, only one animal beats humans: the African elephant, with 50% more, and humans have more than 2x the next highest (which is the Gorilla). So if we think the total neuron count numbers tell us how smart an animal is, then either humans or elephants are at the top.

For forebrain neurons, the top 10 are 8 kinds of whale, with humans at number 3 and gorillas at number 10.

Notably, humans and gorillas are the only animals in the top 10 on both lists, with humans handily beating gorillas in both cases. And all the animals which beat humans have much higher body mass.

I think if you were an alien who thought neurons were important for intelligence, and saw the lists above, but didn't know anything else about the species, you'd probably have most of your probability mass on either elephants, primates, or whales being the smartest. Once you saw human behavior you'd update to most of the mass being on humans at the top.

And similarly, if you already had as one of your leading hypotheses that humans were smartest, based on their behavior, and also that neurons were important, then I'd think seeing humans so high on the lists above would tend to confirm the humans-are-smartest hypothesis, rather than disconfirm it.
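A minimal sketch of this 'alien observer' update, written as a toy Bayesian calculation. Every prior and likelihood below is invented purely for illustration; none of these numbers come from the neuron-count lists themselves.

```python
# Toy Bayesian update illustrating the "alien observer" reasoning above.
# All priors and likelihoods are made-up numbers, chosen only for illustration.

hypotheses = ["elephants smartest", "whales smartest", "primates (humans) smartest"]

# Prior based only on the neuron-count lists: mass spread over the groups
# that top the total-neuron and forebrain-neuron rankings.
prior = {
    "elephants smartest": 0.35,
    "whales smartest": 0.35,
    "primates (humans) smartest": 0.30,
}

# Likelihood of then observing "one species builds cities, writes, and goes to
# the moon, and that species is a primate" under each hypothesis (also made up).
likelihood = {
    "elephants smartest": 0.05,
    "whales smartest": 0.05,
    "primates (humans) smartest": 0.90,
}

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}

for h in hypotheses:
    print(f"{h}: prior={prior[h]:.2f} -> posterior={posterior[h]:.2f}")

# The posterior concentrates on the primates/humans hypothesis: the behavioral
# evidence confirms, rather than disconfirms, the humans-are-smartest hypothesis.
```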

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-24T22:07:13.665Z · LW(p) · GW(p)
On the contrary, when I look at the lists, it reinforces the idea that humans are smartest individually.

It's worth noting that I have little reason to believe that the Wikipedia list is comprehensive.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-12-24T01:10:00.255Z · LW(p) · GW(p)

I think one of the strongest arguments for humans being the smartest animals is that the social environment that language and human culture created led to a massive increase in returns to intelligence, and I think there's some evidence from what we know about evolution that the human neocortex ballooned around the same time that we're theorized to have developed language and culture.

Replies from: PeterMcCluskey, matthew-barnett
comment by PeterMcCluskey · 2019-12-24T18:22:28.718Z · LW(p) · GW(p)

The increase in human brain size seems to be due more to an increased ability to get enough calories to fuel it than to the benefits of intelligence. See Suzana Herculano-Houzel's book The Human Advantage for evidence.

Replies from: capybaralet, None
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-12-29T01:15:46.862Z · LW(p) · GW(p)

Basic question: how can this be a sufficient explanation? There needs to be *some* advantage to having the bigger brain, it being "cheap" isn't a good enough explanation...

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2019-12-29T05:20:44.395Z · LW(p) · GW(p)

My point is that the advantage to bigger brains existed long before humans.

This paper suggests that larger brains enable a more diverse diet.

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-01-01T18:38:51.849Z · LW(p) · GW(p)

OK, I understand. JTBC, my original statement was: "language and human culture created led to a massive increase in returns to intelligence", not that larger brains (/greater intelligence) suddenly became valuable.

comment by [deleted] · 2019-12-26T04:56:33.737Z · LW(p) · GW(p)

I second this recommendation. Not just for this reason - it clears up a lot of foggy definitions and longstanding misapprehensions about comparative neuroscience in a wonderfully neat, tidy package.

comment by Matthew Barnett (matthew-barnett) · 2019-12-24T02:00:17.820Z · LW(p) · GW(p)

That's a good theoretical argument, but I want to see good empirical evidence too.

Replies from: 9eB1, capybaralet
comment by 9eB1 · 2019-12-24T14:23:16.813Z · LW(p) · GW(p)

Your take is contrarian as I suspect you will admit. There is quite a bit of empirical evidence, and if it turned out that humans were not the most intelligent it would be very surprising. There is probably just enough uncertainty that it's still within the realm of possibility, but only by a small margin.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-24T18:02:34.405Z · LW(p) · GW(p)

If there's good empirical evidence I suspect that it will be easy to show me. I pointed out in the post what type of empirical evidence I would find most compelling (cognitive tests). I am still reading comments, but so far people have only given me theoretical reasons.

Replies from: 9eB1
comment by 9eB1 · 2020-01-06T14:41:10.885Z · LW(p) · GW(p)

Sorry, I could have been clearer. The empirical evidence I was referring to was the existence of human civilization, which should inform priors about the likelihood of other animals being as intelligent.

I think you are referring to a particular type of "scientific evidence" which is a subset of empirical evidence. It's reasonable to ask for that kind of proof, but sometimes it isn't available. I am reminded of Eliezer's classic post You're Entitled to Arguments, But Not (That Particular) Proof [LW · GW].

To be honest, I think the answer is that there is just no truth to this matter. David Chapman might say that "most intelligent" is nebulous, so while there can be some structure, there is no definite answer as to what constitutes "most intelligent." Even when you try to break down the concept further, to "raw innovative capacity" I think you face the same inherent nebulosity.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-12-29T01:16:39.709Z · LW(p) · GW(p)

TBC: I'm alluding to others' scholarly arguments, which I'm not very familiar with. I'm not sure to what extent these arguments have empirical vs. theoretical basis.

comment by AlexMennen · 2019-12-24T18:58:10.860Z · LW(p) · GW(p)

There's more than one thing that you could mean by raw innovative capacity separate from cultural processing ability. First, you could mean someone's ability to innovate on their own without any direct help from others on the task at hand, but where they're allowed to use skills that they previously acquired from their culture. Second, you could mean someone's counterfactual ability to innovate on their own if they hadn't learned from culture. You seem to be conflating these somewhat, though mostly focusing on the second?

The second is underspecified, as you'd need to decide what counterfactual upbringing you're assuming. If you compare the cognitive performance of a human raised by bears to the cognitive performance of a bear in the same circumstances, this is unfair to the human, since the bear is raised in circumstances that it is adapted for and the human is not, just like comparing the cognitive performance of a bear raised by humans to that of a human in the same circumstances would be unfair to the bear. Though a human raised by non-humans would still make a more interesting comparison to non-human animals than Genie would, since Genie's environment is even less conducive to human development (I bet most animals wouldn't cognitively develop very well if they were kept immobilized in a locked room until maturity either).

I think this makes the second notion less interesting than the first, as there's a somewhat arbitrary dependence on the counterfactual environment. I guess the second notion is more relevant when trying to reason specifically about genetics as opposed to other factors that influence traits, but the first seems more relevant in other contexts, since it usually doesn't matter to what extent someone's abilities were determined by genetics or environmental factors.

I didn't really follow your argument for the relevance of this question to AI development. Why should raw innovation ability be more susceptible to discontinuous jumps than cultural processing ability? Until I understand the supposed relevance to AI better, it's hard for me to say which of the two notions is more relevant for this purpose.

I'd be very surprised if any existing non-human animals are ahead of humans by the first notion, and there's a clear reason in this case why performance would correlate strongly with social learning ability: social learning will have helped people in the past develop skills that they keep in the present. Even for the second notion, though it's a bit hard to say without pinning down the counterfactual more closely, I'd still expect humans to outperform all other animals in some reasonable compromise environment that helps both develop but doesn't involve them being taught things that the non-humans can't follow. I think there are still reasons to expect social learning ability and raw innovative capability to be correlated even in this sense, because higher general intelligence will help for both; original discovery and understanding things that are taught to you by others both require some of the same cognitive tools.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-24T20:42:01.827Z · LW(p) · GW(p)
I didn't really follow your argument for the relevance of this question to AI development. Why should raw innovation ability be more susceptible to discontinuous jumps than cultural processing ability? Until I understand the supposed relevance to AI better, it's hard for me to say which of the two notions is more relevant for this purpose.

If our technological power comes from accumulated cultural power, then we can say that this power came from a slow, accumulated process over tens of thousands of years. This type of thing is much harder to unseat than if our power came from a simple architectural difference in ability to innovate.

Replies from: AlexMennen
comment by AlexMennen · 2019-12-24T20:46:50.182Z · LW(p) · GW(p)

The abilities we obtained from architectural changes to our brains also came from a slow, accumulated process, taking even longer than cultural evolution does.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-24T20:52:59.136Z · LW(p) · GW(p)

Yes, but the actual power didn't come from the architectural difference, even if the architectural difference got us the ability to process culture. The actual power came from the culture. That's my point.

ETA: Are you saying that AI could discontinuously develop culture much more quickly than humans?

Replies from: AlexMennen
comment by AlexMennen · 2019-12-24T23:26:43.753Z · LW(p) · GW(p)

I guess what I was trying to say is (although I think I've partially figured out what you meant; see next paragraph), cultural evolution is a process that acquires adaptations slowly-ish and transmits previously-acquired adaptations to new organisms quickly, while biological evolution is a process that acquires adaptations very slowly and transmits previously-acquired adaptations to new organisms quickly. You seem to be comparing the rate at which cultural evolution acquires adaptations to the rate at which biological evolution transmits previously-acquired adaptations to new organisms, and concluding that cultural evolution is slower.

Re-reading the part of your post where you talked about AI takeoff speeds, you argue (which I hadn't understood before) that the rise of humans was fast on an evolutionary timescale and slow on a cultural timescale. So if it was due to an evolutionary change, it must have involved a small change with a large effect on capabilities, and a similarly large change will occur very suddenly if we mimic evolution quickly; whereas if it was due to a cultural change, it was probably a large change, so mimicking culture quickly won't produce a large effect on capabilities unless it is extremely quick.

This clarifies things, but I don't agree with the claim. I think slow changes in the intelligence of a species are compatible with fast changes in its capabilities even if the changes are mainly in raw innovative ability rather than cultural learning. Innovations can increase the ability to innovate, causing a positive feedback loop. A species could have high enough cultural learning ability for innovations to be transmitted over many generations without having the innovative ability to ever get the innovations that will kick off this loop. Then, when they start slowly gaining innovative ability, the innovations accumulated into cultural knowledge gradually increase, until they reach the feedback loop and the rate of innovation becomes more determined by changes in pre-existing innovations than by changes in raw innovative ability. There don't even have to be any evolutionary changes in the period in which the innovation rate starts to get dramatic.
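To make the shape of this story concrete, here is a toy simulation sketch. All parameters and functional forms are arbitrary assumptions chosen only to illustrate the dynamics: raw innovative ability rises slowly and linearly, but because new innovations build on the accumulated stock of culture, overall capability eventually takes off sharply with no discontinuity in raw ability.

```python
# Toy model of the feedback loop described above. Every constant here is an
# arbitrary illustrative choice, not an empirical estimate.

steps = 200
culture = 0.0      # accumulated innovations (the stock of cultural knowledge)
retention = 0.99   # fraction of culture successfully transmitted each generation
history = []

for t in range(steps):
    raw_ability = 0.01 * t  # raw innovative ability grows slowly and smoothly
    # New innovations depend on raw ability AND on the existing stock of culture,
    # so innovation feeds back on itself once culture is large enough.
    new_innovations = raw_ability * (1.0 + 0.05 * culture)
    culture = retention * culture + new_innovations
    history.append(culture)

# Capability barely moves for a long time, then grows explosively, even though
# raw innovative ability never jumps discontinuously.
for t in range(0, steps, 40):
    print(f"t={t:3d}  raw_ability={0.01 * t:.2f}  accumulated_culture={history[t]:.1f}")
```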

If you don't buy this story, then it's not clear why the changes being in cultural learning ability rather than in raw innovative ability would remove the need for a discontinuity. After all, our cultural learning ability went from not giving us much advantage over other animals to "accumulating decisive technological dominance in an evolutionary eyeblink" in an evolutionary eyeblink (quotation marks added for ease of parsing). Does this mean our ability to learn from culture must have greatly increased from a small change? You argue in the post that there's no clear candidate for what such a discontinuity in cultural learning ability could look like, but this seems just as true to me for raw innovative ability.

Perhaps you could argue that it doesn't matter if there's a sharp discontinuity in cultural learning ability because you can't learn from a culture faster than the culture learns things to teach you. In this case, yes, perhaps I would say that AI-driven culture could make advancements that look discontinuous on a human scale. Though I'm not entirely sure what that would look like, and I admit it does sound kind of soft-takeoffy.

comment by Rohin Shah (rohinmshah) · 2019-12-29T02:32:26.884Z · LW(p) · GW(p)

Planned summary for the Alignment Newsletter:

We can roughly separate intelligence into two categories: _raw innovative capability_ (the ability to figure things out from scratch, without the benefit of those who came before you), and _culture processing_ (the ability to learn from accumulated human knowledge). It's not clear that humans have the highest raw innovative capability; we may just have much better culture. For example, feral children raised outside of human society look very "unintelligent", The Secret of Our Success documents cases where culture trumped innovative capability, and humans actually _don't_ have the most neurons, or the most neurons in the forebrain.
(Why is this relevant to AI alignment? Matthew claims that it has implications on AI takeoff speeds, though he doesn't argue for that claim in the post.)

Planned opinion:

It seems very hard to actually make a principled distinction between these two facets of intelligence, because culture has such an influence over our "raw innovative capability" in the sense of our ability to make original discoveries / learn new things. While feral children might be less intelligent than animals (I wouldn't know), the appropriate comparison would be against "feral animals" that also didn't get opportunities to explore their environment and learn from their parents, and even so I'm not sure how much I'd trust results from such a "weird" (evolutionarily off-distribution) setup.
comment by avturchin · 2019-12-24T10:18:24.493Z · LW(p) · GW(p)

Interestingly, we created selection pressure on other species to develop something like human intelligence. First of all, dogs, which were selected for 15,000 years to be more compatible with humans, including the capability to understand human signals and language. Some dogs can understand a few hundred words.

Replies from: jmh
comment by jmh · 2019-12-24T14:49:17.918Z · LW(p) · GW(p)

My understanding is that domestication has also produced animals that are mentally more juvenile than their wild counterparts. Someone made that point a bit earlier about the difference between wolves and domesticated dogs.

If that is the case, it may not be that the domestic animals are really any smarter or better selected to understand human signals; it may be more about their subservient mental state, along with more exposure/practice.

comment by Lukas_Gloor · 2020-09-16T09:47:24.607Z · LW(p) · GW(p)
If the reason for our technological dominance is due to our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. This is because our AI systems can already process culture somewhat efficiently right now (see GPT-2) and there doesn't seem to be a hard separation between "being able to process culture inefficiently" and "able to process culture efficiently" other than the initial jump from not being able to do it at all, which we have already passed.

I keep hearing people say this (the part "and there doesn't seem to be a hard separation"), but I don't intuitively agree! I've spelled out my position here [LW(p) · GW(p)]. I have the intuition that there's a basin of attraction for good reasoning ("making use of culture to improve how you reason") that can generate a discontinuity. You can observe this among humans. Many people, including many EAs, don't seem to "get it" when it comes to how to form internal world models and reason off of them in novel and informative ways. If someone doesn't do this, or does it in a fashion that doesn't sufficiently correspond to reality's structure, they predictably won't make original and groundbreaking intellectual contributions. By contrast, other people do "get it," and their internal models are self-correcting to some degree at least, so if you ran uploaded copies of their brains for millennia, the results would be staggeringly different.

comment by Dagon · 2019-12-25T23:58:23.064Z · LW(p) · GW(p)

Ehn. If other animals are so smart, why aren't they rich?

I think you can make a strong argument that processing culture is more important than raw innovative ability for growth over time. I don't think you can make much of an argument that any beast but us has a mix of the two that has allowed significant sustained improvements in individual or species longevity or adaptation to environment.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-26T01:08:50.100Z · LW(p) · GW(p)

If we define intelligence purely by how successful someone is, then we run into a ton of issues. For example, is a billionaire who inherited their wealth but failed out of high school smarter than a middle class college professor?

I'm not arguing that other species are more successful than humans. I'm using the more intuitive definition of intelligence (problem solving capability/ability to innovate).

comment by Rafael Harth (sil-ver) · 2019-12-24T09:16:53.538Z · LW(p) · GW(p)

Isn't it the case that no non-human animal has ever been able to speak a language, even if we've tried to raise it as a human? (And that we have in fact tried this with chimps?) If that's true, why isn't that the end of the conversation? This is what's always confused me about this debate. It seems like everyone is ignoring the slam-dunk evidence that's right there.

Replies from: jmh, steve2152, Teerth Aloke
comment by jmh · 2019-12-24T15:07:47.932Z · LW(p) · GW(p)

Well, that might be true of human language -- though I'm not sure about the case of sign language for apes in captivity. Part of the problem is the physiological ability to make the sounds needed for human language. The other species simply lack that ability for the most part.

But how about flipping that view. Has any human been able to learn any language any other species might use? Sea mammals (dolphin, whales) appear to have some form of vocal communications. Similarly, I've at least heard stories that wolves also seem to communicate. Anecdotally, I have witnessed what I would take as an indication of communication between two of the dogs we used to have.

My hypothesis would be somewhat along the lines of: most social/pack animals will have some communication mechanisms; for many that will likely be auditory, but perhaps in some settings the non-"verbal" might dominate[1], that is, some level of language. Until we have the ability to establish that one way or another, we probably need to keep an open mind on these types of questions (level of intelligence and cross-species comparisons).

[1] For instance, do octopi have the ability to communicate with one another visually?

Note -- I'm using communicate to indicate more than merely signalling some state -- aggressive, wary/threatened, sexually active/interested. Until we can decode such communications we will not really be able to say anything about the level of information exchange or the underlying thinking, if any, associated with it.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2019-12-24T16:06:28.023Z · LW(p) · GW(p)
Well, that might be true of human language -- though I'm not sure about the case of sign language for apes in captivity. Part of the problem is the physiological ability to make the sounds needed for human language. The other species simply lack that ability for the most part.
But how about flipping that view. Has any human been able to learn any language any other species might use? Sea mammals (dolphin, whales) appear to have some form of vocal communications. Similarly, I've at least heard stories that wolves also seem to communicate. Anecdotally, I have witnessed what I would take as an indication of communication between two of the dogs we used to have.

Those are decent points. Not enough to sell me but enough to make me take it more seriously.

comment by Steven Byrnes (steve2152) · 2019-12-24T13:27:52.755Z · LW(p) · GW(p)

One could respond by saying that language is a specific human instinct, and if we were all elephants we would be talking about how no other species has anything like our uniquely elephant trunk, etc. etc. (I think I took that example from a Steven Pinker book?) There are certainly cognitive tasks that other animals can do that we can't at all or as well, like dragonflies predicting the trajectories of their prey (although we could eventually program a computer to do that, I imagine). Anyway, to the larger point, I actually don't have a strong opinion about the intelligence of "humans without human culture", and don't see how it's particularly relevant to anything. "Humans without human culture" might or might not have language; I know that groups of kids can invent languages from scratch (e.g. Nicaraguan sign language) but I'm not sure about a single human.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2019-12-24T14:54:17.904Z · LW(p) · GW(p)
There are certainly cognitive tasks that other animals can do that we can't at all or as well, like dragonflies predicting the trajectories of their prey

That's fine, but those aren't nearly as powerful. Language was a big factor in humans taking over the world, predicting the trajectory of whatever dragonflies eat wasn't and couldn't be.

Anyway, to the larger point, I actually don't have a strong opinion about the intelligence of "humans without human culture", and don't see how it's particularly relevant to anything. "Humans without human culture" might or might not have language; I know that groups of kids can invent languages from scratch (e.g. Nicaraguan sign language) but I'm not sure about a single human.

The point is that it is possible to teach a human language, and it seems to be impossible to teach a non-human animal language.

comment by Teerth Aloke · 2019-12-24T09:41:11.386Z · LW(p) · GW(p)

There was the Project Nim Chimpsky. It is probably the one you are talking about.

comment by Steven Byrnes (steve2152) · 2019-12-24T02:36:49.409Z · LW(p) · GW(p)

If the reason for our technological dominance is due to our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. This is because our AI systems can already process culture somewhat efficiently right now (see GPT-2) and there doesn't seem to be a hard separation between "being able to process culture inefficiently" and "able to process culture efficiently" other than the initial jump from not being able to do it at all, which we have already passed.

I think you're giving GPT-2 too much credit here. I mean, on any dimension of intelligence, you can say there's a continuum of capabilities along that scale with no hard separations. The more relevant question is, might there be a situation where all the algorithms are like GPT-2, which can only pick up superficial knowledge, and then someone has an algorithmic insight, and now we can make algorithms that, as they read more and more, develop ever deeper and richer conceptual understandings? And if so, how fast could things move after that insight? I don't think it's obvious.

I do agree that pretty much everything that might make an AGI suddenly powerful and dangerous is in the category of "taking advantage of the products of human culture", for example: coding (recursive self-improvement, writing new modules, interfacing with preexisting software and code), taking in human knowledge (reading and deeply understanding books, videos, Wikipedia, etc., a.k.a. "content overhang"), computing hardware (self-reproduction / seizing more computing power, a.k.a. "hardware overhang"), the ability of humans to coordinate and cooperate (social manipulation, earning money, etc.), etc. In all these cases and more, I would agree that one could in principle define a continuum of capabilities from 0 to superhuman with no hard separations, but still think that it's possible for a new algorithm to jump from "2019-like" (which is more than strictly 0) to "really able to take advantage of this tool like humans can or beyond" in one leap.

Sorry if I'm misunderstanding your point.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-12-24T00:10:40.144Z · LW(p) · GW(p)

" some animals that are more intelligent than animals" --> " some animals that are more intelligent than HUMANS "

Replies from: matthew-barnett