birds and mammals independently evolved intelligence

post by bhauth · 2025-04-08T20:00:05.100Z · LW · GW · 23 comments

This is a link post for https://www.quantamagazine.org/intelligence-evolved-at-least-twice-in-vertebrate-animals-20250407/

Researchers used RNA sequencing to observe how cell types change during brain development; other researchers looked at the connection patterns of neurons in brains. In both cases, clear distinctions were found between all mammals and all birds. They concluded that intelligence developed independently in birds and mammals; I agree. This is evidence for convergence of general intelligence.

23 comments

Comments sorted by top scores.

comment by Julian Bradshaw · 2025-04-08T20:59:37.922Z · LW(p) · GW(p)

Couple takeaways here. First, quoting the article:

By comparing the bird pallium to lizard and mouse palliums, they also found that the neocortex and DVR were built with similar circuitry — however, the neurons that composed those neural circuits were distinct.

“How we end up with similar circuitry was more flexible than I would have expected,” Zaremba said. “You can build the same circuits from different cell types.”

This is a pretty surprising level of convergence for two separate evolutionary pathways to intelligence. Apparently the neural circuits are so similar that when the original seminal paper on bird brains was written in 1969, it just assumed there had to be a common ancestor, and that thinking felt so logical it held for decades afterward.

Obviously, this implies strong convergent pressures for animal intelligence. It's not obvious to me that artificial intelligence should converge in the same way, since it isn't subject to the same pressures all animals face, but we should maybe expect biological aliens to have intelligence more like ours than we'd previously expected.

Speaking of aliens, that's my second takeaway: if decent-ish (birds like crows/ravens/parrots + mammals) intelligence has evolved twice on Earth, that drops the odds that the "evolve a tool-using animal with intelligence" filter is a strong Fermi Paradox filter. Thus, to explain the Fermi Paradox, we should posit increased odds that the Great Filter is in front of us. (However, my prior for the Great Filter being ahead of humanity is pretty low, we're too close to AI and the stars—keep in mind that even a paperclipper has not been Filtered, a Great Filter prevents any intelligence from escaping Earth.)

Replies from: Yoreth, Davidmanheim, dylan-mahoney, Max Lee, Jonas Hallgren
comment by Yoreth · 2025-04-09T15:38:08.847Z · LW(p) · GW(p)

I previously posted Was the K-T event a Great Filter? [LW · GW] as a pushback against the notion that different lineages of life on Earth evolving intelligence is really "independent evidence" in any meaningful sense. Intelligence can evolve only if there's selective pressure favoring it, and a large part of that pressure likely comes from the presence of other intelligent creatures competing for resources. Therefore mammals and birds together really should only count as one data point.

(It's more plausible that octopus intelligence is independent, since the marine biome is largely separate from the terrestrial, although of course not totally.)

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-09T19:34:22.570Z · LW(p) · GW(p)

Interesting thought. I think you have a point about coevolution, but I don't think it explains away everything in the birds vs. mammals case. How much are birds really competing with mammals vs. other birds/other animals? Mammals compete with lots of animals; why did only birds get smarter? I tend to think intra-niche/genus competition would generate most of the pressure for higher intelligence, and for whatever reason that competition doesn't seem to lead to huge intelligence gains in most species.

(Re: octopus, cephalopods do have interactions with marine mammals. But also, their intelligence is seemingly different from mammals/birds - strong motor intelligence, but they're not really very social or cooperative. Hard to compare but I'd put them in a lower tier than the top birds/mammals for the parts of intelligence relevant to the Fermi Paradox.)

In terms of the K-T event, I think it could plausibly qualify as a filter, but asteroid impacts of that size are common enough that it can't be the Great Filter on its own; it doesn't seem the specific details of the impact (location/timing) are rare enough for that.

Replies from: ChristianKl
comment by ChristianKl · 2025-04-10T14:52:43.574Z · LW(p) · GW(p)

The Humboldt squid is a cephalopod that can coordinate to hunt in groups.

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-10T18:14:08.975Z · LW(p) · GW(p)

Huh, seems you are correct. They also apparently are heavily cannibalistic, which might be a good impetus for modeling the intentions of other members of your species…

Replies from: ChristianKl
comment by ChristianKl · 2025-04-10T20:07:37.826Z · LW(p) · GW(p)

I searched a bit more and it seems they don't have personal relationships with other members of the same species the way mammals and birds can.

Personal relationships seem to be something that requires intelligence, and that birds and mammals evolved separately.

Replies from: JenniferRM
comment by JenniferRM · 2025-04-10T21:01:49.320Z · LW(p) · GW(p)

I came here to say "look at octopods!" but you already have. Yay team! :-)

One of the alignment strategies I have been researching in parallel with many others involves finding examples of human-and-animal benevolence and tracing the convergent evolution therein, proposing that "the shared abstracts here (across these genomes, these brains, these creatures all convergently doing these things)" is probably algorithmically simple, with algorithm-to-reality shims that might also be important, and then asking that people study it and lean in the direction of doing "more of that".

There is an octopod cognate of "oxytocin" (the "maternal love and protection hormone"), but from what I can tell they did NOT re-use it in the ways that we did. But also, they mostly lay eggs and abandon the individual babies to their own survival, rather than raising children carefully.

By contrast, birds and mammals share a relatively similar kind of "high parental investment"!

comment by Davidmanheim · 2025-04-09T06:31:58.138Z · LW(p) · GW(p)

...Thus, to explain the Fermi Paradox, we should posit increased odds that the Great Filter is in front of us. (However, my prior for the Great Filter being ahead of humanity is pretty low, we're too close to AI and the stars—keep in mind that even a paperclipper has not been Filtered, a Great Filter prevents any intelligence from escaping Earth.)


Or that the filter is far behind us - specifically, eukaryotes only evolved once. And in the chain model by Sandberg et al., pre-intelligence filters hold the vast majority of the probability mass, so it seems to me that eliminating intelligence as a filter shifts the remaining probability mass for a filter backwards in time in expectation.

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-09T19:50:34.877Z · LW(p) · GW(p)

I agree it's likely the Great Filter is behind us. And I think you're technically right, most filters are behind us, and many are far in the past, so the "average expected date of the Great Filter" shifts backward. But, quoting my other comment [LW(p) · GW(p)]:

Every other possible filter would gain equally, unless you think this implies that maybe we should discount other evolutionary steps more as well. But either way, that’s still bad on net because we lose probability mass on steps behind us.

So even though the "expected date" shifts backward, the odds for "behind us vs. ahead of us" shift toward "ahead of us".

Let me put it this way: let's say we have 10 possible filters behind us, and 2 ahead of us. We've "lost" one filter behind us due to new information. So, 9 filters behind us gain a little probability mass, 1 filter behind us loses most probability mass, and 2 ahead of us gain a little probability mass. This does increase the odds that the filter is far behind us, since "animal with tool-use intelligence" is a relatively recent filter. But, because "animal with tool-use intelligence" was already behind us and a small amount of that "behind us" probability mass has now shifted to filters ahead of us, the ratio between all past filters and all future filters has adjusted slightly toward future filters.
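To make the renormalization concrete, here is a minimal sketch with made-up illustrative weights (the twelve named filter candidates and the 0.1 discount factor are my own assumptions, not anything from the thread):

```python
# Toy Bayesian renormalization: weaken one *past* filter candidate and see how
# the probability that the Great Filter lies ahead of us changes.

def normalize(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical prior over 12 mutually exclusive Great Filter candidates:
# 10 behind us (one of them "tool-using intelligence"), 2 ahead of us.
prior = {f"past_{i}": 1.0 for i in range(9)}
prior["past_tool_using_intelligence"] = 1.0
prior["future_1"] = 1.0
prior["future_2"] = 1.0
prior = normalize(prior)

# New information: intelligence evolved at least twice, so this step looks
# much less filter-like. Cut its weight by an arbitrary factor and renormalize.
posterior = dict(prior)
posterior["past_tool_using_intelligence"] *= 0.1
posterior = normalize(posterior)

def p_ahead(dist):
    return sum(w for name, w in dist.items() if name.startswith("future"))

print(f"P(Great Filter ahead), before: {p_ahead(prior):.3f}")      # ~0.167
print(f"P(Great Filter ahead), after:  {p_ahead(posterior):.3f}")  # ~0.180
```

The ratio of "ahead" to every past filter other than the discounted one is unchanged, which is the point made in the earlier comment; the overall past-vs-future ratio still moves toward "ahead".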

Replies from: Jonas Hallgren
comment by Jonas Hallgren · 2025-04-10T06:42:20.864Z · LW(p) · GW(p)

I see your point, yet if 95% of the probability mass is in the past, the 5% in the future only gains a marginal amount. I do still like the idea of crossing off potential filters to see where the risks are, so fair enough!

comment by Pretentious Penguin (dylan-mahoney) · 2025-04-08T23:15:59.676Z · LW(p) · GW(p)

To clarify the last part of your comment, the ratio of the probability of the Great Filter being in front of us to the probability of the Great Filter being behind tool-using intelligent animals should be unchanged by this update, right?

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-09T04:22:17.749Z · LW(p) · GW(p)

Yes. Every other possible filter would gain equally, unless you think this implies that maybe we should discount other evolutionary steps more as well. But either way, that’s still bad on net because we lose probability mass on steps behind us.

comment by Knight Lee (Max Lee) · 2025-04-09T04:29:16.097Z · LW(p) · GW(p)

I agree with everything you said but I disagree that the Fermi Paradox needs explanation.

Fermi Paradox = Doomsday Argument

The Fermi Paradox simply asks, "why haven't we seen aliens?"

The answer is that any civilization which an old alien civilization chooses to communicate with (and is able to reach) will learn so much technology that it will quickly reach the singularity. It will be influenced so much that it effectively becomes a "province" within the old alien civilization.

So the Fermi Paradox question "why aren't we born in a civilization which 'sees' an old alien civilization?" is actually indistinguishable from the Doomsday Argument question "why aren't we born in an old [alien] civilization ourselves?"

Doomsday Argument is wrong

Here's the problem: the Doomsday Argument is irrational from a decision theory point of view.

Suppose your parents were Omega and Omego. The instant you were born, they hand you a $1,000,000 allowance, and they instantly ceased to exist.

If you were rational in the first nanosecond of your life, the Doomsday Argument would prove it's extremely unlikely you'll live much longer than 1 nanosecond, and you should spend all your money immediately.

If you actually believe the Doomsday Argument, you should thank your lucky stars that you weren't rational in the first nanosecond of your life.
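For readers who haven't seen the move being made here, a minimal sketch of the Gott-style "delta-t" reasoning the comment appears to be applying to a single lifetime (the numbers are purely illustrative): if your current age $t_{\text{now}}$ is a uniformly random fraction of your total lifespan $T$, then

$$f = \frac{t_{\text{now}}}{T} \sim \mathrm{Uniform}(0,1) \;\Rightarrow\; P(T > k\, t_{\text{now}}) = P\!\left(f < \tfrac{1}{k}\right) = \frac{1}{k}.$$

With $t_{\text{now}} = 1$ nanosecond, the newborn assigns only a 5% chance to living longer than 20 nanoseconds, which is why, taken as a decision rule, it says to spend the allowance immediately.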

Both SSA and SIA are bad decision theories (when combined with CDT), because they are optimizing something different than your utility.

Explanation

SSA is optimizing the duration of time your average copy has correct probabilities. SIA is optimizing the duration of time your total copies have the correct probabilities.

SSA doesn't care if the first-nanosecond you is wrong, because he's a small fraction of your time (even if he burns your life savings, resulting in near 0 utility).

SIA doesn't care if you're almost certainly completely wrong (grossly overestimating the probability of counterfactuals with more copies of you), because in the unlikely case you are correct, there are far more copies of you who have the correct probabilities. It opens the door to Pascal's Mugging.
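(For readers unfamiliar with the abbreviations: SSA is the Self-Sampling Assumption and SIA the Self-Indication Assumption. A standard toy case where they come apart, with illustrative numbers of my own rather than anything from this comment: a fair coin creates one observer on heads and two on tails. SSA treats you as a random sample from the observers within each world and leaves the prior alone; SIA additionally weights worlds by how many observers they contain:

$$P_{\mathrm{SSA}}(\text{tails}\mid \text{I exist}) = \frac{1}{2}, \qquad P_{\mathrm{SIA}}(\text{tails}\mid \text{I exist}) = \frac{\tfrac{1}{2}\cdot 2}{\tfrac{1}{2}\cdot 2 + \tfrac{1}{2}\cdot 1} = \frac{2}{3}.)$$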

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-09T05:06:31.172Z · LW(p) · GW(p)

Two objections:

  1. Granting that the decision theory that would result from reasoning based on the Fermi Paradox alone is irrational, we'd still want an answer to the question[1] of why we don't see aliens. If we live in a universe with causes, there ought to be some reason, and I'd like to know the answer.
  2. "why aren't we born in a civilization which 'sees' an old alien civilization" is not indistinguishable from "why aren't we born in an old [alien] civilization ourselves?" Especially assuming FTL travel limitations hold, as we generally expect, it would be pretty reasonable to expect to see evidence of interstellar civilizations expanding as we looked at galaxies hundreds of millions or billions of lightyears away—some kind of obviously unnatural behavior, such as infrared radiation from Dyson swarms replacing normal starlight in some sector of a galaxy.[2] There should be many more civilizations we can see than civilizations we can contact. 
  1. ^

    I've seen it argued that the "Fermi Paradox" ought to be called simply the "Fermi Question" instead for reasons like this, and also that Fermi himself seems to have meant it as an honest question, not a paradox. However, it's better known as the Paradox, and Fermi Question unfortunately collides with Fermi estimation.

  2. ^

    It is technically possible that all interstellar civilizations don't do anything visible to us—the Dark Forest theory is one variant of this—but that would contradict the "old civilization would contact and absorb ours" part of your reasoning.

Replies from: Max Lee
comment by Knight Lee (Max Lee) · 2025-04-09T06:48:26.954Z · LW(p) · GW(p)

For point 1, I can argue about how rational a decision theory is, but I cannot argue for "why I am this observer rather than that observer." Not only am I unable to explain why I'm an observer who doesn't see aliens, I am unable to explain why I am an observer who believes 1+1=2, assuming there are infinite observers who believe 1+1=2 and infinite observers who believe 1+1=3. Anthropic reasoning becomes insanely hard and confusing, and even Max Tegmark, Eliezer Yudkowsky [LW · GW] and Nate Soares [LW · GW] are confused.

Let's just focus on point 2, since I'm much more hopeful I can get to the bottom of this one.

Of course I don't believe in faster-than-light travel. I'm just saying that "being born as someone who sees old alien civilizations" and "being born as someone inside an old [alien] civilization" are technically the same, if you ignore the completely subjective and unnecessary distinction of "how much does the alien civilization need to influence me before I'm allowed to call myself a member of them?"

Suppose at level 0 influence, the alien civilization completely hides from you, and doesn't let you see any of their activity.

At level 1.0 influence, the alien civilization doesn't hide from you, and lets you look at their Dyson swarms or star-lifting machines and all the fancy technologies.

At level 1.1 influence, they let you see their Dyson swarms, plus they send radio signals to us, sharing all their technologies and allowing us to immediately reach technological singularity. Very quickly, we build powerful molecular assemblers, allowing us to turn any instructions into physical objects, and we read instructions from the alien civilization allowing us to build a copy of their diplomats.

Some countries may be afraid to follow the radio instructions, but the instructions can easily be designed so that any country which refuses to follow the instructions will be quickly left behind.

At this point, there will be aliens on Earth, we'll talk about life and everything, and we are in some sense members of their civilization.

At level 2.0 influence, the aliens physically arrive at Earth themselves, and observe our evolution, and introduce themselves.

At level 3.0 influence, the aliens physically arrive at Earth, and instead of observing our evolution (which is full of suffering and genocide and so forth), they intervene and directly create humans on Earth skipping the middle step, and we are born in the alien laboratory, and we talk to them and say hi.

At level 4.0 influence we are not only born in an alien laboratory, but we are aliens ourselves, completely born and raised in their society.

Now think about it. The Fermi Paradox is asking us why we aren't born as individuals who experience level 1.0 influence. The Doomsday Argument is asking us why we aren't born as individuals who experience level 4.0 influence (or arguably level 1.1 influence can count).

But honestly, there is no difference, from an epistemic view, between 1.0 influence and 4.0 influence. The two questions are ultimately the same: if most individuals exist inside the part of the light cone of an alien civilization (which they choose to influence), why aren't we one of them?

Do you agree the two problems are epistemically the same?

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-09T20:13:27.476Z · LW(p) · GW(p)

I think if you frame it as:

if most individuals exist inside the part of the light cone of an alien civilization, why aren't we one of them?

Then yes, 1.0 influence and 4.0 influence both count as "part of the light cone", and so for the related anthropic arguments you could choose to group them together.

But re: anthropic arguments,

Not only am I unable to explain why I'm an observer who doesn't see aliens

This is where I think I have a different perspective. Granting that anthropic arguments (here, about which observer you are and the odds of that) cause frustration and we don't want to get into them, I think there is an actual reason why we don't see aliens - maybe they aren't there, maybe they're hiding, maybe it's all a simulation, whatever - and there's no strong reason to assume we can't discover that reason. So, in that non-anthropic sense, in a more scientific inquiry sense, it is possible to explain why I'm an observer who doesn't see aliens. We just don't know how to do that yet. The Great Filter is one possible explanation behind the "they aren't there" answer, and this new information adjusts what we think the filters that would make up the Great Filter might be.

Another way to think about this: suppose we discover that actually science proves life should only arise on 1 in a googol planets. That introduces interesting anthropic considerations about how we ended up as observers on that 1 planet (can't observe if you don't exist, yadda yadda). But what I care about here is instead, what scientific discovery proved the odds should be so low? What exactly is the Great Filter that made us so rare?

Replies from: Max Lee
comment by Knight Lee (Max Lee) · 2025-04-09T22:33:19.279Z · LW(p) · GW(p)

Okay I guess we're getting into the anthropic arguments then :/

So both the Fermi Paradox and the Doomsday Argument are asking: "Assuming that the typical civilization lasts a very long time and has trillions of trillions of individuals inside the part of its light cone it influences (either as members, in the Doomsday Argument, or as observers, in the Fermi Paradox), why are we one of the first 100 billion individuals in our civilization?"

Before I try to answer it, I first want to point out that even if there were no answer, we should behave as if there were no Doomsday and no Great Filter. Because from a decision theory point of view, you don't want your past self, in the first nanosecond of your life, to use the Doomsday Argument to prove he's unlikely to live much longer than a nanosecond, and then spend all his resources in the first nanosecond.

For the actual answer, I only have theories.

One theory is this. "There are so many rocks in the universe, so why am I a human rather than a rock?" The answer is that rocks are not capable of thinking "why am I X rather than Y," so given that you think such a thing, you cannot be a rock and have to be something intelligent like a human.

I may also ask you, "why, of all my millions of minutes of life, am I currently in the exact minute where I'm debating someone online about anthropic reasoning?" The answer might be similar to the rock answer: given you are thinking "why am I X rather than Y," you're probably in a debate etc. over anthropic reasoning.

If you stretch this form of reasoning to its limits, you may get the result that the only people asking "why am I one of the first 100 billion observers of my civilization," are the people who are the first 100 billion observers.

This obviously feels very unsatisfactory. Yet we cannot explain why exactly this explanation feels unsatisfactory while the previous two explanations feel satisfactory, so maybe it's due to human biases that we reject the third argument but accept the first two.

Another theory is that you are indeed a simulation, but not the kind of simulation you think. In how much detail must a simulation simulate you before it contains a real observer, such that you might actually exist inside the simulation? I argue that the simulation only needs to be detailed enough that your resulting thoughts and behaviours are accurate.

But mere human imagination, imagining a narrative and knowing enough facts about the world to make it accurate, can actually simulate something accurately. Characters in a realistic story have similar thoughts and behaviours to real-world humans, so they might just be simulations.

So people in the far future, who are not the first 100 billion observers of our civilization, but maybe the trillion trillionth observers, might be imagining our conversation playing out, as an entertaining but realistic story illustrating the strangeness of anthropic reasoning. As soon as the story finishes, we may cease to exist :/. In fact, as soon as I walk away from my computer and I'm no longer talking about anthropic reasoning, I might stop existing and only exist again when I come back. But I won't notice it happening, because such a story is neither entertaining nor realistic if the characters actually observe glitches in the story simulation.

Or maybe they are simply reading our conversation instead of writing it themselves, but reading it and imagining how it plays out still succeeds in simulating us.

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-10T04:04:00.336Z · LW(p) · GW(p)

Dangit I can't cease to exist, I have stuff to do this weekend.

But more seriously, I don't see the point you're making? I don't have a particular objection to your discussion of anthropic arguments, but also I don't understand how it relates to the "what part of evolution/planetary science/sociology/etc. is the Great Filter" scientific question.

Replies from: Max Lee
comment by Knight Lee (Max Lee) · 2025-04-10T04:10:52.384Z · LW(p) · GW(p)

What I'm trying to argue is that there could easily be no Great Filter, and there could exist trillions of trillions of observers who live inside the light cone of an old alien civilization, whether directly as members of the civilization, or as observers who listen to their radio.

It's just that we're not one of them. We're one of the first few observers who aren't in such a light cone. Even though the observers inside such light cones outnumber us a trillion to one, we aren't one of them.

:) if you insist on scientific explanations and dismiss anthropic explanations, then why doesn't this work as an answer?

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2025-04-10T05:48:47.236Z · LW(p) · GW(p)

Oh okay. I agree it's possible there's no Great Filter.

comment by ChristianKl · 2025-04-10T14:51:05.477Z · LW(p) · GW(p)

It's important to be clear what we mean by intelligence in this context. The last common ancestor of birds and mammals lived 325 million years ago, roughly 200 million years after the Cambrian explosion. It was likely a lizard-like creature of maybe 20-30cm length.

It was likely intelligent enough to coordinate the movement of its four legs and make some decisions about what to eat. It had eyes. It probably had a brain with neurons. Maybe capable of a freeze response to avoid getting eaten by a predator but no fight/flight. It was likely much dumber than today's mammals and birds. 

Given the same building blocks of neurons and axons that you connect together to form a brain, birds and mammals seem to have found different strategies for how to create higher intelligence.

Octopi, which diverged before the Cambrian explosion, would make a much better argument for independent development of intelligence.

comment by cubefox · 2025-04-10T09:04:47.113Z · LW(p) · GW(p)

Your headline overstates the results. The last common ancestor of birds and mammals probably wasn't exactly unintelligent. (In contrast to our last common ancestor with the octopus, as the article discusses.)

comment by Alex K. Chen (parrot) (alex-k-chen) · 2025-04-12T04:15:14.620Z · LW(p) · GW(p)

Also, you forgot manta rays. They're just a small branch on the tree that failed to diversify, but cartilaginous fishes are more distantly related to us than bony fishes, yet way brainier.