We don't understand what happened with culture enough

post by Jan_Kulveit · 2023-10-09T09:54:20.096Z · LW · GW · 22 comments

Contents

  Different stories
  Not even approximately true
  Conclusion
  (Note on AI)
22 comments

This is a quick response to Evolution Provides No Evidence For the Sharp Left Turn [LW · GW], due to it winning first prize in The Open Philanthropy Worldviews contest. I think part of the post is sufficiently misleading about evolutionary history, and the first prize gives it enough visibility, that it makes sense to write a post-length response.

The central evolutionary-biology claim of the original post is this:

  • The animals of the generation learn throughout their lifetimes, collectively performing many billions of steps of learning.
  • The generation dies, and all of the accumulated products of within lifetime learning are lost.
  • Differential reproductive success slightly changes the balance of traits across the species.



The only way to transmit information from one generation to the next is through evolution changing genomic traits, because death wipes out the within lifetime learning of each generation.



However, this sharp left turn does not occur because the inner learning processes suddenly become much better / more foomy / more general in a handful of outer optimization steps. It happens because you devoted billions of times more optimization power to the inner learning processes, but then deleted each inner learner shortly thereafter. Once the inner learning processes become able to pass non-trivial amounts of knowledge along to their successors, you get what looks like a sharp left turn. But that sharp left turn only happens because the inner learners have found a kludgy workaround past the crippling flaw where they all get deleted shortly after initialization.

In my view, this interpretation of evolutionary history is something between "speculative" and "wrong".

Transmitting some of the data gathered during the lifetime of the animal to the next generation by some other means is so obviously useful that it is highly convergent [LW · GW]. Non-genetic communication channels to the next generation include epigenetics, parental teaching / imitation learning, vertical transmission of symbionts, parameters of the prenatal environment, hormonal and chemical signaling, bio-electric signals, and transmission of environmental resources or modifications created by previous generations, which can shape the conditions experienced by future generations (e.g. beaver dams).

Given that overcoming the genetic bottleneck is so highly convergent, it would be a bit surprising if there were a large free lunch on the table in exactly this direction, as Quintin assumes:

Evolution's sharp left turn happened because evolution spent compute in a shockingly inefficient manner for increasing capabilities, leaving vast amounts of free energy on the table for any self-improving process that could work around the evolutionary bottleneck. Once you condition on this specific failure mode of evolution, you can easily predict that humans would undergo a sharp left turn at the point where we could pass significant knowledge across generations. I don't think there's anything else to explain here, and no reason to suppose some general tendency towards extreme sharpness in inner capability gains.


It's probably worth going into a bit of technical detail here: evolution did manage to discover innovations like mirror neurons: A mirror neuron is a neuron that fires both when an organism acts and when the organism observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. ... Further experiments confirmed that about 10% of neurons in the monkey inferior frontal and inferior parietal cortex have "mirror" properties and give similar responses to performed hand actions and observed actions.[1]

Clearly, mirror neurons are the type of innovation which allows high-throughput behavioural cloning / imitation learning. "10% of neurons in the monkey inferior frontal and inferior parietal cortex" is a massive amount of compute. Neurons imitating your parent's motor policy from visual-channel information about your parent's behaviour constitute a high-throughput channel. (I recommend doing a Fermi estimate of this channel capacity; a rough sketch follows.)
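Here is a minimal back-of-the-envelope version of that Fermi estimate (all the numbers below are my own rough guesses, chosen only to illustrate the exercise):

```python
# Rough Fermi estimate of the bandwidth of visual imitation learning
# (behavioural cloning from watching a parent act).
# All constants are order-of-magnitude guesses, not measured values.

observed_hours_per_day = 4        # time a juvenile spends watching a parent act
effective_bits_per_second = 10    # usable information about the motor policy
                                  # extracted per second (far below raw visual bandwidth)
days_per_year = 365
years_of_juvenile_learning = 5

bits_per_year = observed_hours_per_day * 3600 * effective_bits_per_second * days_per_year
total_bits = bits_per_year * years_of_juvenile_learning

print(f"~{bits_per_year:.1e} bits/year, ~{total_bits:.1e} bits over a juvenile period")
# ~5.3e7 bits/year and ~2.6e8 bits total -- many orders of magnitude more than
# the handful of bits per generation that selection can fix in the genome.
```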

A situation where you clearly have a system fully able to eat the free lunch on the table, and yet the lunch is supposedly still there, makes me suspicious.

At the same time: yes, clearly, nowadays, human culture is a lot of data, and humans learn more than monkeys.

Different stories

What are some evolutionarily plausible alternatives to Quintin's story?

Alternative stories would usually suggest that ancestral humans already had access to channels for overcoming the genetic bottleneck, and were using them to the extent it was marginally effective. Then some other major change happened, the marginal fitness advantage of learning more grew, and humans evolved to transmit more bits; so modern humans transmit more.

An example of such a major change could be the advent of culture. If you look at the past timeline from a replicator-dynamics perspective, the next most interesting event after the beginning of life is cultural replicators running on human brains crossing R > 1 and starting the second vast evolutionary search, cultural evolution.
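As a minimal illustration of why crossing R > 1 is a qualitative threshold (a toy sketch; the numbers are arbitrary and only the comparison matters):

```python
# Copies of a cultural replicator after t generations under simple
# geometric growth N_{t+1} = R * N_t, starting from a single copy.
def copies(R, generations, n0=1.0):
    n = n0
    for _ in range(generations):
        n *= R
    return n

for R in (0.9, 1.0, 1.1):
    print(f"R = {R}: ~{copies(R, 50):.3g} copies after 50 generations")
# R = 0.9 -> ~0.005 copies  (the replicator dies out)
# R = 1.0 -> 1 copy         (marginal persistence)
# R = 1.1 -> ~117 copies    (open-ended exponential growth)
```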

How is the story "cultural evolution is the pivotal event" different? Roughly speaking, culture is a multi-brain, parallel, immortal evolutionary search computation. Running at higher speed, and a layer of abstraction away from physical reality (compared to genes), it was able to discover many pools of advantage, like fire, versatile symbolic communication, or specialise-and-trade superagent organisation.

In this view, there is a type difference between 'culture' and 'increased channel capacity'. 

You can interpret this in multiple ways, but if you want to cast it as a story of a discontinuity, where biological evolution randomly stumbled upon starting a different powerful open-ended misaligned search, it makes sense. The fact that such a search finds caches of fitness and negentropy seems not very surprising. [2]

Was the "increased capacity to transfer what's learned in brain's lifetime to the next generation" at least the most important or notably large direction what to exploit? I'm not a specialist on human evolution, but seems hard to say with confidence: note that 'fire' is also a big deal, as it allows you do spend way less on digestion, and cheaper ability to coordinate is a big deal, as illustrated by ants, and symbolic communication is a big deal, as it is digital, robust and and effective compression.

Unfortunately for attempts to figure out what the precise marginal costs and fitness benefits were for ancestral humans, my impression is that ~ten thousand generations of genetic evolution in a fitness landscape shaped by cultural evolution screens off a lot of evidence. In particular, from the fact that modern humans are outliers in some phenotype characteristic, you cannot infer that it was the cause of the change in humans. For example, an argument like 'human kids have an unusual capacity to absorb significant knowledge across generations compared to chimps; ergo, the likely cause of humans' explosive development is ancestral humans having more of this capacity than other species' has very little weight. Modern wolves are also notably different from modern chihuahuas, but the correct causal story is not 'ancestral chihuahuas had an overhang of loyalty and harmlessness'.

Does this partially invalidate the argument toward implications for AI in the original post? In my view yes; if, following Quintin, we translate the actual situation into quantities and narratives that drive AI progress rates:

- the "specific failure mode" of not transmitting what brains learn to the next generation is not there
- the marginal fitness advantage of transmitting more bits to the next generation brains is unclear, similarly to an unclear marginal advantage of e.g. spending more on LLMs curating data for the next gen LLM training
- because we don't really understand what happened, the metaphorical map to AI progress mostly maps this lack of understanding to lack of clear insights for AI
- it seems likely culture is somehow big deal, but it is not clear how you would translate what happened to AI domain; if such thing can happen with AIs, if anything, it seems pushing more toward the discontinuity side, as the cultural search uncovered relatively fast multiple to many caches of negentropy
(- yes, obviously, given culture, it is important that you can transmit it to next generation, but it seems quite possible that for transferring seed culture  the capacity channel you have via mirror neurons is more than enough)
 

Not even approximately true

In case you believe the original post is still somehow approximately true, and the implications for AI progress still somehow approximately hold, I think it's important to basically un-learn that update. Quoting the original post:

This last paragraph makes an extremely important claim that I want to ensure I convey fully:

- IF we understand the mechanism behind humanity's sharp left turn with respect to evolution

- AND that mechanism is inapplicable to AI development

- THEN, there's no reason to reference evolution at all when forecasting AI development rates, not as evidence for a sharp left turn, not as an "illustrative example" of some mechanism / intuition which might supposedly lead to a sharp left turn in AI development, not for anything.
 

The conjunctive IF is a crux, and because we don't understand what happened with culture enough, the rest of the implication does not hold.

Consider a toy-model counterfactual story: in a fantasy world, exactly repeating 128 bits of the first cultural replicator gives the human ancestor the power to cast a spell and gain a +50% fitness advantage. Notice that this is a different story from "overcoming the channel-to-offspring capacity limit" - you may be in a situation where you have plenty of capacity but don't have the 128 bits, and this is a situation much more prone to discontinuities.
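A minimal simulation sketch of that toy contrast (shrunk to a 16-bit 'spell' so it actually runs; all parameters are invented for illustration):

```python
import random

# Two toy regimes for how transmitted culture could raise fitness:
# 1) capacity-limited: fitness grows smoothly as the channel widens each generation
# 2) key-bits-limited: fitness is flat until the exact "spell" string is found,
#    then jumps by +50% -- a discontinuity even with plenty of channel capacity.

random.seed(0)
KEY_BITS = 16                       # stand-in for the story's 128 bits
key = random.getrandbits(KEY_BITS)

def capacity_limited(generations):
    capacity, history = 10, []
    for _ in range(generations):
        capacity += 1                          # channel widens a little each generation
        history.append(1.0 + 0.01 * capacity)  # smooth, gradual fitness gains
    return history

def key_bits_limited(generations, tries_per_gen=2000):
    found_at, history = None, []
    for g in range(generations):
        if found_at is None and any(random.getrandbits(KEY_BITS) == key
                                    for _ in range(tries_per_gen)):
            found_at = g
        history.append(1.5 if found_at is not None else 1.0)
    return history, found_at

smooth = capacity_limited(100)
jumpy, found_at = key_bits_limited(100)
print(smooth[::20])           # gradual: 1.11, 1.31, 1.51, ...
print(found_at, jumpy[::20])  # flat at 1.0 until the exact string is hit, then 1.5
```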

Because it is not clear whether reality was more like stumbling upon a specific string, a piece of code, an evolutionary ratchet, or something else, we don't know enough to rule out a metaphor suggesting discontinuities.

Conclusion

Where I do agree with Quintin is in scepticism toward some other stories attempting to draw strong conclusions from human evolution, including strong conclusions about discontinuities.

I do think there is a reasonably good metaphor genetic evolution : brains ~ base optimiser : mesa-optimiser, but notice that evolution was able to keep brains mostly aligned in all species except humans. The relation human brain : cultural evolution is very unlike base optimiser : mesa-optimiser.

(Note on AI)

While I mostly wanted to focus on the evolutionary part of the OP, I'm sceptical about the AI claims too. (Paraphrasing: While the current process of AI training is not perfectly efficient, I don't think it has comparably sized overhangs which can be exploited easily.)

In contrast, to me it seems the current way AIs learn is very obviously inefficient compared to what's possible. For example, explain something new to GPT-4, or make it derive something new. Open a new chat window, and probe whether it now knows it. Compare with a human.
 

  1. ^
  2. ^

    This does not imply the genetic evolutionary search is a particularly bad optimiser - instead, the landscape is such that there are many sources of negentropy available.

22 comments

Comments sorted by top scores.

comment by jacob_cannell · 2023-10-09T19:27:01.601Z · LW(p) · GW(p)

You are missing the forest for the trees, over-focusing on simplistic binary distinctions.

Organism death results in loss of all brain information that wasn't transmitted, so the emergence of sapiens and civilization (techno-culture) requires surpassing a key criticality where enough novel information about the world is transmitted to overcome the dissipative loss of death.

Non-genetic communication channels to the next generation include epigenetics, parental teaching / imitation learning, vertical transmission of symbionts, parameters of prenatal environment, hormonal and chemical signaling, bio-electric signals, and transmission of environmental resources or modifications created by previous generations, which can shape the conditions experienced by future generations

Absent symbolic language, none of these are capable of transmitting significant general purpose world knowledge, and thus are irrelevant for the techno-cultural criticality.

Regardless, the sharp left turn argument is wrong for an entirely different more fundamental reason.

Humans are the most successful new species in history according to the utility function of evolution. The argument that there is some inner/outer alignment mismatch is obviously false for this simple reason. It is completely irrelevant that some humans use contraception, or whatever, because it has zero impact on the fact that humanity is off the charts successful according to the true evolutionary utility function.

Replies from: Raemon, Jan_Kulveit, MinusGix
comment by Raemon · 2023-10-09T19:57:21.125Z · LW(p) · GW(p)

Humans are the most successful new species in history according to the utility function of evolution.

Have you covered this before in more detail? This seems probably false to me at first glance (I'd expect some insects and plants to be more successful). Last I checked, population rates in industrialized nations were also sub-replacement. But I also haven't thought about this that much. 

Replies from: jacob_cannell, Dagon, Gurkenglas
comment by jacob_cannell · 2023-10-10T00:06:58.046Z · LW(p) · GW(p)

Yes. I said "most successful new species in history according to the utility function of evolution". There are about 100k to 300k chimpanzees (similar for gorillas, orangutans, etc) for example, compared to ~8 billion humans. So we are over 4 OOM more successful than related lineages.

We are far and away the most successful recent species by a landslide, and probably the most successful mammal ever. There are almost as many humans as there are bats (summing over all 1,400 known bat species). There are a bit more humans in the world than rats (summing over all 60 rat species) - a much older ultra-successful lineage.

By biomass, humans alone - a single species - account for almost half of the land mammal biomass, and our domesticated food sources account for the other half. Biomass is perhaps a better estimate of genetic replicator success, as larger animals have more cells and thus more gene copies.

We are the anomaly.

Replies from: mikkel-wilson, mesaoptimizer, mesaoptimizer
comment by MikkW (mikkel-wilson) · 2023-10-12T09:43:39.652Z · LW(p) · GW(p)

Your argument indicates that humans are successful (by said metric) among mammals, but doesn't address how they compare to insects. As I understand it, some insect species have both many more individuals and much more biomass than humans.

comment by mesaoptimizer · 2023-10-10T18:12:38.400Z · LW(p) · GW(p)

I think I get the issue here. You seem to be aggregating over IGF of every human gene in the human population.

The sheer stupidity of our civilization and the rate at which we are hurtling towards extinction do imply that we are not 'aligned' to IGF, though -- so I disagree.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2023-10-12T09:53:32.752Z · LW(p) · GW(p)

I agree that conditional on humanity going extinct, the seeming success of our species by a genetic metric would only be a false success.

comment by mesaoptimizer · 2023-10-10T18:06:42.539Z · LW(p) · GW(p)

The unit of selection is the gene, not the species. Aggregating over a species is not a proxy for the success of a gene replicator -- why not aggregate over apes, or over European-origin races, instead?

comment by Dagon · 2023-10-09T23:26:29.923Z · LW(p) · GW(p)

I'd also like to hear more about why "evolution" is modeled as HAVING a utility function, rather than just being the name we give to the results of variation and selection. And only then discussion of what that function might be.

I don't see how decision theory or VNM rationality applies to evolution, let alone what "success" might mean for a species as opposed to an individual conscious entity with semi-coherent goals.

Replies from: jacob_cannell
comment by jacob_cannell · 2023-10-10T00:11:21.497Z · LW(p) · GW(p)

The entire argument of the sharp left turn presupposes evolution has a utility function for the analogy to work, so arguing about that is tangential, but it's pretty obvious you can model genetic evolution as optimizing for replication count (or inclusive fitness over a species). We have concrete computational models of genetic optimization already, so there is really no need to bring in rationality or agents; it's just a matter of optimization functions.

Replies from: philh
comment by philh · 2023-10-15T16:09:37.133Z · LW(p) · GW(p)

I think there's a terminological mismatch here. Dagon was asking about a "utility function" as specifically being something satisfying the VNM axioms. But I think you're using it (in this comment and the one Dagon was replying to) synonymously with the more general concept of an "optimization function", i.e. a function returning some output that somehow gets optimized for?

comment by Gurkenglas · 2023-10-10T18:04:10.977Z · LW(p) · GW(p)

Number of individuals is not a conserved quantity. If you're going to score contests of homeostasis, do it by something like biomass (plants win), or how much space it takes up in a random encounter table, or how much attention aliens would need to pay to it to predict the planet's future (humans win).

comment by Jan_Kulveit · 2023-10-10T12:47:34.418Z · LW(p) · GW(p)

Absent symbolic language, none of these are capable of transmitting significant general purpose world knowledge, and thus are irrelevant for the techno-cultural criticality.


It's likely literally not true, but if it was ... this proves my point, doesn't it? 

"Symbolic language" is exactly the type of innovation which can be discontinuous, has a type "code" more than "data quantity", and unlocks many other things. For example more rapid and robust horizontal synchronization of brains (eg when hunting). Or yes, jump in effective quantity of information transmitted via other signals in time.

At the same time ... it could clearly be discontinuous: you can teach actual apes sign language, and it seems plausible this would make them more fit, if done in the wild.

(It's actually somewhat funny that Eric Drexler has a hundred-page report based exactly on the premise "AI models using human language is an obviously stupid inefficiency, and you can make a jump in efficiency with a more native-architecture-friendly format".

This does not seem obviously stupid: e.g., right now, if you want one model to transfer some implicit knowledge it learned, the way to do it is to use the ML-native model to generate a shitload of natural-language examples, and train the other model on them, building the native representation again.)

comment by MinusGix · 2023-10-09T21:57:28.745Z · LW(p) · GW(p)

Along with what Raemon said, though I expect us to probably grow far beyond any Earth species eventually, if we're characterizing evolution as having a reasonable utility function then I think there's the issue of other possibilities that would be preferable.
Like, evolution would-if-it-could choose humans to be far more focused on reproducing, and we would expect that if we didn't put in counter-effort, our partially-learned approximations ('sex is enjoyable', 'having family is good', etc.) would get increasingly tuned for the common environments.

Similarly, if we end up with an almost-aligned AGI that has some value which extends to 'filling the universe with as many squiggles as possible' because that value doesn't fall off quickly, but it has another more easily saturated 'caring for humans' then we end up with some resulting tradeoff along there: (for example) a dozen solar systems with a proper utopia set up.
This is better than the case where we don't exist, similar to how evolution 'prefers' humans compared to no life at all. It is also maybe preferable to the worlds where we lock down enough to never build AGI, similar to how evolution prefers humans reproducing across the stars to never spreading. It isn't the most desirable option, though. Ideally, we get everything, and evolution would prefer space algae to reproduce across the cosmos.

There's also room for uncertainty in there, where even if we get the agent loosely aligned internally (which is still hard...) then it can have a lot of room between 'nothing' to 'planet' to 'entirety of the available universe' to give us. Similar to how humans have a lot of room between 'negative utilitarianism' to 'basically no reproduction past some point' to 'reproduce all the time' to choose from / end up in. There's also the perturbations of that, where we don't get a full utopia from a partially-aligned AGI, or where we design new people from the ground up rather than them being notably genetically related to anyone.
So this is a definite mismatch - even if we limit ourselves to reasonable bounded implementations that could fit in a human brain. It isn't as bad a mismatch as it could have been, since it seems like we're on track to 'some amount of reproduction for a long period of time -> lots of people', but it still seems to be a mismatch to me.

comment by RogerDearnaley (roger-d-1) · 2023-11-08T09:49:11.435Z · LW(p) · GW(p)

It's a well-known fact in anthropology that:

  1. During the ~500,000 years that Neanderthals were around, their stone-tool-making technology didn't advance at all: tools from half a million years apart are functionally identical. Clearly their capacity for cultural transmission of stone-tool-making skills was already at its capacity limit the whole time.
  2. During the ~300,000 years that Homo sapiens has been around, our technology has advanced at an accelerating rate, with a rate of advance roughly proportional to planetary population, and planetary population increasing with technological advances, the positive feedback giving super-exponential acceleration. Clearly our cultural transmission of technological skills has never saturated its capacity limit (and information technology such as writing, printing, and the Internet has obviously further increased that limit).

So there's a clear and dramatic difference here, and it seems to date back to around the start of our species. Just what caused such a massive increase in our species' capacity to pass on useful information between generations is unclear. (Personally I suspect something in syntactic generality of our language, perhaps loosely analogous to the phenomenon of Turing-completeness.) But Homo sapiens is not just another hominid, and the sapiens part isn't just puffery: we have a dramatic capability shift from any previous species in the bandwidth of our cultural information transmission —  it's vastly larger than the information content of our genome, and still growing.
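A minimal sketch of the population-technology feedback described in point 2 above (my toy model with invented constants, not the commenter's):

```python
# Toy model: technology T advances at a rate proportional to population P,
# and population grows faster as technology improves. The coupled feedback
# produces faster-than-exponential growth (until the toy model blows up).
P, T = 1.0, 1.0
a, b, dt = 0.01, 0.01, 1.0   # invented coupling constants
for step in range(500):
    if step % 25 == 0:
        print(step, f"P={P:.3g}", f"T={T:.3g}")
    T += a * P * dt          # rate of advance ~ population
    P += b * T * P * dt      # population growth rate rises with technology
    if P > 1e12:             # stop once the explosion is obvious
        print("exploded by step", step)
        break
```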

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-01-11T02:35:22.251Z · LW(p) · GW(p)

Copying my response agreeing with and expanding on Jan's comment [LW(p) · GW(p)] on Evolution Provides No Evidence For the Sharp Left Turn [LW · GW].

Jan, your comment here got a lot of disagree votes, but I have strongly agreed with it. I think the discussion of cultural transmission as the source of the 'sharp left turn' of human evolution is missing a key piece.

Cultural transmission is not the first causal mechanism. I would argue that it is necessary for the development of modern human society, but not sufficient. 

The question of "How did we come to be?" is something I've been interested in my entire adult life. I've spent a lot of time in college courses studying neuroscience, and some studying anthropology. My understanding as I would summarize it here:

Around 2.5 million years ago - first evidence of hominids making and using stone tools

Around 1.5 million years ago - first evidence of hominids making fires

https://en.wikipedia.org/wiki/Prehistoric_technology

Around 300,000 years ago (15,000 - 20,000 generations), Homo sapiens arises as a new subspecies in Africa. Still occasionally interbreeds with other subspecies (and presumably thus occasionally communicates and trades with them). Early on, Homo sapiens didn't have an impressive jump in technology. There was a step up in their ability to compete with other hominids, but it wasn't totally overwhelming. After out-competing the other hominids in the area, Homo sapiens didn't sustain massively larger populations. They were still hunter-gatherers with similar tech, constrained to similar calorie acquisition limits.

They gradually grow in numbers and out-compete other subspecies. Their tools get gradually better.

Around 55,000 years ago (2700 - 3600 generations), Homo sapiens spreads out of Africa. Gradually colonizes the rest of the world, continuing to interbreed (and communicate and trade) with other subspecies somewhat, but being clearly dominant.

Around 12,000 years ago, Homo sapiens began developing agriculture and cities.

Around 6,000 years ago, Homo sapiens began using writing.

From the Wikipedia article on human population: [chart omitted]

Here's a nice summary quote from a Smithsonian magazine article:

For most of our history on this planet, Homo sapiens have not been the only humans. We coexisted, and as our genes make clear frequently interbred with various hominin species, including some we haven’t yet identified. But they dropped off, one by one, leaving our own species to represent all humanity. On an evolutionary timescale, some of these species vanished only recently.

On the Indonesian island of Flores, fossils evidence a curious and diminutive early human species nicknamed “hobbit.” Homo floresiensis appear to have been living until perhaps 50,000 years ago, but what happened to them is a mystery. They don’t appear to have any close relation to modern humans including the Rampasasa pygmy group, which lives in the same region today.

Neanderthals once stretched across Eurasia from Portugal and the British Isles to Siberia. As Homo sapiens became more prevalent across these areas the Neanderthals faded in their turn, being generally consigned to history by some 40,000 years ago. Some evidence suggests that a few die-hards might have held on in enclaves, like Gibraltar, until perhaps 29,000 years ago. Even today traces of them remain because modern humans carry Neanderthal DNA in their genome.

And from the wikipedia article on prehistoric technology:

Neolithic Revolution

The Neolithic Revolution was the first agricultural revolution, representing a transition from hunting and gathering nomadic life to an agriculture existence. It evolved independently in six separate locations worldwide circa 10,000–7,000 years BP (8,000–5,000 BC). The earliest known evidence exists in the tropical and subtropical areas of southwestern/southern Asia, northern/central Africa and Central America.[34]

There are some key defining characteristics. The introduction of agriculture resulted in a shift from nomadic to more sedentary lifestyles,[35] and the use of agricultural tools such as the plough, digging stick and hoe made agricultural labor more efficient.[citation needed] Animals were domesticated, including dogs.[34][35] Another defining characteristic of the period was the emergence of pottery,[35] and, in the late Neolithic period, the wheel was introduced for making pottery.[36]

So what am I getting at here? I'm saying that this idea of a Homo sapiens sharp left turn doesn't look much like a sharp left turn. It was a moderate increase in capabilities over other hominids.

I would say that the Neolithic Revolution is a better candidate for a sharp left turn. I think you can trace a clear line of 'something fundamentally different started happening' from the Neolithic Revolution up to the Industrial Revolution when the really obvious 'sharp left turn' in human population began.

So here's the really interesting mystery. Why did the Neolithic Revolution occur independently in six separate locations?!

Here's my current best hypothesis. Homo sapiens originally was only somewhat smarter than the other hominids. Like maybe, ~6-year-old intelligences amongst the ~4-year-old intelligences. And if you took a Homo sapiens individual from that time period and gave them a modern education... they'd seem significantly mentally handicapped by today's standards even with a good education. But importantly, their brains were bigger. A lot of that potential brain area was poorly utilized, but now evolution had a big new canvas to work with, and the Machiavellian-brain hypothesis provides a motivation for why strong evolutionary pressure would push this new larger brain to improve its organization. Homo sapiens were competing with each other and with other hominids from 300,000 to 50,000 years ago! Most of their existence so far! And they didn't start clearly and rapidly dominating and conquering the world until the more recent end of that. So 250,000 years of evolution figuring out how to organize this new larger brain capacity to good effect. To go from 'weak general learner with low max capability cap' to 'strong general learner with high max capability cap'. A lot of important things happened in the brain in this time, but it's hard to see any evidence of this in the fossil record, because the bone changes happened 300,000 years ago and the bones then stayed more or less the same. If this hypothesis is true, then we are a more different species from the original Homo sapiens than those original Homo sapiens were from the other hominids they had as neighbors. A crazy fast time period from an evolutionary standpoint, but with that big new canvas to work with, and a strong evolutionary pressure rewarding every tiny gain, it can happen. It took fewer generations to go from a bloodhound-type dog to a modern dachshund.

There are some important differences between our modern Homo sapiens neurons and those of other great apes. And between great apes and other mammals.

The fundamental learning algorithm of the cortex didn't change; what did change were some of the 'hyperparameters' and the 'architectural wiring' within the cortex.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3103088/ 

For an example of a 'hyperparameter' change, human cortical pyramidal cells (especially those in our prefrontal cortex) form a lot more synaptic connections with other neurons. I think this is pretty clearly a quantitative change rather than a qualitative one, so I think it nicely fits the analogy of a 'hyperparameter' change. I highlight this one, because this difference was traced to a difference in a single gene. And in experiments where this gene was expressed in a transgenic mouse line, the resulting mice were measurably better at solving puzzles.

 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10064077/ 

An example of what I mean about 'architectural wiring' changes is that there has been a shift in the patterns of the Brodmann areas from non-human apes to humans. As in, what percentage of the cortex is devoted to specific functions. Language, abstract reasoning, and social cognition all benefited relatively more compared to, say, vision. These Brodmann areas are determined by the genetically determined wiring that occurs during fetal development and lasts for a lifetime, not by in-lifetime learning like synaptic weights are. There are exceptions to this rule, but they are exceptions that prove the rule. Someone born blind can utilize their otherwise useless visual cortex a bit for helping with other cognitive tasks, but only to a limited extent. And this plastic period ends in early childhood. An adult who loses their eyes gains almost no cognitive benefits in other skills from 'reassigning' visual cortex to other tasks. Their skill gains in non-visual tasks like navigation-by-hearing-and-mental-space-modeling come primarily from learning within the areas already devoted to those tasks, driven by the necessity of the life change.

https://www.science.org/content/blog-post/chimp-study-offers-new-clues-language 

What bearing does this have on trying to predict the future of AI?

If my hypothesis is correct, there are potentially analogously important changes to be made in shaping the defining architecture and hyperparameters of deep neural nets. I have specific hypotheses about these changes drawing on my neuroscience background and the research I've been doing over the past couple years into analyzing the remaining algorithmic roadblocks to AGI. Mostly, I've been sharing this with only a few trusted AI safety researcher friends, since I think it's a pretty key area of capabilities research if I'm right. If I'm wrong, then it's irrelevant, except for flagging the area as a dead end.

For more details that I do feel ok sharing, see my talk here: 

comment by chasmani · 2023-10-10T08:25:35.050Z · LW(p) · GW(p)

Here’s a slightly different story:

The amount of information is less important than the quality of the information. The channels were there to transmit information, but there were not efficient coding schemes.

Language is an efficient coding scheme by which salient aspects of knowledge can be usefully compressed and passed to future generations.

There was no free lunch because there was an evolutionary bottleneck that involved the slow development of cognitive and biological architecture to enable complex language. This developed in humans in a co-evolutionary process with advanced social dynamics. Evolution stumbled across cultural transmission in this way and the rest is quite literally history.

This is all highly relevant to AI development. There is the potential for the development of more efficient coding schemes for communicating AI learnt knowledge between AI models. When that happens we get the sharp left turn.

comment by Quintin Pope (quintin-pope) · 2023-10-10T01:19:41.292Z · LW(p) · GW(p)

I really don't want to spend even more [LW(p) · GW(p)] time arguing over my evolution post, so I'll just copy over our interactions from the previous times you criticized it, since that seems like context readers may appreciate.

In the comment sections of the original post:

Your comment [LW(p) · GW(p)]

[very long, but mainly about your "many other animals also transmit information via non-genetic means" objection + some other mechanisms you think might have caused human takeoff]

My response [LW(p) · GW(p)]

I don't think this objection matters for the argument I'm making. All the cross-generational information channels you highlight are at rough saturation, so they're not able to contribute to the cross-generational accumulation of capabilities-promoting information. Thus, the enormous disparity between the brain's within-lifetime learning versus evolution cannot lead to a multiple-OOM faster accumulation of capabilities as compared to evolution.

When non-genetic cross-generational channels are at saturation, the plot of capabilities-related info versus generation count looks like this: [plot omitted: "All info" and "Genetic info" curves versus generation count]

with non-genetic information channels only giving the "All info" line a ~constant advantage over "Genetic info". Non-genetic channels might be faster than evolution, but because they're saturated, they only give each generation a fixed advantage over where they'd be with only genetic info. In contrast, once the cultural channel allows for an ever-increasing volume of transmitted information, then the vastly faster rate of within-lifetime learning can start contributing to the slope of the "All info" line, and not just its height.

Thus, humanity's sharp left turn.

In Twitter comments on Open Philanthropy's announcement of prize winners:

Your tweet

But what's the central point, then? Evolution discovered how to avoid the genetic bottleneck myriad times; also discovered potentially unbounded ways how to transmit arbitrary number of bits, like learning-teaching behaviours; except humans, nothing foomed. So the updated story would be more like "some amount of non-genetic/cultural accumulation is clearly convergent and is common, but there is apparently some threshold crossed so far only by humans. Once you cross it you unlock a lot of free energy and the process grows explosively". (&the cause or size of threshold is unexplained)

(note: this was a reply and part of a slightly longer chain)

My response

Firstly, I disagree with your statement that other species have "potentially unbounded ways how to transmit arbitrary number of bits". Taken literally, of course there's no species on earth that can actually transmit an *unlimited* amount of cultural information between generations. However, humans are still a clear and massive outlier in the volume of cultural information we can transmit between generations, which is what allows for our continuously increasing capabilities across time.

Secondly, the main point of my article was not to determine why humans, in particular, are exceptional in this regard. The main point was to connect the rapid increase in human capabilities relative to previous evolution-driven progress rates with the greater optimization power of brains as compared to evolution. Being so much better at transmitting cultural information as compared to other species allowed humans to undergo a "data-driven singularity" relative to evolution. While our individual brains and learning processes might not have changed much between us and ancestral humans, the volume and quality of data available for training future generations did increase massively, since past generations were much better able to distill the results of their lifetime learning into higher-quality data.

This allows for a connection between the factors we've identified are important for creating powerful AI systems (data volume, data quality, and effectively applied compute), and the process underlying the human "sharp left turn". It reframes the mechanisms that drove human progress rates in terms of the quantities and narratives that drive AI progress rates, and allows us to more easily see what implications the latter has for the former.

In particular, this frame suggests that the human "sharp left turn" was driven by the exploitation of a one-time enormous resource inefficiency in the structure of the human, species-level optimization process. And while the current process of AI training is not perfectly efficient, I don't think it has comparably sized overhangs which can be exploited easily. If true, this would mean human evolutionary history provides little evidence for sudden increases in AI capabilities.

The above is also consistent with rapid civilizational progress depending on many additional factors: it relies on resource overhang being a *necessary* factor, but does not require it to be alone *sufficient* to accelerate human progress. There are doubtless many other factors that are relevant, such as a historical environment favorable to progress, a learning process that sufficiently pays attention to other members of one's species, not being a purely aquatic species, and so on. However, any full explanation of the acceleration in human progress of the form: 
"sudden progress happens exactly when (resource overhang) AND (X) AND (Y) AND (NOT Z) AND (W OR P OR NOT R) AND..." 
is still going to have the above implications for AI progress rates.

There's also an entire second half to the article, which discusses what human "misalignment" to inclusive genetic fitness (doesn't) mean for alignment, as well as the prospects for alignment during two specific fast takeoff (but not sharp left turn) scenarios, but that seems secondary to this discussion.

Replies from: Jan_Kulveit, Jonas Hallgren
comment by Jan_Kulveit · 2023-10-10T12:27:50.013Z · LW(p) · GW(p)

I'll try to keep it short
 

All the cross-generational information channels you highlight are at rough saturation, so they're not able to contribute to the cross-generational accumulation of capabilities-promoting information.

This seems clearly contradicted by empirical evidence. Mirror neurons would likely be able to saturate what you assume is the brain's learning rate, so not transferring more learned bits is much more likely because the marginal cost of doing so is higher than that of other sensible options. Which is a different reason than "saturated, at capacity".
 

Firstly, I disagree with your statement that other species have "potentially unbounded ways how to transmit arbitrary number of bits". Taken literally, of course there's no species on earth that can actually transmit an *unlimited* amount of cultural information between generations

Sure. Taken literally, the statement is obviously false ... literally nothing can store an arbitrary number of bits, because of the Bekenstein bound. More precisely, the claim is that the existing non-human ways of transmitting learned bits to the next generation do not, in practice, seem to be constrained by limits on how many bits they can transmit, but by some other limits (e.g. you can transmit more bits than the animal has the capacity to learn).
 

Secondly, the main point of my article was not to determine why humans, in particular, are exceptional in this regard. The main point was to connect the rapid increase in human capabilities relative to previous evolution-driven progress rates with the greater optimization power of brains as compared to evolution. Being so much better at transmitting cultural information as compared to other species allowed humans to undergo a "data-driven singularity" relative to evolution. While our individual brains and learning processes might not have changed much between us and ancestral humans, the volume and quality of data available for training future generations did increase massively, since past generations were much better able to distill the results of their lifetime learning into higher-quality data.
 


1. As explained in my post, there is no reason to assume ancestral humans were so much better at transmitting information compared to other species.

2. The qualifier that they were better at transmitting cultural information may (or may not) do a lot of work. 

The crux is something like "what is the type signature of culture".  Your original post roughly assumes "it's just more data". But this seems very unclear: in a comment above yours, jacob_cannell confidently claims I miss the forest and guesses that the critical innovation is "symbolic language". But, obviously, "symbolic language" is a very different type of innovation than "more data transmitted across generations". 

Symbolic language likely
- allows any type of channel to be used more effectively
- in particular, allows more efficient horizontal synchronization, allowing parallel computation across many brains
- overall sounds more like a software upgrade

Consider plain old telephone network wires: these have surprisingly large intrinsic capacity, which isn't that effectively used by analog voice calls. Yes, when you plug a modem in on both sides you experience a "jump" in capacity - but this is much more like a "software update" and can be more sudden.

Or a different example - empirically, it seems possible to teach various non-human apes sign language (their general-purpose predictive-processing brains are general enough to learn this). I would classify this as a "software" or "algorithm" upgrade. If someone did this to a group of apes in the wild, it seems plausible the knowledge of language would stick and make them differentially more fit. But teaching apes symbolic language sounds in principle different from "it's just more data" or "it's higher-quality data", and the implications for AI progress would be different.
 

it relies on resource overhang being a *necessary* factor,

My impression is that, compared to your original post, your model drifts to more and more general concepts, where it becomes more likely true, harder to refute, and less clear in what it implies for AI. What is the "resource" here? Does negentropy stored in wood count as "a resource overhang"?

I'm arguing specifically against a version where the "resource overhang" is caused by "exploitable resources you easily unlock by transmitting more bits learned by your brain vertically to your offspring's brain", because your map from humans to AI progress is based on a quite specific model of what the bottlenecks and overhangs are.

If the current version of the argument is "sudden progress happens exactly when (resource overhang) AND ..." with "generally any kind of resource", then yes, this sounds more likely, but it seems very unclear what this implies for AI.

(Yes I'm basically not discussing the second half of the article)

comment by Jonas Hallgren · 2023-10-10T09:04:45.270Z · LW(p) · GW(p)

Isn't there an alternative story here where we care about the sharp left turn, but in the cultural sense, similar to Drexler's CAIS where we have similar types of experimentation as happened during the cultural evolution phase? 

You've convinced me that the sharp left turn will not happen in the classical way that people have thought about it, but are you that certain that there isn't that much free energy available in cultural style processes? If so, why?

I can imagine that there is something to say about SGD already being pretty algorithmically efficient, but I guess I would say that determining how much available free energy there is in improving optimisation processes is an open question. If the error bars are high here, how can we then know that the AI won't spin up something similar internally? 

I also want to add something about genetic fitness becoming twisted as a consequence of cultural evolutionary pressure on individuals. Culture in itself changed the optimal survival behaviour of humans, which then meant that the meta-level optimisation loop changed the underlying optimisation loop. Isn't the culture changing the objective function still a problem that we have to potentially contend with, even though it might not be as difficult as the normal sharp left turn?

For example, let's say that we deploy GPT-6 and it figures out that, in order to solve the loosely defined objective we have determined for it using (Constitutional AI)^2, the objective should be discussed by many different iterations of itself to create a democratic process of multiple CoT reasoners. This meta-process seems, in my opinion, like something that the cultural evolution hypothesis would predict is more optimal than just one GPT-6, and it also seems a lot harder to align than normal? 

comment by Jonas Hallgren · 2024-12-06T14:34:40.477Z · LW(p) · GW(p)

I personally believe that this post is very important for the debate between Shard Theory and the Sharp Left Turn. I often find that other perspectives on the deeper problems in AI alignment are expressed, and I believe this to be a much more nuanced take compared to Quintin Pope's essay on the Sharp Left Turn as well as the MIRI conception of evolution.

This is a field of study where we don't know what is going on; the truth is somewhere in between, and acknowledging anything else is not being epistemically humble.

comment by tangerine · 2023-10-10T16:43:15.676Z · LW(p) · GW(p)

Cultural evolution is a bit of a catch-22; you need to keep it going for generations before you gain an advantage from it, but you can’t keep it going unless you’re already gaining an advantage from it. A young human today has a massive advantage in absorbing existing culture, but other species don’t and didn’t. It requires a very long up-front investment without immediate returns, which is exactly not what evolution tends to favor.

Regarding the relevance to AI, the importance of cultural evolution is a strong counter argument to fast take-off. Yudkowsky himself argues that humans somehow separated themselves from other apes, such that humans can uniquely do things that seem wildly out-of-distribution, like going to and walking on the moon, that we’re therefore more generally intelligent and that therefore AI could similarly separate itself from humans by becoming even more generally intelligent and gaining capabilities that are even more wildly out-of-distribution. However, the thing that separates humans from other apes is cultural evolution; it’s a one-time gain without a superlative. Moreover, it’s a counter argument to the very idea of general intelligence, because it shows that going to and walking on the moon are not in fact as wildly out-of-distribution as it first seems. The astronauts and the engineers who built their vehicles were trained exhaustively in the required capabilities, which had been previously culturally accumulated. Walking on the moon only seems striking because the compounding speed of memetic evolution is much higher than the extraordinary slow pace of genetic evolution.

A further argument I would make for the relevance of cultural evolution to AI is that in my view it shows that the ability of individual human agents to discover new capabilities is on average extremely limited and that the same is likely true for AI, although perhaps to a somewhat lesser extent. Humanity as a whole makes great strides, because among the many who try new things the very few who succeed pass on their new capabilities to the others. The vast majority of any individual’s capabilities relies on absorbing existing knowledge and habits. At the same time, most individuals do not pass on anything new and even when they do it’s the luck of the draw. I think the same is mostly true for any individual AI, because of the inherent rarity of useful behaviors in the space of all behaviors. If this is indeed true, then that means we have less to fear from misaligned individuals than from misaligned cultures.

comment by Algon · 2023-10-09T14:00:00.600Z · LW(p) · GW(p)

What do you think about the "SGD:AI :: human-learning-algorithms:humans" analogy? To me, that may be the most important belief underlying Quintin's model.