Values Darwinism

post by pchvykov · 2024-01-22T10:44:41.548Z

TL;DR / summary:

Can we objectively define "progress"? A Darwinian definition seems plausible. But when referring to cultures, world-views, or theories, it seems that physical survival of the fittest must be complemented by memetic survival. The interaction of the two will generally be complex, and may parallel models from epidemiology. Most discussions account for either physical or memetic selection - but not for their interplay. If we suppose that happiness makes memes more contagious, we can push back on the common idea that "evolution does not select for happiness," and update our conventional notion of progress.

[Cross-posted from my website http://pchvykov.com/blog]


I’ve had some discussions recently about how to measure whether our conventional idea of “human progress” really constitutes an improvement. There is some evidence that people were happier and better nourished in caves than directly after the agricultural revolution (see Sapiens by Y. Harari). Today there are ideas that “the simple life,” like at a permaculture village, is somehow more conducive to our humanity and well-being than city or corporate life. We could make the argument that the question of "progress" reflects the more fundamental question of “finding deeper truths”: On the one hand, we tend to think that our modern scientific understanding is “more true” than, e.g., Aristotelian physics or Buddhist cosmologies. On the other hand, we understand that "all models are wrong, but some are useful" (G. Box) — and perhaps some Buddhist ideas are more useful if the goal is human happiness (e.g., see Bhutan's Gross National Happiness). Note that for this and the following discussion, we entertain the scenario where we do not have access to any fundamental notion of “truth” that we could universally agree on.

One argument that always seemed most convincing to me in favor of “progress is real” is the Darwinian one: ultimately, success should be measured via “survival of the fittest.” Even if hunter-gatherer tribes were somehow happier, they were outcompeted by agricultural communities, which could support much larger, albeit more miserable, populations. A similar argument can be made for modern science: technological progress gave Europe the firepower to subjugate most Eastern nations. This idea could be summarized by refining Box’s quote to “all models are wrong, but some are useful for the long-term survival of their proponents” — and those models will be the ones we see, get used to, and come to think of as "best" or "truest."

However, while Europe may have “conquered” India in physical space, lately it has become hard to ignore the proliferation of Yoga studios on every corner of major Western cities. This raises the idea that besides “physical Darwinism” – survival of the fittest physical organism – we must also consider “memetic Darwinism” – survival of the fittest idea. “Memes” (see Memetics) are ideas or “cultural units” that multiply, proliferate, and mutate on the substrate of the network of communicating human minds — not unlike the epidemiology of diseases. Thus, while you are physically fighting and defeating another culture, that culture may be "infecting" yours with its ideas or world-views. Ultimately, the long-term survival of an ideology will depend both on the rate at which its proponents physically survive in a competitive environment and on the rate at which it can “infect” others’ minds (or resist being “displaced” in the minds of its proponents by other ideas). Note that this is a qualitatively new dynamic compared to Darwinism in the animal kingdom, where most non-human animals’ capacity for adopting fundamentally new behaviors within a single lifetime is far more limited.
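As a rough illustration (this is just one made-up way to formalize the idea, not an established model), such two-channel selection could be written as:

$$\frac{dx_i}{dt} = r_i\, x_i \;+\; \sum_{j \neq i} (\beta_i - \beta_j)\, x_i x_j,$$

where $x_i$ counts the carriers of world-view $i$, $r_i$ is their net physical growth rate (births minus deaths), and $\beta_i$ is the view's contagiousness - the rate at which its carriers convert carriers of other views upon contact. The first term is physical Darwinism; the second is memetic Darwinism.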

As such, we can once again adapt Box’s quote to be “all models are wrong, but some are useful for their own long-term survival” – i.e., we get “model Darwinism.” What then determines “model fitness”? This question is especially interesting if we want to define our notion of "progress" (or even "truth"?) in terms of this fitness, as suggested earlier. Indeed, how does this fitness relate to our conventional or scientific ideas of truth? I think these relationships may be hard to guess, but they could perhaps be quantitatively researched. We might hope that models conventionally seen as “closer to truth” either confer a physical advantage (e.g., modern science) or are more contagious (e.g., yoga). Nonetheless, given the Darwinian success of astrology and tarot reading, the connection is clearly non-obvious. In particular, model fitness alone cannot be used to legitimize the notion of “technological progress.”

It could be interesting to leverage research on network epidemiology to study similar trade-offs between the physical and psychological effects of different world-views. E.g., we know that diseases that kill quickly often do not gather a high death toll, as they have little chance to spread. In model Darwinism, the parallel would be suicide cults. Modern science could then be compared to something that makes you live longer, but is not very contagious (though arguably this is starting to change). In contrast, more contagious ideas may be correlated with ones that confer a “psychological advantage” – such as a subjective sense of well-being, purpose, or belonging. If an idea is physically helpful but extremely distasteful, it will be unstable, as it is easily displaced by more contagious ideas. This could motivate an interesting new way of modeling burnout, or conversely, altruism (see also my post on the Paradox of Tolerance).
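To make this concrete, here is a minimal simulation sketch of the two-view case of the toy equation above (in Python; all parameter values are illustrative assumptions, not empirical estimates):

```python
# Minimal sketch: two world-views competing through both physical and memetic selection.
# Implements the two-view case of the toy equation above; all numbers are made up.

def simulate(r_x, r_y, beta_x, beta_y, x0=0.5, y0=0.5, dt=0.01, steps=100_000):
    """Integrate dx/dt = r_x*x + (beta_x - beta_y)*x*y (and symmetrically for y)."""
    x, y = x0, y0
    for _ in range(steps):
        contact = x * y  # well-mixed contact between the two groups
        dx = r_x * x + (beta_x - beta_y) * contact  # physical growth + net conversions
        dy = r_y * y + (beta_y - beta_x) * contact
        x = max(x + dx * dt, 0.0)
        y = max(y + dy * dt, 0.0)
        total = x + y
        if total > 1.0:  # renormalize to a fixed carrying capacity
            x, y = x / total, y / total
    return x, y

# View X: physically advantageous but barely contagious ("live longer, spread less").
# View Y: no physical advantage, but highly contagious.
x, y = simulate(r_x=0.02, r_y=0.0, beta_x=0.01, beta_y=0.20)
print(f"long-run share of X: {x / (x + y):.2f}, of Y: {y / (x + y):.2f}")
```

With these made-up numbers the contagious view ends up dominating despite its physical disadvantage; tip the parameters the other way and the outcome flips, and intermediate values sustain a mixture - the kind of balance discussed below.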

Conventionally, we often think of material advantage as the sole criterion for what makes a world-view "good" or "progressive." But we are not purely physical beings. Long-term misery may be selected for by physical Darwinism, but not by memetic Darwinism. This way, we may start to push back on the idea that "evolution does not select for happiness" - which may be an artifact of ignoring memetic contagion effects. If the trade-off is between a long and miserable life and a short and happy one, the choice may be individual, but the bilateral view of evolution presented here would likely select for some balance.

13 comments

comment by the gears to ascension (lahwran) · 2024-01-26T21:03:19.840Z

However, while Europe may have “conquered” India in physical space, lately it has become hard to ignore the proliferation of Yoga studios on every corner of major Western cities.

I don't think this is in spite of conquering India; when a life form colonizes another life form (e.g., a culture colonizing another), it typically eats parts of the victim. I think this is what we're seeing here. Note how much of the original complexity of yoga gets changed to fit the colonizing culture. The original phrasing seems to imply viewing the events as much more disconnected than I think is correct.

Thus, while you are physically fighting and defeating another culture, that culture may be "infecting" yours with its ideas or world-views.

In other words, I don't think this is the colonizing culture getting infected with the culture of the victim by accident; I claim that that was the goal, to consume the other culture's bodies, land, and minds, and use them for new purposes that lose the original meaning.

I don't think I can agree with the claim that this is evidence of a good thing happening.

Replies from: pchvykov, Jiro
comment by pchvykov · 2024-01-29T06:10:47.828Z

Interesting point - that adds a whole other layer of complexity to the argument, which feels a bit daunting to me to even start dissecting.
Still, could we say that in the standard formulation of Darwinian selection, where only the "fittest" survives, the victim is really considered dead and gone? I think that, at least in the model of Darwinism, this is the case. So my goal in this post is to push back on this model. You give a slightly different angle from which to also push back on it: whether intentional or accidental, when one culture defeats another, it takes on attributes of the victim - and therefore some aspects of the victim live on, modifying the dynamics of "natural selection."

As to whether it's a good thing - well, the whole post starts on moral relativism, so I don't want to suddenly bring in moral judgements at this point. It's an interesting question, and I think you could make the argument either way. 

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-01-29T08:32:46.453Z

I have a hard time thinking about this stuff completely disconnected from moral implications, because there are a lot of people in and out of academia who will take any opportunity to use this kind of modeling to further their agenda; even improving models of this stuff gets used as propaganda, and in doing so, groups make understanding the models unpalatable to the people they disagree with - which reduces insight from people who become allergic to the topic. I almost didn't comment because of this, and I have a hunch that it's at least part of why the post didn't get more upvotes; the current ambient propaganda feeling around the concept of Darwinism outside of academia vaguely implies that if you're talking about the concept, you think it's good.

I feel like we need to understand biological and civilizational agency - what it means for a thing to seek an outcome outside itself - in order to make high-quality, scientifically grounded claims about the relationship between morality and Darwinism. I do think you're barking up vaguely the right tree, but I think we can do better than the might-makes-right perspective this could be twisted into implying: "so you're telling me that if we kill everyone we don't like, that means our values are objectively good?" If anything, the place where values Darwinism would be interesting to investigate is the long history of values selection up until humanity went civilization-scale a mere 12k years ago - and our lack of detailed evidence about the values of people 12k years ago and earlier makes that rather a problem.

Somewhat tangentially, on that note: among a few other papers I was browsing a while ago, I really like this take on the large cost of information acquisition from death, because it seems to me to imply that learning mechanisms that do not cause death are likely more effective at picking up adaptive traits.

Replies from: pchvykov
comment by pchvykov · 2024-02-03T12:31:23.457Z

Re: "so you're telling me that if we kill everyone who we don't like, that means our values are objectively good?" - winners write history, so I think yes, that is how people view Darwinism, selection of values, and I think implicitly our values are derived from this thinking (though no-one will ever admit to this). The modern values of tolerance I think still come from this same thinking - just with the additional understanding that diverse societies tend to win-out over homogeneous societies. So we transition from individual Darwinism, to group Darwinism - but still keep Darwinism as our way to arrive at values. 

Adding memetic Darwinism on top of this may qualitatively change the landscape, I believe. 

Thanks for those references - definitely an interesting way to quantitatively study these things; I'll look at them in more detail.

comment by Jiro · 2024-01-27T00:24:33.830Z

Note how much of the original complexity of yoga gets changed to fit the colonizing culture.

This is true of cultural elements that stay in the same country as well. Compare Casper the Friendly Ghost to how people thought of ghosts 150 years ago.

comment by MikkW (mikkel-wilson) · 2024-01-26T19:59:53.978Z

I would like to make a meta-comment, not directly related to this post.

When I came upon this post, it had a negative karma score. I don't think it's good form to have posts receiving negative net karma (except in extreme cases), so I upvoted to provide this with a positive net karma.

It is unpleasant for an author to receive a negative karma score on a post which they spent time and effort to make (even when that effort was relatively small) - much more so than receiving no karma beyond the starting score. This makes the author less likely to post again in the future, which prevents communication of ideas and keeps the author from getting better at writing. In particular, this creates a risk of LessWrong becoming more like an echo chamber (which I don't think is desirable), and makes the community less likely to hear valuable ideas that go against the grain of the local culture.

A writer who is encouraged to write more will become clearer in their communication, as well as in their thoughts. They will also get more used to the particular expectations of the culture of LessWrong - norms that have good reason to exist, but which also go against some people's intuitions, or against what has worked well for them in other, more "normie" contexts.

Karma serves as a valuable signal to authors about the extent to which they are doing a good job of writing clearly about interesting topics in a way that provides value to members of the community, but the range of positive integers provides enough signal. There isn't much lost in excluding the negative range (except in extreme cases).

Let's be nice to people who are still figuring writing out: I encourage you to refrain from downvoting them into negative karma.

Replies from: pchvykov
comment by pchvykov · 2024-01-29T06:24:01.596Z

I appreciate the care and support there :)
Honestly, I never really looked at my karma score and wasn't sure how it works - I think your explanation helps. The reason I post on here is that I find the engagement encouraging (even when negative): comments, evidence of people reading and thinking about my stuff. The worst is when no one has read it at all.

On the other hand, I agree that becoming an echo chamber is a very real danger, and one that goes deeply against LessWrong values - and I definitely have a sense that it's happening, at least to some extent. I have a couple of posts that got large negative scores for reasons that I think were more cultural than factual.

Still, it shouldn't be on readers to take care of the writer's karma - I think your suggestion should be directed at whoever maintains this site, to update their karma calculation system. As for me, since engagement is encouraging, I'd love to see the voting history of my posts - not just the final score (this article had quite some ups and downs over the last few days - I'd be curious to see them in detail).

comment by Gunnar_Zarncke · 2024-01-22T16:55:13.616Z

You may wonder why you got downvoted. I didn't downvote - I upvoted. But I know the local culture: there is a strong intention to overcome biological and other limitations. Maybe we can have it both ways (as can be seen in the other comment).

I also think there has been a memetic evolution - a co-evolution of people and culture.

But I think you commit a kind of Naturalistic Fallacy (see also Mind Projection Fallacy) here:

One argument that always seemed most convincing to me in favor of “progress is real” is the Darwinian one: ultimately, success should be measured via “survival of the fittest.” [bold by me]

I think you are right that the current value systems result from evolutionary processes. That describes them decently. But you cannot derive a "should" from that. If we engineer values, the evolutionary pressures stop, and more complex processes start to take over. We could conceivably freeze culture in place or speed up change. 

Replies from: pchvykov
comment by pchvykov · 2024-01-26T06:20:21.356Z

Thanks for your comment! 
From this and other comments, I get the feeling I didn't make my goal clear: I'm trying to see if there is any objective way to define progress / values (starting from assuming moral relativism). I'm not trying to make any claim as to what these values should be. The Darwinian argument is the only one I've encountered that made sense to me - and so here I'm pushing back on it a bit - but maybe there are other good ways to objectively define values?
Imho, we tend to implicitly ground many of our values in this Darwinian perspective - hence I think it's an important topic.

I like what you point out about the distinction between prescriptive vs descriptive values here. Within moral relativism, I guess there is nothing to say about prescriptive values at all. So yes, Darwinism can only comment on descriptive values. 

However, I don't think this is quite the same as the fallacies you mention. "Might makes right" (Darwinian) is not the same as "natural makes right" - the natural is a series of historical accidents, while survival of the fittest is a theoretical construct (with the caveat that at the scale of nations, the number of conflicts is small, so historical accidents could become important in determining the "fittest"). Similarly, "fittest" as determined by who survives seems like an objective fact rather than a mind projection (with the caveat that an "individual" may be a mind projection - but I think that's a bit deeper).

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2024-01-26T10:56:57.257Z

I think you should make clear that you are describing natural-selection effects and not deriving any norms from them ("should").

It would also be nice if you could make testable predictions ("if this effect continues, then we should see more/less of...").

Replies from: pchvykov
comment by pchvykov · 2024-01-29T06:14:40.432Z

Yeah, that could be a cleaner line of argument, I agree - though I think I'd need to rewrite the whole thing.

For testable predictions... I could at least see modeling the extreme cases - purely physical or purely memetic selection - and perhaps finding real-world examples where one, the other, or neither is a good description. That could be fun.

comment by Dagon · 2024-01-22T16:54:50.486Z

I can't identify any clear claims in this. Is it a Malthusian argument (sure, there are more people, but they're less happy, so it would be better to have stayed at or gone back to a previous equilibrium (better, that is (singing) for the people who are still alive))?

comment by Richard_Kennaway · 2024-01-22T11:50:29.355Z

If the trade-off is between a long and miserable life and a short and happy one, the choice may be individual

I'll take the long, happy life, thank you.