Value Claims (In Particular) Are Usually Bullshit

post by johnswentworth · 2024-05-30T06:26:21.151Z · 18 comments

Contents

  Idea 1: Parasitic memes tend to be value-claims, as opposed to belief-claims
  Idea 2: Transposons are ~half of human DNA
  Put Those Two Together...

Epistemic status: a mental model which I have found picks out bullshit surprisingly well.

Idea 1: Parasitic memes tend to be value-claims, as opposed to belief-claims

By "parasitic memes" I mean memes whose main function is to copy themselves, as opposed to, say, actually providing value to a human in some way (so that the human then passes them on). Scott's old Toxoplasma of Rage post is a central example; "share to support X" is another.

Insofar as a meme is centered on a factual claim, the claim gets entangled with lots of other facts about the world; it's the phenomenon of Entangled Truths, Contagious Lies. So unless the meme tries to knock out a person's entire epistemic foundation, there's a strong feedback signal pushing against it if it makes a false factual claim. (Of course some meme complexes do try to knock out a person's entire epistemic foundation, but those tend to be "big" memes like religions or ideologies, not the bulk of day-to-day memes.)

But the Entangled Truths phenomenon is epistemic; it does not apply nearly so strongly to values. If a meme claims that, say, it is especially virtuous to eat yellow cherries from Switzerland... well, that claim is not so easily falsified by a web of connected truths.

Furthermore, value claims always come with a natural memetic driver: if X is highly virtuous/valuable/healthy/good/etc., and this fact is not already widely known, then it’s highly virtuous and prosocial of me to tell other people how virtuous/valuable/healthy/good X is; and vice versa if X is highly dangerous/bad/unhealthy/evil/etc.

Idea 2: Transposons are ~half of human DNA

There are sequences of DNA whose sole function is to copy and reinsert themselves back into the genome. They're called transposons. If you're like me, when you first hear about transposons, you're like "huh that's pretty cool", but you don't expect it to be, like, a particularly common or central phenomenon of biology.

Well, it turns out that something like half of the human genome consists of dead transposons. Kinda makes sense, if you think about it.

Now suppose we carry that fact over, by analogy, to memes. What does that imply?

Put Those Two Together...

… and the natural guess is that value claims in particular are mostly parasitic memes. They survive not by promoting our terminal values, but by people thinking it’s good and prosocial to tell others about the goodness/badness of X.

I personally came to this model from the other direction. I’ve read a lot of papers on aging. Whenever I mention this fact in a room with more than ~5 people, somebody inevitably asks “so what diet/exercise/supplements/lifestyle changes should I make to stay healthier?”. In other words, they’re asking for value-claims. And I noticed that the papers, blog posts, commenters, etc., who were most full of shit were ~always exactly the ones which answered that question. To a first approximation, if you want true information about the science of aging, far and away the best thing you can do is specifically look for sources which do not make claims about diet or exercise or supplements or other lifestyle changes being good/bad for you. Look for papers which just investigate particular gears, like “does FoxO mediate the chronic inflammation of arthritis?” or “what’s the distribution of mutations in mitochondria of senescent cells?”.

… and when I tried to put a name on the cluster of crap claims which weren’t investigating gears, I eventually landed on the model above: value claims in general are dominated by memetic parasites.

18 comments


comment by the gears to ascension (lahwran) · 2024-05-30T07:07:42.798Z

Had to remind myself that, temporarily assuming this claim is as true as stated, this doesn't mean value claims are bad to want to make - just that the rate of bullshit is higher, and thus it's harder to make validly endorsable ones.

comment by quila · 2024-05-30T08:11:58.633Z

By "parasitic memes" I mean memes whose main function is to copy themselves - as opposed to, say, actually provide value to a human in some way (so that the human then passes it on). [...] And I noticed that the papers, blog posts, commenters, etc, who were most full of shit were ~always exactly the ones which answered that question ["what diet/exercise/supplements/lifestyle changes should I make to stay healthier?"]

"x is healthy" is a factual claim. Those papers/blog posts/etc, if true, would "actually provide value to a human in some way," but are false by your account.

That falseness would also be hard for most to verify, because such claims are supposed to come from a specialized understanding. This gives them low entanglement with other beliefs in the audience's world model, which is the property you note (in 'idea 1') that value claims have.

I point this out to help locate what your heuristic is really approximating[1]. I.e., two components of something like memetic fitness: (1) a reason to care, (2) low entanglement with other beliefs.

  1. and as wording practice

comment by johnswentworth · 2024-05-30T15:08:49.730Z

Yeah, admittedly health is kind of a borderline case where it's technically factual but in practice mostly operates as a standard value-claim because of low entanglement and high reason to care.

I basically agree with your claim that the heuristic is approximating (reason to care) + (low entanglement).

comment by eye96458 · 2024-06-20T16:26:39.433Z

I point this out to help locate what your heuristic is really approximating[1]. I.e., two components of something like memetic fitness: (1) a reason to care, (2) low entanglement with other beliefs.

By the term "reason to care" do you mean that the claim is relevant to someone's interests/goals?

  • eg, a claim of the form "X is healthy" is probably relevant for someone who highly values not dying
  • eg, a claim of the form "Guillermo del Toro's new movie is on Netflix." is probably not relevant for someone who does not value watching horror films
comment by Unnamed · 2024-05-30T08:28:07.241Z

I don't think that the key element in the aging example is 'being about value claims'. Instead, it's that the question about what's healthy is a question that many people wonder about. Since many people wonder about that question, some people will venture an answer. Even if humanity hasn't yet built up enough knowledge to have an accurate answer.

Thousands of years ago many people wondered what the deal is with the moon and some of them made up stories about this factual (non-value) question whose correct answer was beyond them. And it plays out similarly these days with rumors/speculation/gossip about the topics that grab people's attention. Where curiosity & interest exceed knowledge, speculation will fill the gaps, sometimes taking on a similar presentation to knowledge.

Note the dynamic in your aging example: when you're in a room with 5+ people and you mention that you've read a lot about aging, someone asks the question about what's healthy. No particular answer needs to be memetic because it's the question that keeps popping up and so answers will follow. If we don't know a sufficiently good/accurate/thorough answer then the answers that follow will often be bullshit, whether that's a small number of bullshit answers that are especially memetically fit or whether it's a more varied and changing froth of made-up answers.

There are some kinds of value claims that are pretty vague and floaty, disconnected from entangled truths and empirical constraints. But that is not so true of instrumental claims about things like health, where (e.g.) the claim that smoking causes lung cancer is very much empirical & entangled. You might still see a lot of bullshit about these sorts of instrumental value claims, because people will wonder about the question even if humanity doesn't have a good answer. It's useful to know (e.g.) what foods are healthy, so the question of what foods are healthy is one that will keep popping up when there's hope that someone in the room might have some information about it.

comment by tailcalled · 2024-05-30T09:52:46.849Z

I think the value-ladenness is part of why it comes up even when we don't have an answer, since for value-laden things there's a natural incentive to go right up to the boundary of our knowledge to get as much value as possible.

comment by romeostevensit · 2024-05-30T18:55:12.539Z

They're coextensive with/parasitic on virtues, virtues being hard-won compressions of lots of contextual information about how to prioritize and behave for min-maxing costs and benefits in a side-effect-free way. Since virtues are illegible to younger people who haven't built up enough data yet, values are an easy attribute substitution.

comment by lukehmiles (lcmgcd) · 2024-05-31T06:07:40.190Z

Hmm, lots of popular value claims are quite beneficial to the recipient ("wash your hands and make everyone you know wash them too or you're disgusting" or "save up your money or you're poverty-minded").

Lots of factual claims with zero value implications or actual personal value are memetically fit (e.g. flame wars on certain physics topics).

Lots of factual-looking claims are actually written by someone with an opinion who cares ("did you know Africa has the most languages of any continent?").

Lots of people with opinions who care actually do know the facts (e.g. some AI safety people are tired and just like "pause AI now!! Evil companies!!").

Maybe there's a better place to draw the line? Maybe the wise thing is to focus on the conversations where people demonstrate real interest in the root causes of things, without totally ignoring the other subjects ("nobody can EXPLAIN to me why murder is actually bad").

comment by TAG · 2024-06-05T09:40:55.185Z

This can be seen more charitably as a desire to cut to the chase -- to solicit a small amount of actionable advice rather than a large body of background theory. Since everyone's time is limited, that can be instrumentally rational.

comment by johnswentworth · 2024-06-05T19:19:37.488Z

I mean, it could be instrumentally rational if for some reason you expect the advice to be true/useful and not just a parasitic meme.

comment by Noosphere89 (sharmake-farah) · 2024-06-02T17:07:48.526Z

The big reason that value claims tend to be on the more bullshit side is that values/morality has far, far more degrees of freedom than most belief claims, primarily because there are too many right answers to the question of what is ethical.

Belief claims can also have this sort of effect (I believe the Mathematical Multiverse/Simulation Hypothesis ideas from Max Tegmark and others like Nick Bostrom, while true, are basically useless for almost any attempt at prediction, because they allow basically everything to be predicted - an extremely weak predictive model, though an extremely strong generative model, which is why I hate the discourse on the Simulation/Mathematical Universe hypotheses), but value claims tend to be the worst offenders at not being entangled and at having far too many right answers.

comment by Seth Herd · 2024-05-31T10:33:43.074Z

Are value claims also more complex on average? Are they more likely to be false because they contain more factual claims? (In addition to being more memetically fit because it would be virtuous to spread them if they're true).

Even if they're harder to get right, it seems that value claims are ultimately the most important ones to make. Nobody cares about facts unless they somehow serve some value.

This isn't to undercut the value of the titular insight.

comment by tailcalled · 2024-05-30T07:53:03.990Z

I think this is true and good advice in general, but recently I've been thinking that there is a class of value-like claims which are more reliable. I will call them error claims.

When an optimized system does something bad (e.g. a computer program crashes when trying to use one of its features), one can infer that this badness is an error (e.g. caused by a bug). We could perhaps formalize this as saying that it is a difference from how the system would ideally act (though I think this formalization is intractable in various ways, so I suspect a better formalization would be something along the lines of "there is a small, sparse change to the system which can massively improve this outcome" - either way, it's clearly value-laden).

The main way of reasoning about error claims is that an error must always be caused by an error. So if we stay with the example of the bug, you typically first reproduce it and then backchain through the code until you find a place to fix it.
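
As a toy illustration of that reproduce-then-backchain loop (a minimal sketch; the config-parsing functions here are hypothetical examples, not any real API):

```python
# Toy sketch of backchaining: an observed error (a crash) is traced
# back to the earlier error that caused it. All names are hypothetical.

def parse_config(text: str) -> dict:
    # Buggy: assumes every line contains '=', so a blank line crashes.
    return dict(line.split("=", 1) for line in text.splitlines())

# Step 1: reproduce the reported error, verifying the "error claim".
try:
    parse_config("timeout=30\n\nretries=5")
except ValueError as exc:
    print("reproduced:", exc)

# Step 2: backchain from the crash to its cause: blank lines were never
# filtered out. The sparse fix lives upstream of where the symptom appeared.
def parse_config_fixed(text: str) -> dict:
    return dict(
        line.split("=", 1)
        for line in text.splitlines()
        if line.strip()  # skip the blank lines that caused the crash
    )

print(parse_config_fixed("timeout=30\n\nretries=5"))
# -> {'timeout': '30', 'retries': '5'}
```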

For an intentionally designed system that's well-documented, error claims are often directly verifiable and objective, based on how the system is supposed to work. Error claims are also less subject to the memetic driver, since often it's less relevant to tell non-experts about them (though error claims can degenerate into less-specific value claims and become memetic parasites that way).

(I think there's a dual to error claims that could be called "opportunity claims", where one says that there is a sparse good thing which could be exploited using dense actions? But opportunity claims don't seem as robust as error claims are.)

comment by kvas_it (kvas_duplicate0.1636121129676118) · 2024-06-03T09:56:58.350Z

I think value claims are more likely to be parasitic (mostly concerned with copying themselves or participating in a memetic ensemble that's mostly copying itself) than e.g. physics claims, but I don't think you have good evidence to say "mostly parasitic".

My model is that parasitic memes that get a quick and forceful pushback from reality would face an obstacle to propagation compared to parasitic memes for which the pushback from reality is delayed and/or weak. Value claims and claims about longevity (as in your example, although I don't think those are value claims) are good examples of a long feedback cycle, so we should expect more parasites.

comment by Aprillion · 2024-06-02T14:42:32.586Z

half of the human genome consists of dead transposons

The "dead" part is a value judgement, right? Parts of DNA are not objectively more or less alive.

It can be a claim that some parts of DNA are "not good for you, the mind" ... well, I rather enjoy my color vision and RNA regulation, and I'm sure bacteria enjoy their antibiotic resistance.

Or maybe it's a claim that we already know everything there is to know about the phenomena called "dead transposons", there is nothing more to find out by studying the topic, so we shouldn't finance that area of research.

Is there such a thing as a claim that is not a value claim?

Is "value claims are usually bullshit" a value claim? Does the mental model pick out bullshit more reliably than just labeling whatever you want to be bullshit as a value claim? Is there a mental model behind both, thus explaining the correlation? Do I have a model close enough to John's that it can be useful to me too? How do I find out?

comment by johnswentworth · 2024-06-02T18:46:11.992Z

The "dead" part is a value judgement, right?

No, "dead transposons" meaning that they've mutated in some way which makes them no longer functional transposons, i.e. they can no longer copy themselves back into the genome (often due to e.g. another transposon copying itself into the middle of the first transposon's sequence).

comment by Arturo Macias (arturo-macias) · 2024-05-31T08:39:24.798Z

That is the whole point of ethical systems, isn't it? To derive all (ethical) values from a few postulates. Of course, most valuations are not ethical (they are preferences or tastes), but this is an excellent argument for rational (systematic) Ethics.

comment by Review Bot · 2024-05-31T18:32:24.858Z

The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?