Why do you need the story?

post by George (George3d6) · 2021-11-24T20:26:01.569Z · LW · GW · 11 comments

This is a link post for https://cerebralab.com/Why_do_you_need_the_story?


Ok, that sounds like a half-believable wise mystic, right? That would make for a decent intro to a mediocre young-adult fantasy coming-of-age power-trip novel.

Now, imagine if the shaman just left it as "The root is poisonous if you are not sick, but it helps if you are sick and eat a bit of its pulp, most of the time". No rules of thumb about dose making the poison, no speculations as to how it works, no postulation of metaphysical rules, no analogies to morality, no grand story for the creation of the universe.

That'd be one boring shaman, right? Outright unbelievable. Nobody would trust such a witcherman to take them into the middle of the jungle and give them a psychedelic brew; he's daft. The other guy, though... he might be made of the right stuff to hold ayahuasca ceremonies.


This is not limited to shamans, though; it's just easier to see with shamans, because you don't use their services.

If you are willing to stop averting your eyes from uncomfortable uncertainty, you'll see it in most modern specialists, from electricians to doctors to nuclear physicists. We are all just 2 or 3 reality-validated claims away from babbling nonsense in order to give answers to a "why".

Ok, let me try again:


Is this a believable smart doctor? I'd say so, I'd say she's even above average. The kind of medic you want as your GP.

Would you be less likely to go to her if she just said: "Because ibuprofen makes people with colds feel better until the cold goes away on its own. It might be a bit dangerous, but it might also be a bit helpful. We don't really know; the processes involved are so complex as to be on the edge of scientific understanding, and I've never read the few studies that tried to take the question seriously. It's just common practice to take it when you have a cold."

To most people this doctor sounds like an uninformed moron; they'll go and complain about her in their antivaxx naturist mom Facebook groups. But why? This doctor is being more honest, both about her own reason for saying "take ibuprofen" (everybody does it) and about the real state of knowledge on the subject of efficacy and mechanism (it's complicated and uncertain).

We really don't understand whether NSAIDs are useful when confronted with "inflammation", which is itself a vague term that nobody agrees on. We neither perfectly understand how COX-1 and COX-2 interact with the immune system, platelets and other signalling mechanisms, nor do we understand the multitude of NSAID effects beyond COX inhibition. Even if we did, we don't understand the immune system as a whole. There are likely situations where NSAIDs help, others where they hurt, and others where they are neutral but alleviate unnecessary suffering; currently, we can't explain why or even when those happen.


I have observed a pattern around the internet:

  1. Someone claims the expert/educated/mainstream consensus on some topic is bunk. Some branch of medicine is hogwash, some physics theory is incoherent and useless, the ethical stances of some group are blatantly inconsistent and dangerous.
  2. I cheer them on: you go, fellow crazy person! This stuff is bs and more people should hear about it.
  3. I keep reading their reply/post/article/book... and get increasingly sad as they finish off their claims with: But I have THE SOLUTION that medics don't want you to know, but I KNOW the correct interpretation for this realm of physics, but MY ETHICS could be imposed upon that group and they'd be saved.

I cannot express how much this saddens me. Why must it be that all healthy scepticism always turns into quackery?

Maybe this scares me in part because it contributes towards a rule like: all writers who are sceptical of a lot of popular theories are insane quacks. Given that I fit in that category, I might be the same, simply missing out on whatever my outrageous claims are.

More broadly though, it fits this model of people needing to know certain things, being unable to live in scepticism, in lack of knowledge.

You can't just disagree with modern medicine; you must be into yoga and crystal healing. You can't just think that cosmology is unfalsifiable and unreplicable, and thus unscientific; you must then go on about your own theory of the universe. You can't just think that the tectonic-plate model adds little value and is unfounded; you must then pontificate at length on why your cult has a better answer in your theory of geogenesis.

Why? Why!? Why!!


And I think in that very desperate question I've given myself an answer of sorts.

Because sometimes we can't stop demanding answers and other people, or our own selves, feel obliged to provide them.

I don't know why people can't be sceptical of things (theories, models, findings) without needing to fill that hole with other things.

I don't even claim that I'm correct in thinking this. Maybe my observations of this phenomenon are wrong, maybe the vast majority of sceptical thinkers don't do this, maybe I'm just selectively remembering these incidents because they bug me.

I think there's some probability that it's a thing that happens, and that we should safeguard against using scepticism of something as an excuse for inserting our own dumb theory. Ideally, try to cultivate the ability to be ok with not knowing under almost any circumstances.

I feel like any further speculation would be gravely disrespecting my own opinion.

But it does seem like something that a good pop writer would do. I'd be a much more credible social critic if I went on to expand on how this phenomenon defines what counts as a fundamental question, or how the rational mind needs some sort of "completeness" for its models, or how conceptual thinking is additive... or some such.

11 comments

Comments sorted by top scores.

comment by Viliam · 2021-11-25T16:50:40.872Z · LW(p) · GW(p)

Why must it be that all healthy scepticism always turns into quackery?

Because our instincts are optimized for signalling. Trusting the authorities on some topics and doubting them on other topics is a lousy signal of either conformity or nonconformity. You will have no allies; both groups will laugh at you.

comment by G Gordon Worley III (gworley) · 2021-11-25T03:09:23.686Z · LW(p) · GW(p)

This points at something I find very hard to work against: a desire to explain why things are the way they are rather than just accept that they are the way they are. Explanations are useful, but things will still be as they are even if I have no explanation for why. Yet when I find something in the world, there's a movement of mind that quickly follows observation and seeks to create a gears-level model of the world. On the one hand, such models are useful. On the other, a desire to explain in the absence of any information to build it from is worse than useless: it's the path to confusion.

Replies from: Jon Garcia
comment by Jon Garcia · 2021-11-25T05:10:38.088Z · LW(p) · GW(p)

This deep psychological need to latch onto some story, any story, to explain what we don't understand, seems to me to tie back in to the Bayesian Brain Hypothesis. Basically, our brains are constantly and uncontrollably generating hypotheses for the evidence we encounter in the world, seeing which ones predict our experiences with the greatest likelihood (weighted by our biological and cultural priors, of course). These hypotheses come in the form of stories because stories have the minimum level of internal complexity to explain the complex phenomena we experience (which, themselves, we internalize as stories). Choosing the "best" explanation, of course, follows Bayes' formula:

P(H|E) = P(E|H) · P(H) / P(E)

A few problems with this:

  1. We might just be terrible at choosing good priors (P(H)). Occam's Razor / Solomonoff Induction just isn't that intuitive to most humans. Most people find consciousness (which is familiar) to be simpler than neuroscience (which is alien), so they see no problem hypothesizing disembodied spirits, yet they scoff at the idea of humans being no more than matter. Astrology sounds reasonable when you have no reason to think that stars and planets shouldn't have personalities and try to affect your personal life, like everyone else, just so long as you don't try to figure out how that would actually work at a mechanistic level. Statistical modeling, on the other hand, is hard for humans to grasp, and therefore much more complicated, and therefore much less likely to have any explanatory power a priori, at least as far as most people are concerned.
  2. Likelihood functions (P(E|H)) can be really hard to figure out. They require coming up with hypotheses that have the same causal structure as the real system they're trying to predict. When most of our declarative mental models exist at the level of abstraction of human social dynamics, it can be difficult to accurately imagine all the interacting bodily systems and metabolic pathways that make NSAIDs (or any other drugs, to say nothing of whole foods) have the precise effect that they do.
  3. Unfortunately, evolution didn't equip us with very good priors for how much weight to give to unimagined hypotheses, so we end up normalizing the posterior distribution by only those hypotheses we can think of. That means the denominator in the equation above (P(E)) is often much less than it should be, even if the priors and evidential likelihoods are all correct, because other hypotheses have not had a chance to weigh in. For most people, all future (or as-yet unheard-of) scientific discoveries are effectively given a prior probability of 0, while all the myths passed down from the tribal/religious/political elders seem to explain everything as well as anything they've ever heard, and so those stories get all the weight and all the acceptance.
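
To make point 3 concrete, here is a minimal sketch in Python. The hypothesis names, priors, and likelihoods are all made-up illustrative numbers, not from any study; the point is only the arithmetic: when the denominator is summed over just the hypotheses we happen to consider, every considered hypothesis ends up with a larger posterior than it would deserve once an unimagined alternative is included.

```python
# Hypothetical numbers: three considered hypotheses, plus one the
# reasoner never imagines. Each maps to (prior, likelihood of evidence).
considered = {
    "spirits":   (0.5, 0.30),
    "astrology": (0.3, 0.20),
    "medicine":  (0.2, 0.60),
}
unimagined = ("future_science", 0.4, 0.90)  # never enters the sum

def posteriors(hypotheses):
    # P(H|E) = P(E|H) * P(H) / sum_i P(E|H_i) * P(H_i)
    joint = {h: prior * lik for h, (prior, lik) in hypotheses.items()}
    z = sum(joint.values())  # the denominator P(E)
    return {h: j / z for h, j in joint.items()}

# Normalizing over only the considered hypotheses...
p_small = posteriors(considered)

# ...versus including the unimagined one (rescaling the considered
# priors so all four still sum to 1).
name, u_prior, u_lik = unimagined
all_h = {h: (p * (1 - u_prior), l) for h, (p, l) in considered.items()}
all_h[name] = (u_prior, u_lik)
p_full = posteriors(all_h)

# Every considered hypothesis looks more probable when the unimagined
# alternative is missing from the denominator.
for h in considered:
    assert p_small[h] > p_full[h]
```

With these particular numbers, "spirits" gets about 45% of the posterior when "future_science" is unimagined, but under 17% once it is allowed into the sum, even though nothing about "spirits" itself changed.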

It's unavoidable for us as humans with Bayesian-ish brains to start coming up with stories to explain phenomena, even when evidence is lacking. We just need to be careful to cultivate an awareness for when our priors may be mistaken, for when our stories don't have sufficiently reductionist internal causal structure to explain what they are meant to explain, and for when we probably haven't even considered hypotheses that are anywhere close to the true explanation.

Replies from: Astor, TAG
comment by Astor · 2021-11-25T08:57:50.475Z · LW(p) · GW(p)

I am eager to explore your answer. Why do you think that "stories have the minimum level of internal complexity to explain the complex phenomena we experience"? Is it only because you suppose we internalize phenomena as stories? Do you have any data or studies on that? What's your understanding of a story? Isn't a straightforward description even less complex, since you don't need a full-blown plot to depict something like a chair?

Replies from: jacopo, Jon Garcia
comment by jacopo · 2021-11-25T10:35:30.559Z · LW(p) · GW(p)

I notice that while a lot of the answer is formal and well-grounded, "stories have the minimum level of internal complexity to explain the complex phenomena we experience" is itself a story :) Personally, I would say that any gear-level model will have gaps in the understanding, and trying to fill these gaps will require extra modeling which also has gaps, and so on forever. My guess is that part of our brain will constantly try to find the answers and fill the holes, like a small child asking "why x? ...and why y?". So if a more practical part of us wants to stop investigating, it plugs the holes with fuzzy stories which sound like understanding. Obviously, this is also a story, so discount it accordingly...

Replies from: Jon Garcia
comment by Jon Garcia · 2021-11-25T17:19:38.723Z · LW(p) · GW(p)

I notice that while a lot of the answer is formal and well-grounded, "stories have the minimum level of internal complexity to explain the complex phenomena we experience" is itself a story :)

Yep. That's just how humans think about it: complex phenomena require complex explanations. "Emergence," as complexity arising from the many simple interactions of many simple components, I think is a pretty recent concept for humanity. People still think intelligent design makes more intuitive sense than evolution, for instance, even though the latter makes astronomically fewer assumptions and should be favored a priori by Occam's Razor.

Replies from: Raven
comment by Raven · 2021-11-26T05:30:27.201Z · LW(p) · GW(p)

I don't have anything to add, but this phenomenon was discussed in greater detail in Explain/Worship/Ignore. https://www.lesswrong.com/posts/yxvi9RitzZDpqn6Yh/explain-worship-ignore [LW · GW]

comment by Jon Garcia · 2021-11-25T15:17:26.309Z · LW(p) · GW(p)

By "story," I mean something like a causal/conceptual map of an event/system/phenomenon, including things like the who, what, when, where, why, and how. At the level of sentences, this would be a map of all the words according to their semantic/syntactic role, like part of speech, with different slots for each role and connections relating them together. At the level of what we would normally call "stories," such a story map would include slots for things like protagonist, antagonist, quest, conflict, plot points, and archetypes, along with their various interactions.

In the brain, these story maps/graphs could be implemented as regions of the cortex. Just as some cortical regions have retinotopic or somatotopic maps, more abstract regions may contain maps of conceptual space, along with neural connections between subregions that represent causal, structural, semantic, or social relationships between items in the map. Other brain regions may learn how to traverse these maps in systematic ways, giving rise to things like syntax, story structure, and action planning.

I've suggested before (https://www.lesswrong.com/posts/KFbGbTEtHiJnXw5sk/?commentId=PHYKtp7ACkoMf6hLe [LW · GW]) that I think these sorts of maps may be key to understanding things like language and consciousness. Stories that can be loaded into and from long-term memory or transferred between minds via language can offer a huge selective advantage, both to individual humans and to groups of humans. I think the recognition, accumulation, and transmission of stories is actually pretty fundamental to how human psychology works.

Replies from: Astor
comment by Astor · 2021-11-25T20:42:46.126Z · LW(p) · GW(p)

Thank you for explaining it. I really like this concept of stories because it focuses on the psychological aspect of stories as a way of understanding something, which is sometimes missing in literary perspectives. How would you differentiate between a personal understanding of a definition and a story? Would you?

My main approach to stories is to define them more abstractly as a rhetorical device for representing change. This allows me to differentiate between a story (changes), a description (states) and an argument (logical connections of assertions). I suppose, in your understanding, all of them would be some kind of story? This differentiation could also be helpful in understanding the process of telling a story versus giving a description.

Unfortunately, you did not explain how your answer relates to "stories have the minimum level of internal complexity to explain the complex phenomena we experience". In your answer you do not compare stories to other ways of encoding information in the brain. Are there any others, in your opinion?

comment by TAG · 2021-11-25T23:38:24.936Z · LW(p) · GW(p)

Unfortunately, evolution didn’t equip us with very good priors for how much weight to give to unimagined hypotheses, so we end up normalizing the posterior distribution by only those hypotheses we can think of

...and any effort to push against that, to assign more probability to the unknown hypotheses, is an effort in the direction of modest epistemology.

comment by cquenelle · 2021-11-25T01:56:29.342Z · LW(p) · GW(p)

Thanks for your essay, it was encouraging and inspiring!

What you have observed seems to accurately reflect the world and the way people function (not just on the internet).  When I did a Google search for "the need to believe", I found links that seemed interesting and relevant.  I have a working theory about the human brain which seems to fit the evidence that I see in my life, and what I have read.

The human brain is a giant pattern-matching machine.  It operates most of the time on incomplete data.  But the brain doesn't express patterns as abstract theories, it expresses those observed patterns as "belief".  We observe evidence, and we form a belief about the world in a very unscientific way.

There is no genetic, neurological process for "possible pattern, but not enough data".

Science itself (and by extension rationality) seems to be something humans invented to overcome the normal operating mode of the human brain, which naturally operates more as a social instrument governed by sophistry and cognitive bias.

Another thing that ties in more specifically to the internet is the need to grab people's interest.  Claiming that an accepted pattern is not true or not useful is unlikely to attract attention or support.  Claiming that a different pattern (moral, ethical, social, etc) fits reality better will be more engaging to readers because of the nature of human brains that I described above.