Incorrect hypotheses point to correct observations

post by Kaj_Sotala · 2018-11-20T21:10:02.867Z · score: 86 (35 votes) · LW · GW · 19 comments


  1. The Consciousness Researcher and Out-Of-Body Experiences
  2. The Artist and the Criticism
  3. The Traditionalist and the Anthropologist
  4. The Martial Artist and the Ki

1. The Consciousness Researcher and Out-Of-Body Experiences

In his book Consciousness and the Brain, cognitive neuroscientist Stanislas Dehaene writes about scientifically investigating people’s reports of their out-of-body experiences:

… the Swiss neurologist Olaf Blanke [did a] beautiful series of experiments on out-of-body experiences. Surgery patients occasionally report leaving their bodies during anesthesia. They describe an irrepressible feeling of hovering at the ceiling and even looking down at their inert body from up there. [...]
What kind of brain representation, Blanke asked, underlies our adoption of a specific point of view on the external world? How does the brain assess the body’s location? After investigating many neurological and surgery patients, Blanke discovered that a cortical region in the right temporoparietal junction, when impaired or electrically perturbed, repeatedly caused a sensation of out-of-body transportation. This region is situated in a high-level zone where multiple signals converge: those arising from vision; from the somatosensory and kinesthetic systems (our brain’s map of bodily touch, muscular, and action signals); and from the vestibular system (the biological inertial platform, located in our inner ear, which monitors our head movements). By piecing together these various clues, the brain generates an integrated representation of the body’s location relative to its environment. However, this process can go awry if the signals disagree or become ambiguous as a result of brain damage. Out-of-body flight “really” happens, then—it is a real physical event, but only in the patient’s brain and, as a result, in his subjective experience. The out-of-body state is, by and large, an exacerbated form of the dizziness that we all experience when our vision disagrees with our vestibular system, as on a rocking boat.
Blanke went on to show that any human can leave her body: he created just the right amount of stimulation, via synchronized but delocalized visual and touch signals, to elicit an out-of-body experience in the normal brain. Using a clever robot, he even managed to re-create the illusion in a magnetic resonance imager. And while the scanned person experienced the illusion, her brain lit up in the temporoparietal junction—very close to where the patient’s lesions were located.
We still do not know exactly how this region works to generate a feeling of self-location. Still, the amazing story of how the out-of-body state moved from parapsychological curiosity to mainstream neuroscience gives a message of hope. Even outlandish subjective phenomena can be traced back to their neural origins. The key is to treat such introspections with just the right amount of seriousness. They do not give direct insights into our brain’s inner mechanisms; rather, they constitute the raw material on which a solid science of consciousness can be properly founded.

The naive hypothesis that out-of-body experiences represented the spirit genuinely leaving the body was incorrect. But it was still pointing to a real observation, namely that there are conditions which create a subjective experience of leaving the body. That observation could then be investigated through scientific means.

2. The Artist and the Criticism

In art circles, there’s a common piece of advice that goes along the lines of:

When people say that they don’t like something about your work, you should treat that as valid information.

When people say why they don’t like it or what you could do to fix it, you should treat that with some skepticism.

Outside the art context, if someone tells you that they're pissed off with you as a person (or that you make them feel good), then that's likely to be true; but the reason that they give you may not be the true reason.

People have poor introspective access [LW · GW] to the reasons why they like or dislike something; when they are asked for an explanation, they often literally fabricate their reasons. Their explanation is likely false, even though it’s still pointing to something in the work having made them dislike it.

3. The Traditionalist and the Anthropologist

The Scholar’s Stage blog post “Tradition is Smarter Than You Are” quotes Joseph Henrich’s The Secret of Our Success, which reports that many folk traditions, such as not eating particular fish during pregnancy, are adaptive: avoiding that fish during pregnancy is good for the child, the mother, or both. But the people in question often do not know why they follow the tradition:

We looked for a shared underlying mental model of why one would not eat these marine species during pregnancy or breastfeeding—a causal model or set of reasoned principles. Unlike the highly consistent answers on what not to eat and when, women’s responses to our why questions were all over the map. Many women simply said they did not know and clearly thought it was an odd question. Others said it was “custom.” Some did suggest that the consumption of at least some of the species might result in harmful effects to the fetus, but what precisely would happen to the fetus varied greatly, though a nontrivial segment of the women explained that babies would be born with rough skin if sharks were eaten and smelly joints if morays were eaten. Unlike most of our interview questions on this topic, the answers here had the flavor of post-hoc rationalization: “Since I’m being asked for a reason, there must be a reason, so I’ll think one up now.” This is extremely common in ethnographic fieldwork, and I’ve personally experienced it in the Peruvian Amazon with the Matsigenka and with the Mapuche in southern Chile.

The people’s hypotheses about why they do something are wrong. But their behavior still points to the fish in question being bad to eat during pregnancy.

4. The Martial Artist and the Ki

In Types of Knowing, Valentine writes:

Another example is the “unbendable arm” in martial arts. I learned this as a matter of “extending ki”: if you let magical life-energy blast out your fingertips, then your arm becomes hard to bend much like it’s hard to bend a hose with water blasting out of it. This is obviously not what’s really happening, but thinking this way often gets people to be able to do it after a few cumulative hours of practice.
But you know what helps better?
Knowing the physics.
Turns out that the unbendable arm is a leverage trick: if you treat the upward pressure on the wrist as a fulcrum and you push your hand down (or rather, raise your elbow a bit), you can redirect that force and the force that’s downward on your elbow into each other. Then you don’t need to be strong relative to how hard your partner is pushing on your elbow; you just need to be strong enough to redirect the forces into each other.
Knowing this, I can teach someone to pretty reliably do the unbendable arm in under ten minutes. No mystical philosophy needed.

The explanation about magical life energy was false, but it was still pointing to a useful trick that could be learned and put to good use.
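Valentine’s leverage explanation can be made concrete with a toy statics model (my own illustrative sketch, not from the original post): treat the straight arm as a rigid beam supported at the wrist (by the partner’s upward push) and at the shoulder, with the partner’s downward push applied at the elbow.

```latex
% Toy model: the straight arm as a simply supported beam.
% Wrist support at x = 0, shoulder support at x = L,
% partner's downward push F applied at the elbow, x = a.
% Vertical force balance:
R_w + R_s = F
% Torque balance about the wrist:
R_s \cdot L = F \cdot a
\;\Rightarrow\;
R_s = F \, \frac{a}{L}, \qquad R_w = F \left(1 - \frac{a}{L}\right)
```

On this picture neither support has to supply more than the partner’s push $F$: the load simply splits between wrist and shoulder, which is why “redirecting the forces into each other” can substitute for raw strength.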

Observations and the hypotheses developed to explain them often get wrapped up, causing us to evaluate both as a whole. In some cases, we only hear the hypothesis rather than the observation which prompted it. But people usually don’t pull their hypotheses entirely out of thin air; even an incorrect hypothesis is usually entangled [LW · GW] with [LW · GW] some correct observations. If we can isolate the observation that prompted the hypothesis, then we can treat the hypothesis as a burdensome detail [LW · GW] to be evaluated on its own merits, separate from the original observation. At the very least, the existence of an incorrect but common hypothesis suggests that there’s something going on that needs to be explained.


Comments sorted by top scores.

comment by Said Achmiz (SaidAchmiz) · 2018-11-20T21:38:36.451Z · score: 33 (11 votes) · LW(p) · GW(p)

In art circles, there’s a common piece of advice that goes along the lines of:

When people say that they don’t like something about your work, you should treat that as valid information.

When people say why they don’t like it or what you could do to fix it, you should treat that with extreme skepticism.

This is also very, very true in UX design (and all similar fields such as print design, etc.).

Edit: This is why “I didn’t like X” or “X seems ugly” or “I have a hard time reading X” is extremely valuable feedback, and any designer is always happy to hear it. On the other hand, “X is designed wrong because [criticism of specific design decision]” is basically worthless feedback, and almost never helps in any way.

Edit 2: Note that the above is the opposite of what people’s intuitions tell them constitutes valuable feedback. Non-designers often think that “I didn’t like it” or “it’s ugly” is unhelpful, and they try to be more helpful by making specific criticisms (like “the text is justified; it shouldn’t be”). Coming from a layperson, this attempt to be helpful is actually the diametric opposite of an improvement, turning useful feedback into useless advice.

Edit 3: The most useful feedback is the one that tells me what is the specific problem you are experiencing. The subjective nature of the feedback is important!

comment by cousin_it · 2018-11-20T22:50:25.051Z · score: 15 (8 votes) · LW(p) · GW(p)

It's especially bad in video games. Any popular game will have many reviews with clueless game design advice, instead of immediate reactions. That's one reason watching playtesters can be more useful than asking for feedback.

comment by Said Achmiz (SaidAchmiz) · 2018-11-20T23:38:55.769Z · score: 4 (3 votes) · LW(p) · GW(p)

Indeed, user testing is far superior to asking for “feedback” (and this is also the reason why conducting user tests is a non-trivial skill set—the tests must be set up properly, and users must be prepared, prompted, and guided properly, so as to elicit reactions and not evaluations or suggestions etc.).

Of course, one does regularly receive user feedback outside of the context of formal user testing, and such feedback can be quite useful (especially if it’s spontaneous)—but not, unfortunately, if the “pure subjective reactions aren’t helpful; try to give suggestions” meme is in effect.

comment by Raemon · 2018-11-21T02:30:38.593Z · score: 5 (4 votes) · LW(p) · GW(p)

I haven't actually put too much into this sub-question, curious about your thoughts: is it actually that important that users don't attempt to give feedback, or merely that the feedback-receiver makes sure to translate in their head all instances of "this is bad because Y, you should do X because Z" into "I didn't like this experience, something about it is off?"

comment by Said Achmiz (SaidAchmiz) · 2018-11-21T02:53:29.259Z · score: 13 (8 votes) · LW(p) · GW(p)

The problem with such translation attempts is that information is lost.

The thing is that comments of the form “X is bad because Y” can be prompted by any number of things.

Some of those are some form of bad experience that user had with your product. (However, you don’t know what sort of bad experience; the given reasons why X is supposedly bad are, as noted in the OP, generally confabulated.)

But some of those possible triggers for “X is bad because Y” comments are not any specific bad experience the user had, but rather some belief the user picked up about X being bad, some guideline gleaned from some listicle somewhere, some vague association based on a passing fad, some outright misunderstanding, some attempt to give general advice, etc., etc.

So what I want to know is, in what way did the user attempt to interact with the product, and what difficulty, dissatisfaction, or other undesirable experience ensued? That is the critical data.

But if the answer turns out to be “none, really, I didn’t even try using your thing and have no desire to use it, I’m just saying that aspect X of your thing is bad”, then there’s not much reason to take that as a “user is dissatisfied” data point.

comment by Raemon · 2018-11-21T02:55:58.016Z · score: 4 (3 votes) · LW(p) · GW(p)

Ah, makes sense.

comment by Motasaurus · 2018-11-20T22:01:26.556Z · score: 3 (3 votes) · LW(p) · GW(p)

I wonder if the desire to justify is the result of social stigma surrounding being the wrong sort of person [LW · GW]. If to be a full person is to have reasons for your actions, and to be nice to others, then it makes sense to me that the desire to give reasons for "mean" criticism is a result of wearing the correct model of personhood.

comment by linkhyrule5 · 2019-09-16T17:42:10.299Z · score: 2 (1 votes) · LW(p) · GW(p)

That, and the fact that when making decisions, it's *really important* to have non-subjective reasons -- or if you have subjective reasons, you still have objective reasons why they matter, like "if I don't like someone on a personal level, I really shouldn't spend the rest of my life with them" in dating.

So people are used to a mode of thought where a subjective opinion means "you're not done explaining"/"you haven't spent enough mental effort on the problem," and they engage the -- honestly, very productive, very healthy -- same mechanisms they use when justifying a command decision. It just happens to be mis-applied in this case.

comment by Pattern · 2018-11-21T06:36:10.426Z · score: 1 (1 votes) · LW(p) · GW(p)

That explanation adds something to my understanding of the post you linked to. Thanks.

comment by thomascolthurst · 2018-11-30T21:56:45.379Z · score: 13 (4 votes) · LW(p) · GW(p)

Especially for the study of consciousness and mental states, I associate this useful tactic with Daniel Dennett's term "heterophenomenology".

comment by Said Achmiz (SaidAchmiz) · 2018-12-01T02:02:41.846Z · score: 10 (5 votes) · LW(p) · GW(p)

I also thought of heterophenomenology when reading this. (It is, I think, quite a useful concept, and it would be good if someone wrote an LW post explaining it. I will do it at some point, if no one else gets there first.)

comment by yagudin · 2018-11-22T06:45:24.804Z · score: 13 (6 votes) · LW(p) · GW(p)

An example from Feynman’s “The Character of Physical Law”:

The next guy who did something great was Maxwell, who obtained the laws of electricity and magnetism. What he did was this. He put together all the laws of electricity, due to Faraday and other people who came before him, and he looked at them and realized that they were mathematically inconsistent. In order to straighten it out he had to add one term to an equation. He did this by inventing for himself a model of idler wheels and gears and so on in space. He found what the new law was – but nobody paid much attention because they did not believe in the idler wheels. We do not believe in the idler wheels today, but the equations that he obtained were correct. So the logic may be wrong but the answer right.

comment by Gram Stone · 2018-11-22T01:14:06.993Z · score: 7 (4 votes) · LW(p) · GW(p)

Succubi/incubi and the alien abduction phenomenon point to hypnagogia, and evo-psych explanations of anthropomorphic cognition are often washed down with arguments that anthropomorphism causes good enough decisions while being technically completely false; there's an old comment by JenniferRM talking about how surprisingly useful albeit wrong it would be to model pathogens as evil spirits.

comment by Ruby · 2019-12-02T07:47:46.748Z · score: 5 (2 votes) · LW(p) · GW(p)

I've found myself referencing this post repeatedly since reading it. It's improved my reaction to ideas and models that seem definitely wrong. Now instead of just thinking "that's clearly wrong", I'm moved to ask "but what true observations are leading to this model?" and it feels like I see more value in even wrong models. Sometimes, I think, I learn to see where they're not wrong.

I also want to give this post credit for feeling quite original to me. Many other posts are refinements or clarifications of ideas which exists in some form elsewhere. While I can't be sure, this posts feels like it really cemented something new in me rather than just helping a thing I already knew about stick.

All in all, I really like this post.


comment by Elo · 2018-11-20T21:43:30.538Z · score: -39 (12 votes) · LW(p) · GW(p)

(not to Kaj, but to the fraction of lesswrong that is being dismissive)

Stop dismissing mystical information and start being curious.

comment by Ben Pace (Benito) · 2018-11-21T00:36:41.454Z · score: 27 (10 votes) · LW(p) · GW(p)

(Speaking as a mod) Elo, this is the third time (1 [LW(p) · GW(p)], 2 [LW(p) · GW(p)]) in the past two weeks that you've joined a discussion with a two-line comment telling people off for not trying or not engaging with mystical things.

The top calibre of people on LW are people who are very often willing to follow the evidence, reasoning and empirical testing where it leads them, in the face of strong preconceptions. Telling them off without providing substantive reasons why they're wrong isn't an effective strategy to help change their minds. A better way to do this would be to write a post explaining the reasons why their current strategies can be improved.

  • One way would be for you to helpfully bridge the gap by giving an object-level explanation of the mechanisms by which some of the mystical ideas are useful. Here are some great [LW · GW] examples [LW · GW] of curated posts (that I really like) that attempt to demystify* why certain types of meditation are useful, that caused me to think differently about my own mind.
  • On the meta level, there's a spectrum of when we should engage more with an idea and when we should move past it, and if you can make a cogent meta-level argument that many folks' current heuristics here are mistaken, maybe then they'd take the action that seems right.

When I repeatedly read short comments that attempt to punish people for the same thing, with no substance, it sends me strong signals of being uncooperative. I think there are much more cooperative methods of achieving your goals, and I encourage you to use them.


*pardon the word choice.

comment by Elo · 2019-03-09T20:07:57.371Z · score: 5 (2 votes) · LW(p) · GW(p)

Of note, I did stop commenting in this way and shifted to different types of communication.

comment by jimrandomh · 2018-11-20T23:37:06.635Z · score: 11 (6 votes) · LW(p) · GW(p)

This would be better if you waited for someone to write an actually-dismissive comment, rather than projecting dismissiveness onto people who haven't commented.

comment by Kaj_Sotala · 2018-11-20T22:00:19.543Z · score: 10 (5 votes) · LW(p) · GW(p)

(Although there's an obvious connection, I didn't really write this post as a commentary for the "mysticism wars", and would rather not have the post be "politicized" by being overly associated with them.)