Medlife Crisis: "Why Do People Keep Falling For Things That Don't Work?"

post by RomanHauksson (r) · 2023-02-21T06:22:23.608Z · LW · GW · 5 comments

This is a link post for https://www.youtube.com/watch?v=NmJsCaQTXiE

In this video, Dr. Rohin Francis introduces the term "mechanistic bias" to describe the phenomenon where people tend to believe that a medical treatment would work if it has a plausible-sounding mechanism of action, even when a randomized controlled trial would likely fail to demonstrate its effectiveness. The human body is really complicated and most treatments don't do anything, so a good explanation for how a treatment works is not enough; you have to put it to the test.

I really like this idea and believe it may translate well into other fields. For example, people may be too eager to believe in theoretical political structures which, if they were put into practice, would likely fail for reasons we can't predict, because human society is really complicated and hard to model.

5 comments

Comments sorted by top scores.

comment by localdeity · 2023-02-21T11:35:45.480Z · LW(p) · GW(p)

I will be generally skeptical here.  If someone wants to coin a new term, or gain attention by accusing people of being systematically wrong in a new way, they'd better back it up.

the phenomenon where people tend to believe that a medical treatment would work if it has a plausible-sounding mechanism of action, even when a randomized controlled trial would likely fail to demonstrate its effectiveness

This is an odd way of putting it.  If I understand the scenario correctly, the trial has not yet been run, and the person doesn't know that "a randomized controlled trial would likely fail to demonstrate its effectiveness"—in fact, they expect the opposite.  One might rephrase it as "tend to believe [...] even though the treatment probably won't work".  This then raises the question of why you think it probably won't work.  Which you answer:

The human body is really complicated and most treatments don't do anything, so a good explanation for how a treatment works is not enough; you have to put it to the test.

Ok, so, it's an empirical fact that we've investigated many proposed treatments in the past with explanations as plausible-sounding as this hypothetical treatment, and the majority of them have proven ineffective?  That's good to know, and probably surprising to me.  It seems that "being ignorant of this empirical fact" is an excellent explanation for people having the wrong expectations here.  I would change the statement to this:

the phenomenon where people are ignorant of the fact—and have expectations counter to it—that many proposed treatments with equally plausible-sounding mechanisms have been tested and the majority proven not to work

My next question is "What people are we talking about here?"  If they were investors deciding whether to fund biotech startups, or whoever's in charge of approving research grants, then, yeah, it would be surprising if they hadn't learned such important facts about medical research.

If these are random civilians... then I don't think it's very surprising.  At what point would it be taught to them?  Would it come up in their lives?  On the first question, to the extent normal people are taught anything about medicine, it seems to be primarily "there are certified medical professionals, they know more than you, you should shut up and listen to them and not attempt to think for yourself"; in keeping with this, any medically proven drugs they do interact with are presented as fait accompli; if they do ask how the drug works, I expect they're given a few sentences of plausible-sounding explanation.

Which then explains their expectations: (a) "plausible-sounding explanation from medical person" = "working drug" in their experience, (b) they're probably never told much about all the drugs that failed in the past (maybe they hear about thalidomide, but that was "toxic side effects" rather than "ineffectual").  Under what circumstances would they learn about the zillions of drugs that turn out ineffectual?  Maybe there are news stories about promising drug candidates, but I suspect follow-up stories that say "it didn't work" get less attention (and are therefore often not written).

This brings us to the "Would it come up in their lives?" question, and, glancing at the timestamps in the video... it probably is indeed talking about civilians, and pseudoscience and homeopathy.  Ah.  Ok, well... Perhaps it would be worth teaching people about the above empirical fact about medical research, so they're less vulnerable to those peddling pseudo-medicine.  Sounds fine, I'm in favor of teaching people more facts if they're true.  I have no hope of being able to mold today's schools in any desirable direction, but you are welcome to try.

It seems like there is an additional claim that might be summarized as "people have a too-high prior expectation that we can come up with a simple plan to do a thing and it'll work without anything going severely wrong".  Which... maybe.  I would say:

  • There are plenty of fields in which a human who says "I have a plan to do thing X I haven't specifically done before; it should consist of these simple steps A B C" is in fact >90% likely to be able to execute X.  I'd say this is true for programming, for example.  "The planning fallacy" refers to the fact that they've usually forgotten several intermediate substeps and didn't know about several more, so X will take longer than expected, but they still manage to do it.
  • In much of society, we deliberately construct things to be mechanically simple.  I mean, it's a tautology that the houses, cars, working drugs, machines, etc. that we use must be simple enough for humans to have designed and built, and generally they're simple enough to use and maintain as well (sometimes requiring expert help).  So one's experience of the world, and lots of entire careers, are in domains where you can in fact expect things to behave relatively simply.
  • There are probably fields where naive people make the opposite error—they don't realize how completely and thoroughly people have worked out how to do xyz and troubleshoot all the problems people have encountered along the way, and they fail to ask for help or try to google for solutions.
  • I guess one could say that medicine is a field where you unavoidably have to interact with an extremely complex thing (biology), and we do it anyway because the rewards of getting it right are so high.

I also worry that the claim is more along the lines of "people have a too-high prior expectation that the world is comprehensible, that science can be done and is worthwhile".  Because I'd be very, very suspicious of anyone trying to push a claim like that without a lot of qualifiers.

I really like this idea and believe it may translate well into other fields. For example, people may be too eager to believe in theoretical political structures which, if they were put into practice, would likely fail for reasons we can't predict, because human society is really complicated and hard to model.

I agree that people often have terrible ideas (and terribly unjustified overconfidence in those ideas) about government, and that this is a major problem, and that "human society is really complicated and hard to model" is part of the explanation here.

But for government, there are additional problems that seem possibly more important.  You're interacting with a system that contains intelligent beings—some possibly more intelligent than you as individuals, and market-like organizations that are almost certainly way smarter than you in aggregate—with different goals than you, in some cases actively opposed to you.  If we look up unintended consequences on Wikipedia, specifically the perverse results section... we have the advantage of hindsight, of course, but many of them look like very straightforward cases of "regulator imposes a rule on some rubes, the rubes respond in the obvious way to the new incentives, and the regulator is presumably surprised and outraged".

Why does this keep happening?  Perhaps the regulators really are that incompetent at regulating; I can think of reasons to expect this (typical mind fallacy, having contempt for the rubes, not having "security mindset", not thinking they or their advisors should put serious effort into considering the rubes' incentives and their options).  Also perhaps principal-agent issues—did the regulators who made these blunders get punished for it? Perhaps their actual job is getting reelected and/or promoted, and the effectiveness of the regulations is irrelevant to it (and they can claim that the fault lies with the real villains who e.g. mislabeled orphans as mentally ill to receive more funding).  The latter would explain the former: if they don't need to be competent at regulating, why would they be?  I suspect that's how it is.[1]  "The interests of the rulers aren't aligned with those of the people" is ultimately the biggest problem, in my view.

And if we move from people whose job nominally is regulating, to ordinary people who don't rule over anything except maybe (arguably) their own children... well, there's even less reason to expect people to have optimized their ability to predict the results of hypothetical national-scale policies.  Casual political discussion is an area where there are lots of applause lights [LW · GW] and people importing opinions without very critical examination from their party.  They might end up with good information about specific issues (or at least the subset of that information that their party wants to talk about), but that seems unlikely to translate into good prediction about hypothetical policies in general.

If naive people's intuitions about a field are wrong—people who've never studied the field nor intend to work in it—this doesn't strike me as particularly noteworthy.  If it matters that their intuitions are wrong, because the decisions are being made by rank amateurs (or those who aren't even trying) who either don't realize their ignorance or don't care, then that is a problem.

Anyway, in the human body... You could sort of metaphorically say that your cells are intelligent (they embody eons of evolutionary experience), and the interests of individual cells aren't necessarily aligned to yours (cancer), and cells will reject your attempts to interfere and fight back against you (blood-brain barrier and stuff; the immune system; autoimmune problems).  But I think it's pretty different.  The drug treatments fail because—I don't know, but I take it it's something like "there are 30,000 nearby chemical processes, most of which we aren't aware of, at least one of which ended up interfering with our drug".  Those processes already existed; the human body isn't going to invent new ones for the purpose of foiling a new drug (except possibly an irate immune system inventing antibodies).  It's "take a step and hope you didn't squash any of the thousands of toes you can't see", versus "security mindset: your law is providing these smart people a $1 million incentive to find a loophole".

  1. ^

    But perhaps, for every case like this, there were ten other regulations with equally plausible failure modes that didn't end up happening for obscure reasons.  I dunno.  I'm not sure how someone would do a comprehensive survey on regulations and evaluate them in this way, but it might be interesting.  (One should also weight by the severity of the failure; it's possible that a 10% failure rate would be unacceptably high.)  There's plenty more I could say, but it would be offtopic.

comment by Dalmert · 2023-02-21T07:53:33.488Z · LW(p) · GW(p)

I strong-upvoted this, but I fear you won't see a lot of traction on this forum for this idea.

I have a vague understanding of why, but I don't think I've heard compelling enough reasons from other LWers yet. If someone has some, I'd be happy to read them or be pointed towards them.

I value empiricism highly, i.e. putting ideas into action to be tested against the universe; but I think I've read EY state somewhere that a superintelligence would need to perform very few or even zero experiments to find out a lot of (or even most? all?) true things about our universe that we humans need painstaking effort and experiments for.

Please don't consider this very vague recollection as anywhere close to a steelman.

I think this was motivated by how many bits of information can be taken in even with human-like senses, and how a single bit of information can halve a set of hypotheses. And here is where I did not see sufficient motivation for this argument yet: this can indeed be true for very valuable bits of information, but are we assuming that any entity will easily be able to receive those very valuable bits? Surely a lot of bits are redundant and give no novel information, and some bits are very costly to attain. Sometimes you are lucky if you can so much as eliminate a single potential hypothesis, and even that is costly and requires interacting with the universe instead of just passively observing it.
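
To make the "one bit halves the hypothesis set" point concrete, here is a toy sketch (my own illustration, not anything from the video or from EY): with perfectly discriminating bits you only need about log2(N) observations to single out one of N hypotheses, but if most of the bits you can actually get are redundant, the number of observations needed grows quickly.

```python
import math
import random

random.seed(0)

def clean_bits_needed(n_hypotheses: int) -> int:
    """With perfectly informative bits, ~log2(N) observations identify the truth."""
    return math.ceil(math.log2(n_hypotheses))

def observations_needed(n_hypotheses: int, p_informative: float) -> int:
    """Count observations when only a fraction of bits actually discriminate.

    Each informative bit splits the remaining candidates in half; a redundant
    bit tells us nothing we didn't already know and eliminates no candidates.
    """
    candidates = set(range(n_hypotheses))
    truth = random.choice(sorted(candidates))
    observations = 0
    while len(candidates) > 1:
        observations += 1
        if random.random() < p_informative:
            ordered = sorted(candidates)
            half = set(ordered[: len(ordered) // 2])
            candidates = half if truth in half else candidates - half
    return observations

print(clean_bits_needed(1024))                        # 10
print(observations_needed(1024, p_informative=1.0))   # 10
print(observations_needed(1024, p_informative=0.05))  # roughly 200 on average
```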

But let's hear it from others!

(I'm not sure if this spectrum of positions has any accepted names, maybe rationalist vs. empiricist?)

Replies from: None, Richard_Kennaway
comment by [deleted] · 2023-02-21T08:13:58.621Z · LW(p) · GW(p)

I think I've read EY state somewhere that a superintelligence would need to perform very few or even zero experiments to find out a lot of (or even most? all?) true things about our universe that we humans need painstaking effort and experiments for.

EY is probably wrong.  While more intelligence allows performing deeper analysis, which can sometimes extract the independent variables from a complex problem, or find the right action, from less data, there are limits.  When there are thousands of variables and finite and noisy data (like most medical data), superintelligences will very likely be almost as stymied as humans are*.

Of course, what a superintelligence could do is ask for the smallest number of experiments to deconfuse the various competing theories, and/or analyze far more data than any living human is capable of.  A superintelligence could recalculate their priors or flush their priors.  They could ultimately solve medical problems at a pace that humans cannot.

*Another way to look at it: imagine a "Sherlock Holmes" set of reasoning.  Now realize that for every branch in a story where Sherlock "deduces that this pipe tobacco combined with these footprints mean..." there are thousands of other valid possibilities that also fit the data.  Weak data creates a very large number of permutations of valid world states consistent with it.  A human may get "stuck" on the wrong branch, lacking the cognitive capacity to consider the others, while a superintelligence may be able to consider thousands of the possibilities in memory.  Either way, neither knows which branches are correct.

Where EY is correct is that a superintelligence could then consider many possible experiments, and find the ones that have the most information gain.  Perfect experiments that give perfect clean bits reduce the number of permutations by half with each clean bit of information gain.  (Note that EY, again, is probably wrong in that there may often not be experiments that produce data that clean.)
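
As a rough sketch of what "the most information gain" could mean in practice (my own toy formulation, not something EY or the video spells out): score each candidate experiment by the expected reduction in entropy over the current hypothesis set, and run the highest-scoring one.

```python
import math

def entropy(weights):
    """Shannon entropy (in bits) of a distribution over hypotheses."""
    total = sum(weights)
    return -sum((w / total) * math.log2(w / total) for w in weights if w > 0)

def expected_info_gain(hypotheses, experiment):
    """Expected entropy reduction from running `experiment`.

    `hypotheses` is a list of (hypothesis, prior weight) pairs;
    `experiment(h)` is the outcome the experiment would produce if h were true.
    """
    prior = entropy([w for _, w in hypotheses])
    by_outcome = {}
    for h, w in hypotheses:
        by_outcome.setdefault(experiment(h), []).append(w)
    total = sum(w for _, w in hypotheses)
    expected_posterior = sum((sum(ws) / total) * entropy(ws) for ws in by_outcome.values())
    return prior - expected_posterior

# Toy example: four equally likely hypotheses about some parameter.
hypotheses = [(0, 1.0), (1, 1.0), (2, 1.0), (3, 1.0)]
exp_a = lambda h: h == 0   # only distinguishes "0 vs. not 0"
exp_b = lambda h: h < 2    # splits the set cleanly in half
print(expected_info_gain(hypotheses, exp_a))  # ~0.81 bits
print(expected_info_gain(hypotheses, exp_b))  # 1.0 bit -> the experiment to run
```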

comment by Richard_Kennaway · 2023-02-21T09:28:38.382Z · LW(p) · GW(p)

I value empiricism highly, i.e. putting ideas into action to be tested against the universe; but I think I've read EY state somewhere that a superintelligence would need to perform very few or even zero experiments to find out a lot of (or even most? all?) true things about our universe that we humans need painstaking effort and experiments for.

Most notably, this [LW · GW].

Also this [LW(p) · GW(p)], and in this [LW(p) · GW(p)] comment on it.

comment by JNS (jesper-norregaard-sorensen) · 2023-02-22T10:33:01.719Z · LW(p) · GW(p)

I really don't like the term "mechanistic bias"; to me it implies that the human body is not mechanistic and that mechanistic explanations are wrong.

The failure here is not that people "buy" a mechanistic action (along the lines of: symptom X is because of Y, and treatment Z will change Y and lead to symptom X going away or being lessened).

That in itself is fine; the problem is that people do not understand that the human body is very complicated, which means that for a lot of things we really don't know the root cause, and the more "wrong" we are about the root cause, the more wrong a potential treatment will be.

Basically, we do not have a good model of the human body, and pharmacodynamics and pharmacokinetics are often in the "we think/suspect/believe" category.

IMO "complexity bias" / "mechanistic complexity bias" captures the failure more precisely.

A personal anecdote:

I suffer from severe and debilitating migraines. And for years, well decades actually, my doctor(s) tried what feels like everything.

You go through lists of drugs (1st choice, 2nd choice, etc.), and I ended up trying drugs on the list where the "evidence" for effectiveness was often apocryphal.

Conversations with a doctor about them usually sounded like this: "We think you should try X; you see, X affects Y (well, really A-Z, but mainly Y, we think), and Y might be a cause", which to me sounds a lot like a "plausible-sounding mechanism of action".

What ended up working in the end was something on the last list, but I got it prescribed for a totally unrelated thing.

And in retrospect there were hints that a drug doing what this drug does might be worth trying.