Reason isn't magic
post by Benquo · 2019-06-18T04:04:58.390Z · LW · GW · 19 comments
This is a link post for http://benjaminrosshoffman.com/reason-isnt-magic/
Here's a story some people like to tell about the limits of reason. There's this plant, manioc, that grows easily in some places and has a lot of calories in it, so it was a staple for some indigenous South Americans since before the Europeans showed up. Traditional handling of the manioc involved some elaborate time-consuming steps that had no apparent purpose, so when the Portuguese introduced it to Africa, they didn't bother with those steps - just, grow it, cook it, eat it.
The problem is that manioc's got cyanide in it, so if you eat too much too often over a lifetime, you get sick, in a way that's not easily traceable to the plant. Somehow, over probably hundreds of years, the people living in manioc's original range figured out a way to leach out the poison, without understanding the underlying chemistry - so if you asked them why they did it that way, they wouldn't necessarily have a good answer.
Now a bunch of Africans growing and eating manioc as a staple regularly get cyanide poisoning.
This is offered as a cautionary tale against innovating through reason, since there's a lot of information embedded in your culture (via hundreds of years of selection), even if people can't explain why. The problem with this argument is that it's a nonsense comparison.
First of all, it's not clear things got worse on net, just that a tradeoff was made. How many person-days per year were freed up by less labor-intensive manioc handling? Has anyone bothered to count the hours lost to laborious traditional manioc-processing, to compare them with the burden of consuming too much cyanide? How many of us, knowing that convenience foods probably lower our lifespans relative to slow foods, still eat them because they're ... more convenient?
How many people didn't starve because manioc was available and would grow where and when other things wouldn't?
If this is the best example we have of how poorly reason can perform, reason seems pretty great.
Second, we're not actually comparing reason to tradition - we're comparing changing things to not changing things. Change, as we know, is bad. Sometimes we change things anyway - when we think it's worth the price, or the risk. Sometimes, we're wrong.
Third, the actually existing Portuguese and Africans involved in this experiment weren't committed rationalists - they were just people trying to get by. It probably doesn't take more than a day's reasoning to figure out which steps in growing manioc are really necessary to get the calories palatably. Are we imagining that someone making a concerted effort to improve their life through reason would just stop there?
This is being compared with many generations of trial and error. Is that the standard we want to use? Reasoning isn't worth it unless a day of untrained thinking can outperform hundreds of years of accumulated tradition?
It gets worse. This isn't a randomly selected example - it's specifically selected as a case where reason would have a hard time noticing when and how it's making things worse. In this particular case, reason introduced an important problem. But life is full of risks, sometimes in ways that are worse for traditional cultures. Do we really want to say that reasoning isn't the better bet unless it outperforms literally every time, without ever making things locally worse? Even theoretically perfect Bayesian rationality will sometimes recommend changes that have an expected benefit, but turn out to be harmful. Not even tradition meets this standard! Only logical certainties do - provided, that is, we haven't made an error in one of our proofs.
We also have to count all the deaths and other problems averted by reasoning about a problem. Reasoning introduces risks - but also, risks come up even when we're not reasoning about them, just from people doing things that affect their environments. There's absolutely no reason to think that the sort of gradual iteration that accretes into tradition never enters a bad positive feedback loop. Even if you think modernity is an exceptional case of that kind of bad feedback loop, we had to have gotten there via the accretion of premodern tradition and iteration!
The only way out is through. But why did we have this exaggerated idea of what reason could do, in the first place?
19 comments
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-12-31T19:39:19.186Z · LW(p) · GW(p)
1. Manioc poisoning in Africa vs. indigenous Amazonian cultures: a biological explanation?
Note that while Joseph Henrich, the author of TSOOS, correctly points out that cassava poisoning remains a serious public health concern in Africa, he doesn't supply any evidence that it wasn't also a public health issue in Amazonia. One author notes that "none of the disorders which have been associated with high cassava diets in Africa have been found in Tukanoans or other indigenous groups on cassava-based diets in Amazonia."
Is this because Tukanoans have superior processing methods, or is it perhaps because Tukanoan metabolism has co-evolved through conventional natural selection to eliminate cyanide from the body? I don't know, but it doesn't seem impossible.
2. It's not that hard to tell that manioc causes health issues.
Last year, the CDC published a report about an outbreak of cassava (manioc) poisoning including symptoms of "dizziness, vomiting, tachypnea, syncope, and tachycardia." These symptoms began to develop 4-6 hours after the meal. They reference another such outbreak from 2017. It certainly doesn't take "20 years," as Scott claims, to notice the effects.
There's a difference between sweet and bitter cassava. Peeling and thorough cooking is enough for sweet cassava, while extensive treatments are needed for bitter cassava. The latter gives better protection against insects, animals, and thieves, so farmers sometimes like it better.
Another analysis says that "A short soak (4 h) has no effect, but if prolonged (18 to 24 h), the amounts of cyanide can be halved or even reduced by more than six times when soaked for several days." Even if the level is cut to a sixth, is this merely slowing, or actually preventing, the damage?
Wikipedia says that "Spaniards in their early occupation of Caribbean islands did not want to eat cassava or maize, which they considered insubstantial, dangerous, and not nutritious."
If you didn't know the difference between sweet and bitter cassava, it sounds like you'd discover your error, in dramatic fashion, the first time you happened to make flour from bitter cassava root. You could distinguish them by seeing their effect on insects and animals.
Moreover, sweet cassava is the product of thousands of years of selective breeding, though "bitter cultivars appear to have been more important than sweet." How could indigenous Amazonian cultures have run a selective breeding program for thousands of years, unless they knew what qualities they were breeding for?
3. Cultural selection might still be relevant, but maybe not for manioc/cassava
It's far from obvious to me that Amazonian manioc processing is best described as a result of blind cultural evolution. It seems far more likely that it's the result of careful observation, cause-and-effect reasoning, and traditions that pass down conscious knowledge from generation to generation.
In addition to Benquo's objections, I'd add that the persistence of African cassava poisoning may be due partly to the (purely conjectural) idea that Amazonian indigenous people may have developed a more cyanide-resistant metabolism over the course of their ancient relationship with the plant.
Cultural evolution might still be relevant to explain why Tukanoan and other indigenous cultures seem to have a "healthier relationship" with the plant. They've had thousands of years to develop cultural forms that work to consistently process manioc root, and to avoid famine situations in their local environment that might cause them to trade convenience for safety.
Anthropologists of modern cultures look at blue zones to see what people are eating and how they're living in the places where life expectancy is longest. This is a correlational waggling of the eyebrows. A culture that's found a way to consistently eat a healthy diet might be an object for our admiration, as opposed to a culture that is consistently trading food safety for convenience. It’s a “nice place to live.”
Ancient cultures have found complex systems that work for their local context through millennia of tinkering that you could never invent through rational thought. These systems are not just complicated. Each new development depends on pre-existing social dynamics. But we don’t therefore need to appeal to a blind force of cultural selection to explain how they develop.
Bring in an external shock, such as modernity, or bring a single practice to a culture in a different environment, and you risk changing the context in which that culture is a good adaptation for its members' wellbeing. It can’t adjust instantly to make itself a “nice place to live” again.
I've only read Scott's book review of TSOOS. It's possible that Scott, or Benquo, is missing the point, and that Henrich was just trying to illustrate this aspect of cultural evolution.
4. Quick takes on other examples
Arrow making: Why do we need to posit a blind force of cultural evolution to explain complex procedures for Fuegan arrow-making? They could have just made arrows in an obvious, easy way, then experimented with refinements slowly over time, keeping what worked and throwing out the rest. The end result is an arrow that works better than anything even a brilliant tinkerer could come up with on their own.
Fire making: It seems totally implausible to me that fire-making was discovered "maybe once." Lightning strikes would have been a source of fire around the world, just for starters. There was always an opportunity to get more fire, even if it was lost. Perhaps the reason aboriginal Tasmanians "lost the ability to make fire" is that they simply didn't want it anymore, for reasons peculiar to their culture.
Even if we take it as fact that fire-making was discovered a few times, or even once, this isn't clear evidence that fire-making is particularly hard to discover. It just means that it may have been so important to most cultures (except the Tasmanians) that they took pains to ensure that methods for fire-making were passed down generation to generation. If they failed, they either died out, or were taught by somebody else. Fire-making wasn't invented so few times because it's hard, but because reinventing it was rarely necessary.
5. Overall thoughts on cultural evolution
It seems obviously true to me that cultural transmission of knowledge is the secret of our success. It's just that this isn't a mysterious process by which some indigenous cultures blindly stumbled their way into particular technologies, such as fire-making, arrow-making, or manioc processing. Indigenous cultures observe, experiment, and transmit acquired knowledge from generation to generation. Outsiders cannot re-invent that total body of knowledge and accumulated physical technology at the drop of a hat.
So we don't need "evolution" to explain the accumulation and transmission of this body of knowledge, or the difficulty of replicating it. Indigenous technology didn't develop through blind stumbling akin to genetic mutation.
It develops through deliberate tinkering and conscious observation of cause and effect. Participating in it depends on having enough accumulated knowledge, cooperation across a whole culture, enough physical artifacts to carry out the processes you've invented, and the teaching of these things to extremely plastic children's brains.
What's interesting here is that indigenous people do eventually sometimes seem to lose the ability to explain the technologies of their culture. Scott gives the example of Fijian women who are prohibited from eating sharks. Sharks happen to contain birth-defect-causing chemicals. But the women don't seem to have conscious awareness of that fact.
Is this because they don't understand why they don't eat shark, or are they just engaging in the time-honored tradition of messing with anthropologists just for the fun of it? What if some of the materially incorrect explanations are just stories that "overlay" the actual material understanding that shark meat isn't good for you?
Fijian girls telling anthropologists that eating shark meat makes children be born with shark skin reminds me of parents telling children that they shouldn't make funny faces because they'll get stuck that way. Parents know that the real reason is that it's socially obnoxious, and Fijian women might know that the real reason for not eating shark meat while pregnant is that it's not good for the baby.
These objections aside, it does seem possible that an oral tradition might be enough to transmit a huge number of useful technologies and processes, but not high-bandwidth enough to transmit all the reasons for them. Fuegans understand that they should generally stick with traditional arrow-making practices because those practices represent an enormous amount of ancestral tinkering, but perhaps haven't passed down an oral history of the 17 different types of wood their greatX20-grandfather tried before hitting on the best choice.
If those causal stories are lost, it seems implausible that cultural transmission is so reliable that indigenous people never, ever tinker with tradition. Even if they develop the optimal method of making arrows, perhaps they occasionally experiment with other methods, observe that they do indeed work worse, and go back to the old ways. Maybe sometimes it works better, and they keep the modification.
But that's not how evolution works. Mutations accumulate by mechanisms that are actually blind and random, and the optimal methods are found by actually killing off the least-successful and rewarding the most successful with better reproductive success.
Cultures develop by tinkering, seeing what works, and keeping it. It's the rational minds of a culture's people, not random mutations and natural selection, that optimize the culture itself.
It does seem to me that a certain amount of cultural stability and smallness is helpful to optimize a culture. You have to see what effect a certain change has on your relationships, your material outcomes, or your society as a whole. In a modern context where our individual relationships are changing more often as we move, change jobs, and explore our self-actualization more frequently, perhaps the instability makes it more difficult to understand the role that any one thing plays?
But then again, maybe we'll just continue finding cultural and technological solutions for that faster rate of change. Maybe people will recognize the optimal rate of change for themselves personally, and seek to align their life choices with it. They won't move too often unless they can clearly recognize the benefits in moving. They won't change their religion or their job too often unless they have a good causal story as to why it will help them.
6. What does this mean for "rationalism?"
Well, score one for Elizabeth's epistemic spot checks. [LW · GW]
To be autocontrarian, I wonder if our subculture's embrace of a misleading interpretation of TSOOS is in fact an example of the phenomenon itself. Through blind "rational" thought and failures to reason correctly, we arrive at a story that's superficially compelling yet wrong. Because there's so little bandwidth to pin down the exact theoretical claims and subject them to rigorous empirical study, and because Henrich has a sort of authorial "ownership" over the concept, there's a possibility that, like Intelligent Design and Marxism, it's somewhat unfalsifiable. There are parties interested in finding new equivocations to salvage the argument.
Unlike arrow-making, manioc processing, hunting practices, and fire-making, macro explanations for culture don't make limited, material claims that lend themselves to easy falsification. So they hang around. It's hard to say what effect they have on individual people or on societies. I'm not confident that Marxist ideology was a necessary condition for the 20th-century deaths in countries that identified themselves as Stalinist/Maoist.
We don't want to create a rationalist culture that's unfriendly to new ideas. But we do want one that recognizes how extremely limited our bandwidth is for testing them.
My sense is that one original rationalist impulse was to borrow well-established ideas in scientific, mathematical, and humanities literature and create a lifestyle based on them. It's low-hanging fruit. Examples include Bayesian statistics, the efficient market hypothesis, understanding how an exponential graph works, and political liberalism.
The replication crisis showed us the danger of using new scientific ideas as we go about it. The "cultural evolution" hypothesis is even more poorly vetted.
It seems like it would be good to adopt a sort of "developmental approach" to ideas.
- If an idea is brand new and from outside the academic literature, treat it with kid gloves. It may not be in its most compelling form. But don't adopt it as a lifestyle. Play with it and put it down.
- If an idea is a new and trendy idea that has been published in scientific literature, treat it with deep skepticism, but not toxic rejection out of hand.
- Old ideas with a strong body of scientific literature behind them are the prime examples of concepts we should try to understand and adopt.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-01-02T20:21:26.020Z · LW(p) · GW(p)
Follow-up:
Cultural evolution could still be a loose explanation for how cultural forms displace each other over time, when conscious cause-and-effect thinking is an insufficient explanation.
Modernity's displacement of many indigenous cultures is a classic example. While we can acknowledge on a moral level that the ongoing genocides of indigenous cultures are horrific, we can also acknowledge that they were made possible by cultural forms among the conquerors that gave them a military edge over their rivals. While it may be that Tukanoan culture was and is a "nicer place to be" than modern Colombia and Brazil, optimizing for a pleasant cultural existence is not the same as optimizing for maximum cultural spread.
We can try to identify the most widespread cultural forms, assume that their size indicates that something about them may help their culture to replicate itself, and ask what that reason may be. With manioc consumption, we might say that its spread into Africa is because it's a more convenient crop to satisfy calorie needs than the alternative. African peoples who adopt it and are thereby saved from a famine will survive; those people who reject it and succumb to a famine will not; and the culture of manioc will thereby spread due to "cultural evolution."
This hypothesis suggests some interesting other phenomena as well.
Why do some cultural forms seem to persist in spite of tremendous utilitarian pressure to adopt a new cultural form that would raise the status or wealth of a group of people?
Well, perhaps there are cultural forms that tend to thrive particularly among the most low-status or relatively poor people, due perhaps to some quirk in human psychology. If that cultural form is attractive to such people, it would survive even better if it tends to keep them poor and low-status, either by directly making them poor and low-status, or causing them to engage in behaviors that tend to have that effect. Call it a "poverty virus" or a "poverty meme."
So "cultural evolution" isn't just an hypothesis to explain why the most dominant cultures tend to spread, and it seems unwise to assume it means that technological or cultural advances are typically the result of blind stumbling in the dark rather than conscious experimentation. It's an explanation for why any culture, no matter how small or toxic, might persist. It points toward cultural ecosystems, cultural niches, cultural food chains.
Traditionalism certainly fits within this dynamic. Cultures that threw out traditional knowledge would, even today, tend to die out. But even if we can find many examples of cultures that failed to prosper due to experiments that proved unhelpful, does this suggest that rationality was a net harmful force in traditional cultures?
I don't think so. Experimentation is a risk/reward question. Sometimes a culture is going to take an experimental risk, and it will end disastrously, bringing the culture down with it. But if experimentation has a net positive risk/reward ratio, then it is overall helpful. We can't determine that experimentation and rationality would historically "get you killed" just by listing examples of hypothetical situations in which that might have been true.
Experimentation and rational thought are not incompatible with traditionalism. Nor is blind tinkering responsible for the indigenous technological advances listed in Scott's review of TSOOS.
Rationality and deliberate experimentation is how that traditional knowledge accumulated in the first place. I know that in indigenous cultures, there's a conscious recognition that traditional knowledge is a real, meaningful source of practical guidance. Traditionalism and rationality go hand in hand.
comment by Benquo · 2021-01-15T15:41:34.116Z · LW(p) · GW(p)
This post makes a straightforward analytic argument clarifying the relationship between reason and experience. The popularity of this post suggests that the ideas of cultural accumulation of knowledge, and the power of reason, have been politicized into a specious Hegelian opposition to each other. But for the most part neither Baconian science nor mathematics (except for the occasional Ramanujan) works as a human institution except by the accumulation of knowledge over time.
A good follow-up post would connect this to the ways in which modernist ideology poses as the legitimate successor to the European Enlightenment, claiming credit for the output of Enlightenment institutions, and then characterizing its own political success as part of the Enlightenment. Steven Pinker's "Enlightenment Now" might be a good foil.
comment by fiddler · 2020-12-29T05:01:23.919Z · LW(p) · GW(p)
This seems to me like a valuable post, both on the object level, and as a particularly emblematic example of a category ("Just-so-story debunkers") that would be good to broadly encourage.
The tradeoff view of manioc production is an excellent insight, and is an important objection to encourage: the original post and book (which I haven't read in their entirety) appear to have leaned too heavily on what might be described as a special case of a just-so story: the phenomenon (a difference in behavior) is explained as an absolute using a post-hoc framework, and the meaning of the narrative is never evaluated beyond its intended explanatory effect.
This is incredibly important, because just-so stories have a high potential to deceive a careless agent. Let's look at the recent example of AstraZeneca's vaccine. Due to a mistake, one section of the vaccine arm of the trial was dosed with a half dose followed by a full dose. Science isn't completely broken, so the possibility that this is a fluke is being considered, but potential causes for why a half-dose full-dose regime (HDFDR) would be more effective have also been proposed. Figuring out how much to update on these pieces of evidence is somewhat difficult, because the selection effect is normally not crucial to evaluating hypotheses in the presence of theory.
To put it mathematically, let A be "HDFDR is more effective than a normal regime," B be "AstraZeneca's groups with HDFDR were more COVID-safe than the treatment group," C be "post-B, an explanation that predicts A is accepted as fact," and D be "pre-B, an explanation that predicts A is accepted as the scientific consensus."
We're interested in P(A|B), P(A|(B&C)), and P(A|(B&D)). P(A|B) is fairly straightforward: by simple application of Bayes's theorem, P(A|B) = P(B|A)*P(A)/(P(A)*P(B|A) + P(¬A)*P(B|¬A)). Plugging in toy numbers, let P(B|A)=90% (if HDFDR was more effective, we're pretty sure the HDFDR group would have been more effective in AstraZeneca's trial) and P(A)=5% (this is a weird result that was not anticipated, but isn't totally insane). P(B|¬A)=10% (this one is a bit arbitrary, and it depends on the size/power of the trials; a brief google suggests that this is not totally insane). Then, P(A|B) = 0.90*0.05/(0.9*0.05 + 0.95*0.1) = 0.32.
Next, let's look at P(A|B&C). We're interested in the updated probability of A after observing B and then observing C, meaning we can use our updated prior: P(A|B&C) = P(C|(A&B))*P(A|B)/(P(C|(A&B))*P(A|B) + P(C|(¬A&B))*P(¬A|B)). If we slightly exaggerate how broken the world is for the sake of this example, and say that P(C|A&B)=0.99 and P(C|¬A&B)=0.9 (if there is a real scientific explanation, we are almost certain to find it; if there is not, we'll likely still find something that looks right), then this simplifies to 0.99*0.32/(0.99*0.32 + 0.9*0.68), or 0.34: post-hoc evidence adds very little credence in a complex system in which there are sufficient effects that any result can be explained.
This should not, however, be taken as a suggestion to disregard all theories or scientific explorations in complex systems as evidence. Pre-hoc evidence is very valuable: P(A|D&B) can be evaluated by first evaluating P(A|D) = P(D|A)*P(A)/(P(A)*P(D|A) + P(¬A)*P(D|¬A)). As before, P(A)=0.05. Filling in other values with roughly reasonable numbers: P(D|¬A)=0.05 (coming up with an incorrect explanation with no motivation is very unlikely), P(D|A)=0.5 (there's a fair chance we'll find a legitimate explanation with no prior motivation). These choices also roughly preserve the log-odds relationship between P(C|A&B) and P(C|¬A&B). Already, this is a 34% chance of A, which further demonstrates the value of pre-registering trials and testing hypotheses.
P(A|B&D) then equals P(B|(A&D))*P(A|D)/(P(B|(A&D))*P(A|D) + P(B|(¬A&D))*P(¬A|D)). Notably, D has no impact on B (assuming a well-run trial, which allows further generalization), meaning P(B|A&D)=P(B|A), simplifying this to P(B|A)*P(A|D)/(P(B|A)*P(A|D) + P(B|¬A)*P(¬A|D)), or 0.9*0.34/(0.9*0.34 + 0.1*0.66), or 0.82. This is a stark difference from the previous case, and suggests that the timing of theories is crucial in determining how a Bayesian reasoner ought to evaluate statements. Unfortunately, this information is often hard to acquire, and must be carefully interrogated.
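For concreteness, here's a minimal sketch in Python that just re-runs the toy updates above; every probability is the same illustrative assumption plugged in earlier, not real trial data:

```python
# Toy Bayesian updates from the HDFDR example above.
# All probabilities are illustrative assumptions, not trial data.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem for a binary hypothesis: P(H | E)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

p_A = 0.05  # prior that HDFDR really is more effective

# Update on B: the half-dose/full-dose group looked more COVID-safe.
p_A_given_B = posterior(p_A, 0.90, 0.10)           # ~0.32

# Then update on C: a post-hoc explanation appears after the result.
p_A_given_BC = posterior(p_A_given_B, 0.99, 0.90)  # ~0.34

# Alternative path: update on D (a pre-hoc explanation) first...
p_A_given_D = posterior(p_A, 0.50, 0.05)           # ~0.34
# ...then on B, using P(B | A & D) = P(B | A).
p_A_given_DB = posterior(p_A_given_D, 0.90, 0.10)  # ~0.82

print(p_A_given_B, p_A_given_BC, p_A_given_D, p_A_given_DB)
```

The gap between ~0.34 and ~0.82 is exactly the post-hoc vs. pre-hoc difference being pointed at: the same explanation moves the posterior very little when it arrives after the result, and a lot when it was on the table beforehand.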
In case the analogy isn't clear: here, the equivalent of an unexpected regime being more effective is that reason apparently breaks down and yields severely suboptimal results. The hypothesis that reason is actually less useful than culture in problems with non-monotonically increasing rewards as the solution progresses is a possible one, but because it was likely arrived at to explain the results of the manioc story, the existence of this hypothesis is weak evidence to prefer it over the hypothesis with more prior probability mass: that different cultures value time in different ways.
Obviously, this Bayesian approach isn't particularly novel, but I think it's a useful reminder as to why we have to be careful about the types of problems outlined in this post, especially in the case of complex systems where multiple strategies are potentially legitimate. I strongly support collation on a meta-level to express approval for the debunking of just-so stories and allowing better reasoning. This is especially true when the just-so story has a ring of truth, and meshes well with cultural narratives.
comment by Raemon · 2019-07-22T20:18:36.057Z · LW(p) · GW(p)
I'm curating this post alongside Scott's previous Book Review: The Secret of Our Success [LW · GW].
One object level reason to curate both is that Scott's post highlights some important details and questions about how culture and reason interface, and this one offers a concrete, non-mysterious response that I found usefully clarifying.
There's a meta-level thing where I think it's sort of useful for LW readers who don't keep up to date as much about how ongoing conversations played out to have a good repository of the highlights of that conversation.
comment by Aaro Salosensaari (aa-m-sa) · 2020-12-10T08:25:02.895Z · LW(p) · GW(p)
It gets worse. This isn't a randomly selected example - it's specifically selected as a case where reason would have a hard time noticing when and how it's making things worse.
Well, the history of bringing manioc to Africa is not the only example. Scientific understanding of human nutrition (alongside disease) had several similar hiccups along the way, several of which have been covered on SSC (can't remember the post titles where):
There was a time when the Japanese army lost many lives to beriberi during the Russo-Japanese war, thinking it was a transmissible disease, several decades [1] after one of the first prominent young Japanese scholars with Western medical training discovered, with a classic trial setup in the Japanese navy, that it was a deficiency related to nutrition (however, he attributed it -- wrongly -- to a deficiency of nitrogen). It took several decades to identify vitamin B1. [2]
Earlier, there was a time when scurvy was a problem in navies, including the British one, but then the British navy (or rather, the East India Company) realized citrus fruits were useful for preventing scurvy, in 1617 [3]. Unfortunately it didn't catch on. Then they discovered it again with an actual trial and published the results, in the 1740s-50s [4]. Unfortunately it again didn't catch on, and the underlying theory was also as wrong as the others anyway. Finally, against the scientific consensus at the time, the usefulness of citrus was proven by a Navy rear admiral in 1795 [5]. Unfortunately they still did not have a proper theory of why the citrus was supposed to work, so when the Navy switched to using lime juice with minimal vitamin C content [6], they managed to reason themselves out of the use of citrus, and scurvy was attributed to food gone bad [7]. Thus Robert Falcon Scott's Antarctic expeditions were ill-equipped to prevent scurvy, and soldiers at Gallipoli in 1915 also suffered from it.
The story of discovering vitamin D does not involve failings quite as dramatic, but prior to the discovery of UV treatment and of vitamin D, John Snow suggested the cause of rickets was adulterated food [8]. Of course, even today one can easily find internet debates about what the "correct" amount of vitamin D supplementation is if one gets no sunlight in winter. Solving B12-deficiency-induced anemia appears a true triumph of science, as a Nobel prize was awarded for the dietary recommendation of including liver in the diet [9] before B12 (present in liver) was identified [10].
Some may notice that we have now covered many of the significant vitamins in human diet. I have not even started with the story of Semmelweis.
And anyway, I dislike the whole premise of casting the matter as "being for reason" or "against reason". The issue with manioc, scurvy, beriberi, and hygiene was that people had unfortunate overconfidence in their pre-existing model of reality. With sufficient overconfidence, rationalization, or mere "rational speculation", they could explain how seemingly contradictory experimental results actually fitted into their model, and thus dismiss the nutrition-based explanations as unscientific hogwash, until the actual workings of vitamins were discovered. (The article [1] is very instructive about the rationalizations the Japanese army could come up with to dismiss the Navy's apparent success in fighting beriberi: ships were easier to keep clean, beriberi was correlated with time spent in contact with damp ground, etc.)
While looking up food-borne diseases while writing this comment, I was reminded of BSE [11], which is hypothesized to cause vCJD in humans because humans thought it was a good idea to feed dead animals to cattle to improve nutrition (which I suppose it does, barring prion disease). I would view this as a failure arising from not having a full model of what side effects the behavior suggested by the partial model would cause.
On the positive side, sometimes the partial model works well enough: it appears that the miasma theory of diseases like cholera was the principal motivator for building modern sewage systems. While it is obvious today that cholera is not caused by miasma, getting rid of smelly sewage in an orderly fashion turned out to be a good idea nevertheless [12].
I am uncertain if I have any proper conclusion to suggest, except that, in general, mistakes of reason are possible and possibly fatal, and social dynamics may prevent proper corrective action for a long time. This is important to keep in mind when making decisions, especially novel and unprecedented ones, and when evaluating the consequences of action. (The consensus does not necessarily budge easily.)
Maybe a more specific conclusion could be: if one has only an evidently partial scientific understanding of some issue, it is very possible that acting on it can have unintended consequences. It may not even be obvious where the holes in the scientific understanding are. (Paraphrasing the response to Semmelweis: "We don't exactly know what causes childbed fever; it manifests in many different organs, so it could be several different diseases, but the idea of invisible corpse particles that defy water and soap is simply laughable.")
[1] https://pubmed.ncbi.nlm.nih.gov/16673750/
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725862/
[3] https://en.wikipedia.org/wiki/John_Woodall
[4] https://en.wikipedia.org/wiki/James_Lind
[5] https://en.wikipedia.org/wiki/Alan_Gardner,_1st_Baron_Gardner
[6] https://en.wikipedia.org/wiki/Scurvy#19th_century
[7] https://idlewords.com/2010/03/scott_and_scurvy.htm
[8] https://en.wikipedia.org/wiki/Rickets#History
[9] https://www.nobelprize.org/prizes/medicine/1934/whipple/facts/
[10] https://en.wikipedia.org/wiki/Vitamin_B12#Descriptions_of_deficiency_effects
[11] https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopathy
comment by Ben Pace (Benito) · 2020-12-10T04:26:09.744Z · LW(p) · GW(p)
This is a simple and valuable addition to the conversation about cultural evolution that Scott and others have had, and it has been nominated for the review.
comment by Jimdrix_Hendri · 2019-07-24T02:35:05.252Z · LW(p) · GW(p)
We should select comparisons aimed at getting the best result, not at making things easy on ourselves:
What if the Europeans had thought: "Hmm. The natives are following a procedure we don't understand with regard to casava. Their explanation doesn't make sense according to our own outlook, but it is apparent that they have a lot of experience. It may pay to be prudent rather than disregarding their rituals as superstitions."
Had the Europeans taken this attitude, they might have discovered the toxicity of yucca, experimented with imitating the leaching procedure or, at least, introduced it slowly, since reliance on a monoculture exposes a population to other risks as well. In any case, wouldn't the Africans likely have been better off?
In case this seems like a special case, consider the impact of the introduction of potatoes to Ireland. As for the long-term, unquantifiable dangers of introducing genetically modified species into the environment on a massive scale; only time will tell.
comment by Mary Chernyshenko (mary-chernyshenko) · 2019-06-24T19:47:47.591Z · LW(p) · GW(p)
I don't quite understand. Perhaps "reasoning" did worse than "tradition" did. Then people learned what was wrong. And now they still insist on doing it not according to "tradition"? How is that different at all from setting up a new tradition and not bothering anymore?
comment by Stuart_Armstrong · 2019-06-18T12:01:53.046Z · LW(p) · GW(p)
Post was good, but I'd recommend adding an introductory paragraph to the link on LessWrong.
comment by Dr_Manhattan · 2019-06-19T14:34:39.633Z · LW(p) · GW(p)
Second, we're not actually comparing reason to tradition - we're comparing changing things to not changing things. Change, as we know, is bad.
Request for clarification: isn't "reasonable solution" always a "change" when compared to preexisting tradition?
↑ comment by dxu · 2019-06-19T18:52:10.883Z · LW(p) · GW(p)
Did you read the linked article?
↑ comment by Dr_Manhattan · 2019-06-20T19:10:45.161Z · LW(p) · GW(p)
Do you mean Zvi's "Change is bad"?
comment by habryka (habryka4) · 2019-06-18T20:02:40.981Z · LW(p) · GW(p)
I also liked this post. Would be happy to copy its full text over to LW if you are interested, since I can imagine including it in a bunch of future sequences (and also a bunch of other things, like making it discoverable in search)
comment by SKEM · 2022-04-07T18:51:49.227Z · LW(p) · GW(p)
I have often wondered about a related topic - why some people feel threatened by reason, or get angry when you ask them to be reasonable.
This goes so far (this has actually happened) that people get upset with me because I "impose reasonableness on myself". I would get called cold, distant, heartless (even if I'm, say, crying, which would traditionally be counted as indicating a rather strong emotional response) just because I won't throw a tantrum - which, on the surface, one might think would be in their interest - so why the heck get angry about it?
My best idea so far is that they don't like the heightened standards, feel that they will apply to them too and want to blur things out so they stay comfortably uncertain.
I've been told to "cut myself some slack", to "not be so harsh to myself" (in situations where I seriously don't think that was called for) and when I said "I don't need slack, I need an answer" or "I'm not hard on myself just because I say what I just did was wrong" or somesuch, I would just get a frustrated look in return.
Let's say, these observations are what I would expect to see happen when people actually want as low pixel resolution as possible, so no mistakes can be called out, and desire strongly to live in the world where we're all equal but I'm entitled to all kinds of special treatment and I deserve all kinds of things but don't have to earn them and there are no dumb questions - ever! - and left is right when I feel like it, and crystals and love in harmony for eternity amen.