Losing Faith In Contrarianism
post by omnizoid · 2024-04-25T20:53:34.842Z · LW · GW · 44 comments
Crosspost from my blog.
If you spend a lot of time in the blogosphere, you’ll find a great many people expressing contrarian views. If you hang out in the circles that I do, you’ll probably have heard Yudkowsky [LW(p) · GW(p)] say that dieting doesn’t really work, Guzey say that sleep is overrated, Hanson argue that medicine doesn’t improve health, various people argue for the lab leak, others argue for hereditarianism, Caplan argue that mental illness is mostly just aberrant preferences and that education doesn’t work, and various other people expressing contrarian views. Often, very smart people—like Robin Hanson—will write long posts defending these views, other people will have criticisms, and it will all be such a tangled mess that you don’t really know what to think about them.
For a while, I took a lot of these contrarian views pretty seriously. If I’d had to bet six months ago, I’d have bet on the lab leak at maybe 2 to 1 odds. I had significant credence in Hanson’s view that healthcare doesn’t improve health until pretty recently, when Scott released his post explaining why it is wrong.
Over time, though, I’ve become much less sympathetic to these contrarian views. It’s become increasingly obvious that the things that make them catch on are unrelated to their truth. People like being provocative and tearing down sacred cows—as a result, when a smart articulate person comes along defending some contrarian view—perhaps one claiming that something we think is valuable is really worthless—the view spreads like wildfire, even if it’s pretty implausible.
Sam Atis has an article titled The Case Against Public Intellectuals. He starts it by noting a surprising fact: lots of his friends think education has no benefits. This isn’t because they’ve done a thorough investigation of the literature—it’s because they’ve read Bryan Caplan’s book arguing for that thesis. Atis notes that there’s a literature review finding that education has significant benefits, yet it’s written by boring academics, so no one has read it. Everyone wants to read the contrarians who criticize education—no one wants to read the boring lit reviews that say what we believed about education all along is right.
Sam is right, yet I think he understates the problem. There are various topics where arguing for one side is inherently interesting, yet arguing for the other is boring. There are a lot of people who read Austrian economics blogs, yet no one reads (or writes) anti-Austrian economics blogs. That’s because there are a lot of fans of Austrian economics—people who are willing to read blogs on the subject—but almost no one who is really invested in Austrian economics being wrong. As a result, the structural incentives of the blogosphere generally favor being a contrarian.
Thus, unless you peruse the academic literature on a topic in depth, you should expect the sense of the debate you get to be wildly skewed towards contrarian views. And I think this is exactly what we observe.
I’ve seen the contrarians be wrong over and over again—and this is what really made me lose faith in them. Whenever I looked more into a topic, whenever I got to the bottom of the full debate, it always seemed like the contrarian case fell apart.
It’s easy for contrarians to portray their opponents as the kind of milquetoast bureaucrats who aren’t very smart and follow the consensus just because it is the consensus. If Bryan Caplan has a disagreement with a random administrator, I trust that Bryan Caplan’s probably right, because he’s smarter and cares more about ideas.
But what I’ve come to realize is that the mainstream view that’s supported by most of the academics tends to be supported by some really smart people. Caplan’s view isn’t just opposed by the bureaucrats and teachers—it’s opposed by the type of obsessive autist who does a lit review on the effect of education. And while I’ll bet on Caplan against campus administrators, I would never make a mistake like betting against the obsessive high-IQ autists.
Sam Atis—a superforecaster—had a piece arguing against The Case Against Education, but it got eaten by a Substack glitch. Reading it left me pretty sure that Bryan was wrong, especially after consulting a friend who knows quite a bit about these things.
This is very far from the only case; I’ve watched the contrarians’ cases fall apart over and over again. Reading Alexey Guzey’s theses on sleep [LW · GW] left me undecided—but then Natália’s counter-theses on sleep [LW · GW] left me quite confident that Guzey is wrong. Guzey’s case turns out to be shockingly weak and opposed by a mountain of evidence.
Similarly, now that I’ve read through Scott’s response to Hanson on medicine, I’d bet at upwards of 9 to 1 odds that Hanson is wrong about it. There’s an abundance of evidence that medicine has dramatically improved health outcomes, from well-done randomized trials to the fact that survival rates for almost all diseases have improved. Hanson’s studies don’t even really support what he says when examined closely.
Similarly, the lab leak theory—one of the more widely accepted and plausible contrarian views—also doesn’t survive careful scrutiny. It’s easy to think it’s probably right when your perception is that the disagreement is between people like Saar Wilf and government bureaucrats like Fauci. But when you realize that some of the anti-lab leak people are obsessive autists who have studied the topic a truly mind-boggling amount, and don’t have any social or financial stake in the outcome, it’s hard to be confident that they’re wrong.
I read through the lab-leak debates in some depth, reading Scott’s blog, Rootclaim’s response, Scott’s response, and various other pieces. And my conclusion was that the lab-leak view was far, far less plausible than the zoonosis view. The lab leak view has no good explanation of why all the early cases were at the wet market and why the heat map clearly shows the wet market as the place where the pandemic started.
The contrarian’s enemy is not only random conformists. It’s also ridiculously smart people who have studied the topic in incredible depth and concluded that they’re wrong. And as we all know from certain creative offshoots of rock, paper, scissors, high-IQ mega-autist beats public intellectual.
I read through the Caplan v Alexander debate about mental illness. And I concluded that Caplan wasn’t just wrong, he was clearly and egregiously wrong (I even wrote an article about it). This is not to beat up on Caplan—I generally think he’s one of the better contrarians. But the consensus view often turns out to be right on these things.
Similarly, there are a lot of people like Steve Sailer and Emil Kirkegaard arguing that there are racial gaps in intelligence, based on genetics. But when I read them on other stuff, they’re just not great thinkers. In contrast, while Jay M’s blog isn’t as popular or as fun to read for most people, he has a good piece arguing pretty convincingly against the genetic explanation of the gap. The author isn’t a conformist—his other articles express various controversial views about race. Yet he did a thorough deep dive into the literature and concluded that the environmental explanation is most plausible. I’ve also chatted with him and he’s very smart and good at thinking, unlike, I think, Kirkegaard and Sailer (I could be wrong about that—I don’t know them that well). I don’t have the statistical acumen to really evaluate the debate, but I do get the same sense—that while popular contrarians with widely read blogs say one thing, the balance of evidence doesn’t support that view.
Many more people read Kirkegaard and Sailer because expressing the conformist view on the topic is much less interesting than expressing the contrarian view. Most of the people who believe the gap is environmental don’t much want to argue about it, so almost all the people who write things about it are people who believe the genetic explanation of the gap. Very few people want to read articles saying “here are 10,000 words showing that the view you dismiss as racist pseudoscience is indeed contradicted by the majority of the evidence.”
I could run through more examples but the point should be clear. Whenever I look more into contrarian theories, my credence in them drops dramatically and the case for them falls apart completely. They spread extremely rapidly as long as they have even a few smart, articulate proponents who are willing to write things in support of them. The obsessive autists who have spent 10,000 hours researching the topic and writing boring articles in support of the mainstream position are left ignored.
44 comments
Comments sorted by top scores.
comment by faul_sname · 2024-04-26T04:02:56.758Z · LW(p) · GW(p)
It strikes me that there's a rather strong selection effect going on here. If someone has a contrarian position, and they happen to be both articulate and correct, they will convince others and the position will become less surprising over time.
The view that psychology and sociology research has major systematic issues at a level where you should just ignore most low-powered studies is no longer considered a contrarian view.
comment by Viliam · 2024-04-26T10:26:12.459Z · LW(p) · GW(p)
I guess in the average case, the contrarian's conclusion is wrong, but it is also a reminder that the mainstream case is not communicated clearly, and often exaggerated or supported by invalid arguments. For example:
- it's not that "dieting doesn't work", but that people naively assume that dieting is simple and effective ("if you just stop eating chocolate and start exercising for one hour every day, you will certainly lose weight", haha nope), even when the actual weight-loss research shows otherwise;
- it's not that "medicine doesn't improve health", but while some parts of medicine are very useful, other parts may be neutral or even harmful, and we often see that throwing more money at medicine does not actually improve the outcomes;
- it's not that "education doesn't work", but if you filter your students by intelligence and hard work, of course they will have better outcomes in life regardless of how good is your teaching, so the impact of education is probably vastly overestimated, and this also explains why so many pedagogical experiments succeed at a pilot project (when you try them with a small group of smart and motivated students) and then fail in mainstream education (when you try the same thing with average or below-average students);
- it's not that "opening the borders completely is a good idea", but a lot of potential value is lost by closing the borders for people who are neither fanatics nor criminals and could easily integrate to the new society.
There is also an opposite bad extreme to contrarians, the various "I fucking love science... although I do not understand it... but I enjoy attacking people on social networks who seem to disagree with the scientific consensus as I understand it" people. The ones who are sure that the professor or the doctor is always right, and that the latest educational fad is always correct.
↑ comment by Jiro · 2024-04-29T20:56:11.769Z · LW(p) · GW(p)
I guess in the average case, the contrarian’s conclusion is wrong, but it is also a reminder that the mainstream case is not communicated clearly, and often exaggerated or supported by invalid arguments.
This enables sanewashing and motte-and-bailey arguments.
comment by Said Achmiz (SaidAchmiz) · 2024-04-26T02:19:59.230Z · LW(p) · GW(p)
Similarly, the lab leak theory—one of the more widely accepted and plausible contrarian views—also doesn’t survive careful scrutiny. It’s easy to think it’s probably right when your perception is that the disagreement is between people like Saar Wilf and government bureaucrats like Fauci. But when you realize that some of the anti-lab leak people are obsessive autists who have studied the topic a truly mind-boggling amount, and don’t have any social or financial stake in the outcome, it’s hard to be confident that they’re wrong.
This is a very poor conclusion to draw from the Rootclaim debate. If you have not yet read Gwern’s commentary on the debate, I suggest that you do so. In short, the correct conclusion here is that the debate was a very poor format for evaluating questions like this, and that the “obsessive autists” in question cannot be relied on. (This is especially so because in this case, there absolutely was a financial stake—$100,000 of financial stake, to be precise!)
comment by Matthew Barnett (matthew-barnett) · 2024-04-25T23:49:52.158Z · LW(p) · GW(p)
Similarly, now that I’ve read through Scott’s response to Hanson on medicine, I’d bet at upwards of 9 to 1 odds that Hanson is wrong about it.
I'm broadly sympathetic to this post. I think a lot of people adjacent to the LessWrong cluster tend to believe contrarian claims on the basis of flimsy evidence. That said, I am fairly confident that Scott Alexander misrepresented Robin Hanson's position on medicine in that post, as I pointed out in my comment here. So, I'd urge you not to update too far on this particular question, at least until Hanson has responded to the post. (However, I do think Robin Hanson has stated his views on this topic in a confusing way that reliably leads to misinterpretation.)
↑ comment by Linch · 2024-04-27T21:50:02.137Z · LW(p) · GW(p)
Rebuttal here!
Anyway, if the message someone received from Hanson's writings on medicine was "yay Hanson", and Scott's response was "boo Hanson," then I agree people should wait for Hanson's rebuttal before being like "boo Hanson."
But if the message that people received was "medicine doesn't work" (and it appears that many people did), then Scott's writings should be a useful update, independent of whether Hanson's-writings-as-intended was actually trying to deliver that message.
↑ comment by Matthew Barnett (matthew-barnett) · 2024-04-27T21:53:10.410Z · LW(p) · GW(p)
But if the message that people received was "medicine doesn't work" (and it appears that many people did), then Scott's writings should be a useful update, independent of whether Hanson's-writings-as-intended was actually trying to deliver that message.
The statement I was replying to was: "I’d bet at upwards of 9 to 1 odds that Hanson is wrong about it."
If one is incorrect about what Hanson believes about medicine, then that fact is relevant to whether you should make such a bet (or more generally whether you should have such a strong belief about him being "wrong"). This is independent of whatever message people received from reading Hanson.
comment by Douglas_Knight · 2024-04-27T18:42:56.813Z · LW(p) · GW(p)
What does it mean to claim that these people are contrarians?
Is there a consensus position at all? For any existing policy, you could claim that there is some kind of centrist compromise that it's a good policy, so people who propose changing policy, like Hanson and Caplan, are defying that compromise. But there is not really any explicit consensus goal for most policies, so claiming that existing institutions are a bad compromise because they pursue multiple goals, and proposing to separate those goals, is not in defiance of any consensus. Caplan, Hanson, and Sailer are offensive because they feel we should try to understand the world and try to steer it. They may be wrong, but the people opposed to them rarely offer an opposing position; rather, they are opposed to any position. It seems to me that the difference between true and false is much smaller than the gap between argument and pseudoscience. Maybe Sailer is wrong, but the consensus position that he is peddling pseudoscience is much more wrong and much more dangerous.
Sailer rarely argues for genetic causes, but leaves that to the psychologists. He believes it and sometimes he uses the hypothesis, but usually he uses the hypotheses 1-4 that Turkheimer, Harden, and Nisbett concede. Spelling out the consequences of those claims is enough to unperson him. Maybe he's wrong about these, but he's certainly not claiming to be a contrarian. And people who act like these are false rarely acknowledge an academic consensus. Or compare Jay: it's very hard to distinguish genetic effects from systemic effects, so when Jay argues that racial IQ gaps aren't genetic, he is (explicitly!) arguing that they are caused by racial differences in parenting. Sailer often claims this (he thinks it's half the effect), but people hate this just as much as anything else he says. Calling him a contrarian and focusing attention on one claim seem like an attempt to mislead.
That is a very clear example, but I think something similar is going on in the rest. Guzey seems to have gone overboard in reaction to Matthew Walker's book Why We Sleep. Did that book represent a consensus? I don't know, but it was concrete enough to be wrong, which seems to me much better than an illusion of a consensus.
↑ comment by Viliam · 2024-04-27T23:16:34.743Z · LW(p) · GW(p)
Thanks for the link. While it didn't convince me completely, it makes a good point that as long as there are some environmental factors for IQ (such as malnutrition), we should not make strong claims about genetic differences between groups unless we have controlled for these factors.
(I suppose the conclusion that the genetic differences between races are real, but also entirely caused by factors such as nutrition, would succeed in making both sides angry. And yet, as far as I know, it might be true. Uhm... what is the typical Ashkenazi diet?)
↑ comment by Said Achmiz (SaidAchmiz) · 2024-04-28T02:57:44.681Z · LW(p) · GW(p)
Uhm… what is the typical Ashkenazi diet?
comment by Logan Zoellner (logan-zoellner) · 2024-04-26T01:21:57.417Z · LW(p) · GW(p)
Sam Atis—a super forecaster—had a piece arguing against The Case Against Education
If it's this piece, I would be interested to know why you found it convincing. He doesn't address (or seem to have even read) any of Bryan's arguments. His argument basically boils down to "but so many people who work for universities think it's good".
↑ comment by Matthew Barnett (matthew-barnett) · 2024-04-26T05:10:04.885Z · LW(p) · GW(p)
The next part of the sentence you quote says, "but it got eaten by a substack glitch". I'm guessing he's referring to a different piece from Sam Atis that is apparently no longer available?
↑ comment by omnizoid · 2024-04-26T05:42:37.634Z · LW(p) · GW(p)
It's not that piece. It's another one that got eaten by a Substack glitch, unfortunately--hopefully it will be back up soon!
↑ comment by ChristianKl · 2024-04-26T11:58:22.123Z · LW(p) · GW(p)
What makes you believe that Substack is to blame and not him unpublishing it?
comment by ChristianKl · 2024-04-26T01:09:59.250Z · LW(p) · GW(p)
Similarly, there are a lot of people like Steve Sailer and Emil Kirkegaard arguing that there are racial gaps in intelligence, based on genetics. But when I read them on other stuff, they’re just not great thinkers. In contrast, while Jay M’s blog isn’t as popular or as fun to read for most people, he has a good piece arguing pretty convincingly against the genetic explanation of the gap.
The linked article says:
I do not believe that this is the important disagreement. Now, some (maybe even many) environmentalists argue that genetic differences play no role in the cognitive ability gap (e.g., Nisbett 2005), but I believe these environmentalists are mistaken to argue for such a strong position.
So the linked article says that Steve Sailer and Emil Kirkegaard are right when they say that there are racial gaps in intelligence based on genetics. Basically, he says there's a gap but wants to debate about its size.
↑ comment by omnizoid · 2024-04-26T05:40:41.659Z · LW(p) · GW(p)
He thinks it's very near zero if there is a gap.
↑ comment by ChristianKl · 2024-04-26T11:57:36.141Z · LW(p) · GW(p)
He explicitly says that the people who argue that there's no gap are mistaken to argue that. He argues for the gap being small, not nonexistent. He does not use the term "near zero" himself.
↑ comment by tailcalled · 2024-04-27T17:29:38.463Z · LW(p) · GW(p)
I feel like if there's one side arguing the genetic gap is x, and one side arguing the genetic gap is 0, the natural dichotomization is whether the genetic gap is larger or smaller than x/2.
↑ comment by ChristianKl · 2024-04-28T01:56:31.803Z · LW(p) · GW(p)
Instead of thinking about how you can divide a discussion into two sides you can also focus on "what's actually true". In that case, it would make sense to end with an estimation of the size of the real gap.
If we, however, look at "what people argue", https://www1.udel.edu/educ/gottfredson/30years/Rushton-Jensen30years.pdf assumes two categories: culture-only (0% genetic–100% environmental) and hereditarian (50% genetic–50% environmental).
Jay M defines the environmental model as <33% genetic and the genetic model as >66% genetic. What Rushton called the hereditarian position is right in the middle between Jay's environmental and genetic model.
↑ comment by tailcalled · 2024-04-28T07:49:42.434Z · LW(p) · GW(p)
Definitely relevant to figure out what's true when one is only talking about the object level, but the OP was about how trustworthy contrarians are compared to the mainstream rather than simply being about the object level.
↑ comment by ChristianKl · 2024-04-28T23:11:47.244Z · LW(p) · GW(p)
Generally, hedgehogs are less trustworthy than foxes. If you see a debate as being about either believing in a mainstream hedgehog position or a contrarian hedgehog position you are often not having the most accurate view.
Instead of thinking that either Matthew Walker or Guzey is right, maybe the truth lies somewhere in the middle and Guzey is pointing to real issues but exaggerating the effect.
I think most of the cases that the OP lists are of that nature: there's an effect, and the hedgehog contrarian position exaggerates that effect.
↑ comment by ChristianKl · 2024-04-29T16:36:32.406Z · LW(p) · GW(p)
comment by Shankar Sivarajan (shankar-sivarajan) · 2024-04-26T01:26:04.037Z · LW(p) · GW(p)
I doubt you could have picked a worse example to make your point that contrarian takes are usually wrong than racial differences in IQ/intelligence.
comment by Said Achmiz (SaidAchmiz) · 2024-04-26T00:37:01.651Z · LW(p) · GW(p)
Hmm, this sounds like an awfully contrarian take to me.
comment by romeostevensit · 2024-04-25T21:50:55.166Z · LW(p) · GW(p)
I think binary examples are deceptive in the "reversed stupidity is not intelligence" sense. Thinking through things from first principles is most important in areas that are new or rapidly changing, where there are fewer reference classes and experts to talk to. It's also helpful for areas where the consensus view is optimized for someone very unlike you.
comment by Templarrr (templarrr) · 2024-04-26T09:50:23.256Z · LW(p) · GW(p)
There are 2 topics mixed here.
- Existence of the contrarians.
- Side effects of their existence.
My own opinion on 1 is that they are necessary in moderation. They are doing the "exploration" part in the "exploration-exploitation dilemma". By the very fact of their existence they allow society in general to check alternatives and find more optimal solutions to problems compared to already-known "best practices". It's important to remember that almost everything that we know now started from some contrarian - once it was a well-established truth that monarchy is the best way to rule the people and democrats were dangerous radicals.
On 2 - it is indeed a problem that contrarian opinions are more interesting on average, but the solution lies not in somehow making them less attractive, but in making conformist material more interesting and attractive. That's why it is paramount to have highly professional science educators and communicators, not just academics. My own favorites are the vlogbrothers (John and Hank Green) in particular and their team at Complexly in general.
comment by Mo Putera (Mo Nastri) · 2024-04-26T04:31:07.527Z · LW(p) · GW(p)
You might also be interested in Scott's 2010 post warning of the 'next-level trap' so to speak: Intellectual Hipsters and Meta-Contrarianism [LW · GW]
A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.
A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.
...
Without meaning to imply anything about whether or not any of these positions are correct or not, the following triads come to mind as connected to an uneducated/contrarian/meta-contrarian divide:
- KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"
- misogyny / women's rights movement / men's rights movement
- conservative / liberal / libertarian
- herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
- don't care about Africa / give aid to Africa / don't give aid to Africa
- Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman
What is interesting about these triads is not that people hold the positions (which could be expected by chance) but that people get deep personal satisfaction from arguing the positions [? · GW] even when their arguments are unlikely to change policy - and that people identify with these positions to the point where arguments about them can become personal.
If meta-contrarianism is a real tendency in over-intelligent people, it doesn't mean they should immediately abandon their beliefs; that would just be meta-meta-contrarianism. It means that they need to recognize the meta-contrarian tendency within themselves and so be extra suspicious and careful about a desire to believe something contrary to the prevailing contrarian wisdom, especially if they really enjoy doing so.
comment by Dagon · 2024-04-25T21:25:53.822Z · LW(p) · GW(p)
I tend to read most of the high-profile contrarians with a charitable (or perhaps condescending) presumption that they're exaggerating for effect. They may say something in a forceful tone and imply that it's completely obvious and irrefutable, but that's rhetoric rather than truth.
In fact, if they're saying "the mainstream and common belief should move some amount toward this idea", I tend to agree with a lot of it (not all - there's a large streak of "contrarian success on some topics causes very strong pressure toward more contrarianism" involved).
comment by Jacob G-W (g-w1) · 2024-04-25T23:27:46.050Z · LW(p) · GW(p)
Thank you for writing this! It expresses in a clear way a pattern that I've seen in myself: I eagerly jump into contrarian ideas because it feels "good" and then slowly get out of them as I start to realize they are not true.
comment by Garrett Baker (D0TheMath) · 2024-04-27T19:38:39.876Z · LW(p) · GW(p)
I agree that contrarians 'round these parts are wrong more often than the academic consensus, but the success of their predictions about AI, crypto, and COVID proves to me it's still worth listening to them, trying to be able to think like them, and probably taking their investment advice. That is, when they're right, they're right big-time.
↑ comment by the gears to ascension (lahwran) · 2024-04-27T20:28:14.675Z · LW(p) · GW(p)
contrarianism is not what led people to be right about those things.
comment by Ben (ben-lang) · 2024-05-17T09:27:28.741Z · LW(p) · GW(p)
Nice post. Gets at something real.
My feeling is that a lot of contrarians get "pulled into" a more contrarian view. I have noticed myself in discussions proposing a specific, technical point correcting a detail of a particular model. Then, when I talk to people about it, I feel like they are trying to pull me towards the simpler position (all those idiots are wrong, it's completely different from that). This happens with things like "ah, so you mean...", which is very direct. But it also happens through a much more subtle process, where I talk to many people, and most of them go away thinking "OK, specific technical correction on a topic I don't care about that much" and never talk or think about it again. But the people who get the exaggerated idea are more likely to remember.
comment by tailcalled · 2024-04-26T06:11:40.558Z · LW(p) · GW(p)
I'm convinced by the mainstream view on COVID origins and medicine.
I'm ambivalent on education - I guess that if done well, it'd consistently have good effects, and that currently it on average has good effects, but also that the effect varies a lot from person to person, so simplistic quantitative reviews don't tell you much. When I did an epistemic spot check on Caplan's book, it failed terribly (it cited a supposedly ingenious experiment showing that university didn't improve critical thinking, but IMO the experiment had terrible psychometrics).
I don't know enough about sleep research to disagree with Guzey on the basis of anything but priors. In general, I wouldn't update much on someone writing a big review, because often reviews include a lot of crap information.
I might have to read Jayman's rebuttal of B-W genetic IQ differences in more detail, but at first glance I'm not really convinced by it because it seems to focus on small sample sizes in unusual groups, so it's unclear how much study noise, publication bias and sampling bias affect things. At this point I think indirect studies are getting obsolete and it's becoming more and more feasible to just directly measure the racial genetic differences in IQ.
However I also think HBDers have a fractal of bad takes surrounding this, because they deny the phenotypic null hypothesis [LW · GW] and center non-existent abstract personality traits like "impulsivity" or "conformity" in their models.
↑ comment by Mitchell_Porter · 2024-04-27T05:25:45.563Z · LW(p) · GW(p)
I couldn't swallow Eliezer's argument, I tried to read Guzey but couldn't stay awake, Hanson's argument made me feel ill, and I'm not qualified to judge Caplan.
comment by Ebenezer Dukakis (valley9) · 2024-04-28T04:58:06.526Z · LW(p) · GW(p)
You contrast the contrarian with the "obsessive autist", but what if the contrarian also happens to be an obsessive autist?
I agree that obsessively diving into the details is a good way to find the truth. But that comes from diving into the details, not anything related to mainstream consensus vs contrarianism. It feels like you're trying to claim that mainstream consensus is built on the back of obsessive autism, yet you didn't quite get there?
Is it actually true that mainstream consensus is built on the back of obsessive autism? I think the best argument for that being true would be something like:
- Prestige academia is full of obsessive autists. Thus the consensus in prestige academia comes from diving into the details.
- Prestige academia writes press releases that are picked up by news media and become mainstream consensus. Science journalism is actually good.
BTW, the reliability of mainstream consensus is to some degree a self-defying prophecy. The more trustworthy people believe the consensus to be, the less likely they are to think critically about it, and the less reliable it becomes.
comment by niplav · 2024-04-26T13:53:00.405Z · LW(p) · GW(p)
The obsessive autists who have spent 10,000 hours researching the topic and writing boring articles in support of the mainstream position are left ignored.
It seems like you're spanning up three different categories of thinkers: Academics, public intellectuals, and "obsessive autists".
Notice that the examples you give overlap in those categories: Hanson and Caplan are academics (professors!), while Natália Mendonça is not an academic, but is approaching being a public intellectual by now(?). Similarly, Scott Alexander strikes me as being in the "public intellectual" bucket much more than any other bucket.
So your conclusion, as far as I read the article, should be "read obsessive autists" instead of "read obsessive autists that support the mainstream view". This is my current best guess—"obsessive autists" are usually not under strong pressure to say politically palatable things, very unlike professors.
comment by Jacob G-W (g-w1) · 2024-04-25T23:32:34.467Z · LW(p) · GW(p)
I think I've noticed some sort of cognitive bias in myself and others where we are naturally biased towards "contrarian" or "secret" views because it feels good to know something that others don't know / be right about something that so many people are wrong about.
Does this bias have a name? Is this documented anywhere? Should I do research on this?
GPT4 says it's the Illusion of asymmetric insight, which I'm not sure is the same thing (I think it is the more general term, whereas I'm looking for one specific to contrarian views). (Edit: it's totally not what I was looking for) Interestingly, it only has one hit on lesswrong [LW · GW]. I think more people should know about this (the specific one about contrarianism) since it seems fairly common.
Edit: The illusion of asymmetric insight is totally the wrong name. It seems closer to the illusion of exclusivity although that does not feel right (that is a method for selling products, not the name of a cognitive bias that makes people believe in contrarian stuff because they want to be special).
↑ comment by gjm · 2024-04-26T09:26:41.875Z · LW(p) · GW(p)
Please don't write comments all in boldface. It feels like you're trying to get people to pay more attention to your comment than to others, and it actually makes your comment a little harder to read as well as making the whole thread uglier.
↑ comment by Jacob G-W (g-w1) · 2024-04-26T11:46:32.750Z · LW(p) · GW(p)
Noted, thanks.
comment by mako yass (MakoYass) · 2024-04-25T22:45:35.392Z · LW(p) · GW(p)
It may be useful to write about how a consumer can distinguish contrarian takes from original insights. Until that's a common skill, there will remain a market for contrarians.
comment by StartAtTheEnd · 2024-04-28T12:59:00.563Z · LW(p) · GW(p)
It all depends on the topic. It's unlikely that the consensus in objective fields like mathematics or physics is wrong. The more subjective, controversial, and political something is, and the more profit and power lies in controlling the consensus, the more skepticism is appropriate.
The bias on Wikipedia (used as an example) is correlated in this manner: CW topics have a lot of misinformation, while things that people aren't likely to feel strongly about are written more honestly.
If some redpills or blackpills turned out to be true, or some harsh-sounding aspects of reality related to discrimination, selection, biases or differences in humans turned out to be true, or some harsh philosophy like "suffering is good for you", "poverty is negatively correlated with virtuous acts" or "People unconsciously want to be ruled" turned out to be true, would you hear about it from somebody with a good reputation?
I also think it's worth noting that both the original view and the contrarian view might be overstated: education is neither useless nor as good as we make it out to be. I've personally found myself annoyed at exaggerations like "X is totally safe, it never has any side-effects" or "People basically never do Y, it is less likely than being hit by lightning" (despite millions of people participating because it's relevant for their future, thousands of whom are mentally ill by statistical necessity). This has made me want to push back, but the opposing evidence is likely exaggerated or cherry-picked as well, since people feel strongly about various conflicts.
The optimization target is Truth only to the extent that Truth is rewarded. If something else has a higher priority, then the truth will be distorted. But due to the broken-windows theory, it might be better to trust society too much rather than too little. I don't want to spread doubt, it might be harmful even in the case that I'm right.
↑ comment by Andrew Burns (andrew-burns) · 2024-04-28T21:18:51.859Z · LW(p) · GW(p)
The roundness of the earth is not a point upon which any political philosophy hinges, yet flat earthism is a thing. The roundness is not subjective, it isn't controversial, and it does not advance anyone's economic interest. So why do people engage in this sort of contrarianism? I speculate that the act of being a contrarian signals to others that you question authority. The bigger the consensus challenged, the more disdain for authority shown. One's willingness to question authority is often used as a proxy for "independent thinking." The thought is that someone who questions authority might be more likely to accept new evidence. But questioning authority is not the same as being an independent thinker, and so, when taken to its extreme, it leads to denying reality, because isn't reality the ultimate authority?
↑ comment by StartAtTheEnd · 2024-04-30T18:41:28.447Z · LW(p) · GW(p)
That's a great example of something which doesn't follow the dynamics that I mentioned! I think that your example relates to the dynamics of cults and religions. They do blend into politics a little bit as they're fed by a distrust in the system and in authorities in general. But I agree that the earth being flat would be a strange thing to lie about, unlike microchips, electromagnetic harassment, UFOs, lizard-people, and the cure of cancer.
There are other related ideas like "secret knowledge", but at this level, we're practically talking about symptoms of paranoid schizophrenia. But flat earthers seem more common than the rate of schizophrenia would suggest, so I'm not sure how to explain this gap.
Maybe these "independent thinkers" just hate authority, by which I mean that they're not the non-conformists that they appear to be. But being entirely alone in ones beliefs is quite painful, so if the only group which shares ones hatred of authority believes that the earth is flat, maybe the desire to fit in is strong enough that one deceive themselves. And people who believe in one conspiracy seem likely to believe in multiple theories, which is very likely an important piece of information if you want to understand these people.
Another guess is that nihilism is too painful. You know that "I want to believe" poster? I think we should take the word "want" literally. If you can't believe in god, but find the idea of an inert, material universe too painful to bear, you will look for signs of magic or anything interesting. Luck, karma, aura, chakra, fate - anything to spice up your life, anything to add additional meaning and possibilities to life. A large-scale conspiracy could fill this need. You'd also go from being a crazy loser to being a warrior fighting against the corrupt, deceptive system. In other words, a conspiracy like this being true would elevate the importance of the individual.