But the genre also comes with a lot of strong beliefs that do not replicate. (Talking for 10 minutes with someone who reads Taleb's tweets regularly makes me want to scream.)
By this criterion, absolutely no one should be using LessWrong as a vehicle for learning. The Malcolm Gladwell reader you proposed might have been a comparable misinformation vehicle in, say, 2011, but as of 2022 LessWrong is worse about this by a chasmic margin. It's debatable whether the average LessWrong user even reads what they're talking about anymore.
I can name a real-life example: in a local discord of about 100 people, Aella argued that the MBTI is better understood holistically under the framework of Jungian psychology, and that looking at the validity of each subtest (e.g. "E/I", "N/S", "T/F", "J/P") is wrongly reductive. This is not just incorrect, it is the opposite of true; it fundamentally misunderstands what psychometric validity even is. I wrote a fairly long correction of this, but I am not sure anyone bothered to read it — most people will take what community leaders say at face value, because the mission statement of the LessWrong ingroup is "people who are rational," and the thinking goes that someone who is rational would surely have taken care of this. (This was not at all the case.)
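To make the point concrete, here is a minimal sketch with entirely made-up data (the scale, sample, and numbers are hypothetical, not from any real MBTI dataset) of the kind of scale-by-scale check psychometricians actually run: test-retest reliability for a single scale such as E/I. Reliability is a precondition for validity, and both are assessed per scale, which is exactly the question the "holistic" framing dodges.

```python
import numpy as np

# Minimal sketch with synthetic data: test-retest reliability for one
# hypothetical scale (e.g. E/I). In real psychometric practice, each
# scale's reliability and validity are evaluated like this, one by one.
rng = np.random.default_rng(0)

n = 200
session1 = rng.normal(0.0, 1.0, n)                   # scores at session 1
session2 = 0.6 * session1 + rng.normal(0.0, 0.8, n)  # imperfectly stable retest

r = np.corrcoef(session1, session2)[0, 1]
print(f"test-retest r: {r:.2f}")
# A low r means the scale doesn't consistently measure the same thing
# twice; no holistic reinterpretation of the instrument rescues that.
```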
I don't think further examples will help, but they are abundant throughout this sphere; there is a reason I spent 30 minutes of that audio debunking the pseudoscientific and even quasi-mystical beliefs common to Alexander Kruel's sphere of influence.
A bias is an error in weighting, proportion, or emphasis. This differs from a fallacy, which is an error in reasoning specifically. To make up an example: an attentional bias would be a misapplication of attention -- think of the famous invisible-gorilla experiment -- but there would be no reasoning underlying the error per se. The ad hominem fallacy, by contrast, contains at least implicit reasoning about truth-valued claims.
Yes, it's possible that AI could be a concern for rationality. But AI is an object of rationality. In this sense, AI is like carbon emissions: there is room for applied rationality, absolutely, but it is not rationality itself. People who read about AI through this medium are not necessarily learning about rationality. They may be, but they also may not be. As such, the overfocus on AI is a massive departure from the original subject matter, much as it would be if LessWrong became overwhelmed with ways to reduce carbon emissions.
Anyway, that aside: I actually don't disagree much with most of what you said.
The issue is that when these concerns have been applied to the foundation of a community concerned with the same things, the applications have been staggeringly wrongheaded, producing the disparities between mission statements and practical realities that are more or less the basis of my objection. I am no stranger to criticizing intellectual communities; I have outright argued that we should expand the federal defunding criteria to include certain major universities, such as UC Berkeley itself. For all of the faults that have been levied against academia — and I have been such a critic of these norms that I appear in Tucker Carlson's book ("Ship of Fools," p. 130) as a Person Rebelling Against Academic Norms — I have never had a discussion as absurd as the one I had when questioning why MIRI should receive Effective Altruism funding. It was and still is one of the most bizarre and frankly concerning lines of reasoning I've ever encountered, especially when contrasted with the positions EA leaders take on addressing homelessness or the drug war. The concept of LessWrong and much of EA, on face, is not objectionable; what has resulted absolutely is.
1. https://www.amazon.com/Cambridge-Handbook-Reasoning-Handbooks-Psychology/dp/0521531012
2. https://www.amazon.com/Rationality-What-Seems-Scarce-Matters/dp/B08X4X4SQ4
3. https://www.amazon.com/Cengage-Advantage-Books-Understanding-Introduction/dp/1285197364
4. https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555
5. https://www.amazon.com/Predictably-Irrational-audiobook/dp/B0014EAHNQ
6. https://www.amazon.com/BIASES-HEURISTICS-Collection-Heuristics-Everything/dp/1078432317
7. https://www.amazon.com/Informal-Logical-Fallacies-Brief-Guide/dp/0761854339
There is very little about rationality learned here that will not be learned through these texts.
note: if your comment contains the phrase "my tribe" it's best autodeleted on the grounds that you are a clueless dweeb who is part of the problem
argon, please use a consistent name across all media. if I had known this Big Steve account was you, it would have saved me a lot of time.
• courts of law aren't primarily truthseeking practices. as in, courts fulfill a governmental function primarily, and the truth is auxiliary. they can be truthseeking, but they aren't by design. the adversarial system, for example, is antithetical to truthseeking, because attorneys have no obligation to the whole truth; lying by omission is permitted.
• even if they were, what you're trying to get at - analogizing the role of demographic qualities to licensing credentials - doesn't hold here. a steelman of your argument would be that licensing means lawyers can more deftly handle certain kinds of evidence, so their licensure is a shortcut telling us to listen to them over someone else. this would NOT mean that their licensure makes some claim correct. for this analogy to hold, "you're just saying that because you're not a lawyer" would have to be a coherent objection. there are very few instances where this objection would be relevant to the truth of any claim, and even in those instances, the truth of the claim that "you're just saying that because" rebuts wouldn't depend on who is "allowed in the discourse"; it would be a descriptive explanation of the origin of the interlocutor's ignorance.
there are categories of rebuttals and demands for evidence where the biggest issue in fulfilling them is time.
if you need a non-political example, a common phenomenon of this kind is a document dump in legal practice. (but you shouldn't; we are going to engage politics all the time, and you're going to need to be able to process politics rationally.)
misdirection is too broad and does not describe this precisely. stalling is the right focus, and seeing this as "dominance and emotional reactions" is missing the point or grossly misreading the situation. there is nothing in this dialogue that would allow you to make an inference about dominance. emotionality is probably a factor, but it can be for just about every fallacy, and the point of fallacies isn't to describe emotions - the point of fallacies is to identify problematic epistemic categories.
if you need another example, there is the phenomenon of objecting to a claim based on how something is characterized, over and over, until the objector's pet characterization is reached (characterization roulette), a la:
A: "I'm not sure wearing bright colors like red is a good idea if you don't want to be seen in a crowd."
B: "it's not red"
A: "okay, purple"
B: "it's not purple either"
A: "okay, fuchsia"
B: "but I don't think bright fuchsia will stand out that much anyway, because lots of people in that area of town wear bright colors"
a non-stalling version of this is:
A: "I'm not sure wearing bright colors like red is a good idea if you don't want to be seen in a crowd."
B: "lots of people in that area of town wear bright colors"
the clear issue here is time, because it's unreasonable to think that A won't eventually reach the color that meets B's satisfaction. since it's time-based, "stalling" is how best to describe this.
the burden of evidence doesn't change by who is "allowed" into a discussion. if I make a claim about the migration patterns of birds the evidence required for this claim is going to be the same regardless of who is hearing it. if I make a claim about the inequalities in society this doesn't change. they're both empirical claims and both have the same kinds of evidential requirements. if someone is trying to "maintain appropriate boundaries", whatever that means, making empirical claims is probably the opposite of that.
this idea of being "allowed in the discourse" is a non sequitur. if you need to be "allowed" to make truth claims, what you're doing isn't a truthseeking or epistemic practice, so it isn't something that terms relating to epistemic hygiene describe in any sense; it's relevant only as an instance of misinterpretation. this is akin to saying that if someone tells you they won't dance with you at an event, their canned line refusing to provide evidence, such as "you can google it" — perhaps "it" refers to the dance steps — doesn't necessarily describe bad epistemic practice. well, duh. dancing isn't an epistemic activity, so it's a left-field objection anyway.
people who procrastinate, including me and probably you and most people reading this, do so in a semi-intentional state where they're half-aware and might be more aware if prompted but can easily suppress awareness further too. intention is not binary. at the point of performance, procrastinators (so, all of us) aren't actively thinking "I'm procrastinating" nor are they aware that they're explicitly making choices to do that. but, if a person interrupts them to let them know they're doing that, their consciousness might be shaken enough to stop the behavior. (of course, we can train ourselves to do that too, and it's obviously much harder.)
I didn't address intentionality because I don't think the binary states of intentional or unintentional are helpful in stopping it. most people don't commit fallacies or other acts of bad epistemic practice in completely intentional or completely unintentional modes. they often have some vague awareness that what they're doing is off, but, like someone who is procrastinating, they're probably not going to scrutinize their intentions further unless they have the vocabulary and concepts to do so quickly. the purpose of this post is to provide both.
it's still stalling, because they should start with the argument that would best rebut the same claim by someone who was "allowed in the discourse." this modification is trivial and doesn't make epistemic stalling a non-thing, i.e. people obviously do in fact do this.
Your "wrong but not obviously and completely wrong" line made me think that the "obviously and completely" part is what makes people who are well-versed in a subject demand that everyone should know [knowledge from subject] when they hear someone express obvious-and-complete ignorance or obvious-and-complete wrongness in/of said subject. I've witnessed this a few times, and usually the thought process is something like "wow, it's unfathomable that someone should express such ignorance of something that is so obvious to me. There should clearly be a class to make sure this doesn't happen." After reading what you wrote about compartmentalized knowledge and connected knowledge, this type of situation makes much more sense.