What Motte and Baileys are rationalists most likely to engage in?

post by Chris_Leong · 2021-09-06T15:58:36.378Z · LW · GW

This is a question post.

Motte and bailey is a concept introduced by Nicholas Shackel and popularised by Scott Alexander, who describes it as follows:

The original Shackel paper is intended as a critique of post-modernism. Post-modernists sometimes say things like “reality is socially constructed”, and there’s an uncontroversially correct meaning there. We don’t experience the world directly, but through the categories and prejudices implicit to our society; for example, I might view a certain shade of bluish-green as blue, and someone raised in a different culture might view it as green. Okay.

Then post-modernists go on to say that if someone in a different culture thinks that the sun is light glinting off the horns of the Sky Ox, that’s just as real as our own culture’s theory that the sun is a mass of incandescent gas. If you challenge them, they’ll say that you’re denying reality is socially constructed, which means you’re clearly very naive and think you have perfect objectivity and the senses perceive reality directly.

The writers of the paper compare this to a form of medieval castle, where there would be a field of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich. If an enemy approached, you would retreat to the motte and rain down arrows on the enemy until they gave up and went away. Then you would go back to the bailey, which is the place you wanted to be all along.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you retreat to an obvious, uncontroversial statement, and say that was what you meant all along, so you’re clearly right and they’re silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Sometimes motte-and-bailey arguments are the result of bad faith, but I suspect that in many cases those making them have no idea that they are engaging in such a strategy. In fact, it seems highly likely that one or more motte-and-baileys are popular among rationalists. What are the most common ones?

Answers

answer by Zack_M_Davis · 2021-09-06T16:29:10.757Z · LW(p) · GW(p)

The very concept of a "rationalist" is an egregious one! What is a rationalist, really? The motte: "one who studies the methods of rationality, systematic methods of thought that result in true beliefs and goal achievement". The bailey: "a member of the social ingroup of Eliezer Yudkowsky and Scott Alexander fans, and their friends."

comment by Rob Bensinger (RobbBB) · 2021-09-08T02:26:54.895Z · LW(p) · GW(p)

Yeah, this already bothered me some, but your way of putting it here makes it bother me more.

I think the motte/bailey often runs in the other direction, though, for modesty-ish reasons: there's a temptation to redefine 'rationalist' as a social concept, because it looks more humble to say 'I'm in social circle X' than to say 'I'm part of important project X' or 'I'm a specialist in X', when you aren't doing X-stuff as part of a mainstream authority like academia.

I think there are two concepts I tend to want labels for, which I sometimes use 'rationalist' to refer to (though I hope I'm not switching between these in a deceptive/manipulative way!):

  • 'One who successfully develops and/or applies the methods of getting systematically better at mapping reality (and, optionally, steering reality) from inside a human brain.'
  • 'One who is highly acquainted with the kinds of ideas in the Sequences, and with related ideas that have been a major topic on LW (e.g., the availability heuristic and reductionism and conservation of expected evidence, but also ems, tractability/importance/neglectedness, updateless decision theory, ugh fields, ideas from Alicorn's Twilight fanfic...).'

I think the latter concept is pretty useful and natural, though I could maybe be convinced that 'rationalist' is a bad name for it.

I think it's mainly a memetic or cultural concept in my mind, like 'postmodernist' or 'Chicago economist' or 'the kind of person who's been to a coding boot camp': shibboleths are a big deal for who my brain tags this way, but mere social adjacency isn't. It's closer to 'how much background knowledge can I assume if we start talking about simulation shutdown at a party?', rather than 'how much does e.g. Scott Alexander like this person, spend time with them, etc.?'.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-09-08T16:53:10.908Z · LW(p) · GW(p)

Thinking about it more: I can imagine a group that tries to become unusually good at learning true things in a pretty domain-general way, so they call themselves the Learnies, or the Discoverers, or the Trutheteers.

If a group like that succeeds in advancing its art, then it should end up using that art to discover at least some truths about the world at large that aren't widely known. This should be a reliable consequence of 'getting good at discovering truths'. And in many such worlds, it won't be trivial for the group to then convince the entire rest of the world to believe the things they learned overnight. So five years might pass and the Learnies are still the main group that is aware of, say, the medical usefulness of some Penicillium molds (because they and/or the world they're in is dysfunctional in some way).

It seems natural to me that the Learnies' accumulated learnings get mentally associated with the group, even though penicillin doesn't have any special connection to the art of learning itself. So I don't think there's anything odd about rationalists being associated with 'knowledge we accumulated in the process of applying rationality techniques', or about my wanting a way to figure out to what extent someone at a party has acquired that knowledge. I think this example does illustrate a few issues, though:

  • First, obviously, it's important to be able to distinguish 'stuff the Learnies learned that's part of the art of learning' from 'other stuff the Learnies learned'. Possibly it would help here to have more names for the clusters of nonstandard beliefs rationalists tend to have, so I'm somewhat less tempted to think 'is this person familiar with the rationalist-ish content on ems?' vs. 'is this person familiar with the techno-reductionist content on ems?' or whatever.
  • Second, I've been describing the stuff above as "knowledge". But what if there's a dispute about whether the Learnies' worldly discoveries are true? In that case, maybe it's logically rude to call yourself 'Learnies', because (a) it maybe rhetorically tilts the debate in your favor, and (b) it's just not very informative. (Example: A world where pro-penicillin people call themselves Penicillists and anti-penicillin people call themselves Anti-Penicillists is better than a world where pro-penicillin people call themselves Accuracyists and anti-penicillin people call themselves Realityists.)
    • Considerations like this update me toward using the term 'aspiring rationalist' more, because it hopefully tilts the debate less toward the presupposition that whoever identifies themselves as a truth-seeker is correct about whatever thing they think their rationality techniques have helped them learn.
    • (Though I think 'aspiring rationalist' still tilts the debate some. But, thinking about it again, maybe I actually prefer the world where more people make one flavor or another of 'truth-seeking' part of their identity? It might be at least somewhat easier to call people out on bullshit and make them reflect and update, if it were more common for people to take pride in their truth-seeking and knowledgeableness, however unwarranted.)
  • Then there's the question of whether the Learnies' commitment to the art of learning obliges them to never Glomarize or stay-out-of-the-fray about politically controversial things they learn. I take it that something along these lines is your main criticism of rationalists calling themselves 'rationalists', but I'm not sure exactly what norm you're advocating. Currently I don't find this compelling -- like, I do think it's epistemically risky for groups to selectively avoid talking about important politicized topics, and I think it's important to try to find ways to counter resultant epistemic distortions in those cases (and to seriously consider whether staying-out-of-the-fray is just a dumb strategy). But I guess I just disagree with some of the examples you've provided, and agree with others but don't think they're as serious or as deeply epistemically corrupting as you do? Mostly, I just don't think I've read enough of the stuff you've written to understand your full argument; but I can at least give my current epistemic state.
comment by Zack_M_Davis · 2021-09-07T03:14:23.506Z · LW(p) · GW(p)

It doesn't help when Yudkowsky actively encourages this confusion! As he Tweeted today: "Anyways, Scott, this is just the usual division of labor in our caliphate: we're both always right, but you cater to the crowd that wants to hear it from somebody too modest to admit that, and I cater to the crowd that wants somebody out of that closet."

Just—the absolute gall of that motherfucker! I still need to finish my memoir about why I don't trust him the way I used to, but it's just so emotionally hard—like a lifelong devout Catholic denouncing the Pope. But what can you do when the Pope is actually wrong? My loyalty is to the truth, not to him.

Replies from: ChristianKl
comment by ChristianKl · 2021-09-07T13:01:44.062Z · LW(p) · GW(p)

This doesn't seem to be about the term rationalist at all. It seems to be about which rhetorical style different people prefer. Eliezer makes his points in a much more confident and more polarizing way than Scott does.

Replies from: TAG, tangren
comment by TAG · 2021-09-08T20:28:09.152Z · LW(p) · GW(p)

In my experience Scott has an epistemic style where he assumes there will be contrary information and seeks it out, and Eliezer does not...he's more into early cognitive closure. It's not just tone, it's method.

comment by tangren · 2021-09-08T14:59:36.765Z · LW(p) · GW(p)

No, not really? I generally ignore anything Scott writes which could be described as 'agreeing with Yud' -- it's his other work I find valuable, work I wouldn't expect Yud to write in any style.

comment by Chris_Leong · 2021-09-08T07:10:48.620Z · LW(p) · GW(p)

I made a similar, but slightly different argument in Pseudo-Rationality [LW · GW]:

"Pseudo-rationality is the social performance of rationality, as opposed to actual rationality."

answer by ametipo · 2021-09-08T00:16:30.459Z · LW(p) · GW(p)

I've been Rationalist-adjacent in my ideals for over 10 years now, but have never taken part in the community (until this post, hello!), precisely because I find this fallacy throughout a lot of Rationalist discourse and it has put me off.

The motte: "Here is some verifiable data that suggests my hypothesis. It is incomplete, and I may be wrong. I am but a humble thinker, calling out into the darkness, looking for a few pinpricks of truth's light."

The bailey: "The limitations in my data and argument are small enough that I can confidently make a complex conclusion at the end, to some confidence interval. Prove my studies wrong if you disagree. If you respond to my argument with any kind of detectable emotion I will take this as a sign of your own irrationality and personal failings."

In my reading the bailey tends to come out in a few similar Rationalist argument styles. What they all have in common is that some lip service is usually paid to the limitations of the argument, but the poster still goes on as if their overall argument is probable and valid, instead of a fundamentally unsupported post-hoc rationalization built on sand. I tend to see:

  1. The poster makes an arbitrary decision to evaluate the core hypothesis by proxying it onto a set of related, but fundamentally different, metrics, where the proxy metrics are easily testable and the actual thesis is very broad. The evaluation that follows using the chosen metrics is reasonable, but the initial choice to use those metrics as a proxy for the thesis is subjective and unjustified, and the conclusion would have gone another way had different, arguably just as justifiable, proxy metrics been chosen instead. The proxy is never mentioned, or if it is, it's hand-waved away as "of course there are other ways to evaluate this question..." But assuming that your toy metrics equate to a wider narrative is a fundamental error; analysis stays accurate only within the scope of what it's actually analyzing.
  2. The poster shows their work with some math (usually probabilities) to prove a real-world point, but the math is done on a completely hypothetical thought experiment. Can't argue with math! The entire argument hinges on the unjustified implication that the real world is enough like the thought experiment for the probabilities to carry over. But the thought experiment came from the poster's mind, and its similarity to reality is backed up by nothing. There is no more inherent reason why probabilities derived from a hypothetical example would apply to reality than random numbers thrown into the comment box would; but because there's some math work included, it's taken as more accurate than the poster saying "I think the world is like X" outright.
  3. Using Bayesian reasoning and confidence intervals to construct a multi-point argument of mostly-unproven assertions that all rely on each other, so that the whole is much weaker than the sum of its parts (see the toy sketch after this list). The argument is made as if the chance of error at each successive step were additive rather than compounding, and as if the X% confidence the author assigns to each unproven assertion were the actual probability of it being true. But in reality, confidence intervals are a post-hoc label we give to our own subjective feelings when evaluating a statement that we believe but haven't proven. The second you label an unsupported statement with one of these, you've acknowledged that you've left what you're sure of as objective reality; each successive one in an argument makes the problem worse, because the error compounds. It would be more honest and objective for the argument to stop at the very first doubtful point and leave it there with a CI for future discussion. But instead I see a lot of "of course, this can't be really known right now, but I think it's 65% likely given the inconclusive data I have so far, and if we assume that it's true for the sake of argument..." and then it continues further into the weeds for another few thousand words.
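
To make the compounding in point 3 concrete, here's a toy calculation -- a minimal sketch where the step credences are invented and the steps are assumed to fail independently, which real chained arguments rarely do:

```python
# Toy illustration of chaining several "probably true" claims.
# All credences are invented; independence between steps is assumed.
step_credences = [0.9, 0.8, 0.75, 0.65]

# The probability that every step holds compounds multiplicatively:
p_all = 1.0
for p in step_credences:
    p_all *= p

print(f"P(all four steps hold) = {p_all:.2f}")             # ~0.35
print(f"Weakest single step    = {min(step_credences):.2f}")  # 0.65
```

Four steps that are each individually "probably true" already leave the conclusion more likely false than true.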

Obviously this comment is critical, but I do mean this with good humor and I hope it is taken as such. The pursuit of truth is an ideal I hold important.

(An aside: the characterization of post-modern argument in the OP is only accurate in the most extreme and easily parodied of post-modernist thinkers. Most post-modernists would argue that social constructs are subjective narratives told on top of an objective world, and that many more things are socially constructed than most people believe. That the hypothetical about the sun is used as an example of bad post-modernist thought, instead of any of the actual arguments post-modernists make in real life, is a bit of a tip-off that it's not engaging with a steel man.)

comment by tslarm · 2021-09-08T00:52:14.038Z · LW(p) · GW(p)

An aside: the characterization of post-modern argument in the OP is only accurate in the most extreme and easily parodied of post-modernist thinkers. Most post-modernists would argue that social constructs are subjective narratives told on top of an objective world, and that many more things are socially constructed than most people believe. That the hypothetical about the sun is used as an example of bad post-modernist thought, instead of any of the actual arguments post-modernists make in real life, is a bit of a tip-off that it's not engaging with a steel man.

I think Scott's claim (back in 2014) would be that you've just articulated the post-modernist motte, and in fact people often do make arguments and pronouncements that (at least implicitly) depend on the thing that you see as a weakman and he sees as the bailey. (I haven't read enough of the relevant stuff to take a position here; Scott's cynical account rings true to me, but that could be because what rises to my attention is disproportionately the extreme and easily-parodied stuff, and then I lazily pattern-match the rest without giving it a fair chance.)

edit: to be fair, I can see a potential motte-and-bailey on the anti-pomo side. (Bailey: the sun hypothetical, although made up, is a pretty accurate characterisation of how postmodernists argue. Motte: that was just a throwaway tongue-in-cheek example, a punchy way to illustrate the main point of the post; you're taking it too literally if you bother pushing back against it. Or alternatively, Bailey: that is how postmodernists argue. Motte: that is how a small proportion of postmodernist philosophers, and a bunch of random people inspired by postmodernism, argue.) So I think it's fair enough to suggest that the absence of real examples is a red flag.

comment by Rob Bensinger (RobbBB) · 2021-09-08T17:13:54.571Z · LW(p) · GW(p)

It would be more honest and objective for the argument to stop at the very first doubtful point and leave it there with a CI for future discussion.

This seems fine until you have to make actual decisions under uncertainty. Most decisions have multiple uncertain factors going into them, and I think it's genuinely useful to try to quantify your uncertainty in such cases (even if it's very rough, and you feel the need to re-run the analysis in several different ways to check how robust it is, etc.).
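
As a toy illustration (all numbers are invented and the factors are assumed independent -- the point is only that writing the factors down lets you re-run the analysis and see whether the answer is robust):

```python
# Rough decision sketch with two uncertain factors (invented numbers).
p_factor_a = 0.7   # credence that factor A works out
p_factor_b = 0.6   # credence that factor B works out, assumed independent
payoff     = 100   # value (arbitrary units) if both hold
cost       = 25    # cost of acting either way

ev = p_factor_a * p_factor_b * payoff - cost
print(f"Expected value: {ev:.1f}")              # 0.7 * 0.6 * 100 - 25 = 17.0

# Robustness re-run with more pessimistic credences:
ev_pessimistic = 0.5 * 0.4 * payoff - cost
print(f"Pessimistic EV: {ev_pessimistic:.1f}")  # -5.0: the decision flips
```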

What would you propose doing in such cases? I'd be interested to see an example of how you'd go about it.

One option might be 'do the rationalist-ish thing when you're forced to because it's decision-relevant; but when you're just analyzing an interesting intellectual puzzle for fun, don't do the rationalist-ish thing'. My main worry there would be that only using a skill when you're forced to gives you less practice with the skill. Sharing quantitative arguments online also makes it easy for others to express disagreement, point out errors you made, etc., which I think is important for improving and getting more calibrated (and for figuring out what's true in the first place -- but it sounds like we disagree there).

Apologies if I misunderstood what you're recommending somewhere -- an example or two of blog posts you think are making this mistake might help. Possibly I'd agree if I saw the actual cases you had in mind!

Replies from: ametipo
comment by ametipo · 2021-09-08T19:44:28.670Z · LW(p) · GW(p)

One option might be 'do the rationalist-ish thing when you're forced to because it's decision-relevant; but when you're just analyzing an interesting intellectual puzzle for fun, don't do the rationalist-ish thing'.

This is the closest to what I was trying to say, but I would scope my criticism even more narrowly. To try and put it bluntly and briefly: Don't choose to suspend disbelief for multiple core hypotheses within your argument, while simultaneously holding that the final conclusion built off of them is objectively likely and has been supported throughout. 

The motte of this argument style -- that your conclusion is the best you can do given your limited data -- is true, and I agree with it. Because of that, this is a genuinely good technique for decision-making in a limited space, as you mention. What I see as the bailey, though -- that your conclusion is actually probable in a real and objective sense, and that you've proven it to be so with supporting logic and data -- doesn't follow for me. Because you haven't falsified anything in an objective sense, there is no guaranteed probability that you are correct, and you are more likely to be incorrect the more times in your argument you've chosen to deliberately suspend disbelief so that one of your hypotheses can carry onward. Confidence intervals are a number you're applying to your own feelings, not actual odds of correctness, so they can't be objectively used to calculate your chance of being right overall.

Put another way, in science it is totally possible and reasonable for a researcher to have an informed hypothesis that multiple hypothetical mechanisms in the world all exist, and that they combine together to cause some broader behavior that so far has been unexplained. But if this researcher were to jump to asserting that the broader behavior is probably happening because of all these hypothetical mechanisms, without first actively validating all the individual hypotheses with falsifiable experiments, we'd label their proposed broad system of belief as a pseudoscience. The pseudoscience label would still be true even if their final conclusion turned out to be accurate, because the problem here is with the form (assuming multiple mechanisms are real without validating them) rather than the content (the mechanisms themselves). This gets worse the more of these hypothetical but unproven mechanisms need to exist and depend on each other for the researcher's final conclusion to be true.

I hear you on examples, but since I don't like posts that do this I don't have any saved to point at unfortunately. I can go looking for new ones that do this if you think it would still be helpful though.

Replies from: TAG
comment by TAG · 2021-09-08T21:02:08.641Z · LW(p) · GW(p)

To try and put it bluntly and briefly: Don’t choose to suspend disbelief for multiple core hypotheses within your argument, while simultaneously holding that the final conclusion built off of them is objectively likely and has been supported throughout.

I agree with what you are saying...but my brief version would be "don't confuse absolute plausibility with relative plausibility".

comment by Chris_Leong · 2021-09-08T01:15:09.813Z · LW(p) · GW(p)

Yeah, it isn't really engaging with a steelman. But then again, the purpose of the passage is to explain a very common dynamic that occurs in post-modernism. And consider a similar situation: I guess it'd be hard to explain a dynamic that sometimes makes governments act dysfunctionally whilst also steelmanning government.

Although, I don't think it's accurate to say that it's not representative of what post-modernists really argue: maybe it doesn't accurately represent what philosophers argue, but it seems to fairly accurately represent what everyday people who are fans of post-modernism would say. And I guess there's a tension between addressing the best version of an argument and addressing the version that most comes up in real life.

Replies from: ametipo
comment by ametipo · 2021-09-08T02:27:12.246Z · LW(p) · GW(p)

The implied claim that I took from the passage (perhaps incorrectly) is that motte and bailey is a fallacy inherent to post-modernist thought in general, rather than a bad rhetorical technique that some post-modernist commenters engage in on the internet. If that's the claim, it should be easier, not harder, to cite real-world examples, since the rhetorical fallacy would then be widespread and representative of post-modern thought. The government example isn't analogous, as it would at least have been a real-world example, and the person in that hypothetical wouldn't be trying to argue that the dysfunctional dynamic is inherent to all government. But the quote chose to make up an absurd post-modernist claim about the sun being socially constructed to try to prove a claim that post-modernism is absurd.

I made my aside because I am a relatively everyday person who is a general fan of post-modernism, or at least of the concept of social construction as I've described it, and I have a strong suspicion that whatever specific real-world examples the author is pattern-matching as denying objective reality probably have a stronger argument for being socially constructed than they're aware of. Or at least one that isn't as easy to hand-wave away as absurd as their sun hypothetical.

This is all just an aside of an aside though, and I somewhat regret putting it in the body of my post and distracting from the rest. People generally do make terrible arguments on the internet, so in terms of sheer volume I do agree that bad arguments abound.

answer by tangren · 2021-09-06T16:39:22.613Z · LW(p) · GW(p)

I think there's a tendency to assume the rationalist community has all the answers (e.g. The Correct Contrarian Cluster [LW · GW]), which seems (a) wrong to me on the object-level, but also (b) at odds with a lot of other rationalist ideas.

If you point this out, you might hear someone say they're "only an aspiring rationalist", or "that's in the sequences", or "rationalists already believe that". Which can seem like a Motte and Bailey, if it doesn't actually dent their self-confidence at all.

comment by TAG · 2021-09-08T20:34:01.058Z · LW(p) · GW(p)

(b) at odds with a lot of other rationalist ideas.

The great strength of Rationalism...yes, I'm saying something positive...is that its flaws can almost always be explained using concepts from its own toolkit.

comment by Rob Bensinger (RobbBB) · 2021-09-08T17:02:34.932Z · LW(p) · GW(p)

I'm not sure what you mean by "has all the answers". I could imagine a rationalist thinking they're n standard deviations above the average college-educated human on some measure of 'has accurate beliefs about difficult topics', and you disagreeing and thinking they're average, or thinking their advantage is smaller. But that just seems like an ordinary disagreement to me, rather than a motte-and-bailey.

It seems at odds with rationalist ideas to assume you're unusually knowledgeable, but not to conclude you're unusually knowledgeable. 'I'm average' is just as much a claim about the world as 'I'm exceptional', and requires you to stick your neck out just as much -- if you underestimate your ability, you're making just as much of an epistemic mistake as if you overestimate your ability.

Replies from: tangren
comment by tangren · 2021-09-09T00:39:47.730Z · LW(p) · GW(p)

Well, the motte is "I'm very epistemically humble", and the bailey is "that's why I'm always right".

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-09-09T01:22:13.675Z · LW(p) · GW(p)

If rationalists think they're right n% of the time and they're not, then that's condemnable in its own right, regardless of whether there's a motte-and-bailey involved.

If rationalists think they're right n% of the time and they are right n% of the time, but you aren't allowed to be honest about that kind of thing while also being humble, then so much the worse for humility. There are good forms of humility [LW · GW], but the form of 'humility' that's about lying or deceiving yourself about your competence level is straightforwardly bad [? · GW].
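
(And 'right n% of the time' is at least checkable in principle -- a minimal calibration sketch, with invented prediction records:)

```python
from collections import defaultdict

# Invented (stated confidence, came true?) records.
predictions = [(0.9, True), (0.9, True), (0.9, False),
               (0.6, True), (0.6, False), (0.6, True)]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

# Well-calibrated means the hit rate in each bucket tracks the stated confidence.
for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}: right {hit_rate:.0%} of the time (n={len(outcomes)})")
```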

Regardless, I don't think there's any inconsistency with being an 'aspiring rationalist'. Even if you're the most rational human alive, you probably still have enormous room to improve. Humans just aren't that good at reasoning and decision-making yet.

Replies from: Dagon
comment by Dagon · 2021-09-09T23:03:58.148Z · LW(p) · GW(p)

The motte and bailey related to this that I see is "I'm humble, and often wrong. But not on <whatever specific topic is at hand>."

answer by TAG · 2021-09-09T16:16:54.582Z · LW(p) · GW(p)

Bayes!

The Bailey is that Bayes is just maths, and you therefore can't disagree with it.

When it is inevitably pointed out that self-described Bayesians don't use explicit maths that much, they fall back to the Motte: Bayes is actually just a bunch of heuristics for probabilistic reasoning.
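
For what it's worth, the "just maths" core really is small. A toy odds-form update (all numbers invented):

```python
# Odds-form Bayes update on an invented example.
prior_p = 0.01                        # P(H): made-up base rate
prior_odds = prior_p / (1 - prior_p)

p_e_given_h = 0.8                     # P(E | H), illustrative
p_e_given_not_h = 0.1                 # P(E | not H), illustrative
likelihood_ratio = p_e_given_h / p_e_given_not_h

posterior_odds = prior_odds * likelihood_ratio
posterior_p = posterior_odds / (1 + posterior_odds)
print(f"P(H | E) = {posterior_p:.3f}")  # ~0.075
```

The disagreement is rarely with that arithmetic; it's with where the inputs come from.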

answer by frontier64 · 2021-09-08T02:20:25.713Z · LW(p) · GW(p)

I think there's a common Motte and Bailey with religion:

Motte: Christianity and other religions in general are almost certainly untrue. Adherents to religions have killed many people worldwide. The modern world would be better if more religious followers learned rationality and became atheists.

Bailey: The development and continued existence of religion has on the whole been a massive net negative for humanity and we would be better off if the religions never existed and people were always atheists.

I don't think the bailey is even outright stated that often by smart rationalists; it's more often implied, and only stated outright by zealous, less-smart atheists. The zealous atheists are likely succumbing to the affect heuristic and reflexively rejecting the assertion that religion may have been a net positive historically even if it is no longer worthwhile. But they most often defend the claim that religion was terrible for humanity by citing the Motte.

comment by lsusr · 2021-09-08T05:41:55.484Z · LW(p) · GW(p)

Bailey: "Religion is harmful and untrue."

Motte: "Christianity and Islam (and occasionally Orthodox Judaism) are harmful and untrue."

Replies from: Sherrinford
comment by Sherrinford · 2021-09-10T18:48:40.790Z · LW(p) · GW(p)

Shouldn't it be the other way round?

Replies from: lsusr
comment by lsusr · 2021-09-10T19:15:44.558Z · LW(p) · GW(p)

Yes. Fixed. Thanks.

answer by ozziegooen · 2021-09-07T16:21:10.975Z · LW(p) · GW(p)

I feel like both sides of the "White Fragility" debate have some of this going on.

I don't feel like I've seen rationalists exactly on these sides (in large part because the discussion generally hasn't been very prominent), but I've seen lots of related people on both sides, and I expect rationalists (myself included) to have similar beliefs to those people.

https://www.lesswrong.com/posts/pqa7s3m9CZ98FgmGT/i-read-white-fragility-so-you-don-t-have-to-but-maybe-you?commentId=wEuAmC2kYWsCg4Qsr
