Rationalists, Post-Rationalists, And Rationalist-Adjacents
post by orthonormal · 2020-03-13T20:25:52.670Z · LW · GW · 43 comments
Epistemic status: Hortative. I'm trying to argue for carving reality at a new joint.
I think it's lovely and useful that we have labels, not just for rationalist, but for rationalist-adjacent and for post-rationalist. But these labels are generally made extensionally [LW · GW], by pointing at people who claim those labels, rather than intensionally, by trying to distill what distinguishes those clusters.
I have some intensional definitions that I've been honing for a long time. Here's the biggest one.
A rationalist, in the sense of this particular community, is someone who is trying to build and update a unified probabilistic model of how the entire world works, and trying to use that model to make predictions and decisions.
By "unified" I mean decompartmentalized [LW · GW]- if there's a domain where the model gives two incompatible predictions, then as soon as that's noticed it has to be rectified in some way.
And it's important that it be probabilistic: it's perfectly consistent to resolve a conflict between predictions by saying "I currently think the answer is X with about 60% probability, and Y with about 25% probability, and with about 15% probability I'm missing the correct option or confused about the nature of the question entirely".
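A minimal sketch of this kind of probabilistic bookkeeping (a toy illustration only; the hypothesis names and the update factor are invented, not anything from the post):

def normalize(beliefs):
    # Rescale the weights so they form a probability distribution (sum to 1).
    total = sum(beliefs.values())
    return {hypothesis: weight / total for hypothesis, weight in beliefs.items()}

# Explicitly reserve probability mass for "the right answer isn't on my list,
# or I've misunderstood the question".
beliefs = normalize({"X": 0.60, "Y": 0.25, "missing option / confused": 0.15})

# Resolving a conflict doesn't require declaring a winner; it can just shift weight.
# Suppose some observation makes X half as plausible relative to everything else:
beliefs["X"] *= 0.5
beliefs = normalize(beliefs)

print(beliefs)  # still one distribution summing to 1; no hypothesis silently dropped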
The Sequences are aimed at people trying to do exactly this thing, and Eliezer focuses on how to not go horribly wrong in the process (with a special focus on not trusting one's own sense of obviousness).
Being a rationalist isn't about any specific set of conclusions; it's not about being an effective altruist, or a utilitarian, or even an atheist. It's about whether one is trying to do that thing or not. Even if one is doing a terrible job of it!
Truth-seeking is a prerequisite, but it's not enough. It's possible to be very disciplined about finding and assembling true facts, without thereby changing the way one thinks about the world. As a contrast, here's how the New York Times, whose fact-checking quality is not in dispute, decides what to report:
By and large, talented reporters scrambled to match stories with what internally was often called “the narrative.” We were occasionally asked to map a narrative for our various beats a year in advance, square the plan with editors, then generate stories that fit the pre-designated line.
The difference between wielding a narrative and fitting new facts into it, and learning a model from new facts, is the difference between rationalization and rationality [LW · GW].
"Taking weird ideas seriously" is also a prerequisite (because some weird ideas are true, and if you bounce off of them you won't get far), but again it's not enough. I shouldn't really need to convince you of that one.
Okay, then, so what's a post-rationalist?
The people who identify as such generally don't want to pin it down, but here's my attempt at categorizing at least the ones who make sense to me:
A post-rationalist is someone who believes the rationalist project is misguided or impossible, but who likes to use some of the tools and concepts developed by the rationalists.
Of course I'm less confident that this properly defines the cluster, outside of groups like Ribbonfarm where it seems to fit quite well. There are people who view the Sequences (or whatever parts have diffused to them) the way they view Derrida: as one more tool to try on an interesting conundrum and see if it works there, but not as something to treat as applicable across the board.
And there are those who talk about being a fox rather than a hedgehog (and therefore see trying to reconcile one's models across domains as being harmful), and those who talk about how the very attempt is a matter of hubris, that not only can we not know the universe, we cannot even realistically aspire to decent calibration.
And then, of course:
A rationalist-adjacent is someone who enjoys spending time with some clusters of rationalists (and/or enjoys discussing some topics with rationalists), but who is not interested in doing the whole rationalist thing themself.
Which is not a bad thing at all! It's honestly a good sign of a healthy community that it appeals even to people to whom the project itself doesn't appeal, and the rationalist-adjacents may be more psychologically healthy than the rationalists.
The real issue of contention, as far as I'm concerned, is something I've saved for the end: that not everyone who self-identifies as a rationalist fits the first definition very well, and that the first definition is in fact a more compact cluster than self-identification.
And that makes this community, and this site, a bit tricky to navigate. There are rationalist-adjacents for whom a double-crux on many topics would fail because they're not interested in zooming in so close on a belief. There are post-rationalists for whom a double-crux would fail because they can just switch frames on the conversation any time they're feeling stuck. And to try to double-crux with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.
I don't yet know of an intervention for signaling that a conversation is happening on explicitly rationalist norms; it's hard to do that in a way that doesn't pressure others into insisting they'd follow those norms. But I wish there were one.
43 comments
Comments sorted by top scores.
comment by Said Achmiz (SaidAchmiz) · 2020-03-14T09:18:46.530Z · LW(p) · GW(p)
I want to specifically object to the last part of the post (the rest of it is fine and I agree almost completely with both the explicit positive claims and the implied normative ones).
But at the end, you talk about double-crux, and say:
And to try to double-crux with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.
Well, and why did you think you could take it for granted in the community? I don’t think that’s justified at all—post-rationalists and rationalist-adjacents aside!
For instance, while I don’t like to label myself as any kind of ‘-ist’—even a ‘rationalist’—the term applies to me, I think, better than it does to most people. (This is by no means a claim of any extraordinary rationalist accomplishments, please note; in fact, if pressed for a label, I’d have to say that I prefer the old one ‘aspiring rationalist’… but then, your given definition—and I agree with it—requires no particular accomplishment, only a perspective and an attempt to progress toward a certain goal. These things, I think, I can honestly claim.) Certainly you’ll find me to be among the first to argue for the philosophy laid out in the Sequences, and against any ‘post-rationalism’ or what have you.
But I have deep reservations about this whole ‘double-crux’ business, to say the least; and I have commented on this point, here on Less Wrong, and have not seen it established to my satisfaction that the technique is all that useful or interesting—and most assuredly have not seen any evidence that it ought to be taken as part of some “rationalist canon”, which you may reasonably expect any other ‘rationalist’ to endorse.
Now, you did say that you’d feel infuriated by having double-crux fail in either of those specific ways, so perhaps you would be ok with double-crux failing in any other way at all? But this does not seem likely to me; and, in any case, my own objection to the technique is similar to what you describe as the ‘rationalist-adjacent’ response (but different, of course, in that my objection is a principled one, rather than any mere unreflective lack of interest in examining beliefs too closely).
Lest you take this comment to be merely a stream of grumbling to no purpose, let me ask you this: is the bit about double-crux meant to be merely an example of a general tendency (of which many other examples may be found) for Less Wrong site/community members to fail to endorse the various foundational concepts and techniques of ‘LW-style’ rationality? Or, is the failure of double-crux indeed a central concern of yours, in writing this post? How important is that part of the post, in other words? Is the rest written in the service of that complaint specifically? Or is it separable?
Replies from: orthonormal, Raemon
↑ comment by orthonormal · 2020-03-14T20:09:41.823Z · LW(p) · GW(p)
There's a big difference between a person who says double-cruxing is a bad tool and they don't want to use it, and someone who agrees to it but then turns out not to actually be Doing The Thing.
And it's not that ability-to-double-crux is synonymous with rationality, just that it's the best proxy I could think of for what a typical frustrating interaction on this site is missing. Maybe I should specify that.
Replies from: Raemon
↑ comment by Raemon · 2020-03-14T21:56:53.269Z · LW(p) · GW(p)
I would hazard a guess that you might have written the same comment with "debate" or "discussion" instead of double-crux, if double-crux hadn't been invented. Double-crux is one particular way to resolve a disagreement, but I think the issues of "not willing to zoom in on beliefs" and "switching frames mid-conversation" come up in other conversational paradigms.
(I'm not sure whether Said would have related objections to "zooming in on beliefs" or "switching frames" being Things Worth Doing, but seemed worth examining the distinction)
Replies from: orthonormal, SaidAchmiz
↑ comment by orthonormal · 2020-03-15T02:00:04.320Z · LW(p) · GW(p)
I think it would be very connotatively wrong to use those. I really need to say "the kind of conversation where you can examine claims together, and both parties are playing fair and trying to raise their true objections and not moving the goalposts", and "double-crux" points at a subset of that. It doesn't literally have to be double-crux, but it would take a new definition in order to have a handle for that, and three definitions in one post is already kind of pushing it.
Any better ideas?
Replies from: Kaj_Sotala, Raemon
↑ comment by Kaj_Sotala · 2020-03-15T18:05:02.380Z · LW(p) · GW(p)
"Collaborative truth-seeking"?
There are rationalist-adjacents for whom collaborative truth-seeking on many topics would fail because they're not interested in zooming in so close on a belief. There are post-rationalists for whom collaborative truth-seeking would fail because they can just switch frames on the conversation any time they're feeling stuck. And to try to collaborate on truth-seeking with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.
↑ comment by Said Achmiz (SaidAchmiz) · 2020-03-15T05:38:52.215Z · LW(p) · GW(p)
Unless I am misunderstanding, wouldn’t orthonormal say that “switching frames” is actually a thing not to do (and that it’s something post-rationalists do, which is in conflict with rationalist approaches)?
Replies from: Raemon
↑ comment by Raemon · 2020-03-15T06:01:00.889Z · LW(p) · GW(p)
I believe the claim he was making (which I was endorsing) was to not switch frames in the middle of a conversation in a sort of slippery, goal-post-moving way (especially repeatedly, without stopping to clarify that you're doing that). That can result in poor communication.
I've previously talked a lot about noticing frame differences, which includes noticing when it's time to switch frames, but within the rationalist paradigm, I'd argue this is a thing you should do intentionally when it's appropriate for the situation, and flag when you're doing it, and make sure that your interlocutor understands the new frame.
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-15T17:40:00.522Z · LW(p) · GW(p)
I agree with this comment.
The rationalist way to handle multiple frames is to treat them either as different useful heuristics which can outperform naively optimizing from your known map, or as different hypotheses for the correct general frame, rather than as tactical gambits in a disagreement.
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2020-03-22T00:12:15.464Z · LW(p) · GW(p)
There's a set of post-rationalist norms where switching frames isn't a conversational gambit; it's expected and part of a generative process for solving problems and creating closeness. I would love to see people be able to switch between these different types of norms, as it can be equally frustrating when you're trying to vibe with people who can only operate through rationalist frames.
comment by cousin_it · 2020-03-15T08:04:32.229Z · LW(p) · GW(p)
In terms of conversation style, I'd define a "rationalist" as someone who's against non-factual objections to factual claims: "you're not an expert", "you're motivated to say this", "you're friends with the wrong people", "your claim has bad consequences" and so on. An intermediate stage would be "grudging rationalist": someone who can refrain from using such objections if asked, but still listens to them, and relapses to using them when among non-rationalists.
Replies from: gilch, orthonormal
↑ comment by orthonormal · 2020-03-15T17:41:10.467Z · LW(p) · GW(p)
I'm not sure our definitions are the same, but they're very highly correlated in my experience.
comment by DanielFilan · 2020-03-14T05:31:13.984Z · LW(p) · GW(p)
A rationalist, in the sense of this particular community, is someone who is trying to build and update a unified probabilistic model of how the entire world works, and trying to use that model to make predictions and decisions.
... the entire world? As far as I can tell, the vast majority of rationalists would like to have an accurate probabilistic model of the entire world, but are only trying to maintain and update small-ish relevant parts of it. For example, I have (as far as I can recall) never tried to build any model of Congolese politics (until writing this sentence, actually, when I stopped to consider what I do believe about it), nor tried to propagate what I know that relates to Congolese politics (which is non-zero) to other topics.
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-14T06:17:52.728Z · LW(p) · GW(p)
You have a prior on Congolese politics, which draws from causal nodes like "central Africa", "post-colonialism", and the like; the fact that your model is uncertain about it (until you look anything up or even try to recall relevant details) doesn't mean your model is mute about it. It's there even before you look at it, and there's been no need to put special effort into it before it was relevant to a question or decision that mattered to you.
I'm just saying that rationalists are trying to make one big map, with regions filled in at different rates (and we won't get around to everything), rather than trying to make separate map-isteria.
Replies from: DanielFilan
↑ comment by DanielFilan · 2020-03-14T16:37:28.787Z · LW(p) · GW(p)
I agree that my global map contains a region for Congolese politics. What I'm saying is that I'm not trying to maintain that bit of the map, or update it based on new info. But I guess as long as the whole map is global and I'm trying to update the global map, that suffices for your definition?
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-14T20:11:41.950Z · LW(p) · GW(p)
It does.
comment by Gordon Seidoh Worley (gworley) · 2020-03-14T02:12:34.597Z · LW(p) · GW(p)
I feel like this fails to capture some important features of each of these categories as they exist in my mind.
- The rationalist category can reasonably include people who are not building unified probabilistic models, even if LW-style rationalists are Bayesians, because they apply similarly structured epistemological methods even if their specific methods are different.
- The post-rationalist category can be talked about constructively, although it's a bit hard to do this in a way that satisfies everyone, especially rationalists, because it requires giving up commitment to a single ontology as the only right ontology.
↑ comment by Raemon · 2020-03-14T06:43:24.072Z · LW(p) · GW(p)
The post-rationalist category can be talked about constructively, although it's a bit hard to do this in a way that satisfies everyone, especially rationalists, because it requires giving up commitment to a single ontology as the only right ontology.
What's the distinction between this description, and the way orthonormal described post-rationalists?
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2020-03-15T01:08:55.201Z · LW(p) · GW(p)
I see their description as set up against the definition of rationalist, so an eliminative description that says more about what it is not than what it is.
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-15T01:54:11.912Z · LW(p) · GW(p)
Like "non-Evangelical Protestant", a label can be useful even if it's defined as "member of this big cluster but not a member of this or that major subcluster". It can even have more unity on many features than the big cluster does.
↑ comment by philh · 2020-03-16T15:54:59.251Z · LW(p) · GW(p)
The post-rationalist category can be talked about constructively, although it’s a bit hard to do this in a way that satisfies everyone, especially rationalists, because it requires giving up commitment to a single ontology as the only right ontology.
To clarify syntax here, and then to ask the appropriate follow up question... do you mean more like
- In order to give a satisfying constructive definition of post rationalists, one must give up commitment to a single ontology
(Which would be surprising to me - could you elaborate?)
or more like
- The constructive definition of post rationalist would include something like "a person who has given up commitment to a single ontology"
(In which case I don't see why it would be hard to give that definition in a way that satisfies rationalists?)
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2020-03-16T16:27:04.939Z · LW(p) · GW(p)
To some extent I mean both things, though more the former than the latter.
I'll give a direct answer, but first consider this imperfect comparison, which I think gives some flavor of how it seemed to me the OP is approaching the post-rationalist category: it might evoke in a self-identified rationalist the sort of feeling a post-rationalist would have on seeing themselves explained the way they are here.
Let's give a definition for a pre-rationalist that someone who was a pre-rationalist would endorse. They wouldn't call themselves a pre-rationalist, of course; more likely they'd call themselves something like a normal, functioning adult. They might describe themselves like this, in relation to epistemology:
A normal, functioning adult is someone who cares about the truth.
They then might describe a rationalist like this:
A rationalist is someone who believes certain kinds of or ways of knowing truth are invalid, only special methods can be used to find truth, and other kinds of truths are not real.
There's a lot going on here. The pre-rationalist is framing things in ways that make sense to them, which is fair, but it also means they are somewhat unfair to the rationalist, because in their heart what they see is some annoying person who rejects things they know to be true because those things don't fit within some system that the rationalist, from the pre-rationalist's point of view, made up. They see the rationalist as a person disconnected from reality and tied up in their special notion of truth. Compare the way that, to a non-rationalist outsider, rationalists can appear arrogant, idealistic, foolish, unemotional, etc.
I ultimately think something similar is going on here. I don't think this is malicious, only that orthonormal doesn't have an inside view of what it would mean to be a post-rationalist and so offers a definition that is defined in relation to being a rationalist, just as a pre-rationalist would offer a definition of rationalist set up in contrast to their notion of what it is to be "normal".
So yes I do mean that "in order to give a satisfying constructive definition of post rationalists, one must give up commitment to a single ontology" because this is the only way to give such a definition from the inside and have it make sense.
I think the problem is actually worse than this, which is why I haven't proffered my own definition here. I don't think there's a clean way to draw lines around the post-rationalist category and have it capture all of what a post-rationalist would consider important because it would require making distinctions that are in a certain sense not real, but in a certain sense are. You might say that the post-rationalist position is ultimately a non-dual one as a way of pointing vaguely in the direction of what I mean, but it's not that helpful a pointer because it also is only a useful one if you have some experience to ground what that means.
So if I really had to try to offer a constructive definition, it would look something like a pointer to what it is like to think in this way so that you could see it for yourself, but you'd have to do that seeing all on your own, not through my words; it would be highly contextualized to fit the person I was offering the definition to; and in the end it would effectively have to make you, at least for a moment, into a post-rationalist, even if beyond that moment you didn't consider yourself one.
Now that I've written all this, I realize this post in itself might serve as such a pointer to someone, though not necessarily you, philh.
↑ comment by Raemon · 2020-03-14T22:23:48.766Z · LW(p) · GW(p)
Also:
The rationalist category can reasonably include people who are not building unified probabilistic models, even if LW-style rationalists are Bayesians, because they apply similarly structured epistemological methods even if their specific methods are different.
I think this is the part of the post where orthonormal is explicitly drawing a boundary that isn't yet consensus. (So, yes, there's probably a disagreement here).
I think there is a meaningful category of "people who use similarly structured epistemological methods, without necessarily having a unified probability model." There's a separate, smaller category of "people doing the unified probabilistic model thing."
One could argue that either of those makes sense to call a rationalist, but you at least need to reference those different categories sometimes.
comment by orthonormal · 2020-03-24T19:53:50.471Z · LW(p) · GW(p)
I wish I'd remembered to include this in the original post (and it feels wrong to slip it in now), but Scott Aaronson neatly paralleled my distinction between rationalists and post-rationalists when discussing interpretations of quantum mechanics:
But the basic split between Many-Worlds and Copenhagen (or better: between Many-Worlds and “shut-up-and-calculate” / “QM needs no interpretation” / etc.), I regard as coming from two fundamentally different conceptions of what a scientific theory is supposed to do for you. Is it supposed to posit an objective state for the universe, or be only a tool that you use to organize your experiences?
Scott tries his best to give a not-answer and be done with it, which is in keeping with my categorization of him as a prominent rationalist-adjacent.
Replies from: vanessa-kosoy
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-04-04T21:16:26.628Z · LW(p) · GW(p)
Wait. Does it mean that, given that I prefer instrumentalism over realism in metaphysics and Copenhagen over MWI in QM (up to some nuances [LW · GW]), I am a post-rationalist now? That doesn't feel right. I don't believe that the rationalist project is "misguided or impossible", unless you use a very narrow definition of the "rationalist project". Here [LW(p) · GW(p)] and here [LW(p) · GW(p)] I defended what is arguably the core of the rationalist project.
Replies from: orthonormal
↑ comment by orthonormal · 2020-04-05T01:36:48.078Z · LW(p) · GW(p)
It's not the same distinction, but I expect it's correlated. The correlation isn't strong enough to prevent people from being in the other quadrants, of course. :-)
comment by Nick_Tarleton · 2020-03-15T20:35:02.330Z · LW(p) · GW(p)
if there's a domain where the model gives two incompatible predictions, then as soon as that's noticed it has to be rectified in some way.
What do you mean by "rectified", and are you sure you mean "rectified" rather than, say, "flagged for attention"? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn't try to be consistent. I believe 'immediately update your model somehow when you notice an inconsistency' is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follow it], and I don't think this belief is opposed to "rationalism", which should only require not indefinitely tolerating inconsistency.)
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-16T03:51:48.998Z · LW(p) · GW(p)
The next paragraph applies there: you can rectify it by saying it's a conflict between hypotheses / heuristics, even if you can't get solid evidence on which is more likely to be correct.
Cases where you notice an inconsistency are often juicy opportunities to become more accurate.
comment by waveman · 2020-03-14T00:16:41.191Z · LW(p) · GW(p)
I found the framework in the book "Action Inquiry" by Bill Torbert very helpful in this context. In Torbert's framework, a typical young rationalist would be in "expert" mode. There are many good things to be had in later levels.
A brief outline of the framework is here https://www.madstonblack.com.sg/wp-content/uploads/2016/05/Cook-Greuter-maturity-stages.pdf
The post-rationalists, in this view, do not think that rationalism is wrong but that it has limitations. For example, a more relaxed attitude to the truth of one's theories can leave you more open to new information and to other ways of seeing things. A recent conversation I had with a young rationalist illustrates this. He criticised me for denying the 'science' showing, in his view, that statins are highly beneficial medications, and felt I was succumbing to woo-woo in being sceptical. I tried to argue that it is not a simple matter of science versus woo-woo. The scientific process is influenced by financial incentives, career incentives, egos, ideologies, and the sometimes excessive influence of high-status figures, especially in medicine; the ideal of open and complete publication of data, methods, and results is by no means met. At the same time, one should not assume that with 15 minutes + google you can do better than a highly trained specialist.
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-14T04:04:37.064Z · LW(p) · GW(p)
Re: your anecdote, I interpret that conversation as one between a person with a more naive view of how the world works and one with a more sophisticated understanding. Both people in such a conversation, or neither of them, could be rationalists under this framework.
comment by philh · 2020-03-14T14:07:50.510Z · LW(p) · GW(p)
if there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.
This feels excessive to me, but maybe you didn't intend it as strongly as I interpret.
I do think it's the case that if you have incompatible predictions, something is wrong. But I think often the best you can do to correct it is to say something like...
"Okay, this part of my model would predict this thing, and that part would predict that thing, and I don't really know how to reconcile that. I don't know which if either is correct, and until I understand this better I'm going to proceed with caution in this area, and not trust either of those parts of my model too much."
Does that seem like it would satisfy the intent of what you wrote?
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-14T20:11:15.753Z · LW(p) · GW(p)
Yes, it does. The probabilistic part applies to different parts of my model as well as to outputs of a single model part.
comment by Shmi (shminux) · 2020-03-14T01:40:18.376Z · LW(p) · GW(p)
This matches my interpretation of the "community". Personally I am more of a post-rationalist type, with an instrumentalist/anti-realist bent philosophically, and think that the concept of "the truth" is the most harmful part of rationality teachings. Replacing "true" with "high predictive accuracy" everywhere in the Sequences would be a worthwhile exercise.
comment by Ben Pace (Benito) · 2020-03-13T20:58:36.437Z · LW(p) · GW(p)
I like this, it's simple, it resolved conceptual tensions I had, and I will start using this. (Obvs I should check in in a few months to see if this worked out.)
comment by Chris_Leong · 2020-03-14T01:05:17.220Z · LW(p) · GW(p)
Thanks, I thought this was useful, especially dividing it into three categories instead of two.
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-08-01T19:35:37.811Z · LW(p) · GW(p)
I strongly disagree with this definition of a rationalist. I think it's way too narrow, and assumes a certain APPROACH to "winning" that is likely incorrect.
comment by Viliam · 2020-03-15T00:41:58.017Z · LW(p) · GW(p)
A related thing I was thinking about for some time: Seems to me that the line between "building on X" and "disagreeing with X" is sometimes unclear, and the final choice is often made because of social reasons rather than because of the natural structure of the idea-space. (In other words, the ideology is not the community; therefore the relations between two ideologies often do not determine the relations between the respective communities.)
Imagine that there was a guy X who said some wise things: A, B, and C. Later, there was another guy Y who said: A, B, C, and D. Now depending on how Y feels about X, he could describe his own wisdom as either "standing on shoulders of giants, such as X", or "debunking of teachings of X, who was foolishly ignorant about D". (Sometimes it's not really Y alone, but rather the followers of Y, who make the choice.) Two descriptions of the same situation; very different connotations.
To give a specific example, is Scott Alexander a post-rationalist? (I am not sure whether he ever wrote anything on this topic, but even if he did, let's ignore it completely now, because... well, he could be mistaken about where he really belongs.) Let's try to find out the answer based on his online behavior.
There are some similarities: He writes a blog outside of LW. He goes against some norms of LW (e.g. he debates politics). He is admired by many people on LW, because he writes things they find insightful. At the same time, a large part of his audience disagrees with some core LW teachings (e.g. all religious SSC readers presumably disagree with LW taking atheism as the obviously rational conclusion).
So it seems like he is in a perfect position to brand himself as something that means "kinda like the rationalists, only better". Why didn't this happen? First, because Scott is not interested in doing this. Second, because Scott writes about the rationalist community in a way that doesn't even allow his fans (e.g. the large part that disagrees with LW) to do this for him. Scott is loyal to the rationalist project and community.
If we agree that this is what makes Scott a non-post-rationalist, despite all the similarities with them, then it provides some information about what being a post-rationalist means. (Essentially, what you wrote in the article.)
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-15T01:41:28.421Z · LW(p) · GW(p)
Scott could do all those things and be a rationalist-adjacent. He's a rationalist under my typology because he shares the sincere yearning and striving for understanding all of the things in one modality, even if he is okay with the utility of sometimes spending time in other modalities. (Which he doesn't seem to, much, but he respects people who do; he just wants to understand what's happening with them.)
comment by Dagon · 2020-03-13T22:20:44.117Z · LW(p) · GW(p)
I'm always a bit suspicious of identity or membership labels for humans. There are always overlap, change-over-time, and boundary cases that tend to make me throw up my hands and say "ok, be whatever you want!"
In this post, I'm confused by the phrase "in the sense of this particular community" for a description that does not mention community. The definition seems to be closer to "rationality-seeker" as a personal belief and behavior description than "rationalist" as a member or group actor.
Replies from: orthonormal
↑ comment by orthonormal · 2020-03-14T00:10:08.348Z · LW(p) · GW(p)
I'm confused by the phrase "in the sense of this particular community" for a description that does not mention community.
I'm distinguishing this sense of rationalist from the philosophical school that long predated this community and has many significant differences from it. Can you suggest a better way to phrase my definition?