post by [deleted] · GW · 0 comments

This is a link post for

comment by Benquo · 2019-08-21T16:25:55.150Z · LW(p) · GW(p)

The rhetorical structure of this post seems to imply a substantially different model of who your audience is, and what sort of appeals will work, than the model explicitly described. Since the question of which arguments will work on your intended audience is actually the whole point of your post, I think you should do an internal double crux on this issue, the results of which I expect will change your entire strategic sense of what's going on in the Rationalist community, the value of schisms, etc. I'm happy to spend plenty of time in live conversation on this if you're interested and think seeking that sort of mutual understanding might be worth your time.

Explicitly, it seems like you're saying that the way in which woo ideas are being brought into the community discourse is objectionable, because people are simply adopting new fragmentary beliefs that on the surface blatantly contradict many of the other things they believe, without doing the careful work of translation, synthesis, and rigorous grounding of concepts. You're arguing that we should be alarmed by this trend, and engage in substantial retrenching. This is fundamentally an appeal to rational monism.

But rhetorically, you begin by offering a simple list of the names of things admitted into the conversation, and implicitly ask the reader to agree that this is objectionable before talking about method at all (and you don't go into enough detail on the type of skepticism and rigor you're suggesting for me to have a sense of whether I even agree). The implied model here is that, for most of your readers, appeals to reason are futile, and you can at best get them to reject some ideas as foreign.

I think that the second model - the one you used to decide how to write the post - is a better representation of the current state of the Rationalist community than the first one. I don't see much value in preserving or restoring the "integrity" of such a community (constituting in practice the cluster of people vaguely attracted to the Sequences, HPMoR, EA, the related in-person Bay Area community, CFAR, and the MIRI narrative). I see a lot of value in a version of this post clearly targeted to the remnant better described by the first model. It would be nice if we could communicate about this in a way that didn't hurt the feelings of the MOPs too badly, since they never wanted to hurt anyone.

Replies from: Davis_Kingsley, Benquo
comment by Davis_Kingsley · 2019-08-24T08:50:24.458Z · LW(p) · GW(p)

There's an important part missing in my current draft that has to do with the fact that much of the "esoteric" content is in fact not being openly pushed, but rather smuggled into the community via other methods; I think it's very difficult to translate/synthesize/ground concepts if you aren't being told about them until they've already taken over relevant parts of the community.

This is also why much of the post has a "sound the alarm" feeling. I think that if the community's institutions were operating more properly, it would be much more resilient to things being "smuggled in", which in turn would mean that people trying to spread these ideas would have to make a stronger/more reasoned case for them in order to get traction here.

As for the "list" format -- this post is somewhat based on a talk I gave at the CFAR alumni reunion last year, which was much better received than I'd anticipated. Several people told me they had similar concerns but weren't sure if it was just them or what, and if we're trying to "get the shields back online", just warning people that this is going on may be sufficient to prompt at least somewhat more careful thinking.

comment by Benquo · 2019-08-21T16:43:14.202Z · LW(p) · GW(p)

I'd put you in a cluster with Lahwran and Said Achmiz on this, if that helps clarify the gestalt I'm trying to identify. By contrast, I'd say that the cluster Benito's pointing to - which I'd say mainly publicly includes me, Jessicata, Zvi Mowshowitz, and Romeo, though obviously also Michael Vassar - is organized around the idea that if you honestly apply loose gestalt thinking, it can very efficiently and accurately identify structures in the world - all of which can ultimately be reconciled - but that this kind of honesty necessarily involves reprogramming ourselves to not be such huge tools, and most people, well, haven't done that, so they end up having to pick sides between perception and logic.

comment by Ben Pace (Benito) · 2019-08-21T01:11:20.758Z · LW(p) · GW(p)

I feel like you think this transgresses a boundary or is otherwise surprising, whereas I think it mostly seems fine. I'll write some thoughts regardless.

You write fairly broadly here. I know (broadly) what you're talking about, and I'm sad about it too, like when I'm regularly in spaces (e.g. CFAR) where I would've expected to have common knowledge that everyone's read the Sequences, understands the concept of a technical explanation of a technical explanation, knows how many bits of information it takes to justify a belief, etc., but I don't. I don't feel like "the epistemics are failing" is the coarse-grained description I'd use; I think there's more detail about which bits are going wrong and why (and which bits actually seem to be going quite excellently!), but I wanted to agree with feeling sad about this particular bit.

many of these ideas have essentially never been justified using the paradigm that the community already operates in

I am not sure whether you feel this way when reading LessWrong, though. If you scroll through the curated posts of the last few months [? · GW], I don't expect it will mostly seem like a lot of obviously terrible ideas are being treated unsceptically (though you're welcome to surprise me and say it seems just as bad!).

(A few counterexamples on LessWrong: Oliver wrote an attempted translation [LW(p) · GW(p)] for chakras the other day. Kaj's most popular post (277 karma!) was an attempt to explain a bunch of enlightenment/meditation stuff in non-mysterious terms [LW · GW], and he has a whole interesting sequence [? · GW] offering explicit models behind things like Internal Family Systems. After Scott began a discussion of cultural evolution, Vaniver wrote a post I found fascinating, Steelmanning Divination [LW · GW]. I wrote in pretty explicit language about my experience circling here [LW(p) · GW(p)]. Zvi has written in a 'postmodernist beat-poem' style about things that are out to get you and why choices are bad, but also tries to give simple, microeconomic explanations for how systems (like blackmail [LW · GW] and Facebook) can optimise to destroy all value. Back on the cultural evolution frame, Zvi [LW · GW] and Ben [LW · GW] have both elucidated explicit models for why the Sabbath is an important institution one should respect.)

(Not to mention the great obvious straightforward rationality writing, like Abram's recent Mistakes With Conservation of Expected Evidence [LW · GW] and loads more.)

So when you talk about weird/mysterious ideas not being explained with an explicit and clear epistemology, I do want to say I think people on LessWrong are often making that effort, and I think we've tried to signal that we have higher standards here. It's okay to write poetry and so on when you've not yet learned how to make your idea explicit, but the goal is a technical understanding, one that comes with an explicit, communicable model.

Replies from: Benquo, Benquo
comment by Benquo · 2019-08-21T16:29:13.099Z · LW(p) · GW(p)

My impression is that for the majority of my audience, my efforts to show how everything adds up to normality are redundant, and mostly they're going by a vague feeling. Overall, it seems to me that there are people trying to do the kind of translational work Davis is asking for, but the community is not, as a whole, applying the sort of discernment that would demand such work. Whether or not this is the problem Davis is trying to identify, I'm worried enough about it that LessWrong has been getting less and less interesting to me as a community to engage with. You're probably by far the person most worth reaching who isn't already in my "faction," such as it is, and Davis is one of the few others who seem to be trying to make sense of things at all.

Replies from: Davis_Kingsley
comment by Davis_Kingsley · 2019-08-24T08:52:14.510Z · LW(p) · GW(p)

Overall, it seems to me that there are people trying to do the kind of translational work Davis is asking for, but the community is not, as a whole, applying the sort of discernment that would demand such work.

Agreed, yeah. This is maybe the main thing I'm getting at -- I'm trying to shock people into realizing "hey, everything isn't fine, things are going wrong" and applying more discernment to what's going on in the community.

comment by Benquo · 2019-08-21T17:55:05.090Z · LW(p) · GW(p)

I don't feel like "the epistemics are failing" is the coarse-grained description I'd use; I think there's more detail about which bits are going wrong and why (and which bits actually seem to be going quite excellently!), but I wanted to agree with feeling sad about this particular bit.

I expect it would be quite useful both here and more generally for you to expand on your model of this.