Slack Club
post by quanticle · 2019-04-16T06:43:22.442Z · LW · GW · 21 comments
This is a link post for http://www.thelastrationalist.com/slack-club.html
This is a post from The Last Rationalist, which asks, generally, "Why do rationalists have such a hard time doing things as a community?" Their answer is that rationality selects for a particular smart-but-lazy archetype, one that values solving problems with silver bullets and abstraction rather than hard work and perseverance. This archetype is easily distractible and does not cooperate with other instances of itself, so an entire community of people conforming to this archetype devolves into valuing abstraction and specialized jargon over solving problems.
21 comments
Comments sorted by top scores.
comment by Raemon · 2019-04-16T21:18:37.160Z · LW(p) · GW(p)
I’m sort of confused about ‘slack’ not only getting bundled into the cluster of concepts here, but bundled so hard that the post was named after it.
I agree with the general claim about LW selection effects, but slack just seems like something that a) most people need, and b) broader societal forces are systematically destroying.
Replies from: ingres, Raemon
↑ comment by namespace (ingres) · 2019-04-17T02:54:29.299Z · LW(p) · GW(p)
I think it's a sort of Double Entendre? It's also possible the author didn't actually read Zvi's post in the first place. This is implied by the following:
Slack is a nerd culture concept for people who subscribe to a particular attitude about things; it prioritizes clever laziness over straightforward exertion and optionality over firm commitment.
In the broader nerd culture, slack is a thing from the Church of the SubGenius, where it means something more like an adversarial, zero-sum fight over who has to do all the work. In that context, the post title makes total sense.
For an example of this, see: https://en.wikipedia.org/wiki/Chez_Geek
Replies from: Raemon
comment by Gordon Seidoh Worley (gworley) · 2019-04-16T16:27:10.615Z · LW(p) · GW(p)
This archetype is easily distractible and does not cooperate with other instances of itself, so an entire community of people conforming to this archetype devolves into valuing abstraction and specialized jargon over solving problems.
Obviously there are exceptions to this, but as a first pass this seems pretty reasonable. For example, one thing I feel is going on with a lot of posts on LessWrong and posts in the rationalist diaspora is an attempt to write things the way Eliezer wrote them, specifically with a mind to creating new jargon to tag concepts.
My suspicion is that people see that Eliezer gained a lot of prestige via his writing, notice that this is one of the things he does in his writing (naming concepts with unusual names), and make the (reasonable) assumption that if they do something similar, maybe they will gain prestige from their own writing targeted at other rationalists.
I don't have a lot of evidence to back this up, other than to say I've caught myself having the same temptation at times, and I've thought a bit about this common pattern I see in rationalist writing and tried to formulate a theory of why it happens that accounts not only for why we see it here but also why I don't see it as much in other writing communities.
Replies from: Viliam, RobbBB, Richard_Kennaway
↑ comment by Viliam · 2019-04-16T22:18:21.609Z · LW(p) · GW(p)
My suspicion is that people see that Eliezer gained a lot of prestige via his writing ... and I suspect people make the (reasonable) assumption that if they do something similar maybe they will gain prestige from their writing targeted to other rationalists.
I'd like to emphasize the idea "people try to copy Eliezer", separately from the "naming new concepts" part.
It was my experience from Mensa that highly intelligent people are often too busy participating in pissing contests, instead of actually winning at life by engaging in lower-status behaviors such as cooperation or hard work. And, Gods forgive me, I believed we (the rationalist community) were better than that. But perhaps we are just doing it in a less obvious way.
Trying to "copy Eliezer" is a waste of resources. We already have Eliezer. His online articles can be read by any number of people; at least this aspect of Eliezer scales easily. So if you are tempted to copy him anyway, you should consider the hypothesis that you actually try to copy his local status. You have found a community where "being Eliezer" is high-status, and you are unconsciously pushed towards increasing your status. (The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.)
Instead, the right thing to do is:
- cooperate with Eliezer, especially if your skills complement his. (The question is how good Eliezer himself is at this kind of cooperation. I am on the opposite side of the planet, so I have no idea.) Simply put, if you do something that Eliezer needs to get done but doesn't have a comparative advantage at, you free his hands and head to do the things he actually excels at. Yes, this can mean doing low-status things. Again, the question is whether you are optimizing for your status, or for something else.
- try alternative approaches where the rationalist community seems to have blind spots, such as Dragon Army, which really challenged the local crab mentality. My great wish is to see other people build their own experiments on top of this one: to read Duncan's retrospective, form their own idea of "we want to copy this, we don't want to copy that, and we want to introduce these new ideas", and then go ahead and actually do it. And post their own retrospective, etc. So that finally we may find a working model of a rationalist community that actually wins at life, as a community. (And of course, anyone who tries this has to expect strong negative reactions.)
I strongly suspect that the internet itself (the fact that rationalists often coordinate as an online community) is a negative pressure. The internet is inherently biased in favor of insight porn. Insights get "likes" and "shares"; verbal arguments receive fast rewards. Actions in the real world usually take a lot of time, and thus don't make for good online conversation. (Imagine that every few months you acquire one boring habit that makes you more productive, and as a cumulative result of ten such years you achieve your dreams. Impressive, isn't it? Now imagine a blog that every few months publishes a short article about the new boring habit. Such a blog would be a complete failure.) I would expect rationalists living close to each other, and thus mostly interacting offline, to be much more successful.
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-04-17T09:22:22.795Z · LW(p) · GW(p)
The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.
The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and wanted to distinguish themselves.
To the extent that there's a new person who has a similar founder position right now that's Scott Alexander and not anybody who self-identifies as post-rationalist.
Replies from: philh
↑ comment by philh · 2019-04-18T14:55:49.386Z · LW(p) · GW(p)
The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and wanted to distinguish themselves.
Here's a 2012 comment (predating the map by two years) in which someone describes himself as a post-rationalist to distinguish himself from rationalists: https://www.lesswrong.com/posts/p5jwZE6hTz92sSCcY/son-of-shit-rationalists-say#ryJabsxh7m9TPocqS [LW(p) · GW(p)]
The post rats may not have popularised the term as well as Scott did, but I think that's mostly just because Scott is way more popular than them.
To the extent that there’s a new person who has a similar founder position right now that’s Scott Alexander and not anybody who self-identifies as post-rationalist.
Well, the claim was about what the post rats were (consciously or not) trying to do, not about whether they were successful.
And I think Scott has rebranded the movement, in a relevant sense. There's a lot of overlap, but SSC is its own thing, with its own spinoffs. E.g. I believe most SSC readers don't identify as rationalists.
("Rebranding" might be better termed "forking".)
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-04-18T19:50:17.625Z · LW(p) · GW(p)
Will Newsome did use the term before, but I'm not aware of it being used to the extent that it's worthwhile to speak of him as someone who planned on being seen as a founder. If that was his intention, he would have written a lot more outside of IRC.
↑ comment by Rob Bensinger (RobbBB) · 2019-04-19T17:44:45.798Z · LW(p) · GW(p)
I agree with a bunch of these concerns. FWIW, it wouldn't surprise me if the current rationalist community still behaviorally undervalues "specialized jargon". (Or, rather than jargon, concept handles a la https://slatestarcodex.com/2014/03/15/can-it-be-wrong-to-crystallize-patterns/.) I don't have a strong view on whether rationalists undervalue or overvalue this kind of thing, but it seems worth commenting on since it's being discussed a lot here.
When I look at the reasons people ended up 'working smarter' or changing course in a good way, it often comes down to a new lens they started applying to something. I think one of the biggest problems the rationalist community faces is a lack of dakka and a lack of lead bullets. But I want to caution against treating abstraction and execution as too much of a dichotomy, such that we have to choose between "novel LW posts are useful and high-status" and "conscientiousness and follow-through are useful and high-status" and see-saw between the two.
The important thing is cutting the enemy, and I think the kinds of problems that rationalists are in an especially good position to solve require individuals to exhibit large amounts of execution and follow-through while (on a timescale of years) doing a large number of big and small course-corrections to improve their productivity or change their strategy.
It might be that we're doing too much reflection and too much coming up with lenses. It might also be that we're not doing enough grunt work and not doing enough reflection and lenscrafting. Physical tasks don't care whether we're already doing an abnormal amount of one or the other; the universe just hands us problems of a certain difficulty, and if we fall short on any of the requirements then we fail.
It might also be that this varies by individual, such that it's best to just make sure people are aware of these different concerns so they can check which holds true in their own circumstance.
↑ comment by Richard_Kennaway · 2019-04-16T16:43:06.841Z · LW(p) · GW(p)
I've thought a bit about this common pattern [name concepts with unusual names] I see in rationalist writing and tried to formulate a theory of why it happens that accounts not only for why we see it here but also why I don't see it as much in other writing communities.
I see the pattern a lot in "spiritual" writings. See, for example, the "Integral Spirituality" being discussed in another recent post.
Replies from: gworley, ChristianKl
↑ comment by Gordon Seidoh Worley (gworley) · 2019-04-16T18:04:39.626Z · LW(p) · GW(p)
I have two thoughts on this.
One is that different spiritual traditions have their own deep, complex systems of jargon that sometimes stretch back thousands of years through multiple translations, schisms, and acts of syncretism. So when you first encounter one, it can feel like it's a lot, and it's new, and why can't these people just talk normally.
Of course, most LW readers live in a world full of jargon even before you add on the LW jargon, much of it from STEM disciplines. People from outside that cluster feel much the same way about STEM jargon as the average LW reader may feel about spiritual jargon. I point this out merely because I realized, when you brought up the spiritual example, that I hadn't given a full account of what's different about rationalists: maybe it's that there's a tendency to make new jargon even when a literature search would reveal that existing jargon exists.
Which is relevant to your point and my second thought, which is that you are right: many things we might call "new age spirituality" have the exact same jargon-coining pattern in their writing as rationalist writing does, with nearly every author striving to elevate some metaphor to the level of a word so that it can become part of a wider shared approach to ontology.
This actually seems to suggest that my story is too specific, and that pointing to Eliezer's tendency to do this as a cause is maybe unfair: it may be a tendency that exists within many people, and something about the kind of people, or about the social incentives shared by rationalists and new age spiritualists, produces this behavior.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2019-04-16T18:40:39.026Z · LW(p) · GW(p)
I point this out merely because I realized, when you brought up the spiritual example, that I hadn't given a full account of what's different about rationalists: maybe it's that there's a tendency to make new jargon even when a literature search would reveal that existing jargon exists.
I don't think this is different for STEM, or cognitive science, or self-help. Having studied both CS and math, and some physics in my off-time, I can say that everyone constantly invents new names for all the things. To give you a taste, here is the first paragraph from the Wikipedia article on Tikhonov regularization:
Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.
You will find the same pattern of lots of different names for the exact same thing in almost all statistical concepts in the Wikipedia series on statistics.
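For concreteness, here is a minimal sketch (in standard notation of my own choosing, not taken from the quote above) of the single objective that all of those names label:

```latex
% Ridge regression / Tikhonov regularization / weight decay, in its simplest form:
% choose coefficients \beta that trade off data fit against coefficient size.
\hat{\beta} = \arg\min_{\beta} \;
  \underbrace{\lVert y - X\beta \rVert_2^2}_{\text{least-squares fit}}
  \;+\;
  \underbrace{\lambda \lVert \beta \rVert_2^2}_{\text{penalty on large coefficients}}
% The general Tikhonov form replaces \lambda \lVert \beta \rVert_2^2 with
% \lVert \Gamma \beta \rVert_2^2 for a chosen matrix \Gamma; ridge regression is
% the special case \Gamma = \sqrt{\lambda}\, I.
```

The different names reflect the different fields that rediscovered the formula, not different mathematics.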
↑ comment by ChristianKl · 2019-04-17T09:42:01.947Z · LW(p) · GW(p)
The color coding that was discussed there isn't anything the integral community came up with. Wilber looked around for existing paradigms of adult development, picked the one he liked best, and took its terms.
I understand what Wilber means when he says blue because I studied spiral dynamics in a context outside of Wilber's work. It's similar to when rationalists take the names of biases from the psychological literature that might not be known by wider society. It's quite different from EY making up new terms.
Wilber's whole idea about being integral is to take existing concepts from other domains.
comment by Dagon · 2019-04-19T18:50:40.391Z · LW(p) · GW(p)
(note: I may be part of the problem - I consider myself a subscriber to and student of the rationalist philosophy, but not necessarily a member of whatever is meant by "rationalist community". I don't know if your definition includes me or not.)
This topic might benefit from some benchmarking and comparison with other "communities". Which ones seem more effective than rationalists? Which ones seem less? I've been involved with a lot of physical community groups (homeowner associations, local charities, etc.), and in almost no case would I say the community is effective on its own - some have very effective leaders who manage to get a lot of impact out of the community.
comment by ChristianKl · 2019-04-17T09:19:03.684Z · LW(p) · GW(p)
Cooperation needs trust. Many rationalists are quite open towards people who are a bit strange and who would be rejected in many social circles. I talked with multiple people who think that this creates a problem of manipulative people entering the community (especially the Bay Area community) and trying to get other people to help them for their own ends. In an environment like that, it makes sense that members of the community are less willing to share resources with other members, and there is less cooperation.
Replies from: Viliam
↑ comment by Viliam · 2019-04-17T21:52:19.866Z · LW(p) · GW(p)
Seems to me that we have members at both extremes. Some of them drop all caution the moment someone else calls themselves a rationalist. Some of them freak out when someone suggests that rationalists should do something together, because that already feels too cultish to them.
My personal experience is mostly with the Vienna community, which may be unusual, because I haven't seen either extreme there. (Maybe I just didn't pay enough attention.) I learn about the extremes on the internet.
I wonder what the distribution would be in the Bay Area. Specifically, on one axis I would like to see people divided from "extremely trusting" to "extremely mistrusting", and on another axis, how deeply those people are involved with the rationalist community. That is, whether the extreme people are in the center of the community, or somewhere on the fringe.
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-04-19T11:44:32.518Z · LW(p) · GW(p)
I don't think it's well modeled as one dimension of trust. It feels to me like there's something like shallow trust, where people are quite open to cooperating on a low level but quite unwilling to commit to bigger projects together.
Replies from: Viliam
↑ comment by Viliam · 2019-04-19T22:19:03.339Z · LW(p) · GW(p)
I think I get what you mean.
Maybe this is somehow related to the "openness to experience" (and/or autism). If you are willing to interact with weird people, you can learn many interesting things most people will never hear about. But you are also more likely to get hurt in a weird way, which is probably the reason most people stay away from weird people.
And as a consequence, you develop some defenses, such as allowing interaction only up to some specific degree, and no further. Instead of filtering for safe people, you filter for safe circumstances. Which protects you, but also cuts you off from possible gains, because in reality some people are more trustworthy than others, and trustworthiness correlates negatively with some types of weirdness.
Like, instead of "I would probably be okay inviting X and Y to my home, but I have a bad feeling about inviting Z to my home", you are likely to have a rule like "meeting people in a cafeteria is okay, inviting them home is taboo". Similarly, "explaining concepts to someone is okay, investing money together is not".
So on one hand, you are willing to tell a complete stranger in a cafeteria the story of your religious deconversion and your opinion on Boltzmann brains (which would be shocking to average people); but you will probably never spend a vacation with the people who are closest to you in intellect and values (which average people do all the time).
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-04-19T23:18:51.204Z · LW(p) · GW(p)
Yes, I think that's roughly where I'm pointing.
comment by Nicholas / Heather Kross (NicholasKross) · 2021-12-31T07:20:55.425Z · LW(p) · GW(p)
The silver bullet is to use a lot of lead bullets. Big if true, and appealingly elegant...