Guarding Against the Postmodernist Failure Mode

post by NoSignalNoNoise (AspiringRationalist) · 2014-07-08T01:34:33.440Z · LW · GW · Legacy · 78 comments

The following two paragraphs got me thinking some rather uncomfortable thoughts about our community's insularity:

We engineers are frequently accused of speaking an alien language, of wrapping what we do in jargon and obscurity in order to preserve the technological priesthood. There is, I think, a grain of truth in this accusation. Defenders frequently counter with arguments about how what we do really is technical and really does require precise language in order to talk about it clearly. There is, I think, a substantial bit of truth in this as well, though it is hard to use these grounds to defend the use of the term "grep" to describe digging through a backpack to find a lost item, as a friend of mine sometimes does. However, I think it's human nature for members of any group to use the ideas they have in common as metaphors for everything else in life, so I'm willing to forgive him.

The really telling factor that neither side of the debate seems to cotton to, however, is this: technical people like me work in a commercial environment. Every day I have to explain what I do to people who are different from me -- marketing people, technical writers, my boss, my investors, my customers -- none of whom belong to my profession or share my technical background or knowledge. As a consequence, I'm constantly forced to describe what I know in terms that other people can at least begin to understand. My success in my job depends to a large degree on my success in so communicating. At the very least, in order to remain employed I have to convince somebody else that what I'm doing is worth having them pay for it.

 - Chip Morningstar, "How to Deconstruct Almost Anything: My Postmodern Adventure"

The LW/MIRI/CFAR memeplex shares some important features with postmodernism, namely the strong tendency to go meta, a large amount of jargon that is often impenetrable to outsiders and the lack of an immediate need to justify itself to them.  This combination takes away the selective pressure that stops most groups from going totally crazy.  As far as I can tell, we have not fallen into this trap, but since people tend to fail to notice when their in-group has gone crazy, this is at best weak evidence that we haven't; furthermore, even assuming that we are in fact perfectly sane now, it will still take effort to maintain that state.

Based on the paragraphs quoted above, having to use our ideas to produce something that outsiders would value, or at least explain them in ways that intelligent outsiders can understand well enough to criticize, would create this sort of pressure.  Has anyone here tried to do either of these to a significant degree?  If so, how, and how successfully?

What other approaches can we take to check (and defend) our collective sanity?

78 comments

Comments sorted by top scores.

comment by chaosmage · 2014-07-08T06:59:47.750Z · LW(p) · GW(p)

I'm unlucky enough to know a few postmodernists, and what I find most striking about them is that they try very hard to stay out of conflict with each other.

That makes sense because when they do argue, due to their lack of a clear method for assessing who (if anybody) is in the right, the arguments are unproductive, frustrating, and can get quite nasty.

So I don't think we're too similar to them. That said, the obvious way to check our sanity would be to have outsiders look at us. In order to do that, we'd probably have to convince outsiders to give a fuck about us.

Replies from: Algernoq
comment by Algernoq · 2014-07-13T15:09:02.749Z · LW(p) · GW(p)

As an outsider, here are some criticisms (below). I've read all of HPMOR and some of the sequences, attended a couple of meetups, and am signed up for cryonics. But, I have little interest in reading more of the sequences and no interest in more in-person meetings.

  • Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.

  • In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.). It's not at all clear if general AI is a significant threat. It's also highly doubtful that the best way to address this threat is writing speculative research papers, because I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it's necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

  • LW has a cult-like social structure. The LW meetups (or, the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts for the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality "training camps" do this to an even greater extent. LW recruiting (HPMOR, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

  • Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on (X), and sell a long plan that never quite achieves (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should"). Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.

  • LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I'm struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I "should" do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to drop out of their PhD program, go to "training camps" for a few months, then try and fail to start a startup, increasing the likelihood that they'll go back and work for LW at below-market rates and earn less money for the rest of their life due to not having the PhD from the top-10 school). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.

I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don't mind Harry's narcissism) and LW is fun to read, but that's as far as I want to get involved. Unless, that is, there's someone here who has experience programming vision-guided assembly-line robots who is looking for a side project with world-optimization potential.

Replies from: AlexanderRM
comment by AlexanderRM · 2014-11-14T22:57:16.242Z · LW(p) · GW(p)

If I may just focus on one of your critiques, I'd like to say that the thing about the cult-like structure... I'm not sure whether that actually results in the cult effect on LW or not, but the general idea both intrigues and terrifies me.

Especially the "contempt for less-rational Normals" thing - I haven't noticed that in myself but the possibility* of that happening by itself is... interesting, with what I know of LW. I have almost never seen anyone on LW really condemning anyone specific as "irrational", except maybe a couple celebrities, or doing anything that could relate to actively urging others to sever ties, but I have this image that individuals in LW could potentially often sever ties with people they see as less rational as a result, without anybody actually intending it or even realizing it.

*or at least, my views of people who I perceive as being less rational are pretty much unchanged from before LW, which is the important part. Especially in the case of social interaction, rather than discussing serious issues. It's possible I might be unusual compared to some nerds on this; I tend to not care too much whether or not the people I interact with are especially smart or even that our interactions are anything but vapid nonsense, as long as I enjoy interacting with them.

comment by 9eB1 · 2014-07-08T09:16:23.829Z · LW(p) · GW(p)

Scott Alexander recently posted a link to this article which was very interesting. After reading it, the difference between postmodernism and LW rationality seems very large. It doesn't directly address your point, but you may find it interesting.

Separately, I think that you are exaggerating the tendencies LW shares with postmodernism. While LessWrongers love going meta (and they seem to love it even more in person than on the site), what you actually see in discussions here and on rationality blogs is requests to go in either the meta or object-level directions as required by the interlocutor. CFAR specifically has lessons on going toward object-level. Comparing the jargon of postmodernism and LessWrong is not really an equal comparison either. Postmodernism is oftentimes intentionally obscure, and sometimes redefines words to very surprising meanings (see the above linked article), while on LessWrong people seem to go to some pains to coin new language only when old language is insufficient, and explicitly consider what appropriate names would be (the major exception to this is perhaps language coined during the time of the sequences that is still widely used). LW doesn't have a strong need to justify itself to outsiders, but members of Less Wrong seem to mostly have explicit desire to spread rationality, so there is some need. Postmodernism, on the other hand, seems like mostly an insiders-only club. Compare Spreading Postmodernism with Spreading Rationality.

Replies from: TheAncientGeek, David_Gerard, TheAncientGeek
comment by TheAncientGeek · 2014-07-08T10:24:26.282Z · LW(p) · GW(p)

LessWrong people seem to go to some pains to coin new language only when old language is insufficient

The pains don't always stretch to learning philosophy, which EY hasn't done, and advises against, with the result that LW jargon in fact often does reinvent philosophical jargon.

Replies from: 9eB1, Emile
comment by 9eB1 · 2014-07-08T19:22:58.821Z · LW(p) · GW(p)

Of course, that's why I said "some pains" and not "great pains." People are aware of the issue and generally avoid it when it's easy to do so, or there will be comments pointing out that something is just a different name for an older term. Also, I excluded Eliezer's sequences and the resulting jargon for a reason.

comment by Emile · 2014-07-08T12:09:34.131Z · LW(p) · GW(p)

LW jargon in fact often does reinvent philosophical jargon.

... but does so in a way that is probably more accessible to the average 21st century geek than the original philosophical jargon was, so it's not a great loss, because there are more geeks who don't understand philosophical jargon than philosophers who don't get geek references.

Replies from: David_Gerard, TheAncientGeek
comment by David_Gerard · 2014-07-08T12:52:56.538Z · LW(p) · GW(p)

It is a great loss, because the original terms are nowhere to be seen. So if someone wants to read, say, non-amateur writing on the idea and its history, they're out of luck.

Replies from: Emile
comment by Emile · 2014-07-08T15:59:33.299Z · LW(p) · GW(p)

I sorta agree - I guess it depends on how valuable it is to be able to read Philosophy; some (Lukeprog, Eliezer) seem to consider it mostly a waste of time, others don't, and I'm not really qualified to tell.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-08T16:48:03.289Z · LW(p) · GW(p)

We're talking here specifically about the amateur philosophy, presented with neologisms as if it's original thought, when it simply isn't. You seem to be saying that it's valuable if EY writes about it but not if professional philosophers do - surely that's not what you mean?

comment by TheAncientGeek · 2014-07-08T13:07:05.061Z · LW(p) · GW(p)

It's a great loss because it prevents constructive dialogue between the two communities. There is quite a lot that US broken in the sequences...not so much in terms of being wrong as in terms of being unclear, addressing the wrong question etc...and it looks likely to stay that way.

Replies from: Emile, Jayson_Virissimo
comment by Emile · 2014-07-08T15:57:54.169Z · LW(p) · GW(p)

There is quite a lot that US broken in the sequences

That was supposed to be "IS", right?

comment by Jayson_Virissimo · 2014-07-09T05:18:30.092Z · LW(p) · GW(p)

It's a great loss because it prevents constructive dialogue between the two communities.

Yes, this is why I recommend that LWers read Robert Nozick.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-09T12:31:32.865Z · LW(p) · GW(p)

Well, I like Nozick, but I like a lot of other people as well.

comment by David_Gerard · 2014-07-08T12:51:50.798Z · LW(p) · GW(p)

on LessWrong people seem to go to some pains to coin new language only when old language is insufficient

Are you sure? One of the biggest problems with LW is inventing jargon for philosophical ideas that have had names for a couple of thousand years. This is problematic if the interested reader wants to learn more.

Replies from: Nornagest
comment by Nornagest · 2014-07-08T16:59:02.843Z · LW(p) · GW(p)

Example? I believe you, but every time I've personally gone looking for a term in the philosophy literature I've found it.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-09T07:52:23.399Z · LW(p) · GW(p)

e.g. "fallacy of grey" is an entirely local neologism.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-07-09T12:09:14.804Z · LW(p) · GW(p)

What's the standard term?

Replies from: None
comment by [deleted] · 2014-07-09T12:20:12.085Z · LW(p) · GW(p)

It's a form of the continuum fallacy.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-09T15:06:02.359Z · LW(p) · GW(p)

gwern holds that it's actually false balance. Might be a mix. But one or both should have been named IMO.

Replies from: None
comment by [deleted] · 2014-07-09T16:09:05.349Z · LW(p) · GW(p)

That's interesting. False balance doesn't seem to replace anything with a continuum. In particular I'm having trouble rephrasing their examples as fallacy of grey examples.

But, eh, I trust gwern.

comment by TheAncientGeek · 2014-07-08T10:43:31.512Z · LW(p) · GW(p)

Organisation A can be like organisation B in every way except their doctrine. It has been remarked, not least by RationalWiki, that LW is like Ayn Rand's Objectivism, although doctrinally they are poles apart.

It is perfectly possible for an organisation to pay lip service to outreach without making the changes and sacrifices needed for real engagement.

Replies from: 9eB1, Luke_A_Somers
comment by 9eB1 · 2014-07-08T20:05:55.945Z · LW(p) · GW(p)

With respect to the point that two organizations CAN be similar except in doctrine, I agree, but I don't think that's true for Less Wrong and postmodernism, hence my comment. I was directly addressing the points of comparison the poster argued for.

If you are speaking of Objectivism the organization led by Ayn Rand rather than Objectivism the collective philosophy of Ayn Rand, the differences are pretty massive. Objectivism was a bona fide cult of personality, while the vast majority of people on Less Wrong have never met Eliezer and he no longer even engages with the site. Watch the first part of this interview and compare it with Less Wrong. Perhaps this could be argued specifically of the rationalists living in the Bay Area, but I don't know enough to say.

The article on RationalWiki has been updated and now seems substantially fairer than it was when I last saw it a few years ago. It doesn't draw any direct comparison to Objectivism, and now says that the "appearance of a cult has faded." That said, I don't put much stock in their opinions on such things.

It doesn't seem to me that people on Less Wrong merely pay lip service to outreach (although once again we are certainly in agreement that such a thing is possible!). There seem to be a lot of posts on meetups here, advice on how to get new attendees, etc. Making "changes and sacrifices needed for real engagement" isn't straightforward in practice (and engagement isn't an unqualified good). You have to draw new members without betraying your core principles and without it becoming a place the existing members don't want to participate in.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-09T18:16:43.772Z · LW(p) · GW(p)

Objectivism did and does have plenty of adherents who never met Rand. Personal contact isn't a prerequisite for a personality cult.

Replies from: 9eB1
comment by 9eB1 · 2014-07-11T00:29:07.959Z · LW(p) · GW(p)

It seems you are correct. I had a definition in mind for a cult of personality which was much narrower than what it actually means, upon looking it up. Nonetheless, so far you've implied a lot more than you've actually stated, and your arguments about "what is possible" are less interesting to me than arguments about "what is." Frankly, I find argumentation by implication annoying, so I'm tapping out.

comment by Luke_A_Somers · 2014-07-08T13:27:53.705Z · LW(p) · GW(p)

Quick question: how much do these doctrinal differences matter?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-08T15:03:50.844Z · LW(p) · GW(p)

Matter to whom? If you join that kind of organisation, you are probably looking for answers. If not, maybe not.

comment by RomeoStevens · 2014-07-08T02:56:24.033Z · LW(p) · GW(p)

What other approaches can we take to check (and defend) our collective sanity?

Do rationalists win when confounding factors of intelligence, conscientiousness, and anything else we can find are corrected for?

Do they make more money? Have greater life satisfaction? Fewer avoidable tragedies? Reliably bootstrap themselves out of mental and physical problems?

I'm not sure what the answer is.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-07-08T03:20:29.145Z · LW(p) · GW(p)

I suspect the answer is "no". But I don't know why you would correct for intelligence &c. in your analysis. Attracting a group of intelligent people is kinda hard to pull off and of course many, many tradeoffs will be made to make it possible.

Replies from: None
comment by [deleted] · 2014-07-08T07:38:06.170Z · LW(p) · GW(p)

People who are doing well enough already won't be drawn to something with self-improvement as one of its de facto major selling points.

If rationalists produce valuable memes, those memes are likely to enter popular culture and lose their association with rationalists. Who credits sociology for inventing the term "role model"?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-07-10T11:29:39.660Z · LW(p) · GW(p)

People who are doing well enough already won't be drawn to something with self-improvement as one of its de facto major selling points.

This is probably true in general, but LW overlaps with H+ memes, and H+ is radical self improvement, meaning that LW might attract people who are doing well, but aspire to be doing even better.

Besides, I think the people who look for self-improvement because they are not doing well would be more interested in e.g. tackling depression, which is a small minority of LW content.

comment by Mestroyer · 2014-07-08T08:47:05.905Z · LW(p) · GW(p)

This reminds me of this SMBC. There are fields (modern physics comes to mind too) where no one outside of them can understand what they are doing anymore, yet that appear to have remained sane. There are more safeguards against postmodernists' failure mode than this one. In fact, I think there is a lot more wrong with postmodernism than that they don't have to justify themselves to outsiders. Math and physics have mechanisms determining what ideas within them get accepted that imbue them with their sanity. In math, there are proofs. In physics, there are experiments.

If something like this safeguard is going to work for us, our mechanism that determines what ideas spread among us needs to reflect something good, so that producing the kind of idea that passes that filter makes our community worthwhile. This can be broken into two subgoals: making sure the kinds of questions we're asking are worthwhile, that we are searching for the right kind of thing, and making sure that our acceptance criterion is a good one. (There's also something that modern physics may or may not have for much longer, which is "Can progress be made toward the thing you're looking for").

comment by Richard_Kennaway · 2014-07-08T08:31:17.387Z · LW(p) · GW(p)

The LW/MIRI/CFAR memeplex shares some important features with postmodernism, namely the strong tendency to go meta, a large amount of jargon that is often impenetrable to outsiders and the lack of an immediate need to justify itself to them.

Mathematics also has all of these. So I don't think this is a good argument that LW/MIRI/CFAR is doing something wrong.

Based on the paragraphs quoted above, having to use our ideas to produce something that outsiders would value, or at least explain them in ways that intelligent outsiders can understand well enough to criticize, would create this sort of pressure. Has anyone here tried to do either of these to a significant degree? If so, how, and how successfully?

CFAR workshops? Or is there an outreach treadmill --- anyone who is reached becomes an insider, so outsiders still aren't being reached?

comment by Viliam_Bur · 2014-07-08T09:40:58.154Z · LW(p) · GW(p)

Here is something that could be useful to have, but would require a lot of work and talent. As a side effect, it would solve the problem mentioned in the article:

Rewrite parts of the Sequences, for wider audience.

For example, the Bayesian math. Rewrite the explanation in a way that is easy to read for a high school student, without any LW lingo. A lot of pictures. Sample problems. Then debate the more complex topics, such as how you can never get 0 and 1 as a result of Bayesian updating, conservation of expected evidence, etc. Then distribute the book as part of "raising the sanity waterline", which will also serve as advertising for CFAR.
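
(To illustrate the second point with a toy derivation of my own -- not a quote from the Sequences: conservation of expected evidence is just the law of total probability applied to your beliefs,

P(H) = P(E) P(H|E) + P(not-E) P(H|not-E),

so your current belief already equals the average of your possible future beliefs, weighted by how likely each observation is; you cannot expect evidence, before seeing it, to push you in a predictable direction. And as long as the prior P(H) is strictly between 0 and 1 and neither P(E|H) nor P(E|not-H) is zero, the posterior P(H|E) = P(E|H) P(H) / P(E) also stays strictly between 0 and 1. These are exactly the kind of small, self-contained derivations such a book could walk through.)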

It needs to be rewritten to get rid of the jargon and the hyperlink maze. Probably just written again, using the original text only as a list of ideas that should be included. Use the standard and simple vocabulary whenever possible. Design the structure for a linear (book), not hypertext (website) medium.

The result will probably be longer than the corresponding part of the Sequences, but much shorter than the whole Sequences. And it needs to be written by someone else, because Eliezer doesn't have time for this.

Then we can do the same with some other part of LW wisdom.

comment by Stefan_Schubert · 2014-07-08T09:25:00.763Z · LW(p) · GW(p)

Insularity is always dangerous, and too much internal jargon can scare off outsiders. However, postmodernists are quite unlike the LW-community. For instance, postmodernists tend to be anti-scientific and deliberately obscurantist, as Alan Sokal showed by publishing a fake article in a postmodernist journal. Hence I don't think the analogy works very well.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-08T10:27:48.336Z · LW(p) · GW(p)

Wasn't the B.s.l.sk an inadvertent self-Sokal... a bunch of self-declared rationalists taking an absolutely ridiculous idea much too seriously?

Replies from: ChristianKl
comment by ChristianKl · 2014-07-08T12:19:33.189Z · LW(p) · GW(p)

Sokal was published by the journal. Eliezer made a decision that LW is not the kind of place to publish a debate about the basilisk.

You can't criticise people at the same time for providing a forum for crazy ideas and for not providing such a forum, and argue that both decisions basically show that people went wrong.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-08T12:58:26.548Z · LW(p) · GW(p)

EY didn't reject the Basilisk argument for being stupid; he rejected it for being dangerous... people were believing in it, and therefore taking it seriously.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-08T14:15:45.741Z · LW(p) · GW(p)

When it comes to a political idea like Marxism, labeling it as dangerous doesn't mean that one doesn't consider it stupid or wrong. Stupid is also not a good word to describe a complex argument that you consider to be wrong.

When it comes to Sokal, Sokal wasn't well educated in postmodernism. His idea was that the postmodernists should have noticed that what he said didn't make any sense. Roko on the other hand is smart and does have an understanding of the local memespace. Roko isn't stupid or ignorant of the ideas he was discussing.

Thinking about blackmail and how to structure a decision theory to avoid being subject to blackmail is also not a worthless endeavor.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-08T15:02:26.402Z · LW(p) · GW(p)

Again, the point is that the Basilisk was taken seriously.

comment by ChristianKl · 2014-07-08T08:47:08.099Z · LW(p) · GW(p)

As far as I can tell, we have not fallen into this trap, but since people tend to fail to notice when their in-group has gone crazy

Given the number of contrarians on LW who open discussions on whether or not LW is a cult, I don't really think we have a problem with lack of self criticism.

Based on the paragraphs quoted above, having to use our ideas to produce something that outsiders would value, or at least explain them in ways that intelligent outsiders can understand well enough to criticize, would create this sort of pressure. Has anyone here tried to do either of these to a significant degree?

MIRI does engage in writing some academic papers. As far as I understand CFAR, it wants to run publishable studies that validate its approaches. CFAR also sells a service.

If you make predictions about the effects of your actions and check whether you make them successfully, you don't need other people to evaluate your arguments. Reality does the evaluating fine. PredictionBook, prediction markets, simply betting with your friends, gathering QS data -- these are all activities that ground you in reality in a way that postmodernists don't ground themselves.
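
As a minimal sketch of what that looks like in practice (the entries and field names below are my own hypothetical illustration, not any particular tool's format): write down each prediction with a probability, record whether it came true, and score your calibration, for example with a Brier score.

# Minimal sketch: score logged predictions against outcomes (Brier score).
# Entries and field names are hypothetical, not a real log format.
predictions = [
    {"claim": "I finish the report by Friday", "p": 0.8, "came_true": True},
    {"claim": "The meetup draws over 10 people", "p": 0.6, "came_true": False},
    {"claim": "My flight departs on time", "p": 0.7, "came_true": True},
]

# Brier score: mean squared gap between stated probability and the 0/1 outcome (lower is better).
brier = sum((q["p"] - (1.0 if q["came_true"] else 0.0)) ** 2 for q in predictions) / len(predictions)
print("Brier score: %.3f" % brier)

A Brier score near 0 means your probabilities track reality; saying 0.5 about everything scores 0.25, so doing worse than that is a warning sign.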

Replies from: TheAncientGeek, AlexanderRM
comment by TheAncientGeek · 2014-07-08T10:51:16.344Z · LW(p) · GW(p)

contrarians

How much of EY's material has been retracted or amended under critique? AFAICT, the answer is none.

Replies from: bramflakes, ChristianKl, Luke_A_Somers, Adele_L
comment by ChristianKl · 2014-07-08T11:48:58.673Z · LW(p) · GW(p)

How much of EY's material has been retracted or amended under critique? AFAICT, the answer is none.

EY's April Fools' post would be an example of something retracted from LW because of criticism. I still consider retractions to be a good metric for criticism. Not everyone thinks that honest mistakes should be retracted.

comment by Luke_A_Somers · 2014-07-08T13:31:43.281Z · LW(p) · GW(p)

He did solicit amendments for republication. Going back and changing old blog posts is considerably more... revisionist.

comment by Adele_L · 2014-07-09T22:15:15.135Z · LW(p) · GW(p)

IIRC, he retracted one of his earlier articles on gender because he doesn't agree with it anymore.

comment by AlexanderRM · 2014-11-14T23:22:48.465Z · LW(p) · GW(p)

On the subject of people opening discussions about whether LW is a cult, I'd like to suggest that while it is useful to notice, that metric alone is not enough to determine whether LW has become a cult: We could easily wind up constantly opening discussions about whether LW is a cult, patting ourselves on the back for having opened the discussion at all, and then ending the discussion.

Incidentally, on a somewhat unrelated note about cultishness, I don't know how other LWers feel about it, but when I personally think about the subject I feel a really, really strong pull towards concluding outright that LW is not a cult and calling it settled, both because it feels less scary and takes less work than having to constantly guard against cultishness (reading some of EY's writing on how cultishness is something that needs to be constantly guarded against terrified me). I doubt I'm the only one to feel that way, so it's something I thought would be good to mention.

comment by Mestroyer · 2014-07-08T08:41:39.398Z · LW(p) · GW(p)

CFAR seems to be trying to use (some of) our common beliefs to produce something useful to outsiders. And they get good ratings from workshop attendees.

Replies from: Stefan_Schubert, AlexanderRM
comment by Stefan_Schubert · 2014-07-08T09:34:05.552Z · LW(p) · GW(p)

True. CFAR is anything but insular. Their (excellent) workshops are based on outside research and they do very well at reaching out to outsiders. They have Slovic and Stanovich as advisors, Kahneman has visited them, etc.

comment by AlexanderRM · 2014-11-14T23:29:03.113Z · LW(p) · GW(p)

A couple questions - what portion of the workshop attendees self-selected from among people who were already interested in rationality, compared to the portion that randomly stumbled upon it for some reason?

And even if it were from outsiders... I suppose that guards against the specific post-modernist failure mode. Having to explain to outsiders isn't the most important check on engineering, though: the most important one is having to engineer things that actually work. So rationality producing people who are better at accomplishing their goals would be the ideal measure.

Replies from: Mestroyer
comment by Mestroyer · 2015-05-12T18:09:28.414Z · LW(p) · GW(p)

A couple questions - what portion of the workshop attendees self-selected from among people who were already interested in rationality, compared to the portion that randomly stumbled upon it for some reason?

Don't know, sorry.

comment by Stuart_Armstrong · 2014-07-08T16:38:47.343Z · LW(p) · GW(p)

or at least explain them in ways that intelligent outsiders can understand well enough to criticize

Based on feedback, I think I achieved that through my "Smarter than Us" booklet or through the AI risk executive summary: http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-07-09T12:10:38.316Z · LW(p) · GW(p)

What's the outsider feedback on those been like?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-07-09T12:22:06.235Z · LW(p) · GW(p)

Quite positive, but scarce.

comment by MrMind · 2014-07-08T12:45:43.885Z · LW(p) · GW(p)

Insularity in this case is simply a matter of long inferential distances. It seems like senseless noise to the outside because that's what compressed information looks like to all who don't have a decoder.
Every group that specializes in something falls into that, and it's healthy that it does so. But we should want a PR office only if we would want to sell our worldview to others, not to check our own sanity.

comment by John_Maxwell (John_Maxwell_IV) · 2014-07-08T04:38:10.427Z · LW(p) · GW(p)

My understanding is that postmodernists face career incentives to keep the bullshit flowing. (To change my mind on this, find me an online community of enthusiastic amateur postmodernists who aren't trying to make it in academia or anything.)

Replies from: David_Gerard, None, AlexanderRM
comment by David_Gerard · 2014-07-08T12:56:59.323Z · LW(p) · GW(p)

(To change my mind on this, find me an online community of enthusiastic amateur postmodernists who aren't trying to make it in academia or anything.)

Critics. Art, literary, music. Postmodernism is largely art criticism purporting to take everything as a text.

Replies from: RomeoStevens
comment by RomeoStevens · 2014-07-08T20:54:22.690Z · LW(p) · GW(p)

That's the most succinct explanation of postmodernism I've seen.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-09T08:14:23.732Z · LW(p) · GW(p)

This is why anyone who knows anything about postmodernism looks at science fans' straw postmodernism and goes "wtf". It turns out a set of paintbrushes doesn't make a good hammer, well gosh.

Replies from: AlexanderRM
comment by AlexanderRM · 2014-11-14T23:49:55.417Z · LW(p) · GW(p)

...could you clarify what you mean by "science fans' straw postmodernism"?

I think "straw postmodernism" would generally imply that the science fans in question had invented the idea specifically to make fun of postmodernism (as a strawman). From the context however I get the impression that the science fans in question are themselves postmodernists and that you used the term "straw" to mean something like "not what postmodernism was intended to be".

(also to the earlier post, come to think of it: are there online communities of enthusiastic amateur art critics who aren't trying to make it in any career? I honestly don't know myself; there could easily be.)

comment by [deleted] · 2014-07-08T07:38:47.434Z · LW(p) · GW(p)

It could be argued that the neoreactionaries are an example. (Moldbug especially.)

Replies from: ChristianKl
comment by ChristianKl · 2014-07-08T08:39:11.133Z · LW(p) · GW(p)

You can criticise neoreactionaries on many fronts but they aren't postmodernists.

Replies from: TheAncientGeek, None
comment by TheAncientGeek · 2014-07-08T10:33:57.540Z · LW(p) · GW(p)

In style or substance...and which is more important...to them?

Replies from: ChristianKl
comment by ChristianKl · 2014-07-09T07:48:19.857Z · LW(p) · GW(p)

Postmodernism is a certain philosophy developed in the second part of the 20th century. I don't see how neoreactionaries subscribe to that philosophy either in style or substance.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-09T12:27:45.722Z · LW(p) · GW(p)

Style = obscurantism.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-09T13:16:37.024Z · LW(p) · GW(p)

If I put obscurantism in Google, it indicates that it has a history that's a lot older than postmodernism.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-09T13:19:33.710Z · LW(p) · GW(p)

So?

Replies from: ChristianKl
comment by ChristianKl · 2014-07-09T14:14:12.258Z · LW(p) · GW(p)

It's not something specific to postmodernism, so it's not useful for deciding whether neoreactionism has something to do with postmodernism.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-09T15:05:25.534Z · LW(p) · GW(p)

I can criticise neoreactionaries for being as obscurantist as postmodernism.

Replies from: None
comment by [deleted] · 2014-07-12T00:15:34.660Z · LW(p) · GW(p)

No you can't -- unless you think postmodernists' obscurantism is a deliberate piece of institutional design.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-12T16:37:48.781Z · LW(p) · GW(p)

Accidental obscurantism is excusable?

comment by [deleted] · 2014-07-08T19:01:59.405Z · LW(p) · GW(p)

https://twitter.com/karmakaiser/status/427233616993599488

https://twitter.com/karmakaiser/status/427233789014597632

He's right: cladistics is genealogy. One of the most important conceptual tools of neoreaction is basically that thing Foucault did.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-08T20:47:51.128Z · LW(p) · GW(p)

I have to admit that I don't have a good grasp on Foucault, but is cladistics/genealogy that much different from what Marx did earlier when he wanted to analyse history?

Replies from: None
comment by [deleted] · 2014-07-11T23:19:51.668Z · LW(p) · GW(p)

Yes.

edit: more on the contrast

comment by AlexanderRM · 2014-11-14T23:39:45.377Z · LW(p) · GW(p)

I honestly don't understand Postmodernism well enough to know if this is it (and not sure if it's even understandable enough for that), but I've encountered ideas that sound similar to what I've heard of post-modernism from undergraduate students in my college's philosophy club.

Specifically there are several people with a tendency to say things along the lines of "but how do we really know what's real or what's not", "can we really trust our senses", etc. with regards to every single discussion that comes up, making it essentially impossible to come to any actual conclusions in any discussion. Although one of them did actually accept the idea of discussing what the world would be like if our senses were reasonably accurate, but not without pointing out what a huge assumption that was. (now, actually, I think it makes a lot of sense to talk about what facts and truth are occasionally, but being able to just say "X is true" when you have 99.9999% confidence of it is a fairly useful shorthand.)

(another thing which I'm not sure is the same or not was one of the people in the club who said something about someone believing "things that are true for him", although I didn't discuss that enough to get any real understanding of what they meant by that. Nor do I actually remember the question that led to that or the discussion following it; I think the topic diverged. In fact I think it diverged into me asking if their attitude was postmodernism and them not having any better an understanding of postmodernism than I did.)

Is that similar to post-modernist ideas? Because I honestly have no idea if it is or not, and would be interested in any insights from someone who knows what post-modernism is.

comment by polymathwannabe · 2014-07-08T14:53:06.336Z · LW(p) · GW(p)

LW is the opposite of postmodernism. Plato's condemnation of sophists ("the art of contradiction making, descended from an insincere kind of conceited mimicry, of the semblance-making breed, derived from image making, distinguished as portion, not divine but human, of production, that presents, a shadow play of words") applies perfectly to postmodernists, who are just the umpteenth incarnation of the sophist virus.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-09T18:19:17.109Z · LW(p) · GW(p)

So what's Analytical Philosophy?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-07-09T20:15:33.985Z · LW(p) · GW(p)

Analytical philosophy is the serious one.