Cult impressions of Less Wrong/Singularity Institute
post by John_Maxwell (John_Maxwell_IV) · 2012-03-15T00:41:34.811Z · LW · GW · Legacy · 247 comments
I have several questions related to this:
- Did anyone reading this initially get the impression that Less Wrong was cultish when they first discovered it?
- If so, can you suggest any easy steps we could take to counteract that impression?
- Is it possible that there are aspects of the atmosphere here that are driving away intelligent, rationally inclined people who might otherwise be interested in Less Wrong?
- Do you know anyone who might fall into this category, i.e. someone who was exposed to Less Wrong but failed to become an enthusiast, potentially due to atmosphere issues?
- Is it possible that our culture might be different if these folks were hanging around and contributing? Presumably they are disproportionately represented among certain personality types.
If you visit any Less Wrong page for the first time in a cookies-free browsing mode, you'll see this message for new users:
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
Here are the worst violators I see on that about page:
Some people consider the Sequences the most important work they have ever read.
Generally, if your comment or post is on-topic, thoughtful, and shows that you're familiar with the Sequences, your comment or post will be upvoted.
Many of us believe in the importance of developing qualities described in Twelve Virtues of Rationality: [insert mystical sounding description of how to be rational here]
And on the sequences page:
If you don't read the sequences on Mysterious Answers to Mysterious Questions and Reductionism, little else on Less Wrong will make much sense.
This seems obviously false to me.
These may not seem like cultish statements to you, but keep in mind that you are one of the ones who decided to stick around. The typical mind fallacy may be at work. Clearly there is some population that thinks Less Wrong seems cultish, as evidenced by Google's autocomplete, and these look like good candidates for things that make them think this.
We can fix this stuff easily, since they're both wiki pages, but I thought they were examples worth discussing.
In general, I think we could stand more community effort being put into improving our about page, which you can do now here. It's not that visible to veteran users, but it is very visible to newcomers. Note that it looks as though you'll have to click the little "Force reload from wiki" button on the about page itself for your changes to be published.
247 comments
Comments sorted by top scores.
comment by Grognor · 2012-03-15T02:22:38.789Z · LW(p) · GW(p)
AAAAARRRGH! I am sick to death of this damned topic. It has been done to death.
I have become fully convinced that even bringing it up is actively harmful. It reminds me of a discussion on IRC, about how painstakingly and meticulously Eliezer idiot-proofed the sequences, and it didn't work because people still manage to be idiots about it. It's because of the Death Spirals and the Cult Attractor sequence that people bring up the stupid "LW is a cult hur hur" meme, which would be great dramatic irony if you were reading a fictional version of the history of Less Wrong, since it's exactly what Eliezer was trying to combat by writing it. Does anyone else see this? Is anyone else bothered by:
Eliezer: Please, learn what turns good ideas into cults, and avoid it!
Barely-aware public: Huh, wah? Cults? Cults! Less Wrong is a cult!
&
Eliezer: Do not worship a hero! Do not trust!
Rationalwiki et al: LW is a personality cult around Eliezer because of so-and-so.
Really, am I the only one seeing the problem with this?
People thinking about this topic just seem to instantaneously fail basic sanity checks. I find it hard to believe that people even know what they're saying when they parrot out "LW looks kinda culty to me" or whatever. It's like people only want to convey pure connotation. Remember sneaking in connotations, and how you're not supposed to do that? How about, instead of saying "LW is a cult", "LW is bad for its members"? This is an actual message, one that speaks negatively of LW but contains more information than negative affective valence. Speaking of which, one of the primary indicators of culthood is being unresponsive to or dismissive of criticism. People regularly accuse LW of this, which is outright batshit. XiXiDu regularly posts SIAI criticism, and it always gets upvoted, no matter how wrong. Not to mention all the other posts (more) disagreeing with claims in what are usually called the Sequences, all highly upvoted by Less Wrong members.
The more people at Less Wrong naively wax speculative about how the community appears from the outside, throwing around vague negative-affective-valence words and phrases like "cult" and "telling people exactly how they should be", the worse this community will be perceived, and the worse this community will be. I reiterate: I am sick to death of people playing color politics on "whether LW is a cult" without doing any of making the discussion precise and explicit rather than vague and implicit, taking into account that dissent is not only tolerated but encouraged here, remembering that their brains instantly mark "cult" as being associated with wherever it's seen, and any of a million other factors. The "million other factors" is, I admit, a poor excuse, but I am out of breath and emotionally exhausted; forgive the laziness.
Everything that should have needed to be said about this has been said in the Cult Attractor sequence, and, from the Less Wrong wiki FAQ:
We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all. Yes, some of the results that fall out of these basics sound weird if you haven't seen the reasoning behind them, but there's nothing in the laws of physics that prevents reality from sounding weird.
Talking about this all the time makes it worse, and worse every time someone talks about it.
What the bleeding fuck.
Replies from: cousin_it, John_Maxwell_IV, wedrifid, epicureanideal, XiXiDu, dbaupp, epicureanideal, XiXiDu, halcyon↑ comment by cousin_it · 2012-03-15T07:06:38.177Z · LW(p) · GW(p)
LW doesn't do as much as I'd like to discourage people from falling into happy death spirals about LW-style rationality, like this. There seem to be more and more people who think sacrificing their life to help build FAI is an ethical imperative. If I were Eliezer, I would run screaming in the other direction the moment I saw the first such person, but he seems to be okay with that. That's the main reason why I feel LW is becoming more cultish.
Replies from: Richard_Kennaway, Wei_Dai, XiXiDu, None, Grognor↑ comment by Richard_Kennaway · 2012-03-15T07:27:06.236Z · LW(p) · GW(p)
How do you distinguish a happy death spiral from a happy life spiral? Wasting one's life on a wild goose chase from spending one's life on a noble cause?
"I take my beliefs seriously, you are falling into a happy death spiral, they are a cult."
Replies from: cousin_it↑ comment by cousin_it · 2012-03-15T07:52:37.567Z · LW(p) · GW(p)
I guess you meant to ask, "how do you distinguish ideas that lead to death spirals from ideas that lead to good things?" My answer is that you can't tell by looking only at the idea. Almost any idea can become a subject for a death spiral if you approach it the wrong way (the way Will_Newsome wants you to), or a nice research topic if you approach it right.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-19T03:13:16.037Z · LW(p) · GW(p)
(the way Will_Newsome wants you to),
I've recanted; maybe I should say so somewhere. I think my post on the subject was sheer typical mind fallacy. People like Roko and XiXiDu are clearly damaged by the "take things seriously" meme, and what it means in my head is not what it means in the heads of various people who endorse the meme.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-15T08:40:52.289Z · LW(p) · GW(p)
There seem to be more and more people who think sacrificing their life to help build FAI is an ethical imperative. If I were Eliezer, I would run screaming in the other direction the moment I saw the first such person
You mean when he saw himself in the mirror? :)
Seriously, do you think sacrificing one's life to help build FAI is wrong (or not necessarily wrong but not an ethical imperative either), or is it just bad PR for LW/SI to be visibly associated with such people?
Replies from: cousin_it↑ comment by cousin_it · 2012-03-15T11:02:10.707Z · LW(p) · GW(p)
I think it's not an ethical imperative unless you're unusually altruistic.
Also I feel the whole FAI thing is a little questionable from a client relations point of view. Rationality education should be about helping people achieve their own goals. When we meet someone who is confused about their goals, or just young and impressionable, the right thing for us is not to take the opportunity to rewrite their goals while we're educating them.
Replies from: Wei_Dai, Vladimir_Nesov, Luke_A_Somers, Tripitaka↑ comment by Wei Dai (Wei_Dai) · 2012-03-16T21:02:38.805Z · LW(p) · GW(p)
It's hard not to rewrite someone's goals while educating them, because one of our inborn drives is to gain the respect and approval of people around us, and if that means overwriting some of our goals, well, that's a small price to pay as far as that part of our brain is concerned. For example, I stayed for about a week at the SIAI house a few years ago when attending the decision theory workshop, and my values shifted in obvious ways just by being surrounded by more altruistic people and talking with them. (The effect largely dissipated after I left, but not completely.)
I think it's not an ethical imperative unless you're unusually altruistic.
Presumably the people they selected for the rationality mini-camp were already more altruistic than average, and the camp itself pushed some of them to the "unusually altruistic" level. Why should SIAI people have qualms about this (other than possible bad PR)?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-20T16:12:56.736Z · LW(p) · GW(p)
Pointing out that religious/cultic value rewriting is hard to avoid hardly refutes the idea that LW is a cult.
↑ comment by Vladimir_Nesov · 2012-03-15T17:22:46.198Z · LW(p) · GW(p)
I think it's not an ethical imperative unless you're unusually altruistic.
I don't think "unusually altruistic" is a good characterization of "doesn't value personal preferences about some life choices more than the future of humanity"...
Replies from: cousin_it, Vaniver↑ comment by cousin_it · 2012-03-15T17:38:33.493Z · LW(p) · GW(p)
Do you believe most people are already quite altruistic in that sense? Why? It seems to me that many people give lip service to altruism, but their actions (e.g. reluctance to donate to highly efficient charities) speak otherwise. I think rationality education should help people achieve the goals they're already trying to achieve, not the goals that the teacher wants them to achieve.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-03-15T17:57:12.234Z · LW(p) · GW(p)
I think rationality education should help people achieve the goals they're already trying to achieve, not the goals that the teacher wants them to achieve.
False dichotomy. Humans are not automatically strategic, we often act on urges, not goals, and even our explicitly conceptualized goals can be divorced from reality, perhaps more so than the urges. There are general purpose skills that have an impact on behavior (and explicit goals) by correcting errors in reasoning, not specifically aimed at aligning students' explicit goals with those of their teachers.
Replies from: cousin_it↑ comment by cousin_it · 2012-03-15T18:31:56.149Z · LW(p) · GW(p)
Rationality is hard to measure. If LW doesn't make many people more successful in mundane pursuits but makes many people subscribe to the goal of FAI, that's reason to suspect that LW is not really teaching rationality, but rather something else.
(My opinions on this issue seem to become more radical as I write them down. I wonder where I will end up!)
Replies from: Vladimir_Nesov, William_Quixote↑ comment by Vladimir_Nesov · 2012-03-15T18:46:13.786Z · LW(p) · GW(p)
I didn't say anything about "rationality". Whether the lessons help is a separate question from whether they're aimed at correcting errors of reasoning or at shifting one's goals in a specific direction. The posts I linked also respond to the objection about people "giving lip service to altruism" but doing little in practice.
Replies from: cousin_it↑ comment by cousin_it · 2012-03-15T19:09:11.063Z · LW(p) · GW(p)
Yes, the reasoning in the linked posts implies that deep inside, humans should be as altruistic as you say. But why should I believe that reasoning? I'd feel a lot more confident if we had an art of rationality that made people demonstrably more successful in mundane affairs and also, as a side effect, made some of them support FAI. If we only get the side effect but not the main benefit, something must be wrong with the reasoning.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-03-15T19:42:09.720Z · LW(p) · GW(p)
Yes, the reasoning in the linked posts implies that deep inside, humans should be as altruistic as you say.
This is not what the posts are about, even if this works as one of the conclusions. The idea that urges and goals should be distinguished, for example, doesn't say what your urges or goals should be; it stands separately on its own. There are many such results, and ideas such as altruism or the importance of FAI are only a few among them. Do these ideas demonstrate comparatively more visible measurable effect than the other ideas?
↑ comment by William_Quixote · 2012-09-07T03:27:04.938Z · LW(p) · GW(p)
Rationality is hard to measure. If LW doesn't make many people more successful in mundane pursuits but makes many people subscribe to the goal of FAI, that's reason to suspect that LW is not really teaching rationality, but rather something else.
If prediction markets were legal, we could much more easily measure if LW helped rationality. Just ask people to make n bets or predictions per month and see 1) if they did better than the population average and 2) if they improved over time.
In fact, trying to get Intrade legalized in the US might be a very worthwhile project for just this reason (beyond all the general social reasons to like prediction markets).
Replies from: gwern, cousin_it↑ comment by gwern · 2012-09-07T17:40:48.447Z · LW(p) · GW(p)
There is no need to wish or strive for regulatory changes that may never happen: I've pointed out in the past that non-money prediction markets generally are pretty accurate and competitive with money prediction markets; so money does not seem to be a crucial factor. Just systematic tracking and judgment.
(Being able to profit may attract some people, like me, but the fear of loss may also serve as a potent deterrent to users.)
I have written at length about how I believe prediction markets helped me but I have been helped even more by the free active you-can-sign-up-right-now-and-start-using-it,-really,-right-now http://www.PredictionBook.com
I routinely use LW-related ideas and strategies in predicting, and I believe my calibration graph reflects genuine success at predicting.
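A rough, hypothetical sketch of the kind of scoring William_Quixote and gwern describe above (illustrative code only, not anything PredictionBook actually runs): given a list of (stated probability, outcome) pairs, it computes a Brier score and a crude calibration table. Improvement over time would show up as the Brier score falling across successive batches of predictions.

```python
# Hypothetical sketch: score a batch of predictions the way a calibration
# graph does. Each prediction is (stated probability, outcome), where
# outcome is 1 if the event happened and 0 if it did not.

def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

def calibration_table(predictions, buckets=10):
    """Group predictions by confidence bucket and compare mean stated
    confidence with the observed frequency of events coming true."""
    grouped = {}
    for p, o in predictions:
        b = min(int(p * buckets), buckets - 1)
        grouped.setdefault(b, []).append((p, o))
    table = {}
    for b, items in sorted(grouped.items()):
        mean_confidence = sum(p for p, _ in items) / len(items)
        observed_freq = sum(o for _, o in items) / len(items)
        table[(b / buckets, (b + 1) / buckets)] = (mean_confidence, observed_freq, len(items))
    return table

# Made-up example data.
preds = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.95, 1), (0.3, 0)]
print(brier_score(preds))        # ~0.13; always guessing 50% would score 0.25
print(calibration_table(preds))
```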
↑ comment by cousin_it · 2012-09-07T08:22:23.282Z · LW(p) · GW(p)
Very nice idea, thanks! After some googling I found someone already made this suggestion in 2009.
Replies from: William_Quixote↑ comment by William_Quixote · 2012-09-07T17:40:49.780Z · LW(p) · GW(p)
If other people have suggested this before, there may be enough background support to make it worth following up on this idea.
When I get home from work, I will post in the discussion forum to see if people would be interested in working to legalize prediction markets (like Intrade) in the US.
[EDITED: shortly after making this post, I saw Gwern's post above suggesting that an alternative like PredictionBook would be just as good. As a result I did not make a post about legalizing prediction markets and instead tried PredictionBook for a month and a half. After this trial, I still think that making a push to legalize prediction markets would be worthwhile]
↑ comment by Vaniver · 2012-03-16T22:59:57.346Z · LW(p) · GW(p)
It doesn't sound like you know all that many humans, then. In most times and places, the "future of humanity" is a signal that someone shouldn't be taken seriously, not an actual goal.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-03-16T23:08:02.952Z · LW(p) · GW(p)
In most times and places, the "future of humanity" is a signal that someone shouldn't be taken seriously, not an actual goal.
I was talking about the future of humanity, not the "future of humanity" (a label that can be grossly misinterpreted).
↑ comment by Luke_A_Somers · 2012-03-15T18:29:56.167Z · LW(p) · GW(p)
I think it's not an ethical imperative unless you're unusually altruistic.
... or you estimate the risk to be significant and you want to live past the next N years.
Replies from: cousin_it↑ comment by cousin_it · 2012-03-15T18:40:59.661Z · LW(p) · GW(p)
I don't think this calculation works out, actually. If you're purely selfish (don't care about others at all), and the question is whether to devote your whole life to developing FAI, then it's not enough to believe that the risk is high (say, 10%). You also need to believe that you can make a large impact. Most people probably wouldn't agree to surrender all their welfare just to reduce the risk to themselves from 10% to 9.99%, and realistically their sacrifice won't have much more impact than that, because it's hard to influence the whole world.
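To make the arithmetic behind this explicit, here is a back-of-the-envelope sketch with made-up numbers (the 10% and 9.99% figures come from the comment above; everything else is an assumption for illustration):

```python
# Back-of-the-envelope version of the argument above, with made-up numbers.
value_of_surviving = 1.0        # normalize the selfish value of living through the risk to 1
p_doom_baseline = 0.10          # assumed personal risk if you do nothing
p_doom_with_effort = 0.0999     # assumed risk after devoting your whole life to the problem
selfish_gain = (p_doom_baseline - p_doom_with_effort) * value_of_surviving  # ~0.0001
welfare_sacrificed = 0.5        # assumed cost of the sacrifice, in the same life-value units
print(selfish_gain < welfare_sacrificed)  # True: a bad trade for a pure egoist
```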
↑ comment by Tripitaka · 2012-03-15T11:28:28.513Z · LW(p) · GW(p)
because there's a risk of going funny in the head
Funny in which way? Do you want to avoid an automatic "macro-of-denial" invocation, or are you afraid of them joining Eliezer's ever-growing crowd of memetically subverted FAI-lers?
Replies from: cousin_it, cousin_it↑ comment by XiXiDu · 2012-03-15T10:54:49.327Z · LW(p) · GW(p)
There seem to be more and more people who think sacrificing their life to help build FAI is an ethical imperative.
I have always been extremely curious about this. Do people really sacrifice their lives or is it largely just empty talk?
It seems like nobody is doing anything they wouldn't do anyway. I mean, I don't think Eliezer Yudkowsky or Luke Muehlhauser would lead significantly different lives if there were no existential risks. They are just the kind of people who enjoy doing what they do.
Are there people who'd rather play games all day but sacrifice their lives to solve friendly AI?
Replies from: cousin_it, katydee, Will_Newsome, Tripitaka, drethelin↑ comment by cousin_it · 2012-03-15T12:35:19.819Z · LW(p) · GW(p)
If developing AGI were an unequivocally good thing, as Eliezer used to think, then I guess he'd be happily developing AGI instead of trying to raise the rationality waterline. I don't know what Luke would do if there were no existential risks, but I don't think his current administrative work is very exciting for him. Here's a list of people who want to save the world and are already changing their life accordingly. Also there have been many LW posts by people who want to choose careers that maximize the probability of saving the world. Judge the proportion of empty talk however you want, but I think there are quite a few fanatics.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-19T03:18:01.481Z · LW(p) · GW(p)
Indeed, Eliezer once told me that he was a lot more gung-ho about saving the world when he thought it just meant building AGI as quickly as possible.
↑ comment by katydee · 2012-03-16T17:29:43.528Z · LW(p) · GW(p)
I don't think Eliezer Yudkowsky or Luke Muehlhauser would lead significantly different lives if there were no existential risks. They are just the kind of people who enjoy doing what they do.
I think at one point Eliezer said that, if not for AGI/FAI/singularity stuff, he would probably be a sci-fi writer. Luke explicitly said that when he found out about x-risks he realized that he had to change his life completely.
↑ comment by Will_Newsome · 2012-03-19T03:22:56.616Z · LW(p) · GW(p)
I have always been extremely curious about this. Do people really sacrifice their lives or is it largely just empty talk?
I sacrificed some very important relationships and the life that could have gone along with them so I could move to California, and the only reason I really care about humans in the first place is because of those relationships, so...
This is the use of metaness: for liberation—not less of love but expanding of love beyond local optima.
— Nick Tarleton's twist on T.S. Eliot
↑ comment by Tripitaka · 2012-03-15T18:15:47.181Z · LW(p) · GW(p)
- Due to comparative advantage, not changing much is actually a relatively good, straightforward strategy: just farm and redirect money.
- As an example of these Altruistic Ones, user Rain has been mentioned, so they are out there. They all be praised!
- Factor in time and demographics. A lot of LWers are young people, looking for ways to make money; they are not able to spend much yet, and haven't had much impact yet. Time will have to show whether these stay true to their goals, or whether they are tempted to go the vicious path of always-growing investments into status.
↑ comment by [deleted] · 2012-03-15T19:17:30.818Z · LW(p) · GW(p)
There seem to be more and more people who think sacrificing their life to help build FAI is an ethical imperative.
Sacrificing or devoting? Those are different things. If FAI succeeds they will have a lot more life to party than they would have otherwise so devoting your life to FAI development might be a good bet even from a purely selfish standpoint.
Replies from: None↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-15T03:14:19.431Z · LW(p) · GW(p)
My post was mostly about how to optimize appearances, with some side speculation on how our current appearances might be filtering potential users. I agree LW rocks in general. I think we're mostly talking past each other; I don't see this discussion post as fitting into the genre of "serious LW criticism" the way the other stuff you link to does.
In other words, I'm talking about first impressions, not in-depth discussions.
I'd be curious where you got the idea that writing the cult sequence was what touched off the "LW cult" meme. That sounds pretty implausible to me. Keep in mind that no one who is fully familiar with LW is making this accusation (that I know of), but it does look like it might be a reaction that sometimes occurs in newcomers.
Let's keep in mind that LW being bad is a logically distinct proposition, and if it is bad, we want to know it (since we want to know what is true, right?).
And if we can make optimizations to LW culture to broaden participation from intelligent people, that's also something we want to do, right? Although, on reflection, I'm not sure I see an opportunity for improvement where this is concerned, except maybe on the wiki (but I do think we could stand to be a bit nicer everywhere).
XiXiDu regularly posts SIAI criticism, and it always gets upvoted, no matter how wrong. Not to mention all the other posts disagreeing with claims in what are usually called the Sequences, all highly upvoted by Less Wrong members.
Criticism rocks dude. I'm constantly realizing that I did something wrong and thinking that if I had a critical external observer maybe I wouldn't have persisted in my mistake for so long. Let's keep this social norm up.
Replies from: Grognor, Antisuji↑ comment by Grognor · 2012-03-15T04:18:30.772Z · LW(p) · GW(p)
My post was mostly about how to optimize appearances, with some side speculation on how our current appearances might be filtering potential users.
Okay.
If we want to win, it might not be enough to have a book length document explaining why we're not a cult. We might have to play the first impressions game as well.
I said stop talking about it and implied that maybe it shouldn't have been talked about so openly in the first place, and here you are talking about it.
I'd be curious where you got the idea that writing the cult sequence was what touched off the "LW cult" meme.
Where else could it have come from? Eliezer's extensive discussion of cultish behavior gets automatically pattern-matched into helpless cries of "LW is not a cult!" (even though that isn't what he's saying and isn't what he's trying to say), and this gets interpreted as, "LW is a cult." Seriously, any time you put two words together like that, people assume they're actually related.
Elsewise, the only thing I can think of is our similar demographics and a horribly mistaken impression that we all agree on everything (I don't know where this comes from).
Criticism rocks dude.
Okay. (I hope you didn't interpret anything I said as meaning otherwise.)
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-15T04:25:14.116Z · LW(p) · GW(p)
Point taken; I'll leave the issue alone for now.
↑ comment by Antisuji · 2012-03-16T00:59:50.845Z · LW(p) · GW(p)
Ya know, if LW and SIAI are serious about optimizing appearances, they might consider hiring a Communications professional. PR is a serious skill and there are people who do it for a living. Those people tend to be on the far end of the spectrum of what we call neurotypical here. That is, they are extremely good at modeling other people, and therefore predicting how other people will react to a sample of copy. I would not be surprised if literally no one who reads LW regularly could do the job adequately.
Edit to add: it's nice to see that they're attempting to do this, but again, LW readership is probably the wrong place to look for this kind of expertise.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-16T02:24:16.574Z · LW(p) · GW(p)
Ya know, if LW and SIAI are serious about optimizing appearances, they might consider hiring a Communications professional. PR is a serious skill and there are people who do it for a living.
People who do this for a living (effectively) cost a lot of money. Given the budget of SIAI, putting a communications professional on the payroll at market rates represents a big investment. Transitioning a charity to a state where a large amount of income goes into improving perception (and so securing more income) is a step not undertaken lightly.
Replies from: NancyLebovitz, Antisuji↑ comment by NancyLebovitz · 2012-03-16T09:49:12.752Z · LW(p) · GW(p)
It's at least plausible that a lot of the people who can be good for SIAI would be put off more by professional marketing than by science fiction-flavored weirdness.
↑ comment by Antisuji · 2012-03-16T03:59:58.139Z · LW(p) · GW(p)
That's a good point. I'm guessing though that there's a lot of low hanging fruit, e.g. a front page redesign, that would represent a more modest (and one-time) expense than hiring a full-time flack. In addition to costing less this would go a long way to mitigate concerns of corruption. Let's use the Pareto Principle to our advantage!
↑ comment by wedrifid · 2012-03-15T03:52:19.332Z · LW(p) · GW(p)
AAAAARRRGH! I am sick to death of this damned topic.
It looks a bit better if you consider the generalization in the intro to be mere padding around a post that is really about several specific changes that need to be made to the landing pages.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-15T23:46:50.154Z · LW(p) · GW(p)
Unfortunately, Grognor reverts me every time I try to make those changes... Bystanders, please weigh in on this topic here.
Replies from: Vladimir_Nesov, wedrifid↑ comment by Vladimir_Nesov · 2012-03-16T00:10:26.692Z · LW(p) · GW(p)
I didn't like your alternative for the "Many of us believe" line either, even though I don't like that line (it was what I came up with to improve on Luke's original text). To give the context: the current About page introduces twelve virtues with:
Many of us believe in the importance of developing qualities described in "Twelve Virtues of Rationality":
John's edit was to change it to:
For a brief summary of how to be rational, read the somewhat stylized "Twelve Virtues of Rationality":
P.S. I no longer supervise the edits to the wiki, but someone should...
Replies from: John_Maxwell_IV, wedrifid↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-16T00:56:30.554Z · LW(p) · GW(p)
He didn't like my other three attempts at changes either... I could come up with 10 different ways of writing that sentence, but I'd rather let him make some suggestions.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-16T02:51:29.488Z · LW(p) · GW(p)
He didn't like my other three attempts at changes either... I could come up with 10 different ways of writing that sentence, but I'd rather let him make some suggestions.
If you made the suggestions here and received public support for one of them it wouldn't matter much what Grognor thought.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-16T03:01:41.089Z · LW(p) · GW(p)
Why don't you make a suggestion?
Replies from: wedrifid↑ comment by wedrifid · 2012-03-16T03:04:48.372Z · LW(p) · GW(p)
*cough* Mine is 'delete the sentence entirely'. I never really liked that virtues page anyway!
Replies from: John_Maxwell_IV, lessdazed, John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-16T03:06:17.542Z · LW(p) · GW(p)
Sounds like a great idea.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-17T05:52:39.496Z · LW(p) · GW(p)
To be clear, you are in favor of leaving the virtues off of the about page, correct?
Replies from: wedrifid↑ comment by wedrifid · 2012-03-17T06:01:03.470Z · LW(p) · GW(p)
For what it is worth, yes.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-17T17:34:25.419Z · LW(p) · GW(p)
Okay, thanks. One of the other wiki editors didn't think you meant that.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-03-17T23:47:01.620Z · LW(p) · GW(p)
Whatever wedrifid actually meant is not "apparent consensus", given that there are just 2 upvotes on the statement where it wasn't apparent to the voters what he actually meant... Reverted, with a suggestion to escalate to a discussion post and vote more clearly. Also, this started from talking about bad wording, which is a separate question from leaving the section out altogether, so the hypothetical discussion posting should distinguish those questions.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-18T00:19:17.048Z · LW(p) · GW(p)
Okay.
↑ comment by wedrifid · 2012-03-16T02:49:33.132Z · LW(p) · GW(p)
That change is less bad than the original but it is sometimes better to hold off on changes that may reduce the impetus for further improvement without quite satisfying the need.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-16T03:00:57.543Z · LW(p) · GW(p)
To be honest, I don't have much energy left to fight this. I'd like to rethink the entire page, but if I have to fight tooth and nail for every sentence I won't.
↑ comment by epicureanideal · 2012-03-16T02:48:30.825Z · LW(p) · GW(p)
A rambling, cursing tirade against a polite discussion of things that might be wrong with the group (or perceptions of the group) doesn't improve my perception of the group. I have to say, I have a significant negative impression from Grognor's response here. In addition to the tone of his response, a few things that added to this negative impression were:
"how painstakingly and meticulously Eliezer idiot-proofed the sequences, and it didn't work because people still manage to be idiots about it"
Again, the name dropping of Our Glorious Leader Eliezer, long may He reign. (I'm joking here for emphasis.)
"LW is a cult hur hur"
People might not be thinking completely rationally, but this kind of characterization of people who have negative opinions of the group doesn't win you any friends.
"since it's exactly what Eliezer was trying to combat by writing it."
There's Eliezer again, highlighting his importance as the group's primary thought leader. This may be true, and probably is, but highlighting it all the time can lead people to think this is cultish.
↑ comment by XiXiDu · 2012-03-15T10:49:06.472Z · LW(p) · GW(p)
People regularly accuse LW of this, which is outright batshit. XiXiDu regularly posts SIAI criticism, and it always gets upvoted, no matter how wrong.
Thanks for saying that I significantly helped to make Less Wrong look less cultish ;-)
By the way...
Replies from: jimrandomh↑ comment by jimrandomh · 2012-03-15T19:58:14.731Z · LW(p) · GW(p)
Actually, I believe what he said was that you generated evidence that Less Wrong is not cultish, which makes it look more cultish to people who aren't thinking carefully.
↑ comment by dbaupp · 2012-03-15T02:48:49.867Z · LW(p) · GW(p)
Eliezer: Do not worship a hero! Do not trust!
Rationalwiki et al: LW is a personality cult around Eliezer because of so-and-so.
A widely revered figure who has written a million+ words that form the central pillars of LW and has been directly (or indirectly) responsible for bringing many people into the rationality memespace says "don't do X" so it is obvious that X must be false.
Dismissing accusations of a personality cult around Eliezer by saying Eliezer said "no personality cult" is a fairly poor way of going about it. Two key points:
- saying "as a group, we don't worship Eliezer" doesn't guarantee that it is true (groupthink could easily suck us into ignore evidence)
- someone might interpret what Eliezer said as false modesty or an attempt to appear to be a reluctant saviour/messiah (i.e. using dark arts to suck people in)
↑ comment by epicureanideal · 2012-03-16T02:54:26.708Z · LW(p) · GW(p)
"I have become fully convinced that even bringing it up is actively harmful."
What evidence leads you to this conclusion?
Eliezer: Please, learn what turns good ideas into cults, and avoid it! Barely-aware public: Huh, wah? Cults? Cults! Less Wrong is a cult!
Can you provide evidence to support this characterization?
Eliezer: Do not worship a hero! Do not trust! Rationalwiki et al: LW is a personality cult around Eliezer because of so-and-so.
Can you provide evidence to support this characterization?
I would like to see some empirical analysis of the points made here and by the original poster. We should gather some data about perceptions from real users and use that to inform future discussion on this topic. I think we have a starting point in the responses to this post, and comments in other posts could probably be mined for information, but we should also try to find some rational people who are not familiar with less wrong and introduce them to it and ask them for their impressions (from someone acting like they just found the site, are not affiliated with it, and are curious about their friend's impressions, or something like that).
↑ comment by XiXiDu · 2012-03-15T11:01:26.311Z · LW(p) · GW(p)
I have become fully convinced that even bringing it up is actively harmful.
No, it is not. A lack of self-criticism and evaluation is one of the reasons why people assign cult status to communities.
P.S. Posts with titles along the lines of 'Epistle to the New York Less Wrongians' don't help in reducing cultishness ;-)
(Yeah, I know it was just fun.)
↑ comment by halcyon · 2012-08-20T11:40:59.958Z · LW(p) · GW(p)
Actually, I believe the optimal utilitarian attitude would be to make fun of them. If you don't take them at all seriously, they will grow to doubt themselves. If you're persistently humorous enough, some of them, thinking themselves comedians, will take your side in poking fun at the rest. In time, LW will have assembled its own team of Witty Defenders responsible for keeping non-serious accusations at bay. This will ultimately lead to long pages of meaningless back and forth between underlings, allowing serious LWians to ignore these distracting subjects altogether. Also, the resulting dialogue will advertise the LW community, while understandably disgusting self-respecting thinkers of every description, thus getting them interested in evaluating the claims of LW on its own terms.
Personally, I think all social institutions are inevitably a bit cultish, (society = mob - negative connotations) and they all use similarly irrational mechanisms to shield themselves from criticism and maintain prestige. A case could be made that they have to, one reason being that most popular "criticism" is of the form "I've heard it said or implied that quality X is to be regarded as a Bad Thing, and property Y of your organization kind of resembles X under the influence of whatever it is that I'm smoking," or of equally abysmal quality. Heck, the United States government, the most powerful public institution in the world, is way more cultish than average. Frankly, more so than LW has ever been accused of being, to my knowledge. Less Wrong: Less cultish than America!
comment by CarlShulman · 2012-03-15T01:15:27.096Z · LW(p) · GW(p)
The top autocompletes for "Less Wrong" are
- sequences
- harry potter
- meetups
These are my (logged-in) Google results for searching "Less Wrong X" for each letter X of the alphabet (some duplicates appear):
- akrasia
- amanda knox
- atheism
- australia
- blog
- bayes
- basilisk
- bayes theorem
- cryonics
- charity
- cult
- discussion
- definition
- decoherence
- decision theory
- epub
- evolutionary psychology
- eliezer yudkowsky
- evidence
- free will
- fanfiction
- fanfic
- fiction
- gender
- games
- goals
- growing up is hard
- harry potter
- harry potter and the methods of rationality
- how to be happy
- hindsight bias
- irc
- inferential distance
- iq
- illusion of transparency
- joint configurations
- joy in the merely real
- kindle
- amanda knox
- lyrics
- luminosity
- lost purposes
- leave a line of retreat
- meetup
- mobi
- meditation
- methods of rationality
- newcomb's problem
- nyc
- nootropics
- neural categories
- optimal employment
- overcoming bias
- open thread
- outside the laboratory
- procrastination
- polyamory
- podcast
- quantum physics
- quotes
- quantum mechanics
- rationality quotes
- rationality quotes
- rationalwiki
- reading list
- rationality
- sequences
- survey
- survey results
- sequences pdf
- textbooks
- three worlds collide
- toronto
- ugh fields
- universal fire
- value is fragile
- village idiot
- wiki
- wikipedia
- words
- what is evidence
- yudkowsky
- yvain
- your strength as a rationalist
- your rationality is my business
- zombies
- zombies the movie
The autocomplete bit doesn't seem to be too big a problem for Less Wrong.
However, it is one of the immediate autocompletes for "Singularity Institute." What pages do you get on the first page of results if you search "singularity institute cult"? I see the wikipedia page, the SI website, Michael Anissimov's blog, RationalWiki, Less Wrong posts about cultishness and death spirals, Lukeprog's blog, a Forbes article mention of "cargo-cult enthusiasm," and at the bottom a blog post making a case against SI and other transhumanist organizations.
Replies from: timtyler↑ comment by timtyler · 2012-03-15T11:16:11.261Z · LW(p) · GW(p)
Luke's link to How Cults work is pretty funny.
comment by jimrandomh · 2012-03-15T01:35:17.515Z · LW(p) · GW(p)
Google's autocomplete has a problem, which has produced controversy in other contexts: when people want to know whether X is trustworthy, the most informative search they can make is "X scam". Generally speaking, they'll find no results and that will be reassuring. Unfortunately, Google remembers those searches, and presents them later as suggestions - implying that there might be results behind the query. Once the "X scam" link starts showing up in the autocomplete, people who weren't really suspicious of X click on it, so it stays there.
Replies from: beoShaffer, John_Maxwell_IV↑ comment by beoShaffer · 2012-03-15T02:42:56.607Z · LW(p) · GW(p)
Personal anecdote warning. I semi-routinely google the phrase "X cult" when looking into organizations.
Replies from: CarlShulman↑ comment by CarlShulman · 2012-03-15T03:02:46.259Z · LW(p) · GW(p)
Does this ever work?
Replies from: beoShaffer↑ comment by beoShaffer · 2012-03-15T16:07:12.012Z · LW(p) · GW(p)
I think so, but it's hard to say. I look into organizations infrequently enough that semi-routinely leaves me a very small sample size. The one organization that had prominent cult results (not going to name it for obvious reasons) does have several worrying qualities. And they seem related to why it was called a cult. -edit minor grammar/style fix
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-15T03:04:05.580Z · LW(p) · GW(p)
Thanks; I updated the post to reflect this.
comment by JoshuaFox · 2012-03-15T10:12:30.086Z · LW(p) · GW(p)
Eliezer addressed this in part with his "Death Spiral" essay, but there are some features to LW/SI that are strongly correlated with cultishness, other than the ones that Eliezer mentioned such as fanaticism and following the leader:
- Having a house where core members live together.
- Asking followers to completely adjust their thinking processes to include new essential concepts, terminologies, and so on, down to the lowest level of understanding reality.
- Claiming that only if you carry out said mental adjustment can you really understand the most important parts of the organization's philosophy.
- Asking for money for a charity, particularly one which does not quite have the conventional goals of a charity, and claiming that one should really be donating a much larger percentage of one's income than most people donate to charity.
- Presenting an apocalyptic scenario including extreme bad and good possibilities, and claiming to be the best positioned to deal with it.
- [Added] Demanding that followers leave any (other) religion.
Sorry if this seems over-the-top. I support SI. These points have been mentioned, but has anyone suggested how to deal with them? Simply ignoring the problem does not seem to be the solution; nor does loudly denying the charges; nor changing one's approach just for appearances.
Replies from: timtyler, TheAncientGeek, MTGandP, timtyler, timtyler↑ comment by timtyler · 2012-03-15T11:25:32.992Z · LW(p) · GW(p)
Perhaps consider adding to the list the high fraction of revenue that ultimately goes to paying staff wages.
Oh yes, and the fact that the leader wants to SAVE THE WORLD.
Replies from: Bongo, JoshuaFox, epicureanideal↑ comment by Bongo · 2012-03-15T22:00:32.976Z · LW(p) · GW(p)
fraction of revenue that ultimately goes to paying staff wages
About a third in 2009, the last year for which we have handy data.
Replies from: timtyler↑ comment by timtyler · 2012-03-16T00:49:18.216Z · LW(p) · GW(p)
Practically all of it goes to them or their "associates" - by my reckoning. In 2009 some was burned on travel expenses and accommodation, some was invested - and some was stolen.
Who was actually helped? Countless billions in the distant future - supposedly.
Replies from: dbaupp, epicureanideal↑ comment by dbaupp · 2012-03-16T02:50:19.029Z · LW(p) · GW(p)
all of it goes to them or their "associates"
What else should it go to? (Under the assumption that SI's goals are positive.)
As Larks said above, they are doing thought work: they are not trying to ship vast quantities of food or medical supplies. The product of SI is the output from their researchers, the only way to get more output is to employ more people (modulo improving the output of the current researchers, but that is limited).
Replies from: timtyler↑ comment by timtyler · 2012-03-16T11:19:01.953Z · LW(p) · GW(p)
So, to recap, this is a proposed part of a list of ways in which the SIAI resembles a cult. It redistributes economic resources from the "rank and file" members up the internal hierarchy without much expenditure on outsiders - just like many cults do.
Replies from: dbaupp↑ comment by dbaupp · 2012-03-16T13:12:02.827Z · LW(p) · GW(p)
(Eh. Yes, I think I lost track of that a bit.)
Keeping that in mind: SI has a problem, because acting to avoid the appearance of existing to funnel money to the upper ranks means that they can't pay their researchers. There are three broad classes of solutions to this (that I can see):
- Give staff little to no compensation for their work
- Use tricky tactics to try to conceal how much money goes to the staff
- Try to explain to everyone why such a large proportion of the money goes to the staff
All of those seem suboptimal.
↑ comment by epicureanideal · 2012-03-16T02:38:33.514Z · LW(p) · GW(p)
Why was this downvoted instead of responded to? Downvoting people who are simply stating negative impressions of the group doesn't improve impressions of the group.
↑ comment by JoshuaFox · 2012-03-15T12:11:18.586Z · LW(p) · GW(p)
Most organizations spend most of their money on staff. What else could you do with it? Paying fellowships for "external staff" is a possibility. But in general, good people are exactly what you need.
Replies from: timtyler, epicureanideal↑ comment by timtyler · 2012-03-15T12:25:57.425Z · LW(p) · GW(p)
Often goods or needy beneficiaries are also involved. Charity expenditures are sometimes classified into:
- Program Expenses
- Administrative Expenses
- Fundraising Expenses
This can be used as a heuristic for identifying good charities.
Not enough in category 1 and too much in categories 2 and 3 is often a bad sign.
Replies from: Larks↑ comment by Larks · 2012-03-16T01:07:52.889Z · LW(p) · GW(p)
But they're not buying malaria nets, they're doing thought-work. Do you expect to see an invoice for TDT?
Quite apart from the standard complaint about how awful a metric that is.
↑ comment by epicureanideal · 2012-03-16T02:40:05.121Z · LW(p) · GW(p)
And yet there are plenty of things that don't cost much money that they could be doing right now, that I have previously mentioned to SIAI staff and will not repeat (edit: in detail) because it might interfere with my own similar efforts in the near future.
Basically I'm referring to public outreach, bringing in more members of the academic community, making people aware that LW even exists (I wasn't except when I randomly ran into a few LWers in person), etc.
What's the reason for downvoting this? Please comment.
↑ comment by epicureanideal · 2012-03-16T02:37:34.765Z · LW(p) · GW(p)
As I've discussed with several LWers in person, including some staff and visiting fellows, one of the things I disliked about LW/SIAI was that so much of the resources of the organization go to pay the staff. They seemingly wouldn't even consider proposals to spend a few hundred dollars on other things because they claimed it was "too expensive".
↑ comment by TheAncientGeek · 2014-05-20T13:33:44.819Z · LW(p) · GW(p)
add
Leader(s) are credited with expertise beyond that of conventional experts, in subjects they are not conventionally qualified in.
Studying conventional versions of subjects is deprecated in favour of in-group versions.
↑ comment by JoshuaFox · 2014-05-20T18:03:48.973Z · LW(p) · GW(p)
Also:
Associated with non-standard and non-monogamous sexual practices.
(Just some more pattern-matching on top of what you see in the parent and grandparent comment. I don't actually think this is a strong positive indicator.)
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-20T18:24:58.462Z · LW(p) · GW(p)
The usual version of that indicator is "leader has sex with followers"
↑ comment by MTGandP · 2012-09-08T20:38:10.127Z · LW(p) · GW(p)
One fundamental difference between LW and most cults is that LW tells you to question everything, even itself.
Replies from: gwern↑ comment by gwern · 2012-09-08T21:25:13.183Z · LW(p) · GW(p)
Most, but not all. The Randians come to mind. Even the Buddha encouraged people to be critical, but that doesn't seem to have stopped the cults. I was floored to learn a few weeks ago that Buddhism has formalized even the point at which you stop doubting! When you stop doubting, you become a Sotāpanna; a Sotāpanna is marked by abandoning '3 fetters', the second fetter according to Wikipedia being
Skeptical Doubt - Doubt about the Buddha and his teaching is eradicated because the Sotāpanna personally experiences the true nature of reality through insight, and this insight confirms the accuracy of the Buddha’s teaching.
As well, as unquestioningness becomes a well known trait of cults, cults tend to try to hide it. Scientology hides the craziest dogmas until you're well and hooked, for example.
Replies from: None, MTGandP↑ comment by [deleted] · 2013-01-04T16:32:16.801Z · LW(p) · GW(p)
If the Randians are a cult, LW is a cult.
Like the others, the members just think it's unique in being valid.
Replies from: Desrtopa, MugaSofer↑ comment by Desrtopa · 2013-01-04T17:02:07.704Z · LW(p) · GW(p)
If a person disagrees with Rand about a number of key beliefs, do they still count as a Randian?
Replies from: Peterdjones, None↑ comment by Peterdjones · 2013-01-10T00:40:07.948Z · LW(p) · GW(p)
If they don't count as an Orthodox Randian, they can always become a Liberal Randian
↑ comment by MTGandP · 2012-09-08T21:28:58.768Z · LW(p) · GW(p)
So there comes a point in Buddhism where you're not supposed to be skeptical anymore. And Objectivists aren't supposed to question Ayn Rand.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2012-09-09T00:42:18.798Z · LW(p) · GW(p)
Would it be productive to be skeptical about whether your login really starts with the letter "M"? Taking an issue off the table and saying, we're done with that, is not in itself a bad sign. The only question is whether they really do know what they think they know.
I personally endorse the very beginning of Objectivist epistemology - I mean this: "Existence exists—and the act of grasping that statement implies two corollary axioms: that something exists which one perceives and that one exists possessing consciousness, consciousness being the faculty of perceiving that which exists." It's the subsequent development which is a mix of further gemlike insights, paths not taken, and errors or uncertainties that are papered over.
In the case of Buddhism, one has the usual problem of knowing, at this historical distance, exactly what psychological and logical content defined "enlightenment". One of its paradoxes is that it sounds like the experience of a phenomenological truth, and yet the key realization is often presented as the discovery that there is no true self or substantial self. I would have thought that achieving reflective consciousness implied the existence of a reflector, just as in the Objectivist account. Then again, reflection can also produce awareness that traits with which you have identified yourself are conditioned and contingent, so it can dissolve a naive concept of self, and that sounds more like the Buddhism we hear about today. The coexistence of a persistent observing consciousness, and a stream of transient identifications, in certain respects is like Hinduism; though the Buddhists can strike back by saying that the observing consciousness is not eternal and free of causality, it too exists only if it has been caused to exist.
So claims to knowledge, and the existence of a stage where you no longer doubt that this really is knowledge, and get on with developing the implications, do not in themselves imply falsity. In a systematic philosophy based on reason, a description which covers Objectivism, Buddhism, and Less-Wrong-ism, there really ought to be some notion of a development that occurs as you as learn.
The alternative is Zen Rationalism: if you meet a belief on the road (of life), doubt it! It's a good heuristic if you are beset by nonsense, and it even has a higher form in phenomenological or experiential rationalism, where you test the truth of a proposition about consciousness by seeing whether you can plausibly deny it, even as the experience is happening. But if you do this, even while you keep returning to beginner's mind, you should still be dialectically growing your genuine knowledge about the nature of reality.
↑ comment by timtyler · 2012-03-15T11:28:01.301Z · LW(p) · GW(p)
Not just a cult - an END OF THE WORLD CULT.
My favourite documentary on the topic: The End of The World Cult.
comment by CasioTheSane · 2012-03-15T04:05:24.549Z · LW(p) · GW(p)
Did anyone reading this initially get the impression that Less Wrong was cultish when they first discovered it?
I only discovered LW about a week ago, and I got the "cult" impression strongly at first, but decided to stick around anyway because I am having fun talking to you guys, and am learning a lot. The cult impression faded once I carefully read articles and threads on here and realized that they really are rational, well argued concepts rather than blindly followed dogma. However, it takes time and effort to realize this, and I suspect that the initial appearance of a cult would turn many people off from putting out that time and effort.
For a newcomer expecting discussions about practical ways to overcome bias and think rationally, the focus on things like transhumanism and singularity development seems very weird: those appear to be pseudo-religious ideas with no obvious connection to rationality or daily life.
AI and transhumanism are very interesting, but are distinct concepts from rationality. I suggest moving singularity and AI specific articles to a different site, and removing the singularity institute and FHI links from the navigation bar.
There's also the problem of having a clearly defined leader, with strong controversial opinions which are treated like gospel. I would expect a community which discusses rationality to be more of an open debate/discussion between peers, without any philosophical leaders that everybody agrees with. I don't see any easy solution here, because Eliezer Yudkowsky's reputation here is well earned: he actually is exceptionally brilliant and rational.
I would also like to see more articles on how to avoid bias, and apply bayesian methods to immediate present day problems and decision making. How can we avoid bias and correctly interpret data from scientific experiments, and then apply this knowledge to make good choices about things such as improving our own health?
Replies from: jsteinhardt, beriukay↑ comment by jsteinhardt · 2012-03-15T05:02:59.503Z · LW(p) · GW(p)
Random nitpick: a substantial portion of LW disagrees with Eliezer on various issues. If you find yourself actually agreeing with everything he has ever said, then something is probably wrong.
Slightly less healthy for overall debate is that many people automatically attribute a toxic/weird meme to Eliezer whenever it is encountered on LW, even in instances where he has explicitly argued against it (such as utility maximization in the face of very small probabilities).
↑ comment by beriukay · 2012-03-15T04:52:17.607Z · LW(p) · GW(p)
Upvoted for sounding a lot like the kinds of complaints I've heard people say about LW and SIAI.
There is a large barrier to entry here, and if we want to win more, we can't just blame people for not understanding the message. I've been discussing with a friend what is wrong with LW pedagogy (though he admits that it is certainly getting better). To paraphrase his three main arguments:
- We often use nomenclature without necessary explanation for a general audience. Sure, we make generous use of hyperlinks, but without some effort to bridge the gap in the body of our text, we aren't exactly signalling openness or friendliness.
- We have a tendency to preach to the converted. Or as the friend said:
It's that classic mistake of talking in a way where you're convincing or explaining something to yourself or the well-initiated instead of laying out the roadwork for foreigners.
He brought up an example of how material might be introduced to newly exposed folk.
If This American Life explained the financial crisis in an hour so that four million people improved on a written test on the subject, it's clear you can explain complicated material from near-scratch.
The curse of knowledge can be overcome, but it takes desire and some finesse.
- If we intend to win the hearts and minds of the people (or at least make a mark in the greater world), we might want to work on evocative imagery that isn't immediately cool to futurists and technophiles and sci-fi geeks. Sure, keep the awesome stuff we have, but maybe look for metaphors that work in other domains. In my mind, ideally, we should build a database of ideas and their parallels in other fields (using some degree of field work to actually find the words that work). Eliezer has done some great work this way, like with HP:MoR, and some of his short stories. Maybe the SIAI could shell out money to fund focus groups and interviews a la Luntz, who in my mind is a great Dark Side example of winning.
Edit for formatting and to mention that outreach and not seeming culty seem to be intertwined in a weird way. It is obvious to me that being The Esoteric Order Of LessWrong doesn't do the world any favors (or us, for that matter), but that by working on outreach, we can be accused of proselytizing. I think it comes down to doing what works without doing the death spiral stuff. And it seems to me that no matter what is done, detractors are going to detract.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-03-15T11:14:11.640Z · LW(p) · GW(p)
If This American Life explained the financial crisis in an hour so that four million people improved on a written test on the subject, it's clear you can explain complicated material from near-scratch.
That's an inspiring goal, but it might be worth pointing out that the This American Life episode was extraordinary-- when I heard it, it seemed immediately obvious that this was the most impressively clear and efficient hour I'd heard in the course of a lot of years of listening to NPR.
I'm not saying it's so magical that it can't be equaled, I'm saying that it might be worth studying.
comment by IlyaShpitser · 2012-03-15T17:31:26.421Z · LW(p) · GW(p)
Here's what an outsider might see:
"doomsday beliefs" (something "bad" may happen eschatologically, and we must work to prevent this): check
a gospel (The Sequences): check
vigorous assertions of untestable claims (Everett interpretation): check
a charismatic leader extracting a living from his followers: check
is sometimes called a cult: check
This is enough to make up a lot of minds, regardless of any additional distinctions you may want to make, sadly.
Replies from: None, advancedatheist↑ comment by [deleted] · 2012-03-15T21:24:45.892Z · LW(p) · GW(p)
But an outsider would have to spend some time here to see all those things. If they think LW is accurately described by the c-word even after getting acquainted with the site, there might be no point in trying to change their minds. It's better to focus on people who are discouraged by first impressions.
↑ comment by advancedatheist · 2012-03-18T02:41:04.466Z · LW(p) · GW(p)
I recently read an article about Keith Raniere, the founder of a cult called NXIVM (pronounced "nexium"):
http://www.timesunion.com/local/article/Secrets-of-NXIVM-2880885.php
Raniere reminds me of Yudkowsky, especially after reading cult expert Rick Ross's assessment of Raniere:
Rick Ross has been a cult tracker for more than 25 years. He has examined and spoken about NXIVM so extensively it spawned an ongoing federal lawsuit from Raniere for publicizing portions of NXIVM's training program. That legal battle with NXIVM, where he is countersuing, is entering its ninth year. Ross has been qualified and accepted as an expert witness regarding cults and cultlike groups in the courts of 10 states and has been used by the federal government as a consultant. He has spent 50 to 100 hours talking with NXIVM members, he said, and additional time talking with ex-members, which is why he said he's confident in his view that Raniere is a cult leader. Ross has been retained by three former NXIVM members to help in deprogramming, and he has counseled several others, including one he said was sent into a psychotic episode from her NXIVM experience. "In my opinion, NXIVM is one of the most extreme groups I have ever dealt with in the sense of how tightly wound it is around the leader, Keith Raniere," Ross said in an interview. Ross was asked to provide insight on David Koresh to the federal government during the height of the Waco situation and says Raniere shows characteristics similar to Koresh. Like the infamous leader of the Branch Davidians, Ross said, Raniere thinks he knows a way to reorder human existence, believes he is on the cutting edge of the new wave of the future, has followers who see him as a savior and uses his position of power to gain sexual favors from women.
comment by gRR · 2012-03-15T11:59:17.031Z · LW(p) · GW(p)
I'm here for only a couple of months, and I didn't have any impression of cultishness. I saw only a circle of friends doing a thing together, and very enthusiastic about it.
What I also did see (and still do) is specific people just sometimes being slightly crazy, in a nice way. As in: Eliezer's treatment of MWI. Or the way-too-serious fear of weird acausal dangers that fall out of our current best decision theories.
Note: this impression is not because the ideas are crazy, but because they are taken too seriously too early. However, the relevant posts always have sane critical comments, heavily upvoted.
I'm slightly more alarmed by posts like How would you stop Moore's Law?. I mean, seriously thinking of AI dangers is good. Seriously considering nuking Intel's fabs in order to stop the dangers is... not good.
Replies from: Luke_A_Somers, wedrifid↑ comment by Luke_A_Somers · 2012-03-15T16:19:50.636Z · LW(p) · GW(p)
Agreed, except the treatment of MWI does not seem the least bit crazy to me. But what do I know - I'm a crazy physicist.
Replies from: roystgnr↑ comment by roystgnr · 2012-03-15T17:24:57.163Z · LW(p) · GW(p)
The conclusions don't seem crazy (well, they seem "crazy-but-probably-correct", just like even the non-controversial parts of quantum mechanics), but IIRC the occasional emphasis on "We Have The One Correct Answer And You All Are Wrong" rang some warning bells.
On the other hand: Rationality is only useful to the extent that it reaches conclusions that differ from e.g. the "just believe what everyone else does" heuristic. Yet when any other heuristic comes up with new conclusions that are easily verified, or even new conclusions which sound plausible and aren't disprovable, "just believe what everyone else does" quickly catches up. So if you want a touchstone for rationality in an individual, you need to find a question for which rational analysis leads to an unverifiable, implausible sounding answer. Such a question makes a great test, but not such a great advertisement...
Replies from: Dmytry↑ comment by wedrifid · 2012-03-15T14:03:55.052Z · LW(p) · GW(p)
I saw only a circle of friends doing a thing together, and very enthusiastic about it.
That's a positive impression. People really look that enthusiastic and well bonded?
Replies from: gRR↑ comment by gRR · 2012-03-15T14:19:35.055Z · LW(p) · GW(p)
Yes to well bonded. People here seem to understand each other far better than average on the net, and it is immediately apparent.
Enthusiastic is a wrong word, I suppose. I meant, sure of doing a good thing, happy to be doing it, etc, not in the sense of applauding and cheering.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-15T14:35:02.090Z · LW(p) · GW(p)
Yes to well bonded. People here seem to understand each other far better than average on the net, and it is immediately apparent.
Thank you. It is good to be reminded that these things are relative. Sometimes I forget to compare interactions to others on the internet and instead compare them to interactions with people as I would prefer them to be, or even just interactions with people I know in person (and have rather ruthlessly selected for not being annoying).
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-15T01:14:04.918Z · LW(p) · GW(p)
Speaking for myself, I know of at least four people who know of Less Wrong/SI but are not enthusiasts, possibly due to atmosphere issues.
An acquaintance of mine attends Less Wrong meetups and describes most of his friends as being Less Wrongers, but doesn't read Less Wrong and privately holds reservations about the entire singularity thing, saying that we can't hope to say much about the future more than 10 years in advance. He told me that one of his coworkers is also skeptical of the singularity.
A math student/coder I met at an entrepreneurship event told me Less Wrong had good ideas but was "too pretentious".
I was interviewing for an internship once, and the interviewer and I realized we had a mutual acquaintance who was a Less Wronger and SI donor. He asked me if I was part of that entire group, and I said yes. His attitude was a bit derisive.
Replies from: timtyler, daenerys, Nisan↑ comment by timtyler · 2012-03-15T11:47:38.685Z · LW(p) · GW(p)
The FHI are trying to do a broadly similar thing from within academia. They seem less kooky and cultish - probably as a result of trying harder to avoid cultishness.
Replies from: None↑ comment by [deleted] · 2012-03-16T18:55:57.349Z · LW(p) · GW(p)
I don't know why you would assume that it's "probably as a result of trying harder to avoid cultishness." My prior is that they just don't seem cultish because academics are often expected to hold unfamiliar positions.
Replies from: BrandonReinhart↑ comment by BrandonReinhart · 2012-03-17T23:41:11.536Z · LW(p) · GW(p)
I will say that I feel 95% confident that SIAI is not a cult because I spent time there (mjcurzi was there also), learned from their members, observed their processes of teaching rationality, hung out for fun, met other people who were interested, etc. Everyone involved seemed well meaning, curious, critical, etc. No one was blindly following orders. In the realm of teaching rationality, there was much agreement it should be taught, some agreement on how, but total openness to failure and finding alternate methods. I went to the minicamp wondering (along with John Salvatier) whether the SIAI was a cult and obtained lots of evidence to push me far away from that position.
I wonder if the cult accusation comes in part from the fact that it seems too good to be true, so we feel a need for defensive suspicion. Rationality is very much about changing one's mind, and thinking about this, we become suspicious that the goals of SIAI are to change our minds in a particular way. Then we discover that in fact SIAI's goals are (in part) to change our minds in a particular way, so we think our suspicions are justified.
My model tells me that stepping into a church is several orders of magnitude more psychologically dangerous than stepping into a Less Wrong meetup or the SIAI headquarters.
(The other 5% goes to things like "they are a cult and totally duped me and I don't know it", "they are a cult and I was too distant from their secret inner cabals to discover it", "they are a cult and I don't know what to look for", "they aren't a cult but they want to be one and are screwing it up", etc. I should probably feel more confident about this than 95%, but my own inclination to be suspicious of people who want to change how I think means I'm being generous with my error. I have a hard time giving these alternate stories credit.)
↑ comment by daenerys · 2012-03-15T19:09:48.930Z · LW(p) · GW(p)
Speaking for myself, I know of at least four people who know of Less Wrong/SI but are not enthusiasts, possibly due to atmosphere issues.
I would consider myself a pretty far outlier on LessWrong (as a female, ENFP (people-person, impulsive/intuitive), Hufflepuff type). So on one hand, my opinion may mean less, because I am not generally the "type" of person associated with LW. On the other hand, if you want to expand LW to more people, then I think some changes need to be made for other "types" of people to also feel comfortable here.
Along with the initial "cult" impression (which eventually dissipates, IMO), what threw me most is the harshness of the forums. I've been on here for about 4 months now, and it's still difficult for me to deal with. Also, I agree that topics like FAI, and Singularitarianism aren't necessarily the best things to be discussing when trying to get people interested in rationality.
I am well-aware that the things that would make LW more comfortable for me and others like me, would make it less comfortable for many of the current posters. So there is definitely a conflict of goals.
Goal A- Grow LW and make rationality more popular- Need to make LW more "nice" and perhaps focused on Instrumental Rationality rather than Singularity and FAI issues.
Goal B- Maintain current culture and level of posts.- Need to NOT significantly change LW, and perhaps focus more on the obscure posts that are extremely difficult for newer people to understand.
AFAICT pursuit of either of these goals will be at the detriment of the other goal.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-03-15T22:56:48.700Z · LW(p) · GW(p)
Could you be more specific about what comes off as harsh to you?
If you'd rather address this as a private message, I'm still interested.
Replies from: John_Maxwell_IV, daenerys↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-15T23:11:32.543Z · LW(p) · GW(p)
What comes across as harsh to me: downvoting discussion posts because they're accidental duplicates or don't fit some idea of what a discussion post is supposed to be, a lot of downvoting that goes on in general, and unbridled or curt disagreement (like Grognor's response to my post. You saw him cursing and yelling, right? I made this post because I thought the Less Wrong community could use optimization on the topics I wrote about, not because I wanted to antagonize anyone.)
↑ comment by Nisan · 2012-03-15T19:29:27.591Z · LW(p) · GW(p)
A math student/coder I met at an entrepreneurship event told me Less Wrong had good ideas but was "too pretentious".
This person might have been in the same place as a math grad student I know. They read a little Less Wrong and were turned off. Then they attended a LW-style rationality seminar and responded positively, because it was more "compassionate". What they mean is this: A typical epistemology post on Less Wrong might sound something like
There are laws of probability; you can't just make up beliefs.
(That's not a quote.) Whereas the seminar sounded more like
We'll always have uncertainty, and we'll never be perfectly calibrated, but we can aspire to be better-calibrated.
Similarly, an instrumental-rationality post here might sound like
To the extent you fail to maximize some utility function, you can be Dutch-booked. Give me a penny to switch between these two gambles; give me another penny to switch back again. There: You have given me your two cents on the matter.
Whereas the seminar sounds more like
You must decide alone.
But you are not alone.
Of course, both approaches are good and necessary, and you can find both on Less Wrong.
comment by Viliam_Bur · 2012-03-15T13:20:40.683Z · LW(p) · GW(p)
Defending oneself from the cult accusation just makes it worse. Did you write a long excuse for why you are not a cult? Well, that's exactly what a cult would do, isn't it?
To be accused is to be convicted, because the allegation is unfalsifiable.
Trying to explain something draws more attention to the topic, from which people will notice only the keywords. The more complex the explanation you give, especially if it requires reading some of your articles, the worse it gets.
The best way to win is to avoid the topic.
Unfortunately, someone else can bring up this topic and be persistent enough to make it visible. (Did it really happen on a sufficient scale, or are we just imagining it?) Then the best approach is to give some short answer (not necessarily rational, but convincing as a cached thought) and then avoid the topic. For example: "So, what exactly is that evil thing people on LW did? Downvote someone's forum post? Seriously, guys, you need to get some life."
And now, everybody stop worrying and get some life. ;-)
It could also help to make the site seem a bit less serious. For example, put more emphasis on instrumental rationality on the front page. People discussing the best diet habits don't seem like a doomsday cult, right?
The Sequences could be recommended somewhat differently, for example: "In this forum we sometimes discuss some complicated topics. To make the discussion more efficient and avoid endlessly repeating the same arguments about statistics, evolution, quantum mechanics, et cetera, it is recommended to read the Sequences." Not like 'you have to do this', but rather like 'read the FAQ, please'. Also in discussion, instead of "read the Sequences" it is better to recommend one specific sequence, or one article.
Relax, be friendly. But don't hesitate to downvote a stupid post, even if the downvotee threatens to accuse you of whatever.
Replies from: roystgnr, epicureanideal, MugaSofer, Peterdjones↑ comment by epicureanideal · 2012-03-16T02:34:55.539Z · LW(p) · GW(p)
I don't think the best way to win is to avoid the topic. A healthy discussion of false impressions and how to correct them, or other failings a group may have, is a good indication to me of a healthy community. This post for example caused my impression of LW to increase somewhat, but some of the responses to it have caused my impression to decrease below its original level.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-03-16T08:44:05.967Z · LW(p) · GW(p)
Then let's discuss "false impressions", or even better "impressions" in general, not focusing on cultishness, which cannot even be defined (because there are so many different kinds of cults). If we focus on making things right, we do not have to discuss the hundred ways they could go wrong.
What is our community (trying to be) like?
Friendly. In more senses of the word: we speak about ethics, we are trying to make a nice community, we try to help each other become stronger and win.
Rational. Instead of superstition and gossip, we discuss how and why things really happen. Instead of happy death spirals, we learn about the world around us.
Professional. By that I do not mean that everyone here is an AI expert, but that the things we do and value here (studying, politeness, exactness, science) are things that for most people correlate positively with their jobs, rather than free time. Even when we have fun, it's adult people having fun.
So where exactly in the space of human organizations do we belong? Which of the cached-thoughts can be best applied to us? People will always try to fit us to some existing model (for example: cult), so why not choose this model rationally? I am not sure, but "educational NGO" sounds close. Science, raising the sanity waterline, et cetera. By seeming as something well-known, we become less suspicious, more normal.
↑ comment by MugaSofer · 2013-01-10T12:15:46.238Z · LW(p) · GW(p)
The Sequences could be recommended somewhat differently, for example: "In this forum we sometimes discuss some complicated topics. To make the discussion more efficient and avoid endlessly repeating the same arguments about statistics, evolution, quantum mechanics, et cetera, it is recommended to read the Sequences." Not like 'you have to do this', but rather like 'read the FAQ, please'. Also in discussion, instead of "read the Sequences" it is better to recommend one specific sequence, or one article.
This.
Seriously, we need to start doing all the stuff recommended here, but this is perhaps the simplest and most immediate. Someone go do it.
↑ comment by Peterdjones · 2013-01-10T00:22:46.665Z · LW(p) · GW(p)
The best way to win is to avoid the topic.
No. The best way is to not be a cult. Since cults are the most efficient known ways of engendering irrational bias, and since LW's mission statement is the avoidance of irrationality and bias, that is something LW should be doing anyway.
Here's how:
Don't have a leader
Don't have a gospel
Don't have a dogma
Don't have quasi-religious "meetups"
Don't have quasi-religious rituals (!)
Don't have an eschatology
Don't have a God.
WELCOME CRITICISM AND DISSENT
Summary:
Don't be a religion of rationality. Be the opposite of a religion.
Replies from: Vladimir_Nesov, wedrifid, MugaSofer↑ comment by Vladimir_Nesov · 2013-01-10T00:34:31.363Z · LW(p) · GW(p)
Be the opposite of a religion.
Reversed stupidity is not intelligence.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T01:31:32.608Z · LW(p) · GW(p)
Worship of Yud-Suthoth is not rationality.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-01-10T09:08:41.780Z · LW(p) · GW(p)
... and?
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T12:49:21.988Z · LW(p) · GW(p)
Intoning that "reversed stupidity is not intelligence" is not going to switch any Less Wronger's brain back on.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-01-10T13:08:00.617Z · LW(p) · GW(p)
You misunderstand me. No-one is worshiping Yud-Suthoth and calling it rationality. You proposed, in essence, that we disregard everything connected with religion - this is precisely the fallacy "reversed stupidity is not intelligence" is intended to address. When this was pointed out, you responded with what can only be charitably interpreted as meaning "but religions are irrational!" which isn't really addressing his point, or indeed any point, since no-one is proposing starting a religion or for that matter joining one.
EDIT: sodding karma toll.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T16:10:32.659Z · LW(p) · GW(p)
No-one is worshiping Yud-Suthoth and calling it rationality.
Something like that is happening. For instance, sending people off to the sequences to find The Answer, when the sequences don't even say anything conclusive.
this is precisely the fallacy "reversed stupidity is not intelligence" is intended to address.
That's just a slogan, not some universal law.
↑ comment by wedrifid · 2013-01-10T09:39:52.776Z · LW(p) · GW(p)
Be the opposite of a religion.
The opposite of most sorts of stupid is still stupid. Particularly most things that are functional enough to proliferate themselves successfully.
Don't have a leader
If you meant "Have more than one leader" you'd be on to something. That isn't what you meant though.
Don't have a gospel
There is a difference between the connotations you are going with for 'gospel' and what amounts to a textbook that most people haven't read anyway.
Don't have a dogma
I sometimes wish people would submit to rudimentary references to rational, logical, decision-theoretic or scientific concepts as if they were dogma. That is far from what I observe.
Don't have quasi-religious "meetups"
Socialize in person with rudimentary organisation? Oh the horror!
Don't have quasi-religious rituals (!)
Actually, I don't disagree at all on this one. Or at least I'd prefer that anyone who was into that kind of thing did it without it being affiliated with lesswrong in any way except partial membership overlap.
Don't have an eschatology
Are you complaining (or shaming with labels the observation) that an economist and an AI researcher attempted to use their respective expertise to make predictions about the future?
Don't have a God.
Don't. Working on it...
WELCOME CRITICISM AND DISSENT
Most upvoted post. Welcome competent, sane or useful criticism. Don't give nonsense a free pass just because it is 'dissent'.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T12:23:45.082Z · LW(p) · GW(p)
If you meant "Have more than one leader" you'd be on to something. That isn't what you meant though.
How do you know? Multiple leaders at least dilute the problem.
There is a difference between the connotations you are going with for 'gospel' and what amounts to a textbook that most people haven't read anyway.
I've read it. There's some time I'll never get back.
I sometimes wish people would submit to rudimentary references to rational, logical, decision-theoretic or scientific concepts as if they were dogma.
Not what I meant. Those can be studied anywhere. "MWI is the correct interpretation of QM" is an example of dogma.
Socialize in person with rudimentary organisation?
Other rationalists manage without it.
Are you complaining (or shaming with labels the observation) that an economist and an AI researcher attempted to use their respective expertise to make predictions about the future?
No, I am referring to the mind-killing aspects of the mythos: it fools people into thinking they are Saving the World. This sense of self-importance is yet another mind-killer. Instead of examining ideas dispassionately, as they should, they develop a mentality of "No, don't take my important world-saving role away from me! I cannot tolerate any criticism of these ideas, because then I will go back to being an ordinary person".
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2013-01-10T13:06:47.684Z · LW(p) · GW(p)
Don't give nonsense a free pass just because it is 'dissent'.
Is this nonsense?
It contains five misspellings in a single paragraph: "utimately", "canot", "statees", "hvae", "ontoogical", which might themselves be enough for a downvote, regardless of content.
As for the is-ought problem, if we accept that "ought" is just a matter of calculations in our brain returning an output (and reject that it's a matter of e.g. our brain receiving supernatural instruction from some non-physical soul), then the "ought" is describable in terms of the world-that-is, because every algorithm in our brain is describable in terms of the world-that-is.
It's not a matter of "cramming" an entire world-state into your brain -- any approximation that your brain is making, including any self-identified deficiency in the ability to make a moral evaluation in any particular situation, are also encoded in your brain -- your current brain, not some hypothetical superbrain.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T13:09:45.528Z · LW(p) · GW(p)
As for the is-ought problem, if we accept that "ought" is just a matter of calculations in our brain returning an output
But we shouldn't accept that, because we can miscalculate an "ought" or anything else. The is-ought problem is the problem of correctly inferring an ought from a tractable amount of "is's".
(and reject that it's a matter of e.g. our brain receiving supernatural instruction from some non-physical soul), then the "ought" is describable in terms of the world-that-is, because every algorithm in our brain is describable in terms of the world-that-is.
It perhaps might be one day given sufficiently advanced brain scanning, but we don't have that now, so we still have an is-ought gap.
It's not a matter of "cramming" an entire world-state into your brain -- any approximation that your brain is making, including any self-identified deficiency in the ability to make a moral evaluation in any particular situation, are also encoded in your brain -- your current brain, not some hypothetical superbrain.
The is-ought problem is epistemic. Being told that I have an epistemically inaccessible black box in my head that calculates oughts still doesn't lead to a situation where oughts can be consciously understood as correct entailments of is's.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-01-10T13:19:46.243Z · LW(p) · GW(p)
because we can miscalculate an "ought" or anything else.
One way to miscalculate an "ought" is the same way that we can miscalculate an "is" -- e.g. lack of information, erroneous knowledge, false understanding of how to weigh data, etc.
And also, because people aren't perfectly self-aware, we can mistake mere habits or strongly-held preferences to be the outputs of our moral algorithm -- same way that e.g. a synaesthete might perceive the number 8 to be colored blue, even though there's no "blue" light frequency striking the optical nerve. But that sort of thing doesn't seem as a very deep philosophical problem to me.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T13:30:05.666Z · LW(p) · GW(p)
We can correct miscalculations where we have a conscious epistemic grasp of how the calculation should work. If morality is a neural black box, we have no such grasp. Such a neural black box cannot be used to plug the is-ought gap, because it does not distinguish correct calculations from miscalculations.
↑ comment by MugaSofer · 2013-01-10T09:26:07.143Z · LW(p) · GW(p)
- Don't have a leader
Leaders are useful. Pretty much every cause/movement/group has leadership of some kind.
- Don't have a gospel
I'm not really sure how the sequences map onto the Christian Gospel. A catechism, maybe.
- Don't have a dogma
Assuming we don't excommunicate people for disagreeing with it (politely), I'm not sure why not. I mean, we mostly agree that there's no God, for example; rationality should, presumably, move us closer to the correct position, and if most of us agree that we've probably found it, why shouldn't we assume members agree unless they indicate otherwise?
Or did you have a different meaning of "dogma" in mind?
- Don't have quasi-religious "meetups"
Because meeting people with similar interests and goals is only done via religion.
- Don't have quasi-religious rituals (!)
Has anyone who's not a member of this site actually used those rituals as evidence of phygishness? Genuinely asking here.
- Don't have an eschatology
Because any idea that predicts the end of the world must be discarded a priori?
- Don't have a God.
Because any idea you place in the reference class "god" must be discarded a priori?
- WELCOME CRITICISM AND DISSENT
An excellent suggestion! In theory, we already do (we could probably do better on this.) Trolling, however, is not generally considered part of that.
- Be the opposite of a religion.
I'm not even going to bother linking to the appropriate truism, but reversed stupidity etc.
EDIT: dammit stupid karma toll cutting off my discussions.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T12:01:45.694Z · LW(p) · GW(p)
Leaders are useful.
Leaders cause people to lapse into thinking "The Guru has an answer, even if I don't understand it". This is already happening on LW.
I'm not really sure how the sequences map onto the Christian Gospel
People say "The answer is in the Sequences" without bothering to check that it is.
Assuming we don't excommunicate people for disagreeing with it (politely), I'm not sure why not. I mean, we mostly agree that there's no God, for example; rationality should, presumably, move us closer to the correct position, and if most of us agree that we've probably found it, why shouldn't we assume members agree unless they indicate otherwise?
Rationalists should think and argue. However LWers just say "this is wrong" and downvote.
Because meeting people with similar interests and goals is only done via religion.
Other rationalists manage without them. LWers aren't aware of how religious they seem.
Because any idea that predicts the end of the world must be discarded a priori?
Because it fools people into thinking they are Saving the World. This sense of self-importance is yet another mind-killer. Instead of examining ideas dispassionately, as they should, they develop a mentality of "No, don't take my important world-saving role away from me! I cannot tolerate any criticism of these ideas, because then I will go back to being an ordinary person".
Because any idea you place in the reference class "god" must be discarded a priori?
See above. Leads to over-estimation of individual importance, and therefore emotional investment, and therefore mind-killing.
An excellent suggestion! In theory, we already do (we could probably do better on this.)
Trolling, however, is not generally considered part of that.
"Trolling" is the blind dogmatist's term for reasoned criticism.
I'm not even going to bother linking to the appropriate truism, but reversed stupidity etc.
Stupidity is stupidity, too.
comment by [deleted] · 2012-03-15T21:16:05.034Z · LW(p) · GW(p)
Some things that might be problematic:
We use the latest insights from cognitive science, social psychology, probability theory, and decision theory to improve our understanding of how the world works and what we can do to achieve our goals.
I don't think we actually do that. Insights, sure, but latest insights? Also, it's mostly cognitive science and social psychology. The insights from probability and decision theory are more in the style of the simple math of everything.
Want to know if your doctor's diagnosis is correct? It helps to understand Bayes' Theorem.
This might sound weird to someone who hasn't already read the classic example about doctors not being able to calculate conditional probabilities. Like we believe Bayes theorem magically grants us medical knowledge or something.
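For what it's worth, here is a minimal sketch of the kind of conditional-probability calculation that passage alludes to, with purely illustrative numbers rather than real medical data:

```python
# Minimal sketch of the base-rate calculation alluded to above.
# The numbers are purely illustrative, not real medical data.
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' Theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1% base rate, 80% sensitivity, 9.6% false-positive rate:
print(posterior(0.01, 0.8, 0.096))  # ~0.078, i.e. only about an 8% chance
# of disease given a positive test -- far lower than most people guess.
```

The point is not that Bayes grants medical knowledge, but that knowing the theorem keeps you from mistaking an 8% posterior for an 80% one.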
[the link to rationality boot-camp]
I'm not a native speaker of English so I can't really tell, but I recall people complaining that the name 'boot-camp' is super creepy.
On the about page:
Introduce yourself to the community here.
That's not cultish-sounding but it's unnecessarily imperative. Introduction thread is optional.
comment by fubarobfusco · 2012-03-15T07:00:34.374Z · LW(p) · GW(p)
Disclaimer: My partner and I casually refer to LW meetups (which I attend and she does not) as "the cult".
That said, if someone asked me if LW (or SIAI) "was a cult", I think my ideal response might be something like this:
"No, it's not; at least not in the sense I think you mean. What's bad about cults is not that they're weird. It's that they motivate people to do bad things, like lock kids in chain-lockers, shun their friends and families, or kill themselves). The badness of being a cult is not being weird; it's doing harmful things — and, secondarily, in coming up with excuses for why the cult gets to do those harmful things. Less Wrong is weird, but not harmful, so I don't think it is a cult in the sense you mean — at least not at the moment.
"That said, we do recognize that "every cause wants to be a cult", that human group behavior does sometimes tend toward cultish, and that just because a group says 'Rationality' on the label does not mean it contains good thinking. Hoping that we're special and that the normal rules of human behavior don't apply to us, would be a really bad idea. It seems that staying self-critical, understanding how cults happen and why, and consciously taking steps to avoid making in-group excuses for bad behavior or bad thinking, is a pretty good strategy for avoiding becoming a cult."
Replies from: Viliam_Bur, roystgnr, CasioTheSane↑ comment by Viliam_Bur · 2012-03-16T09:07:04.476Z · LW(p) · GW(p)
What's bad about cults is not that they're weird. It's that they motivate people to do bad things...
People use "weird" as a heuristic for danger, and personally I don't blame them, because they have good Bayesian reasons for it. Breaking a social norm X is positively correlated with breaking a social norm Y, and the correlation is strong enough for most people to notice.
The right thing to do is to show enough social skill to avoid triggering the weirdness alarm. (Just like publishing in serious media is the right way to avoid the "pseudoscience" label.) You cannot expect that outsiders will make an exception for LW, suspend their heuristics and explore the website deeply; that would be asking them to privilege a hypothesis.
If something is "weird", we should try to make it less weird. No excuses.
Replies from: ryjm↑ comment by ryjm · 2012-03-21T13:42:26.845Z · LW(p) · GW(p)
So we should be Less Weird now? ;)
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-03-21T15:22:02.845Z · LW(p) · GW(p)
We should be winning.
Less Weird is a good heuristic for winning (though a bad heuristic for a site name).
↑ comment by roystgnr · 2012-03-15T17:33:40.729Z · LW(p) · GW(p)
Often by the time a cult starts doing harmful things, its members have made both real and emotional investments that turn out to be nothing but sunk costs. To avoid ever getting into such a situation, people come up with a lot of ways to attempt to identify cults based on nothing more than the non-harmful, best-foot-forward appearance that cults first try to project. If you see a group using "love bombing", for instance, the wise response is to be wary - not because making people feel love and self-esteem is inherently a bad thing, but because it's so easily and commonly twisted toward ulterior motives.
↑ comment by CasioTheSane · 2012-03-15T11:43:59.627Z · LW(p) · GW(p)
Less Wrong is weird, but not harmful
That is until people start bombing factories to mitigate highly improbable existential risks.
comment by Bugmaster · 2012-03-15T20:33:10.576Z · LW(p) · GW(p)
Did anyone reading this initially get the impression that Less Wrong was cultish when they first discovered it?
What do you mean, "initially" ? I am still getting that impression ! For example, just count the number of times Eliezer (who appears to only have a single name, like Prince or Jesus) is mentioned in the other comments on this post. And he's usually mentioned in the context of, "As Eliezer says...", as though the mere fact that it is Eliezer who says these things was enough.
The obvious counter-argument to the above is, "I like the things Eliezer says because they make sense, not because I worship him personally", but... well... that's what one would expect a cultist to say, no?
Less Wrongers also seem to have their own vocabulary ("taboo that term or risk becoming mind-killed, which would be un-Bayesian"). We spend a lot of time worrying about doomsday events that most people would consider science-fictional (at best). We also cultivate a vaguely menacing air of superiority, as we talk about uplifting the ignorant masses by spreading our doctrine of rationality. As far as warning signs go, we've got it covered...
Replies from: epicureanideal, None, Martin-2↑ comment by epicureanideal · 2012-03-16T02:41:45.140Z · LW(p) · GW(p)
Specialized terminology is really irritating to me personally, and off-putting to most new visitors I would think. If you talk to any Objectivists or other cliques with their own internal vocabulary, it can be very bothersome. It also creates a sense that the group is insulated from the rest of the world, which adds to the perception of cultishness.
↑ comment by [deleted] · 2012-03-15T20:41:58.162Z · LW(p) · GW(p)
We also cultivate a vaguely menacing air of superiority, as we talk about uplifting the ignorant masses by spreading our doctrine of rationality
I think the phrase 'raising the sanity waterline' is a problem. As is the vaguely religious language, like 'litany of Tarski'. I looked up the definition of 'litany' to make sure I was picking up on a religious denotation and not a religious connotation, and here's what I got:
A series of petitions for use in church services, usually recited by the clergy and responded to in a recurring formula by the people.
Not a great word, I think. Also 'Bayesian Conspiracy.' There's no conspiracy, and there shouldn't be.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-15T20:57:01.934Z · LW(p) · GW(p)
Agreed. I realize that the words like "litany" and "conspiracy" are used semi-ironically, but a newcomer to the site might not.
Replies from: William_Quixote, Eneasz↑ comment by William_Quixote · 2012-09-07T03:16:31.200Z · LW(p) · GW(p)
This wording may lose a few people, but it probably helps for many people as well. The core subject matter of rationality could very easily be dull or dry or "academic". The tongue-in-cheek and occasionally outright goofy humor makes the sequences a lot more fun to read.
The tone may have costs, but not being funny has costs too. If you think back to college, more professors have students tune out by being boring than by being esoteric.
Replies from: Jiro↑ comment by Jiro · 2014-05-20T17:35:28.261Z · LW(p) · GW(p)
(Responding to old post.)
One problem with such ironic usage is that people tend to joke about things that cause themselves stress, and that includes uncomfortable truths or things that are getting too close to the truth. It's why it actually makes sense to detain people making bomb jokes in airports. So just because the words are used ironically doesn't mean they can't reasonably be taken as signs of a cult--even by people who recognize that they are being used ironically.
(Although this is somewhat mitigated by the fact that many cults won't allow jokes about themselves at all.)
↑ comment by Eneasz · 2012-03-16T22:12:46.905Z · LW(p) · GW(p)
You'd have to be new to the entire internet to think those are being used seriously. And if you're THAT new, there's really very little that can be done to prevent misunderstanding no matter where you first land.
On top of that, it's extremely unlikely someone very new to the internet would start their journey at LessWrong
comment by Douglas_Reay · 2012-03-15T11:48:57.808Z · LW(p) · GW(p)
There doesn't seem to be anyone arguing seriously that Less Wrong is a cult, but we do give some newcomers that impression.
The LW FAQ says:
Why do you all agree on so much? Am I joining a cult?
We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all. Yes, some of the results that fall out of these basics sound weird if you haven't seen the reasoning behind them, but there's nothing in the laws of physics that prevents reality from sounding weird.
I suspect that putting a more human face on the front page, rather than just linky text, would help.
Perhaps something like a YouTube video version of the FAQ, featuring two (ideally personable) people talking about what Less Wrong is and is not, and how to get started on it. For some people, seeing is believing. It is one thing to tell them there are lots of different posters here and we're not fanatics; but that doesn't have the same impact as watching an actual human with body cues talking.
comment by Shephard · 2012-03-16T22:35:43.371Z · LW(p) · GW(p)
I don't believe LW is a cult, but I can see where intelligent, critical thinking people might get that impression. I also think that there may be elitist and clannish tendencies within LW that are detrimental in ways that could stand to be (regularly) examined. Vigilance against irrational bias is the whole point here, right? Shouldn't that be embraced on the group level as much as on an individual one?
Part of the problem as I see it is that LW can't decide if it's a philosophy/science or a cultural movement.
For instance, as already mentioned, there's a great deal of jargon, and there's a general attitude of impatience for anyone not thoroughly versed in the established concepts and terminology. Philosophies and sciences also have this problem, but the widely accepted and respected philosophical and scientific theories have proven themselves to the world (and weren't taken very seriously until they did). I personally believe there's a lot of substance to the ideas here, but LW hasn't delivered anything dramatic to the world at large. Until it does so it may remain, in the eyes of outsiders, as some kind of hybrid of Scientology and Objectivism - an insular group of people with a special language, a revered spokesperson, and who claim to have "the answers".
If, however, LW is supposed to be a cultural movement, then I'm sorry, but "ur doin it wrong". Cultural movements gain momentum by being inclusive and organic, and by creating a forum for people to express themselves without fear of judgment. Movements are bottom up, and LW often gives the impression of being top down.
I'm not saying that a choice has to be made or even can be made, merely that there are conflicting currents here. I don't know if I have any great suggestions. I guess the one thing I can say is that while I've observed (am observing) a lot of debate and self-examination internally, there's still a strong outward impression of having found "the answers". Perhaps this community could present itself a little more as a forum for the active practice of critical thinking, and a little less as the authoritative source for an established methodology of critical thinking.
And if that doesn't work, we could always try bus ads.
comment by Grognor · 2012-03-15T02:37:17.010Z · LW(p) · GW(p)
In general, I think we could stand more community effort being put into optimizing our about page, which you can do now here.
Thank you for this.
(In light of my other comment, I should emphasize that I really mean that. It is not sarcasm or any other kind of irony.)
comment by syzygy · 2012-03-15T08:12:32.459Z · LW(p) · GW(p)
I have seen this problem afflict other intellectually-driven communities, and believe me, it is a very hard problem to shake. Be grateful we aren't getting media attention. The adage, "All press is good press", has definitely been proven wrong.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-16T03:26:38.987Z · LW(p) · GW(p)
I assume that my post has aggravated things? :o(
comment by antigonus · 2012-03-15T07:31:31.967Z · LW(p) · GW(p)
The word "cult" never makes discussions like these easier. When people call LW cultish, they are mostly just expressing that they're creeped out by various aspects of the community - some perceived groupthink, say. Rather than trying to decide whether LW satisfies some normative definition of the word "cult," it may be more productive to simply inquire as to why these people are getting creeped out. (As other commenters have already been doing.)
Replies from: None
comment by Shmi (shminux) · 2012-03-18T04:15:43.408Z · LW(p) · GW(p)
I got a distinct cultish vibe when I joined, but only from the far-out parts of the site, like UFAI, not from the "modern rationality" discussions. When I raised the issue on #lesswrong, the reaction from most regulars was not very reassuring: somewhat negative and more emotional than rational. The same happened when I commented here. That's why I am looking forward to the separate rationality site, without EY's added idiosyncrasies, such as the singularity, UFAI and MWI, which are untestable and useless to me.
Replies from: jacoblyles↑ comment by jacoblyles · 2012-08-18T19:56:19.473Z · LW(p) · GW(p)
We should try to pick up "moreright.com" from whoever owns it. It's domain-parked at the moment.
Replies from: arborealhominid↑ comment by arborealhominid · 2013-07-28T00:40:52.224Z · LW(p) · GW(p)
Moreright.net already exists, and it's a "Bayesian reactionary" blog- that is, a blog for far-rightists who are involved in the Less Wrong community. It's an interesting site, but it strikes me as decidedly unhelpful when it comes to looking uncultish.
comment by quantropy · 2012-03-16T11:34:48.586Z · LW(p) · GW(p)
As I see it, Cult = Clique + Weird Ideas
I think the weird ideas are an integral part of LessWrong, and any attempt to disguise them with a fluffy introduction would be counterproductive.
What about Cliquishness? I think that the problem here is that any internet forum tends to become a clique. To take part you need to read through lots of posts, so it requires quite a commitment. Then there is always some indication of your status within the group - Karma score in this case.
My advice would be to link to some non-internet things. Why not have the FHI news feed and links to a few relevant books on Amazon in the column on the right?
comment by RobertLumley · 2012-03-15T00:50:03.211Z · LW(p) · GW(p)
"Twelve Virtues of Rationality" has always seemed really culty to me. I've never read it, which may be part of the reason. It just comes across as telling people exactly how they should be, and what they should value.
Also, I've never liked that quote about the Sequences. I agree with it, what I've read of the sequences (and it would be wrong to not count HPMOR in this) is by far the most important work I've ever read. But that doesn't mean that's what we should advertise to people.
Replies from: Grognor, John_Maxwell_IV↑ comment by Grognor · 2012-03-15T02:35:51.062Z · LW(p) · GW(p)
All you are saying here is "The title of the Twelve Virtues makes me feel bad." That is literally all you are saying, since you admit to not having read it.
I quote:
It's supposed to be strange. Strange gets attention. Strange sticks in the mind. Strange makes the truth memorable. Other suggestions are possible, I guess, but can the result be equally strange?
I'll tell you one thing. It got my attention. It got me interested in rationality. I've shown it to others; they all liked it or were indifferent. If you're going to say "culty" because of the title, you are both missing the (most important) point and failing to judge based on anything reasonable. And I don't particularly care if LW appeals to people who don't even try to be reasonable.
Replies from: None↑ comment by [deleted] · 2012-03-15T19:38:18.184Z · LW(p) · GW(p)
All you are saying here is "The title of the Twelve Virtues makes me feel bad." That is literally all you are saying, since you admit to not having read it.
That's still a useful data-point. Do we want to scare away people with strong weirdness filters?
Replies from: komponisto↑ comment by komponisto · 2012-03-15T20:28:50.531Z · LW(p) · GW(p)
Do we want to scare away people with strong weirdness filters?
The answer to this may very well turn out to be yes.
Replies from: NancyLebovitz, John_Maxwell_IV↑ comment by NancyLebovitz · 2012-03-15T22:53:50.959Z · LW(p) · GW(p)
What proportion of top people at SIAI love sf?
It's at least plausible that strong weirdness filters interfere with creativity.
On the other hand, weirdness is hard to define -- sf is a rather common sort of weirdness these days.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-17T05:58:52.711Z · LW(p) · GW(p)
There is no reason to turn them off right away. The blog itself is weird enough. Maybe they will be acclimated, which would be good.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-15T01:05:01.243Z · LW(p) · GW(p)
I almost forgot this, but I was pretty put off by the 12 virtues as well when I first came across it on reddit at age 14 or so. My reaction was something like "you're telling me I should be curious? What if I don't want to be curious, especially about random stuff like Barbie dolls or stamp collecting?" I think I might have almost sent Eliezer an e-mail about it.
When you put this together with what Eliezer called "the bizarre 'can't get crap done' phenomenon" that afflicts large fractions of our community, which he attributes to feelings of low status, this paints a picture of LW putting off the sort of person who is inclined to feel high status (and is therefore good at getting crap done, but doesn't like being told what to do). This may be unrelated to the cult issue.
Of course, these hypothetical individuals who are inclined to feel high status might not like being told how to think better either... which could mean that Less Wrong is not their cup of tea under any circumstances. But I think it makes sense to shift away from didacticism on the margin.
comment by Daniel_Starr · 2012-03-22T13:30:04.065Z · LW(p) · GW(p)
Ironically, I suspect the "cultlike" problem is that LessWrong/SI's key claims lack falsifiability.
Friendly AI? In the far future.
Self-improvement? All mental self-improvement is suspected of being a cult, unless it trains a skill outsiders are confident they can measure.
If I have a program for teaching people math, outsiders feel they know how they can check my claims - either my graduates are good at math or not.
But if I have a program for "putting you in touch with your inner goddess", how are people going to check my claims? For all outsiders know I'm making people feel good, or feel good about me, without actually making them meaningfully better.
Unfortunately, the external falsifiability of LW/SI's merits is more like the second case than the first. Especially, I suspect, for people who aren't already big fans of mathematics, information theory, probability, and potential AI.
Organization claims to improve a skill anyone can easily check = school. Organization claims to improve a quality that outsiders don't even know how to measure = cult.
If and when LW/SI can headline more easily falsifiable claims, it will be less cultlike.
I don't know if this is an immediately solvable problem, outside of developing other aspects of LW/SI that are more obviously useful/impressive to outsiders, and/or developing a generation of LW/SI fans who are indeed "winners" as rationalists ideally would be.
Replies from: roryokane↑ comment by roryokane · 2014-05-24T06:18:57.824Z · LW(p) · GW(p)
PredictionBook might help with measuring improvement, in a limited way. You can use it to measure how often your predictions are correct, and whether you are getting better over time. And you could theoretically ask LW-ers and non-LW-ers to make some predictions on PredictionBook, and then compare their accuracy to see if Less Wrong helped. Making accurate predictions of likelihood is a real skill that certainly has the possibility to be very useful – though it depends on what you’re predicting.
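One rough sketch of how such measurement could work: score each resolved prediction with a Brier score and watch the trend. This is just one standard option with hypothetical data, not PredictionBook's actual scoring code.

```python
# Brier score: mean squared error between stated probabilities and outcomes.
# Lower is better; a falling score over time suggests improving calibration.
# Hypothetical data -- not PredictionBook's actual implementation.
def brier_score(predictions):
    """predictions: list of (stated_probability, outcome) with outcome 0 or 1."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

early = [(0.9, 0), (0.7, 1), (0.8, 1), (0.6, 0)]  # hypothetical early record
later = [(0.8, 1), (0.3, 0), (0.7, 1), (0.9, 1)]  # hypothetical later record
print(brier_score(early), brier_score(later))     # 0.325 vs 0.0575
```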
comment by RomeoStevens · 2012-03-15T02:44:09.501Z · LW(p) · GW(p)
so I shouldn't refer people to death spirals and baby eating right away?
Replies from: antigonus, Kaj_Sotala↑ comment by Kaj_Sotala · 2012-03-15T07:04:41.974Z · LW(p) · GW(p)
Offer them a hamster in a tutu first, that'll be cute and put them at ease.
comment by aaronsw · 2012-08-05T23:22:53.355Z · LW(p) · GW(p)
I think the biggest reason Less Wrong seems like a cult is because there's very little self-skepticism; people seem remarkably confident that their idiosyncratic views must be correct (if the rest of the world disagrees, that's just because they're all dumb). There's very little attempt to provide any "outside" evidence that this confidence is correctly-placed (e.g. by subjecting these idiosyncratic views to serious falsification tests).
Instead, when someone points this out, Eliezer fumes "do you know what pluralistic ignorance is, and Asch's conformity experiment? ... your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong".
What's especially amusing is that EY is able to keep this stuff up by systematically ignoring every bit of his own advice: telling people to take the outside view and then taking the inside one, telling people to look into the dark while he studiously avoids it, emphasizing the importance of AI safety while he embarks on an extremely dangerous way of building AI -- you can do this with pretty much every entry in the sequences.
These are the sorts of things that make me think LessWrong is most interesting as a study in psychoceramics.
Replies from: John_Maxwell_IV, MugaSofer↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-08-06T07:23:19.626Z · LW(p) · GW(p)
There's very little attempt to provide any "outside" evidence that this confidence is correctly-placed (e.g. by subjecting these idiosyncratic views to serious falsification tests).
Offhand, can you think of a specific test that you think ought to be applied to a specific idiosyncratic view?
My read on your comment is: LWers don't act humble, therefore they are crackpots. I agree that LWers don't always act humble. I think it'd be a good idea for them to be more humble. I disagree that lack of humility implies crackpottery. In my mind, crackpottery is a function of your reasoning, not your mannerisms.
Your comment is a bit short on specific failures of reasoning you see--instead, you're mostly speaking in broad generalizations. It's fine to have general impressions, but I'd love to see a specific failure of reasoning you see that isn't of the form "LWers act too confident". For example, a specific proposition that LWers are too confident in, along with a detailed argument for why. Or a substantive argument for why SI's approach to AI is "extremely dangerous". (I personally know pretty much everyone who works for SI, and I think there's a solid chance that they'll change their approach if your argument is good enough. So it might not be a complete waste of time.)
you can do this with pretty much every entry in the sequences
Now it sounds like you're deliberately trying to be inflammatory ಠ_ಠ
Replies from: aaronsw↑ comment by aaronsw · 2012-08-06T11:50:08.189Z · LW(p) · GW(p)
Offhand, can you think of a specific test that you think ought to be applied to a specific idiosyncratic view?
Well, for example, if EY is so confident that he's proven "MWI is obviously true - a proposition far simpler than the argument for supporting SIAI", he should try presenting his argument to some skeptical physicists. Instead, it appears the physicists who have happened to run across his argument found it severely flawed.
How rational is it to think that you've found a proof that most physicists are wrong, and then never run it by any physicists to see if you're right?
My read on your comment is: LWers don't act humble, therefore they are crackpots.
I do not believe that.
As for why SI's approach is dangerous, I think Holden put it well in the most upvoted post on the site.
I'm not trying to be inflammatory, I just find it striking.
Replies from: Mitchell_Porter, Eliezer_Yudkowsky, Eliezer_Yudkowsky, John_Maxwell_IV↑ comment by Mitchell_Porter · 2012-08-07T10:07:38.613Z · LW(p) · GW(p)
it appears the physicists who have happened to run across his argument found it severely flawed
The criticisms at those links have nothing to do with the argument for MWI. They are just about a numerical mistake in an article illustrating how QM works.
The actual argument for MWI that is presented is something like this: Physicists believe that the wavefunction is real and that it collapses on observation, because that is the first model that explained all the data, and science holds onto working models until they are falsified. But we can also explain all the data by saying that the wavefunction is real and doesn't collapse, if we learn to see the wavefunction as containing multiple worlds that are equally real. The wavefunction doesn't collapse, it just naturally spreads out into separate parts and what we see is one of those separate parts. A no-collapse theory is simpler than a collapse theory because it has one less postulate, so even though there are no new predictions, by Bayes (or is it Occam?) we can favor the no-collapse theory over the collapse theory. Therefore, there are many worlds.
This is informal reasoning about which qualitative picture of the world to favor, so it is not something that can be verified or falsified by a calculation or an experiment. Therefore, it's not something that a hostile physicist could crisply debunk, even if they wanted to. In the culture of physics there are numerous qualitative issues where there is no consensus, and where people take sides on the basis of informal reasoning. Eliezer's argument is on that level; it is an expression in LW idiom, of a reason for believing in MWI that quite a few physicists probably share. It can't be rebutted by an argument along the lines that Eliezer doesn't know his physics, because it is an argument which (in another form) a physicist might actually make! So if someone wants to dispute it, they'll have to do so, just as if they were intervening in any of these informal professional disagreements which exist among physicists, by lines of argument about plausibility, future theoretical prospects, and so on.
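To make the Bayes/Occam step in that argument concrete (a minimal sketch in my own framing, not something from the sequence): if both theories account for the existing data equally well, the likelihoods cancel and the posterior odds reduce to the prior odds, which a simplicity prior awards to the theory with fewer postulates:

\[ \frac{P(\text{no-collapse} \mid D)}{P(\text{collapse} \mid D)} = \frac{P(D \mid \text{no-collapse})\, P(\text{no-collapse})}{P(D \mid \text{collapse})\, P(\text{collapse})} = \frac{P(\text{no-collapse})}{P(\text{collapse})} \]

Everything contentious is packed into how that prior ratio is assigned, which is exactly why the dispute stays at the level of informal plausibility arguments.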
ETA One more comment about the argument for MWI as I have presented it. Physicists don't agree that the wavefunction is real. The debate over whether it is real, goes all the way back to Schrodinger (it's a real physical object or field) vs Heisenberg (it's just a calculating device). The original Copenhagen interpretation was in Heisenberg's camp: a wavefunction is like a probability distribution, and "collapse" is just updating on the basis of new experimental facts (the electron is seen at a certain location, so the wavefunction should be "collapsed" to that point, in order to reflect the facts). I think it's von Neumann who introduced wavefunction realism into the Copenhagen interpretation (when he axiomatized QM), and thereby the idea of "observer-induced collapse of the wavefunction" as an objective physical process. Though wavefunction realism was always going to creep up on physicists, since they describe everything with wavefunctions (or state vectors) and habitually refer to these as "the state" of the object, rather than "the state of our knowledge" of the object; also because Copenhagen refused to talk about unobserved realities (e.g. where the electron is, when it's not being seen to be somewhere), an attitude which was regarded as prim positivistic virtue by the founders, but which created an ontological vacuum that was naturally filled by the de-facto wavefunction realism of physics practice.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-15T19:57:26.586Z · LW(p) · GW(p)
BTW, it's important to note that by some polls an actual majority of theoretical physicists now believe in MWI, and this was true well before I wrote anything. My only contributions are in explaining the state of the issue to nonphysicists (I am a good explainer), formalizing the gross probability-theoretic errors of some critiques of MWI (I am a domain expert at that part), and stripping off a lot of soft understatement that many physicists have to do for fear of offending sillier colleagues (i.e., they know how incredibly stupid the Copenhagen interpretation appears nowadays, but will incur professional costs from saying it out loud with corresponding force, because there are many senior physicists who grew up believing it).
The idea that Eliezer Yudkowsky made up the MWI as his personal crackpot interpretation isn't just a straw version of LW, it's disrespectful to Everett, DeWitt, and the other inventors of MWI. It does seem to be a common straw version of LW for all that, presumably because it's spontaneously reinvented any time somebody hears that MWI is popular on LW and they have no idea that MWI is also believed by a plurality and possibly a majority of theoretical physicists and that the Quantum Physics Sequence is just trying to explain why to nonphysicists / formalize the arguments in probability-theoretic terms to show their nonambiguity.
Replies from: Mitchell_Porter, aaronsw, Quantumental, fezziwig, Peterdjones↑ comment by Mitchell_Porter · 2012-08-16T03:50:45.044Z · LW(p) · GW(p)
by some polls
The original source for that "58%" poll is Tipler's The Physics of Immortality, where it's cited (chapter V, note 6) as "Raub 1991 (unpublished)". (I know nothing about the pollster, L. David Raub, except that he corresponded with Everett in 1980.) Tipler says that Feynman, Hawking, and Gell-Mann answered "Yes, I think the MWI is true", and he lists Weinberg as another believer. But Gell-Mann's latest paper is a one-history paper, Weinberg's latest paper is about objective collapse, and Feynman somehow never managed to go on record anywhere else about his belief in MWI.
Replies from: None↑ comment by aaronsw · 2012-08-18T19:14:06.087Z · LW(p) · GW(p)
Has anyone seriously suggested you invented MWI? That possibility never even occurred to me.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-18T21:53:47.964Z · LW(p) · GW(p)
It's been suggested that I'm the one who invented the idea that it's obviously true rather than just one more random interpretation; or even that I'm fighting a private war for some science-fiction concept, rather than being one infantry soldier in a long and distinguished battle of physicists. Certainly your remark to the effect that "he should try presenting his argument to some skeptical physicists" sounds like this. Any physicist paying serious attention to this issue (most people aren't paying attention to most things most of the time) will have already heard many of the arguments, and not from me. It sounds like we have very different concepts of the state of play.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-08-20T22:53:10.606Z · LW(p) · GW(p)
one infantry soldier in a long and distinguished battle of physicists
Can't help but compare this to the Swiftian battle of big-endians and little-endians, only the interpretational war makes even less sense.
↑ comment by Quantumental · 2012-08-17T15:41:54.168Z · LW(p) · GW(p)
I just can't ignore this. If you take a minute to actually look at the talk section of that Wikipedia page, you will see those polls being torn to pieces.
David Deutsch himself has stated that less than 10% of the people working on quantum foundations believe in MWI, and within that minority there are a lot of diverging views. So this is still not by any means a "majority interpretation".
As Mitchell_Porter has pointed out, Gell-Mann certainly does not believe in MWI. Nor does Steven Weinberg; he renounced his 'faith' in it in a paper last year. Feynman certainly never talked about it, which to me is more than enough indication that he did not endorse it. Hawking is a bit harder; he is on record seemingly being both pro and con, so I guess he is a fence-sitter.
But more important is the fact that none of the proponents agree on which MWI they support. (This includes you, Eliezer.)
Zurek is another fence-sitter, partly pro some sort of MWI, partly pro It-from-Bit. Also, his way of getting the Born Rule in MWI is quite a bit different: from what I understand, only the worlds that are "persistent" are actualized. This reminds me of Robin Hanson's mangled worlds, where only some worlds are real and the rest get "cancelled" out somehow. Yet they are completely different ways of looking at MWI. Then you have David Deutsch's fungible worlds, which are slightly different from David Wallace's worlds. Tegmark has his own views, etc.
There seems to be no single MWI and there has been no answer to the Born Rule.
So I want to know: why do you keep talking about it as if it were a slam dunk?
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-09T23:38:17.359Z · LW(p) · GW(p)
Good question.
↑ comment by fezziwig · 2012-08-15T20:34:41.757Z · LW(p) · GW(p)
I think your use of "believe in" is a little suspect here. I'm willing to believe that more than half of all theoretical physicists believe some variant of the MWI is basically right (though the poll can't have been that recent if Feynman was part of it, alas), but that's different from the claim that there are no non-MWI interpretations worth considering, which is something a lot of people, including me, seem to be taking from the QP sequence. Do you believe that that's a majority view, or anything close to one? My impression is that that view is very uncommon, not just in public but in private too...at least outside of Less Wrong.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-15T20:37:24.072Z · LW(p) · GW(p)
That sounds correct to me. A physicist who also possesses probability-theory expertise and who can reason with respect to Solomonoff Induction and formal causal models should realize that single-world variants of MWI are uniformly unworkable (short of this world being a runtime-limited computer simulation); but such is rare (though not unheard-of) among professional physicists; and among the others, you can hardly blame them for trying to keep an open mind.
Replies from: shminux, V_V↑ comment by Shmi (shminux) · 2012-08-15T21:58:20.550Z · LW(p) · GW(p)
single-world variants of MWI are uniformly unworkable
Penrose's objective collapse theory, which says that the entanglement scale is limited by gravity, with the result that macroscopic objects remain essentially classical, does not look all that unworkable.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-16T02:47:41.516Z · LW(p) · GW(p)
It'd still be the only FTL, discontinuous, non-differentiable, non-CPT-symmetric, non-unitary, non-local-in-the-configuration-space (etc., etc.) process in all of physics, to explain a phenomenon (why do we see only one outcome?) that doesn't need explaining.
Replies from: shminux, V_V↑ comment by Shmi (shminux) · 2012-08-16T04:32:36.358Z · LW(p) · GW(p)
Well, one advantage of it is that it is testable, and so is not a mere interpretation, which holds a certain amount of appeal to the more old-fashioned of us.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-16T07:17:01.428Z · LW(p) · GW(p)
I agree, and I myself was, and am still, sentimentally fond of Penrose for this reason, and I would cheer on any agency that funded a test. However and nonetheless, "testable" is not actually the same as "plausible", scientifically virtuous as it may be.
↑ comment by V_V · 2012-08-28T23:40:48.524Z · LW(p) · GW(p)
FTL
Not if it doesn't allow FTL communication, unless you want to argue that quantum entanglement is an FTL phenomenon; but that wouldn't be an issue of the particular interpretation.
discontinuous, non-differentiable
Not necessarily. Irreversible and stochastic quantum processes can be time-continuous and time-differentiable.
Consider the processes described by the Lindblad equation, for instance.
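For reference (a standard form quoted here for illustration, not taken from the comment), the Lindblad master equation generates irreversible, non-unitary evolution of the density matrix that is nonetheless continuous and differentiable in time:

\[ \frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho] + \sum_k \gamma_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k, \rho\} \right) \]

The commutator term is the ordinary unitary part; the dissipators L_k are what break unitarity and reversibility while keeping the dynamics smooth, which is the point being made.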
non-CPT-symmetric
CPT symmetry is a property of conventional field theories; not all quantum theories necessarily have it, and IIUC there are ongoing experiments searching for violations. CPT symmetry is just the last of a series of postulated symmetries; the previous ones (C symmetry, P symmetry, T symmetry and CP symmetry) have been experimentally falsified.
non-unitary
Right, and that's the point of objective collapse theories.
non-local-in-the-configuration-space
I'm not sure what you mean by that, but locality in physics is defined with respect to space and time, not to arbitrary configuration spaces.
to explain a phenomenon (why do we see only one outcome?) that doesn't need explaining.
AFAIK, there have been attempts to derive the Born rule in Everett's interpretation, but they didn't lead to uncontroversial results.
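For readers unfamiliar with the term, the Born rule at issue is the standard statement (independent of interpretation) that measuring an observable with eigenstates |i⟩ on a system in state |ψ⟩ yields outcome i with probability

\[ P(i) = \left| \langle i \mid \psi \rangle \right|^2 \]

What the Everettian derivations try to establish is why branches should be weighted by exactly this squared-amplitude measure rather than, say, by naive branch counting.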
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-09-06T14:23:15.738Z · LW(p) · GW(p)
Not necessarily. Irreversible and stochastic quantum processes can be time-continuous and time-differentiable.
I have never seen a proposed mechanism of ontological collapse that actually fits this, though.
Not if it doesn't allow FTL communication
The inability to send a signal of your choosing (you get a Born-rule-based purely random signal instead) doesn't change the fact that, under ontological collapse, this random signal is distributed FTL.
Replies from: V_V↑ comment by V_V · 2012-09-08T18:58:36.967Z · LW(p) · GW(p)
I have never seen a proposed mechanism of ontological collapse that actually fits this, though.
AFAIK, Penrose's interpretation doesn't describe the details of the collapse process; it just says that collapse will occur above roughly the "one graviton" level of energy separation.
It doesn't commit to collapse being instantaneous: it could be that the state evolution is governed by a non-linear law that approximates the linear Schrödinger equation very well in the "sub-graviton" regime and has a sharp but still differentiable transition when approaching the "super-graviton" regime.
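As a rough quantitative gloss (my paraphrase of the Diósi–Penrose proposal, not something stated above): a superposition of two mass configurations is expected to decay on a timescale

\[ \tau \approx \frac{\hbar}{E_G} \]

where E_G is the gravitational self-energy of the difference between the two superposed mass distributions. Tiny separations give astronomically long lifetimes, while macroscopic superpositions decay almost instantly, which is what makes the proposal testable in principle.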
The GRW interpretation assumes instantaneous collapse, IIUC, but it would be a trivial modification to have fast, differentiable collapse.
My point is that non-differentiable collapse is not a requirement of objective collapse interpretations.
The inability to send a signal of your choosing (you get a Born-rule-based purely random signal instead) doesn't change the fact that, under ontological collapse, this random signal is distributed FTL.
But that's an issue of QM, irrespective of the particular interpretation. Indeed, the "spooky action at a distance" bugged Einstein and many people of his time, but the modern view is that as long as you don't have causal influences (that is, information transmission) propagating FTL, you don't violate special relativity.
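A compact way to state the no-signaling point (a textbook sketch, added here for illustration): whatever operation Alice applies on her side, Bob's local statistics are fixed by his reduced density matrix, which is unchanged:

\[ \rho_B = \mathrm{Tr}_A(\rho_{AB}), \qquad \mathrm{Tr}_A\!\left[ (U_A \otimes I)\, \rho_{AB}\, (U_A^\dagger \otimes I) \right] = \mathrm{Tr}_A(\rho_{AB}) \]

The correlations revealed when the two sides later compare notes are nonlocal, but no controllable influence propagates faster than light.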
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-09-09T21:16:27.564Z · LW(p) · GW(p)
But that's an issue of QM, irrespective of the particular interpretation.
No, it isn't. QM is purely causal and relativistic. You can look into the equations and prove that nothing FTL is in there. The closest you get is accounting for the possibility of a vacuum bubble having appeared near a particle with exactly its energy, with the antimatter part of the bubble then cancelling against the particle. And that isn't much like FTL.
When you do an EPR experiment, the appearance of FTL communication arises from the assumption that the knowledge you gain about what you'll see if you go check the other branch of the experiment is something that happens at the other end of the experiment, instead of locally, with the information propagating to the other end of the experiment as you go to check. The existence of nonlocal states does not imply nonlocal communication.
Replies from: V_V↑ comment by Peterdjones · 2013-01-09T23:37:20.039Z · LW(p) · GW(p)
BTW, it's important to note that by some polls an actual majority of theoretical physicists now believe in MWI,
But there is a case to be made for relational QM as superior to both MWI and collapse interpretations. I have mentioned it several times. I am still waiting to hear back.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-01-09T23:39:56.330Z · LW(p) · GW(p)
Relational QM is gibberish. Whether the cat is dead or alive is "relative to the observer". How could that make sense except via many worlds?
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-09T23:48:35.818Z · LW(p) · GW(p)
It makes sense the way rQM says it does: there is no non-relational state, so there is no answer to "is the cat dead or alive (absent an observer)?". Since rQM says there is no such state, you don't disprove it by insisting there is one.
BTW, there is no simultaneity either.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-01-10T00:18:44.392Z · LW(p) · GW(p)
OK, so suppose we have an observer. Now look at the cat. Is it alive or dead? If it is alive and only alive, well, we can affix the phrase "relative to the observer" but it doesn't diminish the absoluteness of the cat's being alive. But if the cat is alive "relative to one observer to which it is alive", and dead "relative to another observer to which it is dead", how can we possibly make sense of that except in many-worlds fashion, by saying there are two cats and two observers?
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T01:26:35.412Z · LW(p) · GW(p)
If two observers measure a cat, they will get compatible results. However one observer can have less complete information ("the cat collapsed") and another more complete ("the cat is uncollapsed"). Observers can disagree about "collapse" because that is just an issue of their information, not an objective property.
"Relational interpretation
The relational interpretation makes no fundamental distinction between the human experimenter, the cat, or the apparatus, or between animate and inanimate systems; all are quantum systems governed by the same rules of wavefunction evolution, and all may be considered "observers." But the relational interpretation allows that different observers can give different accounts of the same series of events, depending on the information they have about the system.[11] The cat can be considered an observer of the apparatus; meanwhile, the experimenter can be considered another observer of the system in the box (the cat plus the apparatus). Before the box is opened, the cat, by nature of it being alive or dead, has information about the state of the apparatus (the atom has either decayed or not decayed); but the experimenter does not have information about the state of the box contents. In this way, the two observers simultaneously have different accounts of the situation: To the cat, the wavefunction of the apparatus has appeared to "collapse"; to the experimenter, the contents of the box appear to be in superposition. Not until the box is opened, and both observers have the same information about what happened, do both system states appear to "collapse" into the same definite result, a cat that is either alive or dead." - WP
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-01-10T02:10:44.488Z · LW(p) · GW(p)
In the interpretation of QM, one of the divides is between ontic and epistemic interpretations of the wavefunction. Ontic interpretations treat the wavefunction as a thing; epistemic interpretations treat it as an incomplete description or a tabulation of uncertainty, just like a probability distribution.
In the relational interpretation of QM, are the states understood as ontic or as epistemic? The passage you quote makes them sound epistemic: the cat knows but the observer outside the box doesn't, so the observer outside the box uses a different wavefunction. That undoubtedly implies that the wavefunction used by the observer outside the box is epistemic, not ontic; the cat knows something that the outside observer doesn't, an aspect of reality which is already definite even though it is not definite in the outside observer's description.
Or at least, this ought to imply that quantum states in the relational interpretation are epistemic. However, this is never explicitly stated, and instead meaningless locutions are adopted which make it sound as if the quantum states are to be regarded as ontic, but "relative".
There are certain very limited senses in which it makes sense to say that the state of something is relative. For example, we may be floating in space, and what is up to you may be down to me, so whether one object is above another object may be relative to an observer. But clearly such a dodge will not work for something like Schrodinger's cat. Either the cat is alive, dead, both, or neither. It can't be "alive for one observer and dead for another" and still be just one cat. But that is the ontological implication one gets, if "relational QM" is interpreted as an ontic interpretation.
On the other hand, if it is an epistemic interpretation, then it still hasn't answered the question, "what is the nature of reality? what is the physical ontology behind the formalism and the instrumental success?"
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-10T15:38:46.520Z · LW(p) · GW(p)
It can't be "alive for one observer and dead for another"
It can't in rQM:
"However, the comparison does not lead to contradiction because the comparison is itself a physical process that must be understood in the context of quantum mechanics. Indeed, O′ can physically interact with the electron and then with the l.e.d. (or, equivalently, the other way around). If, for instance, he finds the spin of the electron up, quantum mechanics predicts that he will then consistently find the l.e.d. on (because in the first measurement the state of the composite system collapses on its [spin up/l.e.d. on] component). That is, the multiplicity of accounts leads to no contradiction precisely because the comparison between different accounts can only be a physical quantum interaction. This internal self-consistency of the quantum formalism is general, and it is perhaps its most remarkable aspect. This self consistency is taken in relational quantum mechanics as a strong indication of the relational nature of the world"--SEP
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-06T23:38:39.008Z · LW(p) · GW(p)
There were plenty of physicists reading those posts when they first came out on OB (the most famous name being Scott Aaronson). Some later readers have indeed asserted that there's a problem involving a physically wrong factor of i in the first couple of posts (i.e., that's allegedly not what a half-silvered mirror does to the phase in real life). I haven't yet corrected this because I would need to verify with a trusted physicist that the correction is right, and then possibly craft new illustrations instead of using the ones I found online, and this would take up too much time given that talking about a phase change of -1 instead of i, so as to be faithful to real-world mirrors, is an essentially trivial quibble with no effect on any larger points. If anyone else wants to rejigger the illustration or the explanation so that it flows correctly, and get Scott Aaronson or another known trusted physicist to verify it, I'll be happy to accept the correction.
Aside from that, real physicists haven't objected to any of the math, which I'm actually pretty darned proud of considering that I am not a physicist.
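For readers wondering what the phase quibble amounts to (an illustrative aside of mine, not drawn from the sequence): a lossless 50/50 beamsplitter can be written, up to physically irrelevant overall phases, either with a factor of i on each reflection or with a sign flip on one reflection, and both are perfectly good unitaries:

\[ B_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}, \qquad B_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \]

Which convention a real half-silvered mirror realizes depends on its construction, but interference predictions depend only on the relative phases accumulated around a closed path, which is why the choice matters for fidelity to hardware and not for the probability-theoretic argument.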
Replies from: CarlShulman, Quantumental↑ comment by CarlShulman · 2012-08-07T02:17:34.243Z · LW(p) · GW(p)
There were plenty of physicists reading those posts when they first came out on OB (the most famous name being Scott Aaronson)
As Scott keeps saying, he's not a physicist! He's a theoretical computer scientist with a focus on quantum computing. He clearly has very relevant expertise, but you should get his field right.
↑ comment by Quantumental · 2012-08-08T12:00:15.980Z · LW(p) · GW(p)
I still wonder why you haven't written an update in 4 years regarding this topic, especially with regard to the Born Rule probability still not having a solution, plus the other problems.
You also have the issue of overlapping vs. non-overlapping worlds, which again is a relevant issue in the Many Worlds interpretation. Overlap = the typical one world branching into two worlds. Non-overlap = two identical worlds diverging (Saunders 2010, Wilson 2005-present).
Also, I feel the QM sequence is a bit incomplete when you do not give any thought to things like Gerard 't Hooft's proposal of a local deterministic reality giving rise to quantum mechanics from a cellular automaton at the Planck scale. It's misleading to say MWI is "a slam dunk" winner when there are so many unanswered questions. Mitchell Porter is one of the few people here who seems to have had a deep understanding of the subject before reading your sequence, and he has raised some interesting points...
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-08-06T22:02:25.182Z · LW(p) · GW(p)
I agree that EY is probably overconfident in MWI, although I'm uninformed about QM so I can't say much with confidence. I don't think it's accurate to damn all of Less Wrong because of this. For example, this post questioning the sequence was voted up highly.
I don't think EY claims to have any original insights pointing to MWI. I think he's just claiming that the state of the evidence in physics is such that MWI is obviously correct, and this is evidence as to the irrationality of physicists. I'm not too sure about this myself.
As for why SI's approach is dangerous, I think Holden put it well in the most upvoted post on the site.
Well there have been responses to that point (here's one). I wish you'd be a bit more self-skeptical and actually engage with that (ongoing) debate instead of summarizing your view on it and dismissing LW because it largely disagrees with your view.
Replies from: aaronsw↑ comment by aaronsw · 2012-08-06T22:43:02.895Z · LW(p) · GW(p)
It seems a bit bizarre to say I've dismissed LessWrong given how much time I've spent here lately.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-08-07T00:49:21.247Z · LW(p) · GW(p)
Fair enough.
↑ comment by MugaSofer · 2013-01-10T12:27:29.591Z · LW(p) · GW(p)
EY is able to keep this stuff up by systematically ignoring every bit of his own advice
Your examples seem ... how do I put this ... unreliable. The first two are less examples and more insults, since you do not provide any actual examples of these tendencies; the last one would be more serious, if he hadn't written extensively on why he believes this to be the safest way - the only way that isn't suicidal - or if you had provided some evidence that his FAI proposals are "extremely dangerous". And, of course, airily proclaiming that this is true of "pretty much every entry in the sequences" seems, in the context of these examples, like an overgeneralization at best and ... well, I'm not going to bother outlining the worst possible interpretation for obvious reasons.
↑ comment by Peterdjones · 2013-01-09T23:53:46.433Z · LW(p) · GW(p)
It's like Greeks trying to do physics by pure reasoning. They got atoms right because of salt crystallizing,
Obviously, observing salt is not pure reasoning. Very little philosophy is pure reasoning; the salient distinction is between informal, everyday observation and deliberately arranged experiments.
comment by Joshua Hobbes (Locke) · 2012-03-15T02:21:24.595Z · LW(p) · GW(p)
It's a rather unavoidable side-effect of claiming that you know the optimal way to fulfill one's utility function, especially if that claim sounds highly unusual (unite the human species in making a friendly AI that will create Utopia). There are many groups that make such claims, and either one or none of them can be right. Most people (who haven't already bought into a different philosophy of life) think it's the latter, and thus tend not to take someone seriously when they make extraordinary claims.
Until recognition of the Singularity's imminence and need for attention enters mainstream scientific thought, the people most likely to join us (scientifically literate atheists and truth-lovers) will not seriously consider our claims. I haven't read nearly as much about the nonexistence of Zeus as I have about the nonexistence of Yahweh, because the number of intelligent people who believe in Zeus is insignificant compared to the number of educated Christians. So with 99% of the developed world not focusing on friendly-AI theory, it was difficult for me to come to the conclusion that Richard Dawkins and Stephen Hawking and Stephen Fry were all ignorant of one of the most important things on the planet. A few months ago I gave no more thought to cryonics than to cryptozoology, and without MoR I doubt anything would have changed.
Replies from: Voltairina↑ comment by Voltairina · 2012-03-15T03:01:16.015Z · LW(p) · GW(p)
Is the goal of the community really to get everyone into the one task of creating FAI? I'm kind of new here, but I'm personally interested in a less direct but maybe more certain (I don't know the hard numbers, but I feel it's synergistic) goal: achieving a stable post-scarcity economy, which could free up a lot more people to become hackers/makers of technology and participate in the collective commons. But I'm interested in FAI and particularly machine ethics, and I hang out here because of the rationality and self-improvement angles. In fact I got into my current academic track (embedded systems) because I'm interested in robotics and embodied intelligence, and probably got started by reading Hofstadter and trying to puzzle out how minds work.
"Come for the rationality... stay for the friendly AI" maybe?
Replies from: Grognor↑ comment by Grognor · 2012-03-15T03:24:01.055Z · LW(p) · GW(p)
Is the goal
Please don't talk about 'the' goal of the community as if there's only one. There are many.
Replies from: Voltairina↑ comment by Voltairina · 2012-03-15T03:40:08.373Z · LW(p) · GW(p)
That's what I was wondering, thank you for providing the link to that post. I wasn't sure how to read Locke's statement.
comment by ShortName · 2012-03-26T17:32:03.586Z · LW(p) · GW(p)
Long-time lurker here. I think LW is not capable enough, as a social unit, to handle its topic, and I currently think that participating in LW is not an efficient way to advance its goals.
In order to reach a (hostile) audience, one needs to speak its language. However, the ambient ways of carrying out discussion often intermingle status/identity/politics with epistemology. In order to advance the position that biased, faith-based, or economy-based thinking is not an epistemologically efficient tool, one needs to take at least the initial steps within this twisted-up "insane troll logic". The end product is to reject the premise the whole argument stands on, but that product will never be reached if the thinking doesn't get started. By making this public and praising this kind of transition between modes of thinking, a lot of the machinery temporarily required to play out the drama gets reinforced into a kind of bedrock. It complicates matters that some people need to complete a particular step at the same time as others need to dispel it. Thus there is a tendency to fixate on the "development step" relevant to the majority and to become hostile to everything else.
I don't see the need to profess stances on things if the relevant anticipations work correctly. Coding the keys of insights into a single namespace and honing them to work against a static context makes applying and discussing them in other contexts needlessly complex. If someone knows a bias, heuristic, or concept by some other name, and that makes an LW participant fail to recognize or apply things they have only learned the password for, then LW has managed to isolate insights from their usual and most needed area of application.
Things that "hardcore" pursuits find valuable are passed along "as is" or "as finalized by AwesomeDude42". This is faith-based, "'cause they say so". Hooked by the "quality of the merchandise", this communal activity is more a distribution system for those closed packages of tools than an epistemic engine in its own right. I think that even a school should be a place of learning rather than a place to receive data about what others have learned.
Because there is a caliber difference, not all members can follow or participate in the production of the "good stuff"; they wait for it to be distributed fresh out of the oven. A passive "level up" handbook in the form of the Sequences still leaves a big "you must be this tall to participate in this facet of this community". There is no escaping the cognitive work of the individual, but LW functions more as the prize than as the workbench.
The activity of LW is limited, in a content-independent way, by social structure in areas where it wishes to be more. It is not the optimal venue for thinking, but that shouldn't come as a big surprise.
comment by Craig_Heldreth · 2012-03-20T23:37:26.227Z · LW(p) · GW(p)
Do you know anyone who might fall into this category, i.e. someone who was exposed to Less Wrong but failed to become an enthusiast, potentially due to atmosphere issues?
Yes. I know a couple of people with whom I share an interest in Artificial Intelligence (this is my primary focus in loading Less Wrong web pages) who communicated to me that they did not like the site's atmosphere. Atmosphere is not exactly the word they used. One person thought the cryonics was a deal-breaker. (If you read the piece in the New York Times Sunday Magazine about Robin Hanson and his wife, you will get a good idea of the global consensus distaste for the topic.) The other person was not so specific, although it was clear they were turned off completely even if they couldn't or wouldn't explain how.
Is it possible that our culture might be different if these folks were hanging around and contributing? Presumably they are disproportionately represented among certain personality types.
It is obvious that the culture here would be different if the more controversial or unpopular topics were downplayed enough not to discourage people who don't find the atmosphere convivial.
If so, can you suggest any easy steps we could take?
Here is what I have personally heard or read in comments that people find most bothersome: cryonics, polyamory, pick-up artistry, density of jargon, demographic homogeneity (highly educated white males). Any steps to water that down beyond those already taken (pick-up artistry is regularly criticized, and Bell Curve racial IQ discussion has been all but tabooed) would not be easy to implement quickly and would have consequences beyond making for a more inclusive atmosphere.
I don't agree that the word cult accurately characterizes this issue. I did the Google test you describe and was surprised to see cult pop up so fast, but when I think cult I think Hare Krishnas, I think Charles Manson, I think David Koresh; I don't think Singularity Institute, and I don't think of a number of the organizations on Rick Ross' pages. Rick Ross is a man whose business makes money by promoting fear of cults. The last time I looked he had Landmark Education listed as a cult; this might be true with an extremely loose definition of the word, but they haven't killed anybody yet to the best of my knowledge. I have taken a couple of courses from them, and the multi-level marketing vibe is irksome, but they have some excellent (and rational!) content in their courses. The last time I looked, Ross did not have the Ordo Templi Orientis listed as a cult. When I was a member of that organization there were around a couple of thousand dues-paying members in the United States, so I presume the OTO cult (the word is far more appropriately applied to them than to Landmark) is too small for him to spend resources on.
The poster who replied that he and his wife refer to his Less Wrong activity as his cult membership reads to me as light and humorous; I would be surprised if they really classify Less Wrong with Scientology and Charles Manson.
comment by Onelier · 2012-03-16T07:27:56.446Z · LW(p) · GW(p)
What paper or text should I read to convince me y'all want to get to know reality? That's a sincere question, but I don't know how to say more without being rude (which I know you don't mind).
Put another way: What do you think Harry Potter (of HPMOR) would think of the publications from the Singularity Institute? (I mean, I have my answer).
comment by whpearson · 2012-03-15T11:15:17.306Z · LW(p) · GW(p)
I got a possible proto-religion feeling from SL4 discussions with Eliezer and SI folk back in the day. Any cultishness was with a small c, that is, not harmful to the participants except for their bank balance, as in the use of the word cult in "cult following". There isn't a good word for this type of organization, which is why it gets lumped in with Cults.
Less Wrong is better than SL4 in this respect, anyway.
comment by confusednewbie · 2014-08-23T16:49:18.652Z · LW(p) · GW(p)
Well, it's nice to know that at least you guys see it. Yes, that was one of my reactions. I started reading some of the sequences (which really aren't written at a level that the mass public, or, I'd hazard to say, though not with certainty, even people whose IQs don't fall one standard deviation or more above the mean, can easily understand). I liked them, though I didn't fully understand them, and have referred people to them. However, at the time I was looking into a job and did some kind of search through the website. Anyway, I encountered a post by a person who was asking for advice on a job... I can't find it now, but from what I remember (it has been a long time and the memory is greatly degraded, but I think what little I remember may actually be more insightful in this case than a faithful representation of the actual post), the poster talked about divorce, and doing a job they hated, and the like, in order to donate more to charity, and how that was an acceptable though not valued trade-off. And, though I actually agree to a point, that did raise HUGE red flags in my mind for cult... particularly when combined with the many messages that seem to be embedded here about donating to the LW non-profits. I fled after reading that and stayed away for a long time. I dunno if it helps or not, but figured I'd share.
Also, I just Googled "Less Wrong" and all I did was add a space and Google auto-suggested cult. So things seem to have worsened since this was published.
comment by mwengler · 2012-03-15T22:22:01.495Z · LW(p) · GW(p)
The c-word is too strong for what LW actually is. But "rational" is not a complete descriptor either.
It is neither rational nor irrational to embrace cryonics. It may be rational to conclude that someone who wants to live forever and believes body death is the end of his life will embrace cryonics and life extension technologies.
It is neither rational nor irrational to vaunt current human values over any others. It is most likely that current human values are a snapshot in the evolution of humans, and as such are an approximate optimum, in a natural-selection sense, for an environment that existed 10,000 years ago. The idea that "we" lose if we change our values seems more rooted in who "we" decide "we" are. Presumably in the past a human was more likely to have a narrower definition of "we", including only a few hundred or a few thousand culture-mates. As time has gone on, "we" has grown to cover nationalities, individual races, and, for most people, pan-national and pan-racial groupings. Most Americans don't identify American with a particular race or national background, and many of us don't even require being born within the US or of US parents to be part of "we." Why wouldn't we extend our concept of "we" to include mammals, or all life that evolved on earth, or even all intelligences that evolved or were created on earth? Why would we necessarily identify a non-earth intelligence as "they" and not "we," as in "we intelligences can stick together and do a better job exploiting the inanimate universe"?
Rationality is a tool, not an answer. Having certain value decisions vaunted over others restricts LessWrong to being a community that uses rationality rather than a community of rationalists or a community serving all who use rationality. It is what Buffett calls "an unforced error."
Let the downvotes begin! To be clear, I don't WANT to be downvoted, but my history on this site suggests to me that I might be.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-16T03:28:49.680Z · LW(p) · GW(p)
Dunno bout you, but I value my values.
Replies from: mwengler↑ comment by mwengler · 2012-03-16T06:54:38.324Z · LW(p) · GW(p)
I think I have the same emotional response to "wrong" things as most people. The knowledge that this is bred into me by natural selection sorta takes the wind out of my rationalizations of these feelings in two ways. 1) Although they "feel" like right and wrong, I realize they are just hacks done by evolution. 2) If evolution has seen fit to hack our values in the past to keep us outsurviving others, then it stands to reason that the "extrapolated" values of humanity are DIFFERENT from the "evolved" values of humanity. So no matter how Coherent our Extrapolation of Values will be, it will actually subvert whatever evolution might do to our race. So once we have an FAI with great power and a sense of CEV, we stop evolving. Then we spend the rest of eternity relatively poorly adapted for the environment we are in, with the FAI scrambling to make it all right for us. Sounds like the cluster version of wireheading in a way.
On the other hand, I suppose I value the modifications that occur to us through evolution and natural selection. Presumably an attempt at CEV would build that in and perhaps the FAI would decide to leave us alone. Don't we keep reading sci fi where that happens?
comment by Eneasz · 2012-03-16T22:40:28.962Z · LW(p) · GW(p)
This comment will be heavy with jargon, to convey complex ideas with the minimum required words. That is what jargon is for, after all. The post's long enough even with this shortening.
Less Wrong inspires a feeling of wonder.
To see humans working seriously to advance the robot rebellion is inspiring. To become better, overcome the programming laid in by Azathoth and actually improve our future.
The audacity to challenge death itself, to reach for the stars, is breathtaking. The piercing insight in many of the works here is startling. And the gift of being able to find joy in the merely real again is priceless. It doesn't hurt that it's spearheaded by an extremely intelligent and honest person whose powers of written communication are among the greatest of his generation.
And that sense of awe and wonder makes people flinch. Especially people who have been trained to be wary of that sort of shit. Who've seen the glassed-over eyes of their fundamentalist families and the dazed ramblings of hippies named Storm. As much as HJPEV has tried to train himself to never flinch away from the truth, to never let his brain lie to him, MANY of us have been trained just as strongly to always flinch away from awe and wonder produced by charismatic people. In fact, if we had a "don't let your brain lie to you" instinct as strong as our "don't let awe and wonder seduce you into idiocy" instinct we'd be half way to being good rationalists already.
And honestly, that instinct is a good one. It saves us from insanity 98% of the time. But it'll occasionally result in a woo/cult-warning where one could genuinely and legitimately feel wonder and awe. I don't blame people for trusting their instincts and avoiding the site. And it'll mean we forever get people saying "I dunno what it is, but that Less Wrong site feels kinda cultish to me."
We're open, we're transparent, we are a positive force in the lives of our members. We've got nothing to fear, and that's why occasional accusations of cultishness will never stick. We've just got to learn to live with the vibe and trust that those who stick around long enough to look deeper will see that we're not.
It's nice to still have that awe and wonder somewhere. I wouldn't ever want to give that up just so a larger percentage of the skeptic community accepts us. That feeling is integral to this site, giving it up would kill LW for me.
Replies from: Jakeness