Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult

post by Algernoq · 2014-07-13T17:54:51.004Z · LW · GW · Legacy · 193 comments

A common question here is how the LW community can grow more rapidly. Another is why seemingly rational people choose not to participate.

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go. In this post, I try to clearly explain why I don't participate more and why some of my friends don't participate at all and have warned me not to participate further.

  • Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.

  • In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.). It's not at all clear if general AI is a significant threat. It's also highly doubtful that the best way to address this threat is writing speculative research papers, because I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it's necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

  • LW has a cult-like social structure. The LW meetups (or, the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts for the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality "training camps" do this to an even greater extent. LW recruiting (HPMOR, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

  • Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on (X), and sell a long plan that never quite achieves (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should"). Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.

  • LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I'm struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I "should" do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.

  • "Art of Rationality" is an oxymoron.  Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don't mind Harry's narcissism) and LW is fun to read, but that's as far as I want to get involved. Unless, that is, there's someone here who has experience programming vision-guided assembly-line robots who is looking for a side project with world-optimization potential.

193 comments

Comments sorted by top scores.

comment by pjeby · 2014-07-13T20:11:20.505Z · LW(p) · GW(p)

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go.

That's further than I go. Heck, what else is there, and why worry about whether you're going there or not?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-07-13T21:57:24.789Z · LW(p) · GW(p)

I have also translated the Sequences, and organized a couple of meetups. :)

Here are some other things someone could do to go further:

  • organize a large international meetup;
  • rewrite the Sequences in a form more accessible for general public;
  • give a lecture about LW-style rationality at local university;
  • sign up your children for cryonics;
  • join a polyamorous community;
  • start a local polyamorous community;
  • move to Bay Area;
  • join MIRI;
  • join CFAR;
  • support MIRI and/or CFAR financially;
  • study the papers published by MIRI;
  • cooperate with MIRI to create more papers;
  • design a new rationality lesson;
  • build a Friendly AI.

Actually, PJ, I do consider your contributions to motivation and fighting akrasia very valuable. I wish they could someday become a part of official rationality training (the hypothetical kind of training that would produce visibly awesome results, instead of endless debates about whether LW-style rationality actually changes anything).

Replies from: None
comment by [deleted] · 2014-07-13T23:13:56.335Z · LW(p) · GW(p)
  • join a polyamorous community;
  • start a local polyamorous community;

Seriously? What does that have to do with anything?

Replies from: Algernoq, Viliam_Bur
comment by Algernoq · 2014-07-13T23:20:30.832Z · LW(p) · GW(p)

I agree, I don't see how polyamory or MIRI's research can be called "less wrong" than the alternatives.

A common LW belief is that polyamory is a better way to have relationships for most people. I disagree. I see how polyamory is the "best" way for a selfish, pleasure-seeking, child-free, high-status leader to have relationships.

Replies from: ephion, None
comment by ephion · 2014-07-14T20:03:37.888Z · LW(p) · GW(p)

In my experience with the LW community, they see polyamory as an equally valid alternative to monogamy. Many practice, many don't, and poly people include those with children and those without.

Replies from: Eliezer_Yudkowsky, Dentin
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-07-18T23:22:07.077Z · LW(p) · GW(p)

Affirm. It touches on cognitive skills only insofar as mild levels of "resist conformity" and "notice what your emotions actually are" are required for naturally-poly people to notice this and act on it (or for naturally-mono or okay-with-either people to figure out what they are if it ever gets called into question), and mild levels of "calm discussion" are necessary to talk about it openly without people getting indignant at you. Poly and potential poly people have a standard common interest in some rationality skills, but figuring out whether you're poly and acting on it seems to me like a very bounded challenge---like atheism, or making fun of homeopathy, it's not a cognitive challenge around which you could build a lasting path of personal growth.

Replies from: Algernoq
comment by Algernoq · 2014-07-22T01:38:42.973Z · LW(p) · GW(p)

I'd like to see more "calm discussion" of status differentials in relationships, because a general solution here would address nearly all concerns about polyamory. Thanks to HPMOR for helping me understand the real world.

One recipe for being a player is to go after lower-status (less-attractive) people, fulfill their romantic needs with a mix of planned romance, lies and bravado, have lots of sex, and then give face-saving excuses when abandoning them.

This isn't illegal. It's very difficult to prosecute actually giving other people STDs, or coercing them into sex. Merely telling lies to get sex (or, to swap genders and stereotype, get status and excessive support without providing sex) isn't so bad in comparison.

I'm indignant at Evolution (not at polyamory, monogamy, men, etc.) because I strongly suspect several of my previous partners were raped and were unable to prosecute it. They sort-of got over it and just didn't tell future partners (me) about it. My evidence for this includes being told stories that sounded like half-truths (a stalker followed me! and I was drugged! and now I have this scar! but nothing happened!) and overly-specific denials (nothing's happened to me that would give me panic attacks!). Another quoted a book about recovering from sexual assault. I haven't actually asked any of them, and I don't want to, because this conversation would be massively unpleasant as well as unhelpful. Hypothetically:
F: So, yeah. That happened.
M: I'm sorry, not your fault, etc...
M: So, you know who did it?
F: ...yes (in 90% of cases)
M: I want to know who so...
F: No. I'm not a barbarian. Let's move forward.
M: If (when) someone threatens you again, will you threaten them back?
F: No. Again, I'm not a barbarian. I'll avoid them socially but that's it, and I'm out of luck if they're not breaking any laws in public.
M: In my experience with bullies, they don't care about social punishment. They only care about credible physical or legal threats. They're also generally cowards...
F: That's horrifying! I'd never threaten anyone like that!
M: OK, threatening mutually assured destruction comes more naturally to some people than others. Will you at least tell me if you feel scared because someone is pushing your boundaries...
F: No!
M: Well, I don't want to let people push you around and disrupt both of our lives even more. The alternatives (getting rid of privacy, or always carrying high-powered non-lethal weapons) seem more inconvenient. What do you think?
F: I think I'm breaking up with you, because you're creepy.

I am tired of realizing that people I care about were damaged by abusive relationships, and I'm tired of competing with sociopaths in dating and at work. There aren't any good alternatives (ignoring the evil is irrational and near-impossible, fighting the evil is creepy and near-impossible, and becoming a player makes me sad). The "winning" strategy seems to be narcissism and salesmanship -- a mix of Donald Trump and Richard Feynman -- and not feeling guilty about hurting other people. My current "good" strategy is being single and focusing on technical skills now to minimize baggage in the future. Given that Einstein and JFK are adored despite their numerous affairs, perhaps I should update this, or hurry up and invent (a Hobbesian) Leviathan.

without people getting indignant at you

In summary, don't fuck your cultists unless you've turned evil.

Replies from: ephion, Viliam_Bur, MugaSofer, MugaSofer, Lumifer
comment by ephion · 2014-07-25T20:18:19.761Z · LW(p) · GW(p)

I'd like to see more "calm discussion" of status differentials in relationships, because a general solution here would address nearly all concerns about polyamory.

What concerns do you have, exactly? I've found that the increased fluidity and flexibility inherent to polyamory (vs monogamy; it can't touch singlehood there) are great for reducing the impact and duration of potentially abusive or unhealthy situations, because a) people often have other partners who can help mediate conflicts or raise red flags, and b) to isolate a person, the abuser has to take the additional step of having the person break up with all of their partners. Furthermore, individuals tend toward more satisfying relationships as time goes on, since the availability of other relationships means that less healthy/happy relationships either take up less time and attention from the people involved or grow into more healthy/happy ones.

One recipe for being a player is to go after lower-status (less-attractive) people, fulfill their romantic needs with a mix of planned romance, lies and bravado, have lots of sex, and then give face-saving excuses when abandoning them.

We aren't talking about poly anymore, right? Because this would get a person a terrible reputation in any of the poly circles I know. Or, any social circle I'm a part of at all. Any social scene where this isn't frowned upon isn't the kind of scene I'd like to be a part of.

comment by Viliam_Bur · 2014-07-25T11:44:55.525Z · LW(p) · GW(p)

This is too interesting a topic to be hidden deep inside a comment thread on an article about a different topic. Some random thoughts here:

I would like to have more data about poly relationships. I don't have an example in my neighborhood, and even if I had, I wouldn't want to build my statistics on a single data point. And even if I found a dozen examples now, I could only observe how they are now, not what happens in the long term. (If I were 20, I could start poly dating and get data experimentally, but I am almost 40 now, so this doesn't seem like a good area for experimenting.) I would love to see an analysis by someone who is not a polyamory enthusiast, but who has observed many poly relationships over the long term, has some statistics, and can compare them with mono statistics. For example, whether divorces in the poly community are on average more or less civilized than in the mono community.

What exactly happens in a relationship depends not only on its formal structure, but very much on the personalities of the people involved. If we use monogamous relationships as a more familiar model, some relationships are awesome, and some relationships are horrible. Generally, marriage is considered more serious than dating, but there is also variance; some people take their dating more seriously than other people take their marriage. In short: mono relationships differ. I expect the same for poly relationships. If a polyamorous group contains some horrible people, I would expect horrible results; but that's not an argument against polyamory. (Perhaps in a statistical sense: a larger group has a greater chance of containing a horrible person. But maybe it is easier for the other people to send the horrible person away: they will not remain alone if they do. But maybe it is more difficult to coordinate more people. Again, not enough data.)

Like army1987 said, different environments have different proportions of evil people. If you have experienced only one, it seems like the whole world is the same. Even having experienced two or three different situations doesn't mean they weren't of the same type. Sometimes our experiences and habits keep us in the same kind of situation. For example, a person who was abused in previous relationships may ignore obvious (to other people) red flags in the next relationship, simply because they believe that this is how all people behave. A person who had a shitty job experience may also walk straight into another shitty job, because they believe this is how all jobs are. We don't have other people's data to compare against: first, it is difficult to communicate, because different people will use the same words to describe different things; second, we are more likely to talk with people who are in a similar situation.

It seems to me that we (people in general) would have fewer problems with sociopaths if we communicated better. My experience with people who seemed like sociopaths was that they actively discouraged communication and honesty among the other people around them; thus the non-sociopaths couldn't share their stories and possibly coordinate against the sociopath. I am not saying that more communication would solve everything: sociopaths can lie and manipulate. Just that it is even easier to manipulate if people don't share their data. The sociopath can simply use the same algorithm on many people in a row, without fear that their strategy will be exposed. ("He did this to me." "Oh, he did exactly the same thing to me." "Really? Now I am curious: based on your experience, what exactly will he do if this happens?" "He will simply say that you lied, you will have no proof, and you know how charming he is." "Well, now that you have warned me, I actually could obtain proof...")

Reading your comment, I get the feeling that you didn't have the luck to find a nice community of people. Perhaps nice individuals, but not a "tribe". I wish I could help you, but I probably can't. I know a few nice groups around me; LW is my favourite because it is "nice and not stupid", but if one doesn't insist on rationality too much, there are other nice groups, too. I will not recommend you try LW, first because that would feel cultish, and second because I can't vouch for LW fans in a different country. A more general algorithm could be: look at non-profit organizations around you; there should be people who want to make the world a better place. Although there are also many bad apples. So perhaps an actionable plan would be: pick a dozen non-profit organizations around you, meet them all, and ask if they need a little help (many non-profits do). With some luck, you could find many nice people this way.

Replies from: Algernoq
comment by Algernoq · 2014-07-26T04:32:28.452Z · LW(p) · GW(p)

didn't have the luck to find a nice community of people. Perhaps nice individuals, but not a "tribe"

I wish I could up-vote this whole comment more, and especially this line. I agree with your points and it'd be interesting to see a top-level post about this.

You're right; I don't feel like part of a "tribe" now, though I have some good friends/family, and it comes through in my writing. There are a few genuinely nice tribes I could join (by helping/entertaining tribe members to build reciprocity, and signaling belonging with my style choices), and I should prioritize this for sanity's sake. Ideally, I would find a tribe of smart and well-adjusted people who want to try to not die, i.e. try to get rich and then make the needed science happen. There are only a few people interested in this project, though, and they tend to be crazy, making forming such a tribe near-impossible. Joining a tribe that values being a good person and enjoying cultured recreation (and avoiding depressive patterns of thinking about how all conventional roads lead to death) is probably a good way to go. This is a strange game we all are playing, where only the meaningless rules are clearly written.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-07-26T14:34:34.222Z · LW(p) · GW(p)

(This could just be hindsight reasoning, but the fact that you wrote this article is evidence for not having a tribe -- otherwise you probably would have discussed the topic with your tribe and reached some satisfactory conclusion, instead of asking us to defend ourselves.)

Having a tribe that shares at least some of your values is very good for mental health. I used to be in a tribe of smart religious people, with whom I could reasonably debate about many things (at the cost of silently suffering when they tried to apply similar reasoning to some supernatural topic, which fortunately didn't happen too often). I was also in a tribe of people interested in psychology, which later mostly fell apart, though some people still see each other once in a while. Then I had a few friends to talk with about programming, or other specialized topics. Also, when I had a girlfriend, we shared some interests.

This was all nice, but there was this... compartmentalization. I knew I could debate topic X only in group A, topic Y only in group B, and topic Z nowhere. Sometimes merely because they wouldn't be interested in a topic, but sometimes the topic would go directly against the values of the group. (You can't debate atheism with religious people, or skepticism with people who believe that "positive thinking" is the answer to everything and that reality is only as real as you believe it to be.) Or perhaps I wanted to put two ideas together, like self-improvement and rationality, but I only knew people interested in self-improvement through irrational means, or in the kind of skepticism that opposed any desire for self-improvement as naiveté. Or I wanted to become more rational, in everything, consistently, as a lifestyle, and people just didn't understand why or how. So I felt like my mind was cut into multiple aspects, some of them acceptable in some groups, some of them acceptable nowhere. And it seemed like the best thing I could realistically have, and that perhaps I should stop being unrealistic and be happy with what is realistically possible.

Then I found LessWrong, by which I mean I've read the Sequences, and I was like: "Oh, great! I am not insane. There are people with similar ideas." And then I was like: "Oh, fuck! They are on the other side of the planet." Then, I think, I realized Eliezer's strategy... instead of talking with hundreds of people and trying to find the few compatible ones, he wrote a blog, and let the compatible people contact him. So I was like: okay, I can try the same thing; and the advantage is I can simply translate Eliezer's texts and publish them on my blog. Well, after two years and over a thousand pages translated... I have found fewer than a dozen such people. Luckily, another dozen are in Vienna, an hour of travel from my home. This is what I consider my tribe now. But since I see them once a month, I still have enough time left for non-rationalists. -- There is one little problem with this internet strategy: many people who are active on the internet are not active anywhere else. For example, I wrote an article about LW that 6000 people read, and 1000 of them "liked" it; my estimate is that 200-300 of them should live in my city, and yet there was not a single new person at our next meetup. (I didn't expect hundreds, but two or three would have been nice.)

I don't have a tribe-building strategy. If I had one, I would definitely use it. (There are a few things I haven't tried yet, such as using HP:MoR as a recruitment tool.) But maybe, if finding a tribe with your values is a high priority for you, you could start blogging about the things you value... and then other people will be happy to meet you. When you have enough people commenting on your blog, you can just announce once a month that at some given time you will be in a cafe and they are free to join you.

Sometimes the important things are difficult to express.

Replies from: Algernoq
comment by Algernoq · 2014-07-27T00:08:06.250Z · LW(p) · GW(p)

there was this... compartmentalization... And it seemed like the best thing I could realistically have

Exactly! I tend to affiliate with different friends/groups for different reasons. It tends to be easier to find friends for normal, low-risk goals (living well, studying) than for weird, high-risk goals (getting very rich, ending death), and with any friend there tend to be points of disagreement. My understanding is this is a challenge for most adult city-dwellers.

One other approach (also implemented by EY) is to slowly change his friends' beliefs to more closely agree with his own.

If the goal is simply acceptance, a successful strategy seems to be to use higher status/value in some areas to make up for a tendency to share "weird" thoughts in other areas. In other words, attract friends who will acknowledge and support the rationalist self-improvement even though they don't take that path themselves, by providing value/fun/leadership in other areas.

Good luck with the meetups/tribe!

comment by MugaSofer · 2014-09-05T18:33:52.901Z · LW(p) · GW(p)

Firstly, I just want to second the point that this is way too interesting for, what, a fifth-level recursion?

Secondly:

One recipe for being a player is to go after lower-status (less-attractive) people, fulfill their romantic needs with a mix of planned romance, lies and bravado, have lots of sex, and then give face-saving excuses when abandoning them.

Is this ... a winning strategy? In any real sense?

I mean, yes, it's easier to sleep with unattractive people. But you don't want to sleep with unattractive people. That is what "attractiveness" refers to - the quality of people wanting you [as a sexual/romantic partner, by default].

Now, the fact that it then becomes easy for attractive psychopaths to create relationships for nefarious purposes is ... another matter.

But I'm confused as to why you see the choices as "player, but unethical" or "non-player, but good". Surely you want to be a "player" who has sex with people you are actually attracted to?

comment by MugaSofer · 2014-09-05T18:32:35.558Z · LW(p) · GW(p)

double-post

comment by Lumifer · 2014-07-22T02:50:32.801Z · LW(p) · GW(p)

I'm tired of competing with sociopaths in dating and at work.

I have doubts that it is actually true, but if it were, you are dating the wrong people and working at the wrong place.

Replies from: Algernoq, Algernoq, Algernoq
comment by Algernoq · 2014-07-23T00:07:35.593Z · LW(p) · GW(p)

Wow. You just:

  1. ignored my evidence

  2. blamed me for society's problems

More politely, you fell into the cognitive bias of incorrectly discounting unpleasant information.

This kind of shit is exactly what I've read rape victims have to put up with. People don't want to believe unpleasant things, and prefer to blame the victim's normal choices instead of recognizing that there's a problem.

If you actually have evidence to support me being unable to perceive the world accurately, please tell me what it is. Otherwise, don't tell me that I'm not feeling what I know I'm feeling.

Some of my specific examples:

I've met two sociopaths socially, coincidentally both management consultants. Trustworthy mutual friends confirmed they had long-term partners and that they also cheated a lot without regard for others' feelings. I also saw this personally: on different occasions I saw each of them with a long-term partner and with a short-term hookup. One of these people tried to seduce my long-term girlfriend, and the other tried to set me up with someone he was tired of hooking up with, without disclosing his involvement with her. Both of them failed, but it wasn't a sure thing in either case. This is an extreme example; more generally I don't like seeing people get lied to, and don't like competing in an environment where the baseline assumption is that the other people are emotionally-damaged liars (because the people with these issues tend to do the most dating). I'm also somewhat bothered that the social norm is generally to pretend not to know about cheating/lying in friends' relationships, because there's no positive reward to sharing the information.

At work, in my current job, the technically competent senior engineer with average social skills was passed over for promotion in favor of a technically incompetent senior engineer who covers for his incompetence with posturing and salesmanship. I'm also tired of frequent calls from salesmen who want me to pay 30% too much for something I don't need.

More generally, the structure of many organizations rewards sociopaths. Look up the MacLeod hierarchy for one popular theory.

Please update on this information, and let me know if you have any true or useful information that's relevant here. In other circumstances I'd recommend an apology as well, for following a conversational pattern that typically offends people and is factually incorrect.

Replies from: MugaSofer, Lumifer
comment by MugaSofer · 2014-09-05T18:37:19.410Z · LW(p) · GW(p)

I'm curious, how do you know they were sociopaths? You seem to imply your evidence was that they were unfaithful and generally skeevy individuals besides, but was there anything else?

(Actually, does anyone know how we know that sociopaths are better at manipulating people? I've absorbed this belief somehow, but I don't recall seeing any studies or anything.)

comment by Lumifer · 2014-07-23T01:16:53.149Z · LW(p) · GW(p)

Wow

LOL

ignored my evidence

Evidence? That's hearsay and it seems to me to be not too reliable...

blamed me for society's problems

I didn't blame you for society's problems. Quote, please.

you fell into the cognitive bias of incorrectly discounting unpleasant information

I don't find anything particularly unpleasant about your information and I don't have any triggers about discussing rape or sociopaths. Maybe you should project less onto other people.

don't tell me that I'm not feeling what I know I'm feeling.

I didn't tell you anything about your feelings. Quote, please.

Trustworthy mutual friends confirmed they had long-term partners and that they also cheated a lot.

So? Maybe they have an open relationship. Not to mention that cheating on your girlfriend is not a criterion for being a sociopath.

the technically competent senior engineer with average social skills was passed over for promotion in favor of a technically incompetent senior engineer who covers for his incompetence with posturing and salesmanship.

And what does that have to do with sociopathy?

Please update on this information

Sure. I've updated towards your perception of your environment not being adequate because you seem to be unhappy and angry with it.

I continue to recommend changing your environment, both dating and work.

I'd recommend an apology as well

You recommendation was considered and rejected.

Replies from: Algernoq
comment by Algernoq · 2014-07-23T02:24:26.180Z · LW(p) · GW(p)

I have doubts that it is actually true

What do you mean? The conventional meaning of these words, in context, is to tell me that:

  1. I didn't see what I saw.

  2. I'm not tired of interacting with sociopaths.

To quote XKCD: "communicating badly and then acting smug when you're misunderstood is not cleverness. I hope we've learned something today."

you are dating the wrong people and working at the wrong place

What do you mean? It sounds like you're just telling me to change my environment.

Do you know of a human society to join that does not contain sociopaths?

Do you know of a reliable way to identify sociopaths prior to interacting with them?

That's hearsay and it seems to me to be not too reliable...

Can you be more specific about how things that I observe firsthand are "heresy"?

How do you reconcile dismissing my statements with the base rate for sociopathy? A few percent is enough for most people to meet many sociopaths during their life.

So? Maybe they have an open relationship.
what does that have to do with sociopathy?
You recommendation was considered and rejected.

I disagree with each of these statements for obvious reasons. If you're not trolling, I would be happy to discuss further.

In summary:

  1. I believe I have met sociopaths. I believe the evidence strongly supports this. It looks like you're ignoring evidence. What is your motivation/goal here?

  2. The suggestion to change my environment is not useful, because other environments will also have sociopaths. I agree that avoiding unpleasant environments is bad in general.

  3. My earlier post was intended to say that large status differentials are usually bad for the lower-status person in the relationship, whether in poly- or mono- relationships. I also wanted to get confirmation that other people have similar problems with sociopaths and rape, and hopefully get ideas for addressing these from the unique perspective of LW. Both of these goals were apparently not communicated clearly.

Again, my goal was:

I'd like to see more "calm discussion" of status differentials in relationships, because a general solution here would address nearly all concerns about polyamory.

This seems to have failed.

I give you one karma in the spirit of the Iterated Prisoner's Dilemma.

Replies from: Lumifer
comment by Lumifer · 2014-07-23T03:52:15.327Z · LW(p) · GW(p)

What do you mean?

I mean that I doubt your assessment of the situation. "Sociopath" is a clinical diagnosis, are you sure you're qualified to make it?

What do you mean? It sounds like you're just telling me to change my environment.

Why, yes, I do :-) Note that I do not blame you for society's problems. And neither do I tell you what you are feeling. But I did offer you a piece of advice -- to change your environment.

Do you know of a human society to join that does not contain sociopaths?

You don't interact with the whole society. You interact with your social circle and your co-workers.

"heresy"

My spelling problems are not that bad :-) I did mean hearsay.

A few percent is enough for most people to meet many sociopaths during their life.

You were not talking about meeting some sociopaths. You said "I'm tired of competing with sociopaths in dating and at work" -- that goes quite a fair bit above and beyond the base rate.

It looks like you're ignoring evidence.

I don't have any evidence other than your assertions. I don't doubt your honesty but I doubt your capability to evaluate the situation in an unbiased way.

because other environments will also have sociopaths.

I don't have sociopaths either in my social circle or among my co-workers -- at least easily recognizable ones.

large status differentials are usually bad for the lower-status person in the relationship

Probably true, but I'm not sure what you are suggesting -- that relationships do not cross status (class) boundaries? That doesn't sound too appealing.

I also wanted to get confirmation that other people have similar problems with sociopaths and rape

One data point for you -- I don't.

Replies from: army1987, Algernoq
comment by A1987dM (army1987) · 2014-07-23T11:38:40.164Z · LW(p) · GW(p)

I'm going to partially agree with both of you and say that, whereas certain social circles and workplaces do contain many fewer evil people than others, it's not easy for certain people to change social circle or workplace in certain circumstances.

comment by Algernoq · 2014-07-23T06:35:47.239Z · LW(p) · GW(p)

I mean that I doubt your assessment of the situation.

What I'm claiming is that people have tried to take advantage of me, including the examples I gave above, as well as every car salesman I've ever met. It's not a high percentage -- most people are good/neutral -- but there are some people who are mildly amused by trying to hurt me.

What are you trying to prove? Why is it so important to you that the world is...what? Free of evil intent in your immediate social circle? Free of injustice affecting people you consider your peers?

It sounds like you're twisting my words to fit your worldview, and trying to make me doubt my sanity.
Specifically:

You were not talking about meeting some sociopaths.
I doubt your capability to evaluate the situation in an unbiased way.
"Sociopath" is a clinical diagnosis, are you sure you're qualified to make it?
You don't interact with the whole society.

Who is truly unbiased and therefore in your view able to make reliable decisions? How sociopathic would an example have to be to meet your arbitrary criteria? Are you proposing I stop interacting with unknown people?

Do you want to be better? Do you want the truth, or not?

I want you to acknowledge that the people in the examples I gave more likely than not tried to hurt me for reasons including their own amusement.

This is important as a first step toward talking about solutions. I want better solutions for negotiating with and ideally changing sociopathy, both personally as I advance in my career and encounter high-stakes situations more often, and for my own dreams of world improvement.

I also wanted to get confirmation that other people have similar problems with sociopaths

One data point for you -- I don't.

Given the base rate for these things, it appears that you're choosing to ignore information so you don't feel obligated to deal with it.

To change your mind, among other options, I could tell you some true stories about people who are much richer than you taking advantage of people in your class.

Also, it sounds like your goal is to increase your status by bashing my position, not actually resolving the issue.

(Suggesting) that relationships do not cross status (class) boundaries?

Yes, because these relationships are risky for the lower-status person, and impose externalities on others. Social mobility is provided by education, skill, etc. -- I'm not proposing hereditary classes.

It sounds like you're OK with polyamory with status differences. In that case, it appears the "winning" strategy is to build a harem of lower-status partners. This approach is arguably good for the individual but bad for others (less investment in children, more crime by low-status men who can't get sex, leads to infighting within the harem). For example, several Google executives are in open relationships with a wife who'd rather be monogamous plus some more attractive young people, and several players I know usually have a few partners at any one time. As there's no incentive for the players to be honest, this imposes costs on others.

How much sociopathy do you see in your community? None? Why is this important to you? Why is your community different from the average community with equivalent wealth/background?

Replies from: Jiro, Lumifer
comment by Jiro · 2014-07-23T15:00:03.092Z · LW(p) · GW(p)

I also wanted to get confirmation that other people have similar problems with sociopaths

One data point for you -- I don't.

Given the base rate for these things, it appears that you're choosing to ignore information so you don't feel obligated to deal with it.

Isn't that the same sort of data-ignoring that you're complaining that he does? You just asked for some data, he gave it to you (he told you he doesn't have such problems) and you refused to believe it. What's the point of even asking people to confirm something if you won't accept "no, I confirm the opposite" as an answer?

Replies from: Algernoq
comment by Algernoq · 2014-07-24T00:54:02.147Z · LW(p) · GW(p)

I read his comment as "I don't want to know about other people having problems with sociopaths", not as "I don't have problems with sociopaths".

That makes sense... his comment isn't quite as bad then. To put it in another context (poverty instead of sociopathy), he meant something like "I'm not poor, hahah!" and I thought he meant "I like to ignore the poor, haha!" He's just saying he's high-status, and I thought he was saying he enjoys laughing at low-status people.

Replies from: Jiro
comment by Jiro · 2014-07-24T09:21:33.523Z · LW(p) · GW(p)

I think even that is being unfair to him. "I don't have problems with sociopaths and I think it's because I'm not the kind of person who sociopaths bother" may be a claim of high status, but "I don't have problems with sociopaths and I think that's because people in general don't have problems with sociopaths, and you're biased or unlucky" is not. (It can't be a claim of high status--if it is, that would mean that your question is a catch-22, where anyone who actively fails to confirm you is automatically claiming high status.)

Replies from: Algernoq
comment by Algernoq · 2014-07-25T02:01:18.018Z · LW(p) · GW(p)

people in general don't have problems with sociopaths

I agree it sounds like he's claiming the above. I don't see how this is useful or accurate, because it fits the pattern of "people in general don't have problems with (X widely known problem)".

I agree, someone who does not notice sociopaths likely has higher status than someone who does.

I can believe he genuinely doesn't see sociopaths in his community. Given the base rate for sociopathy is ~1% and that he has probably met, very roughly, 4,000 people, the probability that he has never met a sociopath in his community is (.99)^(4,000)=3.47*10^-18. In other words, the probability that he has met a sociopath and didn't realize it is ~100%.
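
As a quick sanity check of that arithmetic, here is a minimal sketch (not from the thread; it assumes every encounter is an independent draw at the ~1% base rate quoted above, and the function name is my own illustration):

    # Probability of never meeting a sociopath across n independent encounters,
    # given that each person has probability `rate` of being a sociopath.
    def prob_never_meet(rate, n):
        return (1.0 - rate) ** n

    print(prob_never_meet(0.01, 4000))  # ~3.5e-18, i.e. the (.99)^4000 figure above

The same formula applies to any base rate and acquaintance count, so the figures can be re-run under different assumptions.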

This conversation becomes pointless. As Thucydides said: questions of justice only exist between equal powers.

Replies from: Jiro
comment by Jiro · 2014-07-25T02:25:34.776Z · LW(p) · GW(p)

In other words, the probability that he has met a sociopath and didn't realize it is ~100%.

"I don't have problems with sociopaths" doesn't mean that he has met absolutely zero sociopaths, so this calculation is meaningless.

I agree, someone who does not notice sociopaths likely has higher status than someone who does.

The point is that it's not higher status. What you basically did was a catch-22 where you "asked for information", but set it up so that everyone would either have to agree with you, or be interpreted as making a status grab.

comment by Lumifer · 2014-07-23T14:53:17.370Z · LW(p) · GW(p)

What I'm claiming is that people have tried to take advantage of me, including the examples I gave above, as well as every car salesman I've ever met. It's not a high percentage -- most people are good/neutral -- but there are some people who are mildly amused by trying to hurt me.

I want you to acknowledge that the people in the examples I gave more likely than not tried to hurt me for reasons including their own amusement.

Ah. It looks to me that some of our disagreement, as is often the case, is a terminology problem.

You've been talking about sociopaths. "Sociopath" is a diagnosis of a mental illness, a personality disorder. You asked:

How sociopathic would an example have to be to meet your arbitrary criteria?

The answer to that is provided by the DSM. You can read it here. Sociopathy is not common.

I don't think you are using this term properly. Instead I'd like to offer you two other alternative expressions.

On a colloquial level those you've been talking about are usually called assholes (and sometimes dicks/bitches as appropriate). Assholes are certainly plentiful in the world and complaints about being surrounded by dicks and assholes...

...must...not...make...bad...allegories...

...oh, where was I? sorry. So, there are lots of assholes and there are lots of complaints about them throughout the history in pretty much every society. You want to join the litany? Sure, the line is over there, please take a number, it is sevenbillionmumblemumble, wait for it to be called.

On a slightly more analytical level, I think a better word for you to use is amoral (or, maybe, immoral). It seems to me that you're not bothered by these people's lack of emotional reaction, you're bothered by what they find fine to say and do -- and that's morality and ethics. You probably think that their morals are either absent or bad.

That is different from claims of sociopathy and -- surprise -- also very very common.

Why is it so important to you that the world is...what?

I am participating in a conversation on an internet forum. In the great scheme of things that's not particularly important to me. But the (meta) subject of this discussion is accuracy of maps, not features of the territory.

I want better solutions for negotiating with and ideally changing sociopathy

If you want to learn negotiating, any B&N will have a shelf of books devoted to that. I think you'll be able to find a variety of materials on LW as well which will be helpful.

By "changing sociopathy" I think you mean changing the morality of some people. I don't think it's going to be a fruitful endeavour, but that's just me.

Who is truly unbiased and therefore in your view able to make reliable decisions?

Someone who is not too excitable and not emotionally invested in particular conclusions.

I could tell you some true stories about people who are much richer than you taking advantage of people in your class.

LOL. You are not referring to something like political assassination stories from Salon about how the Koch brothers orgasm every time their tentacles tighten a bit more around the throats of hard-working widows and orphans..?

And what is my "class", by the way?

because these relationships are risky for the lower-status person

A relationship with another low-status person is risky, too, in different ways. If you are a trailer-park girl, is it really better for you to choose a trailer-park mate?

It sounds like you're OK with polyamory with status differences. In that case, it appears the "winning" strategy is to build a harem of lower-status partners.

Interesting. That's a... revealing comment. What "winning" means is defined by your values. Looking at myself (since I can speak for myself), I would not consider lording over a harem of low-status partners to be "winning"; in fact I would probably actively try to avoid such a situation. But do you think that's what "winning" is?

with a wife who'd rather be monogamous

LOL. And how do you know this, do tell... You're not engaging in some blatant gender stereotyping, are you?

Replies from: Algernoq
comment by Algernoq · 2014-07-25T02:55:40.295Z · LW(p) · GW(p)

How sociopathic would an example have to be to meet your arbitrary criteria?

The answer to that is provided by the DSM. You can read it here.

I looked at your link (dsm.pdf at the psi.uba.ar website) and it doesn't mention sociopathy. Do a word search. If you're going to troll with sources, at least pick relevant ones.

Sociopathy is not common.

The DSM does say that antisocial personality disorder includes what used to be called sociopathy. Multiple sources indicate that about 0.6% of US adults have antisocial personality disorder. If you've met an unusually small number of people (which my limited interaction with you suggests is possible), for example only 2000 people, then the probability of you never having met a sociopath is about (.994)^(2000)=0.000006. In other words, the probability that you're ignoring evidence is ~100%.

LOL

This is how you practice Rationality?? Protip: this is a bias called "emotional reasoning".

there are lots of assholes and there are lots of complaints about them

I hoped LW would discuss clever ways to solve universal problems, such as death and assholery.

You are not referring to something like political assassination stories from Salon

Correct. Real GDP growth is minimal and real stock market values are flat/decreasing on average, while capital owners are becoming much richer. The law also heavily favors people who can afford expert lawyers.

And what is my "class", by the way?

I'm guessing you're white and male, average attractiveness/intelligence, early 20s, earning less than $50,000/year, and spend a lot of time unproductively on the Internet. So, higher than the global average, but not on track to ever reach the 1%. Am I right?

If you want to learn negotiating

I'm learning already! Based on your helpful input, I've found a solution to my problem and I'm enjoying some new-found freedom of expression.

any B&N

They haven't gone bankrupt yet?

I think you'll be able to find a variety of materials on LW as well which will be helpful.

Yup, if you're a white male, being a narcissistic asshole without self-awareness is a winning strategy in most modern situations.

That's a... revealing comment.

Hahaahaha. Ha. Ha. Ha. Nice rhetoric. Oh, my aching sides.

the (meta) subject of this discussion is accuracy of maps

Let me know when you're ready to talk about the actual subject of this discussion.

Replies from: wedrifid, Lumifer
comment by wedrifid · 2014-07-25T10:23:33.753Z · LW(p) · GW(p)

I'm guessing you're white and male, average attractiveness/intelligence, early 20s, earning less than $50,000/year, and spend a lot of time unproductively on the Internet. So, higher than the global average, but not on track to ever reach the 1%. Am I right?

Lumifer is distinctly above average in intelligence. That, combined with what I infer is somewhat more experience at the game, is why he is beating you at the one-upmanship contest you two are having. He is coming off less badly despite displaying more antagonism. Feel free to keep practicing, but note that you can tell from the voting patterns that the audience has tired of the show (downvoting both sides) so it may be better to discuss with different, more cooperative, discussion partners for a while.

comment by Lumifer · 2014-07-25T04:19:36.172Z · LW(p) · GW(p)

If you're going to troll with sources ... The DSM does say that antisocial personality disorder includes what used to be called sociopathy.

You look confused. So, was I trolling with sources, or was my link to the antisocial personality disorder actually relevant to the discussion of sociopathy?

If you've met an unusually small number of people (which my limited interaction with you suggests is possible), for example only 2000 people, then the probability of you never having met a sociopath is about (.994)^(2000)=0.000006. In other words, the probability that you're ignoring evidence is ~100%.

Not only confused, but with reading comprehension problems, too.

I said: "I don't have sociopaths either in my social circle or among my co-workers -- at least easily recognizable ones." That is not talking about how many people I have met. My social circle + co-workers is much smaller than 2000 people. I have certainly met sociopaths, in fact one works nearby and I see him on a regular basis. But he is neither part of my social circle nor a co-worker.

This is how you practice Rationality??

Laughing is an excellent way to practice rationality. As to the capital-R Rationality, I don't do that.

real stock market values are flat/decreasing on average, while capital owners are becoming much richer

The first part of that sentence does not exactly match the second part... X-)

I'm guessing .... Am I right?

Wrong, of course.

if you're a white male, being a narcissistic asshole without self-awareness is a winning strategy in most modern situations.

LOL. So, since rationality = winning, you are claiming that the rational thing for a white male to do is to become a narcissistic asshole. An interesting point of view.

the actual subject of this discussion.

Which is..?

Replies from: Algernoq
comment by Algernoq · 2014-07-29T04:53:03.163Z · LW(p) · GW(p)

It still sounds like you're trolling. Let me know when you're ready to have a real discussion, and apologize for trolling.

comment by Algernoq · 2014-07-23T00:20:26.042Z · LW(p) · GW(p)

In addition to disagreeing with Lumifer's position here for the obvious reasons stated below, I humbly submit that the up-votes on his comment above are evidence that "many LWers are not very rational". While I don't know what the base rate is for this, I hoped for better.

comment by Algernoq · 2014-07-23T00:29:14.039Z · LW(p) · GW(p)

In addition to disagreeing with Lumifer's position here for the obvious reasons stated above, I humbly submit that the up-votes on his comment above are evidence that "many LWers are not very rational". While I don't know what the base rate is for this, I hoped for better.
Edit: Looks like the vote total has corrected itself to the negative.

comment by Dentin · 2014-07-15T21:56:14.491Z · LW(p) · GW(p)

There are a lot of biases and cultural norms to overcome in making the transition from mono- to poly-amory. While I've remained monogamous myself, it's purely for time and efficiency reasons, and if I didn't have Stuff To Do, I'd probably go that direction as well.

Replies from: James_Miller, Algernoq
comment by James_Miller · 2014-07-17T03:34:03.437Z · LW(p) · GW(p)

While I've remained monogamous myself, it's purely for time and efficiency reasons

Worst Valentine's Day card ever.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2014-07-17T12:34:15.320Z · LW(p) · GW(p)

Yeah, that's funny. But Dentin does have a point, even if he didn't formulate it very romantically. It takes time and effort to do a relationship justice; and if you don't have that time, it's better to stay monogamous.

comment by Algernoq · 2014-07-22T00:10:42.024Z · LW(p) · GW(p)

biases and cultural norms to overcome

Some of these exist for good reasons. Among other issues, polyamory gives high-status men an excuse to tell low-status men that their feelings of discomfort are "biases and cultural norms to overcome".

I'd say it's just like monogamous sex: it's best not to (if you're trying to maximize productivity), but if you're going to do it anyway you might as well do it in a well-thought-out happiness-increasing way.

comment by [deleted] · 2014-07-13T23:25:22.335Z · LW(p) · GW(p)

To each their own; I'm not judging. I didn't know that was a common belief here. I can see how it makes sense for certain people's lifestyle choices. I just don't see the connection to rationality.

Replies from: TheMajor
comment by TheMajor · 2014-07-14T00:22:48.464Z · LW(p) · GW(p)

I think a part of the reason is that most people would never even consider a polyamorous relationship, whereas for quite a lot of people it might be a better option than the alternatives. If this is true, then being in a polyamorous relationship is a strong indicator of actually considering alternatives and embracing the truth when stumbling upon it.

Having said all that, I think it is not one of the central activities related to LW; the implication mentioned above is valid only so long as people don't make a habit of trying (radically) different sorts of romance.

comment by Viliam_Bur · 2014-07-14T06:41:24.883Z · LW(p) · GW(p)

That topic used to be discussed on LW... but now I realize I haven't heard about it much recently.

comment by Viliam_Bur · 2014-07-13T21:34:07.922Z · LW(p) · GW(p)

I feel like the more important question is: How specifically has LW succeeded in making this kind of impression on you? I mean, are we so bad at communicating our ideas? Because many things you wrote here seem to me like quite the opposite of LW. But there is a chance that we really are communicating things poorly, and somehow this is an impression people can get. So I am not really concerned about the things you wrote, but rather about the fact that someone could get this impression. Because...

Rationality doesn't guarantee correctness.

Which is why this site is called "Less Wrong" in the first place. (Instead of e.g. "Absolutely Correct".) In many places in the Sequences it is written that, unlike the hypothetical perfect Bayesian reasoner, humans are pretty lousy at processing available evidence, even when we try.

deciding what to do in the real world requires non-rational value judgments

Indeed, this is why a rational paperclip maximizer would create as many paperclips as possible. (The difference between irrational and rational paperclip maximizers is that the latter has a better model of the world, and thus probably succeeds in creating more paperclips on average.)

Many LWers seem to assume that being as rational as possible will solve all their life problems.

Let's rephrase it as "...will provide them with a better chance at solving their life problems."

instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done.

Not sure exactly what you suggest here. We should not waste time reflecting, but instead pick a path quickly, because time is important. But we should find data. Uhm... I think that finding the data and processing the data take some time, so I am not sure whether you recommend doing it or not.

LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

You seem to suggest some sinister strategy is used here, but I am not sure what other approach you would recommend as less sinister. Math, science, philosophy... are the topics mostly nerds care about. How should we run a debate about math, science and philosophy in a way that will be less attractive to nerds, will attract many extraverted, highly social non-intellectuals, and will still produce meaningful results?

Because I think many LWers would actually not oppose trying that, if they believed such a thing was possible and they could organize it.

LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW

This is not strong evidence against the usefulness of LW. If you imagine a parallel universe with an alternative LW that does increase the average success of its readers, then even in that parallel universe, most of the most impressive LW readers became that impressive before reading LW. It is much easier to attract a PhD student at a top university with a smart text than to attract a smart-but-not-so-awesome person and make them a PhD student at a top university during the next year or two.

For example, the reader may be of the wrong age to become a PhD student during the time they read LW; they may be too young or too old. Or the reader may have made some serious mistakes in the past (e.g. choosing the wrong university) that even LW cannot help overcome in the limited time. Or the reader may be so far below the top level that even making them more impressive is not enough to get them a PhD at a top university.

the LW community may or may not ... encourage them ... to drop out of their PhD program, go to "training camps" for a few months ...

WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW "training camps" (which by the way don't take a few months -- EDIT: I was wrong, actually there was one).

Here is a real LW discussion with a PhD student; you can see what realistic LW advice would look like. Here is some general study advice. Here is a CFAR "training camp" for students, and it absolutely doesn't require anyone to drop out of school... hint: it takes two weeks in August.

In summary: real LW does not resemble the picture you described, and is sometimes actually closer to the opposite of it.

Replies from: jsteinhardt, jsteinhardt, TheMajor, Algernoq, TheAncientGeek, hairyfigment
comment by jsteinhardt · 2014-07-13T23:41:45.419Z · LW(p) · GW(p)

WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW "training camps" (which by the way don't take a few months).

When I visited MIRI one of the first conversations I had with someone was them trying to convince me not to pursue a PhD. Although I don't know anything about the training camp part (well, I've certainly been repeatedly encouraged to go to a CFAR camp, but that is only a weekend and given that I teach for SPARC it seems like a legitimate request).

Replies from: pianoforte611
comment by pianoforte611 · 2014-07-14T00:45:44.672Z · LW(p) · GW(p)

Convincing someone not to pursue a PhD is rather different from convincing someone to drop out of a top-10 PhD program to attend LW training camps. The latter does indeed merit the response WTF.

Also, there are lots of people, many of them graduate students and PhDs themselves, who will try to convince you not to do a PhD. It's not an unusual position.

comment by jsteinhardt · 2014-07-13T23:34:09.401Z · LW(p) · GW(p)

I mean, are we so bad at communicating our ideas?

I find this presumption (that the most likely cause for disagreement is that someone misunderstood you) to be somewhat abrasive, and certainly unproductive (sorry for picking on you in particular, my intent is to criticize a general attitude that I've seen across the rationalist community and this thread seems like an appropriate place). You should consider the possibility that Algernoq has a relatively good understanding of this community and that his criticisms are fundamentally valid or at least partially valid. Surely that is the stance that offers greater opportunity for learning, at the very least.

Replies from: pianoforte611, Luke_A_Somers
comment by pianoforte611 · 2014-07-14T00:49:44.821Z · LW(p) · GW(p)

I certainly considered that possibility and then rejected it. (If there are more than 2 regular commenters here who think that rationality guarantees correctness and will solve all of their life problems, I will buy a hat and then eat it.)

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2014-07-14T08:07:37.253Z · LW(p) · GW(p)

Whether rationality guarantees correctness depends on how one defines "rationality" and "correctness". Perfect rationality, by most definitions, would guarantee correctness of process. But one aspect of humans' irrationality is that they tend to focus on results, and think of something as "wrong" simply because a different strategy would have been superior in a particular case.

comment by Luke_A_Somers · 2014-09-05T19:40:45.545Z · LW(p) · GW(p)

When you believe ~A and someone says 'You believe A', what else is there? From most generous to least:

  • I misspoke, or I misunderstood your saying something else as saying I believe A.

  • You misheard me, or misspoke when saying that I believe A.

  • You're arguing in bad faith.

Note that 'I actually secretly believe A' is not on the list, so it seems to me that Viliam was being as generous as possible.

comment by TheMajor · 2014-07-13T22:38:49.073Z · LW(p) · GW(p)

I have come across serious criticism of the PhD programs at major universities here on LW (and on OB). This is not quite the same as a recommendation not to enroll in a PhD, and it most certainly is not the same as a recommendation to quit an ongoing PhD track, but I definitely interpreted such criticism as advice against taking such a PhD. Then again, I have also heard similar criticism from other sources, so it might well be a genuine problem with some PhD tracks.

For what it's worth my personal experiences with the list of main points (not sure if this should be a separate post, but I think it is worth mentioning):

Rationality doesn't guarantee correctness.

Indeed, but as Viliam_Bur mentions, this is way too high a standard. I personally notice that, while not always correct, I am certainly correct more often thanks to the ideas and knowledge I found at LW!

In particular, AI risk is overstated

I am not sure, but I was under the impression that your suggestion of 'just build some AI, it doesn't have to be perfect right away' is the thought that researchers got stuck on last century (the problem being that even making a dumb prototype was insanely complicated), when people were optimistically attempting to make an AI and kept failing. Why should our attempt be different? As for AI risk itself: I don't know whether or not LW is blowing the risk out of proportion (in particular I do not disagree with them, I am simply unsure).

LW has a cult-like social structure.

I agree wholeheartedly; you beautifully managed to capture my feelings of unease. By targeting socially awkward nerds (such as me, I confess) it becomes unclear whether the popularity of LW among intellectuals (e.g. university students; I am looking for a better word than 'intellectuals' but fail to find anything) is due to genuine content or due to a clever approach to a vulnerable audience. However, from my personal experience I can confidently assert that the material from LW (and OB, by the way) is indeed of high quality. So the question that remains is: if LW has good material, why does it/do we still target only a very susceptible audience? The obvious answer is that the nerds are most interested in the material discussed, but as there are many, many more non-nerds than nerds, it would make sense to appeal to a broader audience (at the cost of quality), right? This would probably take a lot of effort (like writing the Sequences for an audience that has trouble grasping fractions), but perhaps it would be worth it?

Many LWers are not very rational.

In my experience non-LWers are even less rational. I fear that again you have set the bar too high - reading the Sequences will not make you a perfect Bayesian with Solomonoff priors; at best it will make you a slightly closer approximation. And let me mention again that personally I have gotten decent mileage out of the Sequences (but I am also counting the enjoyment I have reading the material as one of the benefits; I come here not just to learn but also to have fun).

LW membership would make me worse off.

This I mentioned earlier. I notice that you define success in terms of money and status (makes sense), and the easiest ways to try to get these would be using the 'Dark Arts'. If you want a PhD, just guess the teacher's password. It has worked for me so far (although I was also interested in learning the material, so I read papers and books with understanding as a goal in my spare time). However, these topics are indeed not discussed (and certainly not in the form of 'In order to get people to do what you want, use these three easy psychological hacks') on LW. Would it solve your problem if such things were available?

"Art of Rationality" is an oxymoron.

Just because something is true does not mean that it is not beautiful?

Replies from: David_Gerard, dxu
comment by David_Gerard · 2014-07-14T11:39:05.924Z · LW(p) · GW(p)

LW has a cult-like social structure.

I agree wholeheartedly; you beautifully managed to capture my feelings of unease. By targeting socially awkward nerds (such as me, I confess) it becomes unclear whether the popularity of LW among intellectuals (e.g. university students; I am looking for a better word than 'intellectuals' but fail to find anything) is due to genuine content or due to a clever approach to a vulnerable audience.

I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut'n'pasted the leader's opinions into their heads. And that's definitely something that happens around LW.

Note that this does not require malice or even intent on the part of said leader! It's something happening in the heads of the recipients. But the leader needs to be aware of it - it's part of the cult attractor, selecting for people looking for stuff to cut'n'paste into their heads.

I know this one because the loved one is pursuing ordination in the Church of England ... and basically has this superpower: convincing people of pretty much anything. To the point where they'll walk out saying "You know, black really is white, when you really think about it ..." then assume that that is their own conclusion that they came to themselves, when it's really obvious they cut'n'pasted it in. (These are people of normal intelligence, being a bit too easily convinced by a skilled and sincere arguer ... but loved one does pretty well on the smart ones too.)

As I said to them, "The only reason you're not L. Ron Hubbard is that you don't want to be. You'd better hope that's enough."

Edit: The tell is not just cut'n'pasting the substance of the opinions, but the word-for-word phrasing.

Replies from: Risto_Saarelma, FeepingCreature
comment by Risto_Saarelma · 2014-07-15T12:22:46.220Z · LW(p) · GW(p)

I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut'n'pasted the leader's opinions into their heads. And that's definitely something that happens around LW.

The failure mode might be that it's not obvious that an autodidact who spent a decade absorbing relevant academic literature will have a very different expressive range than another autodidact who spent a couple months reading the writings of the first autodidact. It's not hard to get into the social slot of a clever outsider because the threshold for cleverness for outsiders isn't very high.

The business of getting a real PhD is pretty good at making it clear to most people that becoming an expert takes dedication and work. Internet forums have no formal accreditation, so there's no easy way to distinguish between "could probably write a passable freshman term paper" knowledgeable and "could take some months off and write a solid PhD thesis" knowledgeable, and it's too easy for people in the first category to be unaware how far they are from the second category.

comment by FeepingCreature · 2014-07-17T22:38:15.996Z · LW(p) · GW(p)

I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut'n'pasted the leader's opinions into their heads. And that's definitely something that happens around LW.

I don't know. On the one hand, that's how you would expect it to look if the leader is right. On the other hand, "cult leader is right" is also how I would expect it to feel if the cult leader were merely persuasive. On the third hand, I don't feel like I absorbed lots of novel things from the cult leader, but mostly concretified notions and better terms for ideas I'd held already, and I remember many Sequences posts having a critical comment at the top.

A further good sign is that the Sequences are mostly retellings of existing literature. It doesn't really match the "crazy ideas held for ingroup status" profile of cultishness.

Replies from: David_Gerard
comment by David_Gerard · 2014-09-07T10:38:35.408Z · LW(p) · GW(p)

The cut'n'paste not merely of the opinions, but of the phrasing, is the tell that this is undigested. Possibly this could be explained by complete correctness combined with literary brilliance, but we're talking about one-draft daily blog posts here.

Replies from: FeepingCreature
comment by FeepingCreature · 2014-09-07T10:42:43.884Z · LW(p) · GW(p)

I feel like charitably, another explanation would just be that it's simply a better phrasing than people come up with on their own.

but we're talking about one-draft daily blog posts here.

So? Fast doesn't imply bad. Quite the opposite: fast work with a short feedback cycle is one of the best ways to get really good.

comment by dxu · 2014-11-23T21:41:42.132Z · LW(p) · GW(p)

So the question that remains is: if LW has good material, why does it/do we still target only a very susceptible audience?

This (to me) reads like you're implying intentionality on the part of the writers to target "a very susceptible audience". I submit the alternative hypothesis that most people who make posts here tend to be of a certain personality type (like you, I'm looking for a better term than "personality type" but failing to find anything), and as a result, they write stuff that naturally attracts people with similar personality types. Maybe I'm misreading you, but I think it's a much more charitable interpretation than "LW is intentionally targeting psychologically vulnerable people". As a single data point, for instance, I don't see myself as a particularly insecure or unstable person, and I'd say I'm largely here because much of what EY (and others on LW) wrote makes sense to me, not because it makes me feel good or fuels my ego.

This would probably take a lot of effort (like writing the Sequences for an audience that has trouble grasping fractions), but perhaps it would be worth it?

With respect, I'd say this is most likely an impossible endeavor. Anyone who wants to try is welcome to, of course, but I'm just not seeing someone who can't grok fractions being able to comprehend more than 5% of the Sequences.

comment by Algernoq · 2014-07-13T23:13:06.259Z · LW(p) · GW(p)

are we so bad at communicating our ideas?

Not generally -- I keep coming back for the clear, on-topic, well-reasoned, non-flame discussion.

Not sure exactly what you suggest here. We should not waste time reflecting...but...

Many (I guess 40-70%) of meetups and discussion topics are focused on pursuing rational decision-making for self-improvement. Honestly I feel guilty about not doing more work and I assume other readers are here not because it's optimal but because it's fun.

There's also a sentiment that being more Rational would fix problems. Often, it's a lack of information, not a lack of reasoning, that's causing the problem.

This is not strong evidence against the usefulness of LW.

I agree, and I agree LW is frequently useful. I would like to see more reference to non-technical experts for non-technical topics. As an extreme example, I'm thinking of a forum post where some (presumably young) poster asked for a Bayesian estimate on whether a "girl still liked him" based on her not calling, upvoted answers containing Bayes' Theorem and percentage numbers, and downvoted my answer telling him he didn't provide enough information. More generally, I think there can be a problem similar to the one in some Christian literature, where people will take "(X) Advice" because they are part of the (X) community even though the advice is not the best available advice.

Essentially, I think the LW norms should encourage people to learn proven technical skills relevant to their chosen field, and should acknowledge that it's only advisable to think about Rationality all day if that's what you enjoy for its own sake. I'm not sure to what extent you already agree with this.

A few LW efforts appear to me to be sub-optimal and possibly harmful to those pursuing them, but this isn't the place for that argument.

How should we do a debate about math, science and philosophy... for non-intellectuals?

Not answering this question is limiting the spread of LW, because it's easy to dismiss people as not sufficiently intellectual when they don't join the group. I don't know the answer here.

A movement aiming to remove errors in thinking is claiming a high standard for being right.

WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out

The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.

real LW does not resemble the picture you described

I'm glad your experience has been more ideal.

Replies from: Viliam_Bur, TheMajor, ChristianKl, ThisSpaceAvailable
comment by Viliam_Bur · 2014-07-14T07:03:17.786Z · LW(p) · GW(p)

The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.

If it did happen, then I want to know that it happened. It's just that this is the first time I have even heard about a month-long LW event. (Which may say something about my ignorance -- EDIT: it did, indeed -- since till yesterday I didn't even know SPARC takes two weeks, so I thought one week was the maximum for an LW event.)

I heard a lot of "quit the school, see how successful and rich Zuckerberg is" advice, but it was all from non-LW sources.

I can imagine people at some LW meetup giving this kind of advice, since there is nothing preventing people with opinions of this kind from visiting LW meetups and giving advice. It just seems unlikely, and it certainly is not the LW "crowd wisdom".

Replies from: kbaxter
comment by kbaxter · 2014-07-16T02:23:27.284Z · LW(p) · GW(p)

Here's the program he went to, which did happen exactly once. It was a precursor to the much shorter CFAR workshops: http://lesswrong.com/lw/4wm/rationality_boot_camp/

That said, as his friend I think the situation is a lot less sinister than it has been made out to sound here. He didn't quit to go to the program; he quit a year or so afterwards to found a startup. He wasn't all that excited about his PhD program and he was really excited about startups, so he quit and founded a startup with some friends.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-07-16T09:00:01.044Z · LW(p) · GW(p)

Thanks!

Now I remember I heard about that in the past, but I forgot completely. It actually took ten weeks!

comment by TheMajor · 2014-07-14T00:32:40.340Z · LW(p) · GW(p)

Often, it's a lack of information, not a lack of reasoning, that's causing the problem.

Embracing the conclusion implied by new information even if it is in disagreement with your initial guess is a vital skill that many people do not have. I was first introduced to this problem here on LW. Of course your claim might still be valid, but I'd like to point out that some members (me) wouldn't have been able to take your advice if it weren't for the material here on LW.

I'm thinking of a forum post where some (presumably young) poster asked for a Bayesian estimate on whether a "girl still liked him" based on her not calling

The problem with this example is really interesting - there exists some (subjectively objective) probability, which we can find with Bayesian reasoning. Your recommendation is meta-advice: rather than attempting to find this probability, you suggest investing some time and effort to get more evidence. I don't see why this would deserve downvotes (rather, I would upvote it, I think), but note that a response containing percentages and Bayes' Theorem is an answer to the question.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-14T12:29:13.480Z · LW(p) · GW(p)

Saying you didn't provide enough information for a probability estimate deserves downvotes because it misses the point. You can give probability estimates based on any information that's presented. The probability estimate will be better with more information, but it's still possible to do an estimate with low information.
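As a rough illustration of what such a low-information estimate might look like (all numbers here are made-up guesses for the example, not anything from the original thread):

```python
# Minimal Bayesian update for the "she hasn't called" example.
# All numbers are illustrative guesses, not data from the thread.

prior_likes = 0.5          # prior probability that she still likes him
p_nocall_if_likes = 0.3    # chance she doesn't call even if she does like him
p_nocall_if_not = 0.7      # chance she doesn't call if she doesn't

# Bayes' theorem: P(likes | no call)
p_nocall = prior_likes * p_nocall_if_likes + (1 - prior_likes) * p_nocall_if_not
posterior_likes = prior_likes * p_nocall_if_likes / p_nocall

print(f"P(likes | no call) = {posterior_likes:.2f}")  # 0.30 with these guesses
```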

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-09-05T19:43:30.652Z · LW(p) · GW(p)

Using a Value of Information calculation would be best, especially if tied to proposed experiments.
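A minimal sketch of how such a value-of-information calculation could be set up for this kind of decision, with hypothetical utilities and probabilities chosen purely for illustration:

```python
# Value of Information sketch: is it worth running a cheap "experiment"
# (e.g. sending a friendly text) before deciding whether to ask her out?
# All numbers are hypothetical.

p_likes = 0.4                                  # current estimate
u_ask_likes, u_ask_not, u_no_ask = 10.0, -2.0, 0.0

def eu_best(p):
    """Expected utility of the better action, given P(likes) = p."""
    return max(p * u_ask_likes + (1 - p) * u_ask_not, u_no_ask)

# Expected utility of acting now, without more information:
eu_now = eu_best(p_likes)

# Proposed experiment: does she reply warmly to a text?
p_warm_if_likes, p_warm_if_not = 0.8, 0.3
p_warm = p_likes * p_warm_if_likes + (1 - p_likes) * p_warm_if_not
post_warm = p_likes * p_warm_if_likes / p_warm
post_cold = p_likes * (1 - p_warm_if_likes) / (1 - p_warm)

# Expected utility if we observe the result first, then choose:
eu_with_info = p_warm * eu_best(post_warm) + (1 - p_warm) * eu_best(post_cold)
print(f"Value of information = {eu_with_info - eu_now:.2f}")  # small but >= 0
```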

comment by ChristianKl · 2014-07-14T12:12:12.537Z · LW(p) · GW(p)

At the same time you seem to criticise LW both for being self-help and for approaching rationality in an intellectual way that doesn't maximize life outcomes.

I do think plenty of people on LW care about rationality in an intellectual way and care about developing the idea of rationality, and about questions such as what happens when we apply Bayes' theorem to situations where it usually isn't applied.

In the case of deciding whether "a girl still likes a guy", a practical answer focused on the situation would probably encourage the guy to ask the girl out. As you describe the situation, nobody actually gave the advice that calculating probabilities is a highly useful way to deal with the issue.

However, that doesn't mean that the question of applying Bayes' theorem to the situation is worthless. You might learn something about the practical application of Bayes' theorem. You also get probability numbers that you could use to calibrate yourself.

Do you argue that calibrating your predictions for high-stakes emotional situations isn't a skill worth exploring, just because we live in a world where nearly nobody is actually good at making calibrated predictions in high-stakes emotional situations?

At LW we try to do something new. The fact that new ideas often fail doesn't imply that we shouldn't experiment with new ideas. If you aren't curious about exploring new ideas and only want practical advice, LW might not be the place for you.

The simple feeling of agency in the face of uncertainty also shouldn't be underrated.

The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend.

Are you arguing that there aren't cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?

The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend.

I don't know about month-long LW events, except maybe internships with an LW-affiliated organisation. Doing internships in general can bring people to do something they wouldn't have thought about before.

Replies from: Algernoq
comment by Algernoq · 2014-07-15T00:10:31.015Z · LW(p) · GW(p)

Do you argue that calibrating your predictions for high-stakes emotional situations isn't a skill worth exploring ...?

No, I agree it's generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.

At LW we try to do something new. The fact that new ideas often fail doesn't imply that we shouldn't experiment with new ideas. If you aren't curious about exploring new ideas and only want practical advice, LW might not be the place for you.

I guess what's really bothering me here is that less-secure or less-wise people can be taken advantage of by confident-sounding higher-status people. I suppose this is no more true in LW than in the world at large. I respect trying new things.

The simple feeling of agency in the face of uncertainty also shouldn't be underrated.

Hooray, agency! This is a question I hope to answer.

Are you arguing that there aren't cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?

I'm arguing that it was the wrong move in this case, and hurt him and others. In general, most startups fail, ideas are worthless compared to execution, and capital is available to good teams.

Replies from: kbaxter, ChristianKl
comment by kbaxter · 2014-07-16T14:28:26.049Z · LW(p) · GW(p)

By what metric was his decision wrong?

If he's trying to maximize expected total wages over his career, staying in academia isn't a good way to do that. Although he'd probably be better off at a larger, more established company than at a startup.

If he's trying to maximize his career satisfaction, and he wasn't happy in academia but was excited about startups, he made a good decision. And I think that was the case here.

Some other confounding factors about his situation at the time:

  • He'd just been accepted to YCombinator, which is a guarantee of mentoring and venture capital

  • Since he already had funding, it's not like he was dumping his life savings into a startup expecting a return

  • He has an open invitation to come back to his PhD program whenever he wants

If you still really want to blame someone for his decision, I think Paul Graham had a much bigger impact on him than anyone associated with LessWrong did.

Replies from: Algernoq
comment by Algernoq · 2014-07-16T16:26:02.747Z · LW(p) · GW(p)

YC funding is totally worth going after! He made the right choice given that info. That's what I get for passing on rumors.

comment by ChristianKl · 2014-07-15T08:18:17.529Z · LW(p) · GW(p)

No, I agree it's generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.

It's an online discussion. There's a bunch of information that might not be shared because it's too private to be shared online. I certainly wouldn't share all information about a romantic interaction on LW. But I might share enough information to ask an interesting question.

I do consider this case to be an interesting question. I like it when people discuss abstract principles like rational decision-making via Bayes' theorem based on practical real-life examples instead of only on far-out thought experiments.

I'm arguing that it was the wrong move in this case, and hurt him and others.

If I'm understanding you right, you don't even know the individual in question. People drop out of PhD programs all the time. I don't think you can say whether or not they have good reasons for doing so without investigating the case on an individual basis.

comment by ThisSpaceAvailable · 2014-07-14T08:17:35.319Z · LW(p) · GW(p)

I'd just like to point out that ranking is a function of both the school and the metric, and thus the phrase "top-10 school" is not really well-formed. While it does convey significant information, it implies undue precision, and allowing people to sneak in unstated metrics is problematic.

comment by TheAncientGeek · 2014-07-17T18:00:02.548Z · LW(p) · GW(p)

deciding what to do in the real world requires non-rational value judgments

Indeed, this is why a rational paperclip maximizer would create as many paperclips as possible. (The difference between irrational and rational paperclip maximizers is that the latter has a better model of the world, and thus probably succeeds in creating more paperclips on average.)

But where's the training in refining your values?

comment by hairyfigment · 2014-07-14T19:13:48.251Z · LW(p) · GW(p)

Uhm... I think that finding the data, and processing the data takes some time, so I am not sure whether you recommend doing it or not.

And when I think of 'LW failure modes', I imagine someone acting without further analysis. For example, let's say a member of the general population calls people with different political views irrational, and opines that they would raise the quality of some website by leaving. If that person followed through by stalking them and downvoting (manually?) all their past comments, I would conclude he had a mental illness.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-14T21:21:20.233Z · LW(p) · GW(p)

For example, let's say a member of the general population calls people with different political views irrational, and opines that they would raise the quality of some website by leaving.

Plenty of US liberals consider people who voted for Bush irrational and wouldn't want them to be part of the political discourse. The same goes in the other direction.

If that person followed through by stalking them and downvoting (manually?) all their past comments, I would conclude he had a mental illness.

Welcome to the internet. There are plenty of people who misbehave in online forums. Most online forums are simply not very public about members who they ban and whose posts they delete.

I don't think stalking is a good word for the documented behavior in this case as all actions happened on this website. There are people who actually do get stalked for things they write online and who do get real life problems from the stalking.

Replies from: hairyfigment
comment by hairyfigment · 2014-07-14T21:28:27.004Z · LW(p) · GW(p)

I don't think stalking is a good word for the documented behavior

Sure, OK.

consider people...irrational

You don't say. My point is that many would verbally agree with such claims, but very few become Dennis Markuze.

Replies from: ChristianKl
comment by ChristianKl · 2014-07-14T21:57:05.551Z · LW(p) · GW(p)

As far as I know, nobody in this community became a Dennis Markuze.

I don't have the feeling that LW is above the internet base rate. Given how little LW is moderated, it's an extremely civil place.

comment by buybuydandavis · 2014-07-14T07:19:25.154Z · LW(p) · GW(p)

LW has a cult-like social structure. ...

Where the evidence for this is:

Appealing to people based on shared interests and values. Sharing specialized knowledge and associated jargon. Exhibiting a preference for like-minded people. Being more likely to appeal to people actively looking to expand their social circle.

Seems a rather gigantic net to cast for "cults".

Replies from: Cyan
comment by Cyan · 2014-07-14T14:32:07.622Z · LW(p) · GW(p)

Well, there's this:

However, involvement in LW pulls people away from non-LWers.

But that is similarly gigantic -- on this front, in my experience LW isn't any worse than, say, joining a martial arts club. The hallmark of cultishness is that membership is contingent on actively cutting off contact with non-cult members.

Replies from: Algernoq
comment by Algernoq · 2014-07-16T07:06:51.200Z · LW(p) · GW(p)

Compared to a martial arts club, LW goals are typically more all-consuming. Martial arts is occasionally also about living well, while LW encourages optimizing all aspects of life.

Replies from: Cyan
comment by Cyan · 2014-07-16T16:38:16.494Z · LW(p) · GW(p)

Sure, that's a distinction, but to the extent that one's goals include making/maintaining social connections with people without regard to their involvement in LW so as to be happy and healthy, it's a distinction that cuts against the idea that "involvement in LW pulls people away from non-LWers".

This falls under "the utility function is not up for grabs". It finds concrete expression in the goal factoring technique as developed by CFAR, which is designed to avoid failure modes like, e.g., cutting out the non-LWers one cares about due to some misguided notion that that's what "rationality" requires.

comment by dthunt · 2014-07-13T18:38:14.804Z · LW(p) · GW(p)

Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

Art in the other sense of the word. Think more along the lines of skills and practices.

Replies from: Nornagest, Algernoq
comment by Nornagest · 2014-07-14T01:19:31.465Z · LW(p) · GW(p)

I think "art" here is mainly intended to call attention to the fact that practical rationality's not a collection of facts or techniques but something that has to be drilled in through deliberate long-term practice: otherwise we'd end up with a lot of people that can quote the definitions of every cognitive bias in the literature and some we invented, but can't actually recognize when they show up in their lives. (YMMV on whether or not we've succeeded in that respect.)

Some of the early posts during the Overcoming Bias era talk about rationality using a martial arts metaphor. There's an old saying in that field that the art is 80% conditioning and 20% technique; I think something similar applies here. Or at least should.

(As an aside, I think most people who aren't artists -- martial or otherwise -- greatly overstate the role of talent and aesthetic invention in them, and greatly underestimate the role of practice. Even things like painting aren't anywhere close to pure aesthetics.)

Replies from: None
comment by [deleted] · 2014-07-14T02:18:30.455Z · LW(p) · GW(p)

Wang Yangming may be relevant here.

comment by Algernoq · 2014-07-13T23:43:06.713Z · LW(p) · GW(p)

Ahh, that makes more sense.

comment by pianoforte611 · 2014-07-13T19:27:40.326Z · LW(p) · GW(p)

Would it be fair to characterize most of your complaints as roughly "Less Wrong focuses too much on truth seeking and too little on instrumental rationality - actually achieving material success"?

Replies from: Algernoq
comment by Algernoq · 2014-07-13T23:39:40.525Z · LW(p) · GW(p)

I agree with that.

Replies from: pianoforte611, iarwain1
comment by pianoforte611 · 2014-07-14T12:11:37.909Z · LW(p) · GW(p)

In that case, I'm afraid your goals and the goals of many people here may simply be different. The common definition of rationality here is "systematic winning". However, this definition is very fuzzy because winning is goal dependent. Whether you are "winning" is dependent on what your goals and values are.

Can't speak for anyone else, but the reason why I am here is because I like polite but vigorous discussion. It's nice to be able to discuss topics with people on the internet in a way that does not drive me crazy. People here are usually open to new ideas, respectful, yet also uncompromising in the force of their arguments. Such an environment is much more helpful to me in learning about the world than the adversarial nature of most forum discussions. My goal in reading LessWrong is mostly finding like-minded people who I can talk to, share ideas with, learn from and disagree with, all without any bad feelings. That is a rare thing.

If your goal is achieving material success, there are certainly very general tools and skills you can learn, like getting over procrastination, managing your emotional state, or changing your value system to achieve your goals. CFAR is probably a better resource than LessWrong for learning about these tools (but I've never actually been to a workshop). However, there is no general way to achieve success that is specific enough to be useful for one person's goals. No one resource can possibly provide that. There are heuristics like "Find someone who is as successful as you would like to be and is willing to help you on your path; if necessary, harass them enough so that they help you" (which I would advocate for medicine) or "Find someone who is both very high-status and willing to help lesser beings, and get them to be your mentor" (which I would tentatively advocate for graduate school, though my sample size is small). But I don't know of a general path.

comment by iarwain1 · 2014-07-14T00:23:36.167Z · LW(p) · GW(p)

See this post and Anna Salamon's (partial) response.

Replies from: Will_BC, Algernoq
comment by Will_BC · 2014-07-14T02:01:25.960Z · LW(p) · GW(p)

Those posts are 4 years old and 2 years older than CFAR. I do think that LW could and should do better with instrumental rationality.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-07-14T04:04:25.734Z · LW(p) · GW(p)

Note that opinions differ on this topic, e.g. someone recently referred to LW as a "signaling and self-help cesspit" and got upvoted. Personally, I like seeing self-help stuff and I would encourage you to be the change you want to see :)

Replies from: Algernoq, Will_BC
comment by Algernoq · 2014-07-16T07:12:41.457Z · LW(p) · GW(p)

signaling and self-help cesspit

I saw a LOT of this, as well as some good/useful stuff.

comment by Will_BC · 2014-07-14T20:29:54.617Z · LW(p) · GW(p)

It's in the works. I've got a few ideas, but right now I'm running them by family and friends. I have some ambitious goals but I'll probably start small. I would like to see some big changes happen in the world, and I don't think that working in the most straightforward way towards the Singularity is the only way to bring them about.

Replies from: beth
comment by beth · 2014-07-15T18:13:30.091Z · LW(p) · GW(p)

When you're ready to share these ideas, please let me know how I can help.

Replies from: Will_BC
comment by Will_BC · 2014-07-16T18:10:37.615Z · LW(p) · GW(p)

Thank you very much for the offer. I should have a post up in over a week and under a month.

comment by Algernoq · 2014-07-16T07:10:03.535Z · LW(p) · GW(p)

Interesting links. Looks like this discussion has happened before.

comment by Nornagest · 2014-07-14T17:22:47.366Z · LW(p) · GW(p)

[I]nvolvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. [...] LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

I think you've got the causation going the wrong way here. LW does target a lot of socially awkward intellectuals. And a lot of LWers do harbor some contempt for their "less rational" peers. I submit, however, that this is not because they're LWers but rather because they're socially awkward intellectuals.

American geek culture has a strong exclusionist streak: "where were you when I was getting beaten up in high school?" Your average geek sees himself (using male pronouns here because I'm more familiar with the male side of the culture) as smarter and morally purer than Joe and Jane Sixpack -- who by comparison are cast as lunkish, thoughtless, cruel, but attractive and socially successful -- and as having suffered for that, which in turn justifies treating the mainstream with contempt and suspicion.

That's a pretty fragile worldview, though. It's threatened by any deviations from its binary classification of people, and indeed things don't line up so neatly in real life. LW's style of rationality provides a seemingly more robust line of division: on this side you have the people brave and smart enough to escape their heuristics and biases, and on that one you have the people that aren't, who continue to play bitches and Blunderbores as needed. You may recognize this as a species of outgroup homogeneity.

Where's the flaw in this line of thinking? Well, practical rationality is hard. A lot harder than pointing out biases in other people's thinking. But if all you're looking for from the community is a prop for your ego, you probably aren't strongly motivated to do much of that hard work; it's a lot easier just to fall back on old patterns of exclusion dressed up in new language.

comment by blacktrance · 2014-07-13T20:18:39.447Z · LW(p) · GW(p)

involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals.

Alternative hypothesis: Once a certain kind of person realizes that something like the LW community is possible and even available, they will gravitate towards it - not because LW is cultish, but because the people, social norms, and ideas appeal to them, and once that kind of interaction is available, it's a preferred substitute for some previously engaged-in interaction. From the outside, this may look like contempt for Normals. But from personal experience, I can say that from the inside it feels like you've been eating gruel all your life, and that's what you were used to, but then you discovered actual delicious food and don't need to eat gruel anymore.

Replies from: buybuydandavis
comment by buybuydandavis · 2014-07-14T07:48:58.416Z · LW(p) · GW(p)

Yes, it's rather odd to call a group of like-minded people a cult because they enjoy and prefer each other's company.

In grad school I used to be on a couple of email lists that I enjoyed because of the quality of the intellectual interaction and the topics discussed, one being Extropians in the 90s. I'd given that stuff up for a long time.

Got back into it a little a few years ago. I had been spending time at a forum or two, but was getting bored with them primarily because of the low quality of discussion. I don't know how I happened on HPMOR, but I loved it, and so naturally came to the site to take a look. Seeing Jaynes, Pearl, and The Map is not the Territory served as good signaling to me of some intellectual taste around here.

I didn't come here and get indoctrinated - I saw evidence of good intellectual taste and that gave me the motivation to give LW a serious look.

This is one suggestion I'd have for recruiting. Play up canonical authors more. Jaynes, Kahneman, and Pearl convey so much more information than Bayesian analysis, cognitive biases, and causal analysis. None of those guys is the be-all and end-all of their respective fields, but identifying them plants a flag where we see value that can attract similarly minded people.

comment by MathiasZaman · 2014-07-15T01:45:56.067Z · LW(p) · GW(p)

I've debated myself about writing a detailed reply, since I don't want to come across as some brainwashed LW fanboi. Then I realized this was a stupid reason for not making a post. Just to clarify where I'm coming from.

I'm in more-or-less the same position as you are. The main difference being that I've read pretty much all of the Sequences (and am slowly rereading them) and I haven't signed up for cryonics. Maybe those even out. I think we can say that our positions on the LW - Non-LW scale are pretty similar.

And yet my experience has been almost completely opposite of yours. I don't like the point-by-point response on this sort of thing, but to properly respond and lay out my experiences, I'm going to have to do it.

Rationality doesn't guarantee correctness.

I'm not going to spend much time on this one, seeing as how pretty much everyone else commented on this part of your post.

Some short points, though:

Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements.

This is addressed in a part of the Sequences you probably haven't read. I generally recommend "Three Worlds Collide" to people struggling with this distinction, but I haven't gotten any feedback on how useful that is.

Rationality can help you make "should"-statements, if you know what your preferences are. It helps you optimize towards your preferences.

When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving.

I believe the Sequences give the example that to be good at baseball, one shouldn't calculate the trajectory of the ball; one should just use the intuitive "ball-catching" parts of the brain and train those. While overanalyzing things seems to be a bit of a hobby for the aspiring rationalist community, if you think that they're the sort of people who will spend 25% of their time planning to shave off 5% of their driving time, you're simply wrong about who's in that particular community.

LW tends to conflate rationality and intelligence.

This is actually a completely different issue. One worth addressing, but not as part of "rationality doesn't guarantee correctness."

In particular, AI risk is overstated

I'm not the best suited to answer this, and it mostly comes down to your estimate of that particular risk. As ChristianKl points out, a big chunk of this community doesn't even think Unfriendly AGI is currently the biggest risk for humanity.

What I will say is that if AGI is possible (which I think it is), then UFAI is a risk. And since Friendliness is likely to be as hard as actually solving AGI, it's good that groundwork is being laid before AGI becomes a reality. At least, that's how I see it. I'd rather have some people working on that issue than none at all. Especially if the people working for MIRI are best at working on FAI, rather than on another existential risk.

LW has a cult-like social structure

No more than any other community. Everything you say in that part could be applied to the time I got really into Magic: The Gathering.

I don't think Less Wrong targets "socially awkward intellectuals" so much as it was founded by socially awkward intellectuals, and socially awkward intellectuals are more likely to find the presented material interesting.

However, involvement in LW pulls people away from non-LWers.

This has, in my case, not been true. My relationships with my close friends haven't changed one bit because of Less Wrong or the surrounding community, nor have my other personal relationships. If anything, Less Wrong has made me more likely to meet new people or do things with people I don't have a habit of doing things with. LessWrong showed me that I needed a community to support myself (a need that I hadn't consciously realized I had before) and HPMOR taught me a much-needed lesson about passing up on opportunities.

For the sake of honesty and completeness, I must say that I do very much enjoy the company of aspiring rationalists, both in meatspace at the meetups and in cyberspace (through various channels, mostly reddit, tumblr and skype). Fact of the matter is, you can talk about different things with aspiring rationalists. The inferential distances are smaller on some subjects. Just like how the inferential distances about the intricacies of Planeswalkers and magic are smaller with my Magic: The Gathering friends.

Many LWers are not very rational.

This is only sorta true. Humans in general aren't very rational. Knowing this gets you part of the way. Reading Influence: Science and Practice or Thinking, Fast and Slow won't turn you into a god, but they can help you realize some mistakes you are making. And that still remains hard for all but the most orthodox aspiring rationalists. And I keep using "aspiring rationalists" because I think that sums it up: the Less Wrong-sphere just strives to do better than default in the area of both epistemic and instrumental rationality. I can't think of anyone I've met (online or off-) who believes that "perfect rationality" is a goal mere humans can attain.

And it's hard to measure degrees of rationality. Ideally, LWers should be more rational than average, but you can't quite measure that, can you? My experience is that aspiring rationalists at least put in greater effort toward reaching their goals.

For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality

Rationality is a tool, not a goal. And the best interventions in my life have been shorter-term: getting more exercise, using HabitRPG, being aware of my preferences, Ask/Tell/Guess culture, Tsuyoku Naritai, spaced repetition software... these are the first things that come to mind that I use regularly and that actually improve my life and help me reach my goals.

And as anecdotal evidence: I once put it to the skype-group of rationalists that I converse with that every time I had no money, I felt like I was a bad rationalist, since I wasn't "winning." Not a single one blamed it on a Lack of Rationality.

Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.

If you want to understand that behavior, I encourage you to read the Sequences on morality. I could try to explain it, but I don't think I can do it justice. I generally hate the "just read the Sequences"-advice, but here I think it's applicable.

LW membership would make me worse off.

This is where I disagree the most. (Well, not that it would make you worse off. I won't judge that.) Less Wrong has most definitely improved my life. The suggestion to use HabitRPG or LeechBlock, the stimulating conversations and boardgames I have at the meetup each month, the lessons I learned here that I could apply in my job, discovering my sexual orientation, having new friends, picking up a free concert, being able to comfort my girlfriend more effectively, being able to better figure out which things are true, doing more social things... Those are just the things I can think of off the top of my head at 3.30 AM that Less Wrong allowed me to do.

I don't intend to convince you to become more active on Less Wrong. Hell, I'm not all that active on Less Wrong, but it has changed my life for the better in a way that a different community wouldn't have done.

Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.

It does, at least for me, and I seriously doubt that I'm the only one. I haven't reached a successful career (yet; working on that), but my life is more successful in other areas thanks in part to Less Wrong. (And my limited career-related successes are, in part, attributable to Less Wrong.) I can't quantify how much of this success can be attributed to LW, but that's okay, I think. I'm reasonably certain that it played a significant part. If you have a way to measure this, I'll measure it.

"Art of Rationality" is an oxymoron.

I like that phrase because it's a reminder that (A) humans aren't perfectly rational and require practice to become better rationalists and (B) that rationality is a thing you need to do constantly. I like this SSC post as an explanation.

Replies from: Algernoq, TheAncientGeek
comment by Algernoq · 2014-07-15T02:44:14.630Z · LW(p) · GW(p)

Thanks for the detailed reply!

Based on this feedback, I think my criticisms reflect mostly on my fit with the LWers I happened to meet, and on my unreasonably high standards for a largely informal group.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-07-15T16:52:22.315Z · LW(p) · GW(p)

Upvoted for updating.

comment by TheAncientGeek · 2014-07-17T17:55:41.914Z · LW(p) · GW(p)

No more [cultishness] than any other community.

One could reasonably expect significantly less.

comment by jsteinhardt · 2014-07-13T23:36:45.339Z · LW(p) · GW(p)

Hi Algernoq,

Thanks for writing this. This sentence particularly resonated:

LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills).

I was definitely explicitly discouraged from pursuing a PhD by certain rationalists and I think listening to their advice would have been one of the biggest mistakes of my life. Unfortunately I see this attitude continuing to be propagated so I am glad that you are speaking out against it.

EDIT: Although, it looks like you've changed my favorite part! The text that I quoted the above was not the original text (which talked more about dropping out of PhD and starting a start-up).

Replies from: Bruno_Coelho, Algernoq, Algernoq
comment by Bruno_Coelho · 2014-07-14T22:21:17.672Z · LW(p) · GW(p)

This anti-academic feeling is something I associate with LessWrong, mostly because people can find programming jobs without necessarily having a degree.

comment by Algernoq · 2014-07-14T00:11:15.943Z · LW(p) · GW(p)

Glad to hear it!

For others considering a PhD: usually the best (funded) PhD program you got into is a good choice for you. But only do it if you enjoy research/learning for its own sake.

Replies from: jsteinhardt
comment by jsteinhardt · 2014-07-14T01:18:54.908Z · LW(p) · GW(p)

Tangential, but:

usually the best (funded) PhD program you got into is a good choice for you. But only do it if you enjoy research/learning for its own sake.

I'm not sure I agree with this, except insofar as any top-tier or even second-tier program will pay for your graduate education, at least in engineering fields, and so if they do not then that is a major red flag. I would say that research fit with your advisor, caliber of peers, etc. is much more important.

Replies from: tslarm
comment by tslarm · 2014-07-16T03:21:14.180Z · LW(p) · GW(p)

I interpreted "the best (funded) PhD program you got into" to mean 'the best PhD program that offered you a funded place', rather than 'the best-funded PhD program that offered you a place'. So Algernoq's advice need not conflict with yours, unless he did mean 'best' in a very narrow sense.

comment by Algernoq · 2014-07-16T07:15:13.140Z · LW(p) · GW(p)

OK, I'll change it back. I heard it secondhand so I deleted it.

comment by John_Maxwell (John_Maxwell_IV) · 2014-07-14T03:58:40.079Z · LW(p) · GW(p)

Thanks for being bold enough to share your dissenting views. I'm voting you up just for that, given the reasoning I outline here.

I think you are doing a good job of detaching the ideas of LW that you think are valuable, adopting them, and ditching the others. Kudos. Overall, I'm not sure about the usefulness of debating the goodness or badness of "LW" as a single construct. It seems more useful to discuss specific ideas and make specific criticisms. For example, I think lukeprog offered a good specific criticism of LW thinking/social norms here. In general, if people take the time to really think clearly and articulate their criticisms, I consider that extremely valuable. On the opposite end of the spectrum, if someone says something like "LW seems weird, and weird things make me feel uncomfortable", that is not as valuable.

I'll offer a specific criticism: I think we should de-emphasize the sequences in the LW introductory material (FAQ, homepage, about page). (Yes, I was the one who wrote most of the LW introductory material, but I was trying to capture the consensus of LW at the time I wrote it, and I don't want to change it without the change being a consensus decision.) In my opinion, the sequences are a lot longer than they need to be, not especially information-dense, and also hard to update (there have been controversies over whether some point or another in the Sequences is correct, but those controversies never get appended to the Sequences).

Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.

I'm having a hard time forming a single coherent argument out of this paragraph. Yep, value judgements are important. I don't think anyone on Less Wrong denies this. Yep, it's hard to extrapolate beyond limited data. Is there a particular LW post that advocates extrapolating based on limited data? I haven't seen one. If so, that sounds like a problem with the post, not with LW in general. Yes, learning from real-world data is great. I think LW does a decent job of this; we are frequently citing studies. Yes, it's possible to overthink things, and maybe LW does this. It might be especially useful to point to a specific instance where you think it happened.

I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it's necessary to build and test prototypes in the real world.

Makes sense. In my work as a software developer, I've found that it's useful to think for a bit about what I'm going to program before I program it. My understanding is that mathematicians frequently prove theorems, etc. without testing them, and this is considered useful. So to the extent that AI is like programming/math, highly theoretical work may be useful.

My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

This seems like it deserves its own Open Thread comment/post if you want to explain it in detail. (I assume you have arguments for this idea as opposed to having it pop into your head fully formed :])

One way this happens is by encouraging contempt for less-rational Normals.

I agree this is a problem.

I imagine the rationality "training camps" do this to an even greater extent.

I went to a 4-day CFAR workshop. I found the workshop disappointing overall (for reasons that probably don't apply to other people), but I didn't see the "contempt for less-rational Normals" you describe present at the workshop. There were a decent number of LW-naive folks there, and they didn't seem to be treated differently. Based on talking to CFAR employees, they are wise to some of the problems you describe and are actively trying to fight them.

LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

Well sure, I might as well say that Comic-Con or Magic the Gathering attracts socially awkward people without many existing social ties. "LW recruiting" is not quite as strategic as you make it out to be (I'm speaking as someone who knows most of the CFAR and MIRI employees, goes to lots of LWer parties in the Bay Area, used to be housemates with lukeprog, etc.) I'm not saying it's not a thing... after the success of HPMOR, there have been efforts to capitalize on its success more fully. To the extent that specific types of people are "targeted", I'd say that intelligence is the #1 attribute. My guess is that if you were to poll people at MIRI & CFAR, and other high-status Bay Area LW people like the South Bay meetup organizers, they would, if anything, have a strong preference for having community members who are socially skilled and well-connected over socially awkward folks.

For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should").

Rationality seems like a pretty vague "solution" prescription. To the extent that there exists a hypothetical "LW consensus" on this topic, I think it would be that going to a CFAR workshop would solve these problems more effectively than reading the sequences, and a CFAR workshop is not much like reading the sequences.

LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW

Well, I think I have become substantially more successful during the time period when I've been a member of the LW community (got into a prestigious university and am now working at a high-paying job), and I think I can attribute some of this success to LW (my first two internships were at startups I met through the Bay Area LW network, and I think my mental health improved from making friends who think the same way I do). But that's just an anecdote.

"Art of Rationality" is an oxymoron.

Agreed. One could level similar criticisms at books with titles like The Art of Electronics or The Art of Computer Programming. But I think Eliezer made a mistake in trying to make rationality seem kind of cool and deep and wise in order to get people interested in it. (I think I remember reading him write this somewhere; can't remember where.)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-07-14T04:19:42.962Z · LW(p) · GW(p)

[pollid:737]

Note: Here's Yvain on why the sequences are great, to provide some counterpoint to my criticism above.

Replies from: buybuydandavis
comment by buybuydandavis · 2014-07-14T08:07:21.294Z · LW(p) · GW(p)

Where available, I would emphasize the original source material over the sequence rehash of them.

This would greatly lower the Phyg Phactor, limit in-group jargon, send a better signal to outsiders who also value that source material, and possibly create ties to other existing communities.

Replies from: David_Gerard, drethelin, John_Maxwell_IV
comment by David_Gerard · 2014-07-14T11:31:54.583Z · LW(p) · GW(p)

Where available, I would emphasize the original source material over the sequence rehash of them.

Needed: LW wiki translations of LW jargon into the proper term in philosophy. (Probably on the existing jargon page.)

comment by drethelin · 2014-07-14T20:18:37.468Z · LW(p) · GW(p)

I strongly disagree with this. I don't care about cult factor: The sequences are vastly more readable than the original sources. Almost every time I've tried to read stuff a sequence post is based on I've found it boring and given up. The original sources already exist and aren't attracting communities of new leaders who want to talk about and do stuff based on them! We don't need to add to that niche. We are in a different niche.

Replies from: blacktrance, buybuydandavis
comment by blacktrance · 2014-07-15T02:49:51.829Z · LW(p) · GW(p)

Seconded. I think HPMOR and the Sequences are a better introduction to rationality than the primary texts would be.

comment by buybuydandavis · 2014-07-14T22:40:41.291Z · LW(p) · GW(p)

Almost every time I've tried to read stuff a sequence post is based on I've found it boring and given up.

I didn't. I've read them all. Don't know how someone finds Jaynes "boring", but different strokes, etc.

The original sources already exist and aren't attracting communities of new leaders who want to talk about and do stuff based on them!

Phyg +1

Jaynes, Pearl, Kahneman, and Korzybski had followings long before LW and the sequences existed. Korzybski's Institute for General Semantics has been around since 1938, and was fairly influential, intellectually and culturally. They actually have some pretty good summary material, if reading Korzybski isn't your thing (and I can understand that one, as he was a tiresome windbag).

If you like the sequences, great, read them. I think you're missing out on a lot if you don't read the originals.

Simply as an outreach method, listing the various influences would pique more interest than "We've got a smart guy here who wrote a lot of articles! Come read them!" The sequences aren't the primary outreach advantage here - HPMOR is. Much like Rand's novels are for her.

Replies from: drethelin
comment by drethelin · 2014-07-15T02:09:04.637Z · LW(p) · GW(p)

My outreach method is usually not to do that but to link to a specific article about whatever we happened to be talking about, which is a lot faster than saying "Here, read a textbook on probability" or "Look at this Tversky and Kahneman study!"

Then again, I don't do a ton of LW outreach.

comment by John_Maxwell (John_Maxwell_IV) · 2014-07-14T09:46:57.126Z · LW(p) · GW(p)

We could direct people to Wikipedia's list of cognitive biases (putting effort in to improving the articles as appropriate and getting a few people to add the articles to their Wikipedia watchlists). Improving Wikipedia articles has the positive externality of helping anyone who reads the article (of which the LW-curious will make up a relatively small fraction).

I think the ideal way to present rationality might be a diagnostic test that lets you know where your rationality weaknesses are and how to improve them, but I'm not sure if this is doable/practical.

comment by kalium · 2014-07-13T21:48:01.404Z · LW(p) · GW(p)

I read LW for entertainment, and I've gotten some useful phrases and heuristics from it, but the culture bothers me (more what I've seen from LWers in person than on the site). I avoid "rationalists" in meatspace because there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid. People of similar intelligence who don't consider themselves rationalists are much nicer when you tell them "I'm not sure why, but I don't feel like doing xyz right now." (To be fair, my sample is not large. And I hope it stays that way.)

Replies from: philh, mare-of-night, Algernoq, ChristianKl
comment by philh · 2014-07-14T13:06:22.935Z · LW(p) · GW(p)

FWIW, I have the opposite experience with online versus offline.

I avoid "rationalists" in meatspace because there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid.

It wouldn't surprise me at all to see this on the website, but I wouldn't expect it to happen in meatspace.

(Obviously meetups vary, but I help organize the London meetup, I went to the European megameetup, I went to CFAR, and I've spent a small amount of time with the SF/Berkeley crowd.)

comment by mare-of-night · 2014-07-15T00:30:14.551Z · LW(p) · GW(p)

I once had a friend who got really worried when I invited him to come to an LW meetup with me; I later found out he had another friend who'd read this site and then decided that everyone else needed to be more rational to make her own life easier. The worst I've encountered in meatspace personally was being asked why I believe what I believe a whole lot (which can be really useful when you're actually deciding something, but being asked to cite your sources in conversation also really interrupts the flow of things), which was more than balanced out by the good conversations. So my general impression is that LW has a high standard deviation in acquaintance/conversationalist quality, and either there's more good than bad or I've had good luck.

comment by Algernoq · 2014-07-13T23:35:23.367Z · LW(p) · GW(p)

there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid

I experienced this too, though I claimed an explicit utility function (making self-replicating robots) that no one was prepared to argue with, so I didn't get anyone telling me my feelings were irrational and should be ignored.

I also noticed some slow decision-making. Recommendation: in a large group, use a heuristic that takes less than 10 minutes of discussion to decide where/when to go for dinner.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-07-14T08:19:39.016Z · LW(p) · GW(p)

Any suggestions for a better heuristic?

Replies from: Toggle, NancyLebovitz, Algernoq, Prismattic
comment by Toggle · 2014-07-15T02:32:15.954Z · LW(p) · GW(p)

The 'veto method' has worked quite well for me, although I haven't tested it for groups larger than about ten.

Assuming that the group has reached a consensus on eating, any member of the group is free to suggest a restaurant. After the location is suggested, any member of the group can veto that suggestion, but in exchange the vetoing member is required to suggest a different restaurant. Repeat until a suggestion is made that no member of the group vetoes.
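For concreteness, here is a minimal Python sketch of one way to run that procedure. The toy preference sets, the rule the vetoing member uses to pick a counter-suggestion, and the cap on rounds are my own assumptions rather than anything specified above:

```python
# Toy preference sets: each member vetoes any suggestion they don't accept,
# and must then counter-suggest a restaurant they do accept.
acceptable = {
    "Ann":  {"thai", "sushi", "pizza"},
    "Bob":  {"pizza", "burgers"},
    "Cara": {"thai", "pizza"},
}

def veto_method(acceptable, first_suggestion):
    """Return a restaurant that no member vetoes, or None if the group gets stuck."""
    suggestion = first_suggestion
    for _ in range(20):  # arbitrary cap so a cycling group can't loop forever
        vetoer = next((m for m in acceptable if suggestion not in acceptable[m]), None)
        if vetoer is None:
            return suggestion  # nobody objects; the group goes here
        # The vetoing member proposes an alternative they accept
        # (alphabetical order here is just a stand-in for their real choice).
        suggestion = sorted(acceptable[vetoer])[0]
    return None

print(veto_method(acceptable, "burgers"))  # ends at "pizza" with these toy sets
```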

comment by NancyLebovitz · 2014-07-15T16:43:24.061Z · LW(p) · GW(p)

I'm not sure that any of those would take less than 10 minutes for a large group. Also, it gets tougher if any in the group have serious dietary or financial constraints.

comment by Algernoq · 2014-07-15T00:33:22.434Z · LW(p) · GW(p)

Sure:

  1. take a straw poll to see who wants to go get dinner at time X
  2. if "enough" people want to go, they then pick a restaurant...
  3. anyone can make a pitch for one new restaurant that the group should check out
  4. in a group of n, one person suggests n*2/3 possible restaurants to eat dinner at (max. 7)
  5. everyone else, one at a time, may then either pass or name 2/3 of the restaurants named by the person immediately before them
  6. if reservations are required, calls to the restaurants are made when 2 possibilities remain
  7. when only one restaurant is named, the group goes there.

This algorithm is a work in progress.

comment by Prismattic · 2014-07-14T23:25:40.052Z · LW(p) · GW(p)

Not a heuristic, but I would suggest an auction. Example: You have 5 people, A and B want seafood, C wants Thai, D wants Mexican, and E wants steak.

E: "I'll pay for 1% of everyone else's bill if we get steak."
A: 2%, seafood.
C: 3%, Thai.
B: 4%, seafood.
(All pass.)

Result: A and B get the food they want, but C, D, and E pay less (with B picking up 2.67% of their bills and A picking up 1.33%).

There are edge cases where this doesn't necessarily work well (e.g. someone with a severe food allergy gets stuck bidding a large amount to avoid getting poisoned), but overall I think it functions somewhat similarly to yootling.
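If it helps, here is a small Python sketch of how I read that payout: the members whose option wins jointly pay the final winning bid, split in proportion to their own bids. That proportional-split rule is my guess at the intended mechanism (the comment above doesn't spell it out), but it does reproduce the 2.67%/1.33% figures:

```python
# Final standing bids: (member, option, percent of everyone else's bills offered).
bids = [("E", "steak", 1.0), ("A", "seafood", 2.0),
        ("C", "Thai", 3.0), ("B", "seafood", 4.0)]

# The option carrying the highest standing bid wins.
winning_option = max(bids, key=lambda b: b[2])[1]                # "seafood"
winning_bid = max(b[2] for b in bids if b[1] == winning_option)  # 4.0
winners = [b for b in bids if b[1] == winning_option]            # A and B

# Assumption: winners share the winning bid in proportion to their own bids.
total_winner_bids = sum(b[2] for b in winners)
for member, _, bid in winners:
    share = winning_bid * bid / total_winner_bids
    print(f"{member} pays {share:.2f}% of the others' bills")
# Output:
# A pays 1.33% of the others' bills
# B pays 2.67% of the others' bills
```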

comment by ChristianKl · 2014-07-14T21:11:39.953Z · LW(p) · GW(p)

Which LW meetup did you visit? The LWers I met in Berlin and at the European mega meetup were generally nice and didn't pressure other people into giving justifications for preferences. There might be some curious questions when someone doesn't understand why someone else is doing what they are doing, but I didn't witness anything I would label as pressuring, even towards people running around with Crocker's Rules tags.

Replies from: kalium
comment by kalium · 2014-07-15T03:47:37.406Z · LW(p) · GW(p)

Not an actual meetup, but some people I knew from college who happened to be LWers/rationalists.

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2014-07-15T07:54:16.117Z · LW(p) · GW(p)

This is going to sound like a stupid excuse... okay, instead of the originally planned excuse, let me just give you an example of what happened to me a week or two ago...

I wrote an introductory article about LW-style rationality in Slovak on a website where it quickly got 5000 visitors. (link) About 30 of them wrote something in a discussion below the article, some of them sent me private messages about how they liked what I wrote, and some of them "friended" me on Facebook.

The article was mostly about how reality exists, how the map is not the territory, and how politics is the mindkiller, with specific examples of how politics is the mindkiller and a mention of the research about how political opinions reduced subjects' math abilities.

One guy who "friended" me because of this article... when I looked at his page, it was full of political conspiracy theories. He published a link to some political conspiracy theory article every few hours. (Judging from the context, he meant it seriously.) When I had him briefly in the friend list (because I clicked "okay" without checking his page first), my Facebook homepage turned mostly to a list of conspiracy links. After seeing this, I quickly "unfriended" him.

What does this mean? This guy somehow believed that we are on the same page, although in my eyes, he is the complete opposite of what I try to achieve. I was horrified by seeing his Facebook page. But he liked my article about rationality. I believe that politics is the mindkiller... and my best model of him (not very reliable) says that he believes that his opponents are mindkilled, but this doesn't apply to him, because he is simply telling the truth, the facts. Anyway, for some reason he identified with what I wrote, and... I wouldn't be too surprised if he decided that he likes LW, too, using a thought process similar to the one he applied to me.

Okay, so here is the originally planned stupid excuse... just because someone likes LW and decides to call themselves a rationalist, it doesn't necessarily make them a characteristic member of the culture. People are perfectly able to pick the parts they like, adopt the language, and ignore the inconvenient parts.

(And I know it sounds like "No true Scotsman", and I am not sure how to fix this. Let's just make it "No typical Scotsman", perhaps?)

Replies from: kalium, ChristianKl
comment by kalium · 2014-07-16T02:25:15.345Z · LW(p) · GW(p)

I don't think all or even most LWers exhibit the behavior I'm complaining about. But I think people who do it are attracted to LW/rationalism, and it bothers me enough that I'm willing to give up completely on a community after I've seen it from a few people there.

comment by ChristianKl · 2014-07-15T10:53:24.239Z · LW(p) · GW(p)

(And I know it sounds like "No true Scotsman", and I am not sure how to fix this. Let's just make it "No typical Scotsman", perhaps?)

I think "true" LW members are people who go to meetups or who participate on LW by writing comments.

comment by ChristianKl · 2014-07-15T10:53:31.370Z · LW(p) · GW(p)

There are plenty of people who call themselves rationalists who have no relationship to LW. Given that you have 636 karma, you might even have a stronger bond to LW than they do.

Replies from: kalium
comment by kalium · 2014-07-16T02:29:39.750Z · LW(p) · GW(p)

That is probably true, though at least one of the people in question now attends meetups in my area. In fact I even got a job through LW. On the other hand I don't feel like I use this site socially. I don't have conversations with other users, or remember which users have been friendly or hostile to me. I never even met any of my employers. I just get an urge to nitpick, or shout into the void, or point out facts, or read interesting articles, and so I come here.

comment by dthunt · 2014-07-14T00:59:26.859Z · LW(p) · GW(p)

So, specifically with respect to the "cult" and "elitist" observations I see, in general I would like to offer a single observation:

"Tsuyoku naritai" isn't the motto of someone trying to conform to some sort of weird group norm. It's not the motto of someone who hates people who have put in less time or effort than himself. It's the recognition that it is possible to improve, and the estimation that improving is a worthwhile investment.

If your motivation for putting intellectual horsepower into this site isn't that, I'd love to hear about it, because that previous phrase really resonates with me, and while I can imagine other motivations for being on a site with forums and stuff, I have a hard time thinking that anyone would go grab a copy of Jaynes merely because they wanted to blend in better.

Replies from: Algernoq
comment by Algernoq · 2014-07-16T07:21:34.182Z · LW(p) · GW(p)

Yup, I'm all about continuous improvement, or at least try to be.

comment by Sophronius · 2014-07-13T18:45:43.705Z · LW(p) · GW(p)

Your criticism of rationality for not guaranteeing correctness is unfair because nothing can do that. Your criticism that rationality still requires action is equivalent to saying that a driver's license does not replace driving, though many Less Wrongers do overvalue rationality, so I guess I agree with that bit. You do, however, seem to make a big mistake in buying into the whole fact-value dichotomy, which is a fallacy since at the fundamental level only objective reality exists. Everything is objectively true or false, and the fact that rationality cannot dictate terminal values does not contradict this.

I do agree with the general sense that Less Wrong is subject to a lot of groupthink, however, and agree that this is a big issue.

Replies from: Algernoq
comment by Algernoq · 2014-07-13T23:28:32.829Z · LW(p) · GW(p)

Your criticism of rationality for not guaranteeing correctness is unfair because nothing can do that.

I agree. My concern is that LW claims to be "less wrong" than it is.

Everything is objectively true or false

A third possibility is "undecidable" (as in Godel incompleteness). There's something weird going on with consciousness that may resolve this question once understood.

Replies from: Sophronius
comment by Sophronius · 2014-07-14T18:19:52.307Z · LW(p) · GW(p)

I don't really understand your objection. When I say that everything is objectively true or false, I mean that any particular thing is either part of the universe/reality at a given point in time/space or it isn't. I don't see any other possibility*. Perhaps you are confusing the map and the territory? It is perfectly possible to answer questions with "I don't know" or "mu", but that doesn't mean that the universe itself is in principle unknowable. The fact that consciousness is not properly understood yet does not mean that it occupies a special state of existing/not existing: We are the ones that are confused, not the universe.

*Ok, my brain just came up with another possibility but it's irrelevant to the point I'm making.

Replies from: Algernoq
comment by Algernoq · 2014-07-15T00:00:00.320Z · LW(p) · GW(p)

I think we are in agreement that rational decision-making is usually valuable, and that some people sometimes cite rationality in order to give false weight to their opinions. To continue your analogy, I'm saying that studying the rules of the road ceases to be a good use of time for most people once a basic driver's license is earned, even if it can slightly reduce accident risk. The possibility of upvotes while having this discussion is making me reconsider.

The universe could be fundamentally unknowable, though this possibility doesn't seem very useful.

comment by orthonormal · 2014-07-20T23:56:28.165Z · LW(p) · GW(p)

I feel like everyone in this community has ridiculous standards for what the community should look like in order to be considered a success. Considering the demographics Less Wrong pulls from, I consider LW to be the experimental group where r/atheism is the control group.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2014-10-17T22:41:29.239Z · LW(p) · GW(p)

Agreed.

comment by [deleted] · 2014-07-13T20:15:32.763Z · LW(p) · GW(p)

I basically agree with this post, with some exceptions like:

My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials

But for the moment I will keep reading LessWrong sometimes. This is because of useful guides like "Lifestyle interventions to increase longevity" and "Political Skills which Increase Income", and also because the advice I've gotten has often been better than on Quora. And I do like the high-quality, evidence-based discussion of charitable/social interventions.

comment by ChristianKl · 2014-07-13T19:55:15.409Z · LW(p) · GW(p)

Rationality doesn't guarantee correctness

That's a strawman. I don't think a majority of LW thinks that's true.

In particular, AI risk is overstated

The LW consensus on the matter of AI risk isn't that it's the biggest X-risk. If you look at the census, you will find that different community members think different X-risks are the biggest, and more people fear bioengineered pandemics than a UFAI event.

LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to drop out of their PhD program, go to "training camps" for a few months

I don't know what you mean by "training camps", but the CFAR events are 4-day camps.

If you mean App Academy by "training camp", then yes, some people might do it instead of a PhD program and then go on to work. There's a trend of companies like Google doing evidence-based hiring and caring less about employees' degrees than about their skills. As companies get better at evaluating the skill of potential hires, the signaling value of a degree decreases. Learning practical skills at App Academy might be more useful for some people, but it's of course not a straightforward choice.

My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

Having a lot of surplus income that gets thrown in a brute-force way at research problems might increase X-risk instead of reducing it.

comment by [deleted] · 2014-07-14T00:04:27.233Z · LW(p) · GW(p)

Yeah, this is all true. In any helpful community, there will be some drawbacks and red flags. The question is always whether engaging in the community is the highest expected value you can get. For most people, I think the answer is obviously no.

Less Wrong should really be viewed as an amusing diversion, which can be useful in certain situations (this weekend I did calibration training; it would have been hard to find people who wanted to join without LW). I think people for the most part aren't on here because they think this is the absolute best use of their time, or that it's a perfect community that has no drawbacks or flaws.

Replies from: John_Maxwell_IV, None
comment by John_Maxwell (John_Maxwell_IV) · 2014-07-14T04:08:45.992Z · LW(p) · GW(p)

To be fair, most online communities aren't an especially good use of your time if you're an ambitious, driven person.

comment by [deleted] · 2014-07-14T00:58:10.869Z · LW(p) · GW(p)

Was it something I said? This seems to have a surprising number of downvotes.

comment by CAE_Jones · 2014-07-13T21:09:43.777Z · LW(p) · GW(p)

Replacing my original comment with this question:

What has Lesswrong done for you?

We talk about strengthening the community, etc. But what does LW actually do? What do LWers get out of it? What about value vs. time spent with LW? For example, if you got here in 2011, was most of the value concentrated in 2011? Has it trickled out over time?

Do we accomplish things? Are we some kinda networking platform for pockets of smart people spread out across the globe? Do we contribute to the world in any way other than encouraging people to donate money responsibly?

This is not a new question. But the more I think about it, the more I wonder: what are we getting out of LW? Obviously something, since I'm still here. But can we quantify it?

Replies from: DanielLC, Richard_Kennaway, Viliam_Bur, Algernoq
comment by DanielLC · 2014-07-14T00:48:50.028Z · LW(p) · GW(p)

Do we contribute to the world in any way other than encouraging people to donate money responsibly?

You say that like it isn't a big contribution.

comment by Richard_Kennaway · 2014-07-14T00:04:26.949Z · LW(p) · GW(p)

Do we accomplish things? Are we some kinda networking platform for pockets of smart people spread out across the globe? Do we contribute to the world in any way other than encouraging people to donate money responsibly?

Have you read the monthly bragging threads?

comment by Viliam_Bur · 2014-07-13T22:04:53.554Z · LW(p) · GW(p)

Different people may get different things. For example, I am very picky about people I spend my time with (always was, even before I found LW), and organizing local meetups helped me meet a few interesting people. As a side effect of LW, I stopped debating politics, which saves a lot of time and negative emotions; now I can spend the time on getting some free internet education, which so far hasn't brought me further benefits, but at least feels better. But I imagine other people can get different things, and what I wrote here may be irrelevant for them.

comment by Algernoq · 2014-07-13T23:31:01.152Z · LW(p) · GW(p)

I get a feeling that I am smart and special. I also get interesting discussions/ideas. I also get distracted for hours.

comment by DanielLC · 2014-07-13T19:05:32.346Z · LW(p) · GW(p)

"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

Science follows objective evidence. You're not allowed to publish a paper where you conclude something based on a hunch, because anyone can claim they have a hunch. You can only do science with evidence that is undeniable. Not undeniably strong: you only need p = 0.05. But it has to be unquestionable that there really are those 4.3 bits of evidence (p = 0.05 corresponds to log2(1/0.05) ≈ 4.3 bits).

Rationality follows subjective evidence. There often simply isn't enough objective evidence to make all of your decisions. You have to use every trick in the book to make sure your model of reality is as accurate as possible.

It's not about aesthetics, but this is using "art" in the sense of "more an art than a science," which doesn't imply aesthetics.

Replies from: Algernoq
comment by Algernoq · 2014-07-13T23:24:04.081Z · LW(p) · GW(p)

I would equate rationality with logic. Thus, the (subjective) priors are an input to rationality. LW Rationality appears to mix in a few subjective priors with the rationality.

Replies from: philh, DanielLC
comment by philh · 2014-07-14T11:59:16.339Z · LW(p) · GW(p)

I would equate rationality with logic.

That's not what the word usually means on this site. You seem to be simultaneously objecting that (a) your idea of rationality is not optimal, and (b) LW rationality doesn't perfectly follow your idea of rationality.

comment by DanielLC · 2014-07-13T23:53:32.324Z · LW(p) · GW(p)

I wasn't talking about priors. If you have a hunch because something is simpler, then that would be priors, but if you have a hunch because you've been subconsciously collecting evidence too vague to be put into words, then reality is causing the hunch, so it's just evidence.

comment by jg909 · 2014-07-29T10:22:40.029Z · LW(p) · GW(p)

.

comment by ChristianKl · 2014-07-13T20:01:23.022Z · LW(p) · GW(p)

"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

Ockham's razor is inherently an aesthetic principle. Between two explanations that both explain the data you have equally well, you prefer one explanation over the other. Aesthetics matters in theoretical physics as a guiding principle.

A skill such as noticing confusion is also not directly about objective evidence.

comment by jimrandomh · 2014-07-13T20:00:35.497Z · LW(p) · GW(p)

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally.

Out of curiosity, which meetup group was it, and what was that meetup like?

comment by V_V · 2014-07-14T10:11:13.103Z · LW(p) · GW(p)

I read LessWrong primarily for entertainment value, but I share your concerns about some aspects of the surrounding culture, although in fairness it seems to have got better in recent years (at least as far as is apparent from the online forum; I don't know about live events).
Specifically, my points of concern are:

  • The "rationalist" identity: It creates the illusion that by identifying as a "rationalist" and displaying the correct tribal insignia you are automatically more rational, or at least "less wrong" than the outsiders.

  • Rituals: Deliberately modelled after religious rituals, including "public confession" sessions, AFAIK similar to those performed by cults like the Church of Synanon.

  • MIRI: I agree with you that they probably exaggerate the AI risk, and I doubt they have the competence to do much about it anyway. For its first ten or so years, when manned primarily by Eliezer Yudkowsky, Anna Salamon, etc., the organization produced effectively zero valuable research output. In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer reviewed journal or conference.

  • CFAR: a self-help/personal-development program. Questionable like all the self-help/personal-development programs in existence. If I understand correctly, CFAR is modelled after, or at least is similar to, Landmark, a controversial organization.

  • Pseudo-scientific beliefs and practices: cryonics (you are signed up, so you probably don't agree), paleo diets/ketogenic diets, armchair evopsych, and so on. It seems to me that as long as something is dressed in a sufficiently "sciency" language and endorsed by high status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it. Yes, this kind of effect happens in all groups, but from a group of people with average IQ 140 who pride themselves on pursuing rationality I would have expected better.

Replies from: Kaj_Sotala, None, XiXiDu, ChristianKl
comment by Kaj_Sotala · 2014-07-16T08:32:17.040Z · LW(p) · GW(p)

In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer reviewed journal or conference.

http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/ :

We’ve released a new paper recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al.

http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/ :

We’ve released a new working paper by Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence.” [...]

Update 05/14/14: This paper has been accepted to AGI-14.

Replies from: V_V
comment by V_V · 2014-07-16T08:48:53.019Z · LW(p) · GW(p)

Didn't know about that. Thanks for the update.

comment by [deleted] · 2014-07-14T12:31:48.430Z · LW(p) · GW(p)

We only really agree on the first point. I'm skeptical of CFAR and the ritual crew but don't find these supposed comparisons to be particularly apt.

I've watched MIRI improve their research program dramatically over the past four years, and expect it to improve. Yes, obviously they had some growing pains in learning how to publish, but everyone who tries to do publishable work goes through that phase (myself included).

I'm not on board with the fifth point:

cryonics (you are signed up, so you probably don't agree)

Well, 27.5% have a favorable opinion. The prior for it actually working seems optimistic but not overly so ("P(Cryonics): 22.8 + 28 (2, 10, 33) [n = 1500]"). At the least I'd say it's a controversial topic here, for all the usual reasons. (No, I'm not signed up for cryonics. No, I don't think it's very likely to work.)

paleo diets/ketogenic diets

Most of the comments on What is the evidence in favor of paleo? are skeptical. The comment with highest karma is very skeptical. Lukeprog said he's skeptical and EY said it didn't work for him.

armchair evopsych

Not really sure what you're referring to.

Surprised you didn't bring up MWI; that's the usual hobby horse for this kind of criticism.

Replies from: V_V
comment by V_V · 2014-07-14T13:58:04.309Z · LW(p) · GW(p)

We only really agree on the first point. I'm skeptical of CFAR and the ritual crew but don't find these supposed comparisons to be particularly apt.

Ok.

I've watched MIRI improve their research program dramatically over the past four years, and expect it to improve.

I agree that it improved dramatically, but only because the starting point was so low.
In recent years they released some very technical results. I think that some are probably wrong or trivial while others are probably correct and interesting, but I don't have the expertise to properly evaluate them, and this probably applies to most other people as well, which is why I think MIRI should seek peer-review by independent experts.

Well, 27.5% have a favorable opinion. The prior for it actually working seems optimistic but not overly so ("P(Cryonics): 22.8 + 28 (2, 10, 33) [n = 1500]"). At the least I'd say it's a controversial topic here, for all the usual reasons. (No, I'm not signed up for cryonics. No, I don't think it's very likely to work.)

As I said, these beliefs aren't necessarily held by a majority of lesswrongers, but are unusually common.

Surprised you didn't bring up MWI; that's the usual hobby horse for this kind of criticism.

MWI isn't pseudo-scientific per se. However, the claim that MWI is obviously true and whoever thinks otherwise must be ignorant or irrational is.

Replies from: None
comment by [deleted] · 2014-07-14T16:27:25.073Z · LW(p) · GW(p)

I agree that it improved dramatically, but only because the starting point was so low.

The starting point is always low. Your criticism applies to me, a mainstream, applied mathematics graduate student.

  • I started research in my area around 2009.
  • I have two accepted papers, both of which are relatively technical but otherwise minor results.

I also wasn't working on two massive popularization projects, obtaining funding, courting researchers (well, I flirted a little bit) and so on.

Applied math is widely regarded as having a low barrier to publication, with acceptable peer-review times in the six to eighteen month range. (Anecdote: My first paper took nine months from draft to publication; my second took seven months so far and isn't in print yet. My academic brother's main publication took twenty months.) I think it's reasonable to consider this a lower bound on publications in game theory, decision theory, and mathematical logic.

Considering this, even if MIRI had sought to publish some of their technical writings in independent journals, we probably wouldn't know if most of them had been either accepted or rejected by now. If things don't change in five years, then I'll concede that their research program hasn't been particularly effective.

comment by XiXiDu · 2014-07-14T10:41:45.669Z · LW(p) · GW(p)

It seems to me that as long as something is dressed in a sufficiently "sciency" language and endorsed by high status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it.

I use the term "new rationalism".

Replies from: David_Gerard, ChristianKl
comment by David_Gerard · 2014-07-14T11:30:57.024Z · LW(p) · GW(p)

I'd still really love a better term than that. One that doesn't use the R-word at all, if possible. ("Neorationalism" is tempting but similarly well below ideal.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-07-14T11:48:10.405Z · LW(p) · GW(p)

"Pseudo-rationalism."

Since that is exactly what is being claimed about it, one might as well put it in the name. It does use the R-word, but only to negate it, which is the point. "New rationalism" suggests there is something wrong with actually being rational, which I hope isn't anyone's intention in this thread.

Replies from: David_Gerard
comment by David_Gerard · 2014-07-14T11:52:30.084Z · LW(p) · GW(p)

Trouble is, that echoes "pseudoskeptic", which is a term that should be useful but is overwhelmingly used only by those upset at their personal toe being stepped on ("critiquing me? You're doing skepticism wrong!"), to the point where it's a pretty useful crank detector.

Replies from: Richard_Kennaway, None
comment by Richard_Kennaway · 2014-07-14T13:08:16.862Z · LW(p) · GW(p)

That is not a problem with the word but the thing. It does not matter what opposition to bad skepticism is called. If it exists as a definite idea, it will acquire a name, and whatever name it is called by will be used in that way.

"New rationalism" is even worse: the name suggests not that there is such a thing as bad reasoning, but that reasoning is bad.

Perhaps a better idea would be to not call it anything, nor make of it a thing. Instead, someone dissatisfied with how it is being done on LW might more fruitfully devote their energies to demonstrating how to do it better.

comment by [deleted] · 2014-07-14T12:42:53.333Z · LW(p) · GW(p)

a term that should be useful but is overwhelmingly used only by those upset at their personal toe being stepped on ("critiquing me? You're doing skepticism wrong!"), to the point where it's a pretty useful crank detector.

Well, isn't that a self-evidently dangerous heuristic. ("Critiquing me? You're just doing the calling-me-a-pseudoskeptic crank behavior!")

comment by ChristianKl · 2014-07-14T16:31:16.097Z · LW(p) · GW(p)

I don't think that either armchair evopsych or the paleo movement are characterised by meta reasoning. Most individuals who believe in those things aren't on LW.

comment by ChristianKl · 2014-07-14T13:09:27.067Z · LW(p) · GW(p)

It seems to me that as long as something is dressed in a sufficiently "sciency" language and endorsed by high status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it.

What exactly do you mean by "buying into it"? I think there are places on the internet with a lot more armchair evopsych than LW.

Rituals: Deliberately modelled after religious rituals, including "public confession" sessions

Could you provide a link? I'm not aware of that ritual in LW if you mean something more than encouraging people to admit when they are wrong.

Replies from: V_V
comment by V_V · 2014-07-14T13:41:51.673Z · LW(p) · GW(p)

What exactly do you mean by "buying into it"? I think there are places on the internet with a lot more armchair evopsych than LW.

Sure, but I'd expect that a community devoted to "refining the art of human rationality" would be more skeptical of that type of claim.

Anyway, I'm not saying that LessWrong is a terribly diseased community. If I thought it was, I wouldn't be hanging around here. I was just expressing my concerns about some aspects of the local culture.

Could you provide a link? I'm not aware of that ritual in LW if you mean something more than encouraging people to admit when they are wrong.

https://www.google.com/search?q=less+wrong+ritual&ie=utf-8&oe=utf-8#channel=fs&q=ritual+report+site:lesswrong.com

http://lesswrong.com/lw/9aw/designing_ritual/

And in particular the "Schelling Day", which bothers me the most: http://lesswrong.com/lw/h2t/schelling_day_a_rationalist_holiday/

Replies from: ChristianKl
comment by ChristianKl · 2014-07-14T16:18:52.097Z · LW(p) · GW(p)

Sure, but I'd expect that a community devoted to "refining the art of human rationality" would be more skeptical of that type of claim.

In that case I think you overrate the amount of energy the average person in the community invests in it. LW is very diverse as far as opinions go.

I myself dislike certain talk about signaling where armchair evopsych sometimes appears, but the idea of signaling is rooted in game theory.

There are also people on LW who do read real evopsych and make arguments on that basis.

And in particular the "Schelling Day", which bothers me the most: http://lesswrong.com/lw/h2t/schelling_day_a_rationalist_holiday/

I wasn't aware of Schelling Day.

comment by Dustin · 2014-07-13T18:27:32.934Z · LW(p) · GW(p)

Rationality doesn't guarantee correctness.

I think this point kind of corrupts what LW would generally call rationality. The rational path is the path that wins and this is mentioned constantly on LW.

Overall though, I think this is a decent critique.

ETA: I want to expand on my point. In your example about planning a car trip, spending 25% of your time to shave 5% off your time is not what LW would call rationality.

You say "Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't". I'm going to assume you mean to say "LWers seem to assume that being as rational as possible will solve more of their life problem the other choices" rather than "all their life problems" as I doubt that's the LW consensus or that you think it's the LW consensus.

If my reading of your meaning is correct, then my retort is: whatever it is you're calling "being as rational as possible" is not being as rational as possible, and, as I mention above, rationality being about winning is what LW is about.

It feels like you're attacking a strawman of LW-style rationality here.

Replies from: Sophronius
comment by Sophronius · 2014-07-13T18:58:17.912Z · LW(p) · GW(p)

To be fair Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless. And then all the connotations of the word still slip in, of course. It's a cheap tactic, also used in the social justice movement, that Yvain recently criticized on his blog (motte and bailey, I think it was called).

Replies from: Sophronius, Viliam_Bur, ArisKatsaris, Luke_A_Somers, Dustin
comment by Sophronius · 2014-07-14T18:43:57.391Z · LW(p) · GW(p)

To clarify what I mean, take the following imaginary conversation:

Less Wronger: Hey! You seem smart. You should consider joining the Less Wrong community and learn to become more rational like us!
Normal: (using definition: Rationality means using cold logic and abstract reasoning to solve problems) I don't know, rationality seems overrated to me. I mean, all the people I know who are best at using cold logic and abstract reasoning to solve problems tend to be nerdy guys who never accomplish much in life.
Less Wronger: Actually, we've defined rationality to mean "winning", or "winning on purpose", so more rationality is always good. You don't want to be like those crazy normals who lose on purpose, do you?
Normal: No, of course I want to succeed at the things I do.
Less Wronger: Great! Then since you agree that more rationality is always good you should join our community of nerdy guys who obsessively use cold logic and abstract reasoning in an attempt to solve their problems.

As usual with the motte and bailey, only the desired definition is used explicitly. However, the connotations with the second mundane use of the word slip in.

comment by Viliam_Bur · 2014-07-13T19:58:24.051Z · LW(p) · GW(p)

To be fair Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless.

In my experience, the problem is not with disagreeing, but rather that most people won't even consider the LW definition of rationality. They will use the nearest cliche instead, explain why the cliche is problematic, and that's the end of rationality discourse.

So, for me the main message of LW is this: A better definition of rationality is possible.

Replies from: DanielLC
comment by DanielLC · 2014-07-14T00:51:08.156Z · LW(p) · GW(p)

So, for me the main message of LW is this: A better definition of rationality is possible.

It's not a different definition of rationality. It's a different word for winning.

If they're not willing to use "rationality" that way, then just abandon the word.

Replies from: savageorange
comment by savageorange · 2014-07-23T11:01:13.106Z · LW(p) · GW(p)

We don't just use 'winning' because, well, 'winning' can easily work out to 'losing' in real-world terms. (Think of a person who alienates everyone they meet through their extreme competitiveness. They are focused on winning, to the point that they sacrifice good relations with people. But this is both a) not what is meant by 'rationalists win' and b) a highly accessible definition of winning - the naive "Competition X exists. Agent A wins, Agent B loses".) It is vastly more accessible than 'achieving what actually improves your life, as opposed to what you merely want or are under pressure to achieve'.

I'd like to use the word 'winning', but I think it conveys even less of the intended meaning than 'rationality' to the average person.

comment by ArisKatsaris · 2014-07-13T23:21:29.607Z · LW(p) · GW(p)

Yvain criticized switching definitions depending on whether you want to defend an easily defensible position, or have others accept an untenable position.

With Less Wrong's definition of rationality (epistemic rationality being the ability to arrive at true beliefs, instrumental rationality the ability to know how to achieve your goals), how is that happening?

comment by Luke_A_Somers · 2014-07-13T20:01:39.665Z · LW(p) · GW(p)

So what's the bailey, here? You make it seem like having obviously true premises is a bad thing.

Note, a progressive series of less firmly held claims is NOT Motte and Bailey, if you aren't vacillating on what each means.

Replies from: DanielLC
comment by DanielLC · 2014-07-14T00:54:13.789Z · LW(p) · GW(p)

It's a problem if anyone ends up sneaking in connotations.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-07-14T12:26:45.511Z · LW(p) · GW(p)

Yes, that's what an example would look like. Can anyone provide any?

Replies from: Algernoq
comment by Algernoq · 2014-07-20T04:13:55.427Z · LW(p) · GW(p)

To paraphrase someone else's example, the motte is that science/reason helps people be right, and the bailey is that the LW memeplex is all correct and the best use of one's time (the memeplex including maximum support of abstract research about "friendly" AI, frequent attendance of LW self-help events, cryonics, and evangelizing Rationalism).

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-07-20T18:28:14.389Z · LW(p) · GW(p)

Here's the problem with your attempting to apply Motte and Bailey to that:

If challenged on those other things, we do not reply that 'rationalism is just science/reason helping people be right; how could you possibly oppose it?' Well, except for the last one, where that reply really does seem to address the problem.

So, it's just a perfectly ordinary (and acceptable) sequence of progressively more controversial claims, and not a Motte-and-Bailey system.

Replies from: Algernoq
comment by Algernoq · 2014-07-21T22:02:34.778Z · LW(p) · GW(p)

Different members act as different parts of the motte and bailey: some argue for extreme things; others say those extreme things are not "real" Rationalism.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-07-22T14:57:51.546Z · LW(p) · GW(p)

That structure makes it not motte and bailey - the motte must be friendly to the bailey, not hostile to it!

comment by Dustin · 2014-07-13T19:20:16.660Z · LW(p) · GW(p)

What do you mean exactly by "specifically designed"?

Anyway, I don't disagree with you exactly.

My original point was not that the LW definition of rationality was a good or bad definition, but that the definition Algernoq was asserting as the LW consensus definition of rationality was probably not actually true.

ETA: I'm also not sure that I agree with you about the definition being useless, as I think the LW definition seems designed specifically to counter thinking that leads to someone spending 25% of their time for a car trip planning to save 5%. By explicitly stating that rationality is about winning, it helps to not get bogged down in the details and to remember what the point is. Whether or not the definition that has arisen is explicitly designed with that in mind, I can't say.

Replies from: Jiro
comment by Jiro · 2014-07-17T14:54:39.354Z · LW(p) · GW(p)

thinking that leads to someone spending 25% of their time for a car trip planning to save 5%

I don't understand this. You're saying that people spend 25% of their time planning the trip, and save 5% of their time on the trip? (Which is bad, but I doubt it's that common.) Or that they spend 25% of their time on the trip, and plan to save 5% of their time on something else? (Which I also doubt is that common.) Or that they spend 25% of their time on the trip, and plan to save 5% of something else, like money? (Which may or may not be bad, depending on how time translates to money.)

This does sound a little bit like the complaint that people spend 25% of the price of something (rather than of the time) on a car trip to save 5% on the price, but I've argued that that's a form of precommitting where as long as you precommit to buy at the store with the lowest price even if it's far away, nearby stores have an incentive to keep prices low.

Replies from: army1987, Dustin
comment by A1987dM (army1987) · 2014-07-18T11:45:04.513Z · LW(p) · GW(p)

This does sound a little bit like the complaint that people spend 25% of the price of something (rather than of the time) on a car trip to save 5% on the price, but I've argued that that's a form of precommitting where as long as you precommit to buy at the store with the lowest price even if it's far away, nearby stores have an incentive to keep prices low.

But if you take into account both price and location when deciding where to shop, stores will have an incentive not only to keep prices low but also to be near where people are!

Replies from: Jiro
comment by Jiro · 2014-07-18T19:06:39.649Z · LW(p) · GW(p)

Stores can't move closer to where all the people are, however; at some point any incentives from moving close to some people would be countered by moving away from other people. There's also the problem that past a certain density stores do better when farther away from other stores. Not to mention the transaction costs of moving in the first place. Prices don't have these problems.

Replies from: Algernoq
comment by Algernoq · 2014-07-20T04:15:18.746Z · LW(p) · GW(p)

All I'm saying is it looks like many people are being Rational because it's fun, not because it's useful.

comment by Dustin · 2014-07-21T22:27:05.901Z · LW(p) · GW(p)

You're saying...

I'm not particularly saying anything, as I was just referring to the concept introduced in the main post. You'll have to ask Algernoq what the specific intention was.

comment by [deleted] · 2014-07-14T02:03:31.606Z · LW(p) · GW(p)

Rationality doesn't guarantee correctness.

What does? If there's a better way, we'd love to hear it. That's not sarcasm. It's the only thing of interest around here.

Many LWers are not very rational.

Now that's just mean.

Replies from: None
comment by [deleted] · 2014-07-14T12:44:52.955Z · LW(p) · GW(p)

Redacted my post. Doesn't add to the conversation.

Do, however, try not to conflate LessWrong with "rationality." Rationality is a method of approaching cognitive algorithms. LessWrong is a community that happens to focus on these methods a lot. Conflating them is like conflating the Democratic party with socialism (to choose a flippant, possibly ill-advised example). It makes a caricature of the former and diminishes the latter.