How I'd Introduce LessWrong to an Outsider

post by Adam Zerner (adamzerner) · 2017-05-03T04:32:21.396Z · LW · GW · Legacy · 25 comments

Contents

  Weird? Useful?
  Overview
  Gaps
  Community
  Diaspora
  Related Organizations
25 comments

Note/edit: I'm imagining explaining this to a friend or family member who is at least somewhat charitable and trusting of my judgement. I am not imagining simply putting this on the About page. I should have made this clear from the beginning - my bad. However, I do believe that some (but not all) of the design decisions would be effective on something like the About page as well.


There's this guy named Eliezer Yudkowsky. He's really, really smart. He co-founded MIRI, wrote a popular Harry Potter fanfic that centers on rationality, and has a particularly strong background in AI, probability theory, and decision theory. There's another guy named Robin Hanson. Hanson is an economics professor at George Mason, and has a background in physics, AI, and statistics. He's also really, really smart.

Yudkowsky and Hanson started a blog called Overcoming Bias in November of 2006. They blogged about rationality. Later on, Yudkowsky left Overcoming Bias and started his own blog - LessWrong.

What is rationality? Well, for starters, it's incredibly interdisciplinary. It involves academic fields like probability theory, decision theory, logic, evolutionary psychology, cognitive biases, lots of philosophy, and AI. The goal of rationality is to help you be right about the things you believe. In other words, the goal of rationality is to be wrong less often. To be LessWrong.

Weird? Useful?

LessWrong may seem fringe-y and cult-y, but the teachings are usually things that aren't controversial at all. Again, rationality teaches you things like probability theory and evolutionary psychology - things that academics agree on and have studied pretty thoroughly. Sometimes the findings haven't made it to mainstream culture yet, but they're almost always things that the experts agree on and consider to be pretty obvious. These aren't weird nerds cooped up in their parents' basements preaching crazy ideas they came up with. These are early adopters who are taking things that have already been discovered, bringing them together, and showing us how the findings could help us be wrong less frequently.

Rationalists tend to be a little "weird" though. And they tend to believe a lot of "weird" things. A lot of science-fiction-y things. They believe we're going to blend with robots and become transhumans soon. They believe that we may be able to freeze ourselves before we die, and then be revived by future generations. They believe that we may be able to upload our consciousness to a computer and live as a simulation. They believe that computers are going to become super powerful and completely take over the world.

Personally, I don't understand these things well enough to really speak to their plausibility. My impression so far is that rationalists have very good reasons for believing what they believe, and that they're probably right. But perhaps you don't share this impression. Perhaps you think those conclusions are wacky and ridiculous. Even if you think this, it's still possible that the techniques may be useful to you, right? It's possible that rationalists have misapplied the techniques in some ways, but that if you learn the techniques and add them to your arsenal, they'll help you level up. Consider this before writing rationality off as wacky.

Overview

So, what does rationality teach you? Here's my overview:

Sound interesting? Good! It is!

Eliezer wrote about all of this stuff in bite-sized blog posts - about one per day. He claims this helps him write faster than producing one big book would. Originally, the collection of posts was referred to as The Sequences, and was organized into categories. More recently, the posts were refined and brought together into a book - Rationality: From AI to Zombies.

Personally, I find the writing dense and difficult to follow. Things like AI are often used as examples in places where a more accessible example could have been used instead. Eliezer himself confesses that he needs to "aim lower". Still, the content is awesome, insightful, and useful, so if you can make your way past some of the less clear explanations, I think you have a lot to gain. I also find the Wiki and the article summaries to be incredibly useful. There's also HPMOR - a fanfic Eliezer wrote to present the teachings of rationality in a more accessible way.

Gaps

So far, there hasn't been enough of a focus on applying rationality to help you win in everyday life. Instead, the focus has been on solving big, difficult, theoretical problems. Eliezer mentions this in the preface of Rationality: From AI to Zombies. Developing the more practical, applied part of The Art is definitely something that needs to be done.

Learning how to rationally work in groups is another thing that really needs to be done. Unfortunately, rationalists aren't particularly good at working together. Yet.

Community

From 2009-2014 (excluding 2010), there were surveys of the LessWrong readership. There were usually about 1,500 respondents, which tells you something about the size of the community (note that there are people who read/lurk/comment but who didn't submit the survey). Readers live throughout the globe, and tend to come from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc. crowd. There are also a lot of effective altruists - people who try to do good for the world, and who try to do so as efficiently as possible. See the wiki's FAQ for results of these surveys.

There are meet-ups in many cities, and in many countries. Berkeley is considered to be the "hub". See How to Run a Successful LessWrong Meetup for a sense of what these meet-ups are like. Additionally, there is a Slack group, and an online study hall. Both are pretty active.

Community members mostly agree with the material described in The Sequences. This common jumping off point makes communication smoother and more productive. And often more fulfilling.

The culture amongst LessWrongians is something that may take some getting used to. Community members tend to:

In addition... they're totally awesome! In my experience, I've found them to be particularly caring, altruistic, empathetic, open-minded, good at communicating, humble, intelligent, interesting, reasonable, hardworking, respectful, and honest. Those are the kinds of people I'd like to spend my time amongst.

Diaspora

LessWrong isn't nearly as active as it used to be. In "the golden era", Eliezer, along with a group of other core contributors, would post insightful things many times each week. Now, these core contributors have moved on to work on their own projects and do their own things. There is much less posting on lesswrong.com than there used to be, but there is still some. And there is still related activity elsewhere. See the wiki's FAQ for more.

Related Organizations

MIRI - Tries to make sure AI is nice to humans.

CFAR - Runs workshops that focus on being useful to people in their everyday lives.


Meta:

Of course, I may have misunderstood certain things. For example, I don't feel that I have a great grasp on Bayesianism vs. science. If so, please let me know.

Note: in some places, I exaggerated slightly for the sake of a smoother narrative. I don't feel that the exaggerations interfere with the spirit of the points made (DH6). If you disagree, please let me know by commenting.

25 comments

Comments sorted by top scores.

comment by RomeoStevens · 2017-05-03T18:19:01.285Z · LW(p) · GW(p)

Having spent years thinking about this and having the opportunity to talk with open-minded, intelligent, successful people in social groups, extended family, etc., I concluded that most explicit discussion of the value of inquiring into values and methods (scope sensitivity and epistemological rigor being two of the major threads of what applied rationality looks like) just works incredibly rarely, and only then if there is strong existing interest.

Taking ideas seriously and trusting your own reasoning methods as a filter is a dangerous, high variance move that most people are correct to shy away from. My impression of the appeal of LW retrospectively is that it (on average) attracted people who were or are under performing relative to g (this applies to myself). When you are losing you increase variance. When you are winning you decrease it.

I eventually realized that what I was really communicating to people's system 1 was something like "Hey, you know those methods of judgment like proxy measures of legitimacy and mimesis that have granted you a life you like and that you want to remain stable? Those are bullshit, throw them away and start using these new methods of judgment advocated by a bunch of people who aren't leading lives resembling the one you are optimizing for."

This has not resulted in many sales. It is unrealistic to expect to convert a significant fraction of the tribe to shamanism.

Replies from: satt, hg00, adamzerner, TheAncientGeek, The_Jaded_One
comment by satt · 2017-05-08T19:59:17.395Z · LW(p) · GW(p)

Maybe a side note, but it's not obvious to me that

When you are losing you increase variance. When you are winning you decrease it.

is in general true, whether normatively or empirically.

comment by hg00 · 2017-05-08T05:35:27.953Z · LW(p) · GW(p)

Earlier today, it occurred to me that the rationalist community might be accurately characterized as "a support group for high IQ people". This seems concordant with your observations.

Replies from: Viliam
comment by Viliam · 2017-05-09T10:51:37.282Z · LW(p) · GW(p)

I'd like to emphasise that in this context, "high IQ" means higher than Mensa level (which is what most people would probably imagine when you say "high IQ").

I used to regularly attend Mensa meetups, and now I regularly attend LW meetups, and it seems to me that the difference between LW and Mensa is about the same as the difference between Mensa and the normies. This doesn't mean the whole difference is about IQ, but there seems to be a significant intelligence component anyway.

comment by Adam Zerner (adamzerner) · 2017-05-03T19:51:53.977Z · LW(p) · GW(p)

As for the comment that it's difficult to get people to be interested, that seems very true to me, and it's good to get the data of your vast experience with this.

A separate question is how we can best attempt to get people to be interested. You commented on the failure you experienced with the "throw your techniques away, these ones are better" approach. That seems like a good point. I sense that my message takes that approach too strongly and could be improved.

I'm interested in hearing about anything you've found to be particularly effective.

comment by TheAncientGeek · 2017-05-08T11:29:32.736Z · LW(p) · GW(p)

My impression of the appeal of LW retrospectively is that it (on average) attracted people who were or are under performing relative to g (this applies to myself). When you are losing you increase variance. When you are winning you decrease it.

There's also the issue of having plenty of spare time.

comment by The_Jaded_One · 2017-05-04T17:49:36.910Z · LW(p) · GW(p)

My impression of the appeal of LW retrospectively is that it (on average) attracted people who were or are under performing relative to g (this applies to myself). When you are losing you increase variance. When you are winning you decrease it.

This also applies to me

comment by eternal_neophyte · 2017-05-03T10:17:35.022Z · LW(p) · GW(p)

He's really, really smart.

This is the kind of phrasing that usually costs more to say than you can purchase with it. Anyone who is themselves really, really smart is going to raise hackles at this kind of talk; and is going to want strong evidence moreover ( and since a smart person would independently form the same judgement about Yudkowsky, if it is correct, you can safely just supply the evidence without the attached value judgment ).

Fiction authors have a fairly robust rule of thumb: show, don't tell. Especially don't tell me what judgement to form. I'd tack on this: don't negotiate. Haggling with a person over their impressions of a group of other people with suggestions like "it's still possible that the techniques may be useful to you, right?" immediately inspires suspicion in anyone with any sort of disposition to scepticism. Bartering "may"s simultaneously creates the impression of personal uncertainty and inability to demonstrate while coupling it to the obvious fact that this person wants me to form a certain judgement.

If I were to introduce a stranger to LessWrong I'd straightforwardly tell them what it is: it's where people attracted to STEM go to debate and discuss mostly STEM-related (and generally academic) topics, with a heavy bias towards topics that are in the twilight zone between sci-fi and feasible scientific reality, also with a marked tendency for employing a set of tools and techniques of thought derived from studying cognitive science and an associated tendency to frame discussions in the language associated with those tools.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2017-05-03T17:27:51.159Z · LW(p) · GW(p)

Thanks for calling this out. I was imagining explaining it to a friend or family member who is at least somewhat charitable and trusting of my judgement. In that case, I expect them to not raise hackles, and I think it's useful to communicate that I think the authors are particularly smart.

However, if this were something that were posted on Less Wrong's About page, for example, I could definitely see how this would turn newcomers away, and I agree with you. Self-promoting as "really, really smart" definitely does seem like something that turns people off and makes them skeptical.

Replies from: eternal_neophyte
comment by eternal_neophyte · 2017-05-03T17:36:29.684Z · LW(p) · GW(p)

Thank you for being gracious about accepting the criticism.

Replies from: adamzerner
comment by Vaniver · 2017-05-03T05:24:08.817Z · LW(p) · GW(p)

He never finished high school, but self taught himself a bunch of stuff.

Is this really the best second sentence to have? This, plus a few pieces later (like saying LW is fringe-y and cult-y before calling it mostly about noncontroversial things) seems like you're optimizing around an objection you're imagining the listener has ("isn't that place Yudkowsky's cult?"), which causes them to think that even if they weren't already.

That is, the basic structure here is something like:

  1. Founders

  2. Broad description of beliefs

  3. Detailed description of beliefs

  4. Problems

  5. Community

I suspect you're better off with a structure like:

  1. We know a lot more about thinking now than we did in the past, and it seems like thinking about thinking has multiplicative effects. This is especially important today, given how much work is knowledge work.

  2. There's a cluster of people interested in that who gathered around a clear explanation of the sort of worldview you'd build today as a cognitive psychologist and a computer programmer, that you couldn't have built in the past but is built on the past. That is, the fruit of lots of different intellectual traditions have fertilized the roots of this one.

  3. As an example of this, a core concept, "the map is not the territory," comes from General Semantics through Hayakawa. What it means is that we have mental models of external reality that, from the inside, seem to be reality, but are different, just like Google Maps might look a lot like the surface of the Earth but it isn't. This sort of mental separation between beliefs and reality allows for a grounded understanding of the relationships between one's beliefs and reality, which has lots of useful downstream effects.

  4. But that's just one out of many concepts; the really cool thing about the rationality community is that when everyone has the same language (and underlying concepts), they can talk much faster about much more interesting things, cutting quickly to the heart of matters and expanding the frontiers of understanding. Lots of dumb arguments just don't happen, because everyone knows how to avoid them.

Replies from: adamzerner, adamzerner
comment by Adam Zerner (adamzerner) · 2017-05-03T05:58:36.409Z · LW(p) · GW(p)

I get the impression that a lot of people start off with a feeling that it's weird and cult-y. For that reason, I feel it's important to address it and communicate that "actually, rationality is normal". If you didn't already find it to be weird (and wouldn't have come to find it weird after some initial investigation), my intuition is that such a forewarning wouldn't lead you to consider it weird, and thus has a minimal downside. I feel somewhat confident about that intuition, but not too confident.

This would be an interesting thing to test though. And I look forward to updating my beliefs based on what the experiences and intuitions of others are regarding this.

Replies from: eternal_neophyte
comment by eternal_neophyte · 2017-05-03T10:33:28.803Z · LW(p) · GW(p)

"actually, X" is never a good way to sell anything. Scientists are quite prone to this kind of speech which from their perspective is fully justified ( because they've exhaustively studied a certain topic ) - but what the average person hears is the "you don't know what you're talking about" half of the implication which makes them deaf to the "I do know what I'm talking about" half. If you just place the fruits of rationality on display; anyone with a brain will be able to recognize them for what they are and they'll adjust their judgements accordingly.

Here's an interesting exercise - find anyone in the business of persuasion ( a lawyer, a salesman, a con artist ) and see how often you hear them say things like "no, actually..." ( or how often you hear them not saying these things ).

Replies from: adamzerner, Lumifer
comment by Adam Zerner (adamzerner) · 2017-05-03T17:53:33.910Z · LW(p) · GW(p)

My impression: a major issue is that other people get the idea that LessWrong comes from a few people preaching their ideas, when in reality, it's people who mostly preach ideas that have been discovered by, and are widely agreed upon by, academic experts. Just saying "it comes from academics" doesn't seem to address this major issue directly enough.

That said, I see what you mean about "actually, X" being a pattern that may lead people to instinctively argue the other way. So I see that there is a cost, but my impression is that the cost doesn't outweigh the benefit that comes with directly addressing a major concern that others have - for most audiences, at least; there are certainly some less charitable audiences who need to be approached more gently.

I'd consider my confidence in this to be moderate. Getting your data point has led me to shift downwards a bit.

Replies from: eternal_neophyte
comment by eternal_neophyte · 2017-05-03T17:58:54.402Z · LW(p) · GW(p)

Hate to have to say this but directly addressing a concern is social confirmation of a form that the concern deserves to be addressed, and thus that it's based in something real. Imagine a Scientologist offering to explain to you why Scientology isn't a cult.

Of the people I know of who are outright hostile to LW, it's mostly because of basilisks and polyamory and other things that make LW both an easy and a fun target for derision. And we can't exactly say that those things don't exist.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2017-05-03T18:29:36.200Z · LW(p) · GW(p)

Hate to have to say this but directly addressing a concern is social confirmation of a form that the concern deserves to be addressed, and thus that it's based in something real.

I could see some people responding that way. But I could see others responding with, "oh, ok - that makes sense". Or maybe, "hm, I can't tell whether this is legit - let me look into it further". There are lots of citations and references in the LessWrong writings, so it's hard to argue with the fact that it's heavily based off of existing science.

Still, there is the risk of some people just responding with, "Jeez, this guy is getting defensive already. I'm skeptical. This LessWrong stuff is not for me." I see that directly addressing a concern can signal bad things and cause this reaction, but for whatever reason, my brain is producing a feeling that this sort of reaction will be the minority in this context (in other contexts, I could see the pattern being more harmful). I'm starting to feel less confident in that, though. I have to be careful not to Typical Mind here. I have an issue with Typical Minding too much, and know I need to look out for it.

The good thing is that user research could totally answer this question. Maybe that'd be a good activity for a meet-up group or something. Maybe I'll give it a go.

comment by Lumifer · 2017-05-03T14:52:04.150Z · LW(p) · GW(p)

If you just place the fruits of rationality on display; anyone with a brain will be able to recognize them for what they are and they'll adjust their judgements accordingly.

Behold LW!

:-)

comment by Adam Zerner (adamzerner) · 2017-05-03T06:00:17.611Z · LW(p) · GW(p)

Is this really the best second sentence to have?

Hm, probably not. Seems unnecessarily to risk giving an even "cult-ier" impression. Also seems worthwhile to be more specific about why I claim that he's smart. Changed, thanks.

comment by Lumifer · 2017-05-03T15:45:43.297Z · LW(p) · GW(p)

That's... pretty bad.

If this were my introduction to LW, I'd snort and go away. Or maybe stop to troll for a bit -- this intro is soooo easy to make fun of.

I'd recommend to nuke this text from orbit and start anew.

Replies from: Kawoomba, username2
comment by Kawoomba · 2017-05-03T20:42:41.939Z · LW(p) · GW(p)

If this were my introduction to LW, I'd snort and go away. Or maybe stop to troll for a bit -- this intro is soooo easy to make fun of.

Well, glad you didn't choose the first option, then.

comment by username2 · 2017-05-03T16:02:39.152Z · LW(p) · GW(p)

I seldom agree with Lumifer but this comment is right on track. Sorry, OP, I am not sure what kind of Outsider you are thinking of, but I am having trouble thinking of anyone outside LW for whom this way of framing it would be at all appealing.

comment by [deleted] · 2017-05-04T16:18:29.632Z · LW(p) · GW(p)

Something that seems relevant is this attempt I made a while back at a friendly intro to rationality.

I think that you might be trying to get across a lot of information here. I think this might be fine in certain cases for conversation, but I definitely wouldn't recommend trying to send this as-is to people. Also, a lot of the mentions of community norms, etc. seem like potential turn-offs.

What may be of interest is strategies that have worked for me in piquing people's interest:

  • Starting with cognitive psychology. People seem naturally interested in this area of study, and if you can present your group as one that has cool info worth delving into, they get interested. If you then follow up with this idea of "mental strategies" that can boost your thinking, you can move into basic rationality from there.

  • For AI risk, acknowledging the media straw men from pop culture and then focusing on how poor specification can cause problems (e.g., pointing to how code does what it says and not what you mean).
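
(A tiny illustrative sketch of that last parenthetical - my own hypothetical Python example, not something from the comment or the post. The intent is "reward our biggest customer in good standing," but the code literally reads "biggest" as "most orders" and ignores standing entirely; it does what it says, not what was meant.)

    # Hypothetical illustration: literal spec vs. intent.
    customers = [
        {"name": "Alice",   "orders": 120, "revenue": 800.0,  "in_good_standing": True},
        {"name": "Bob",     "orders": 3,   "revenue": 9500.0, "in_good_standing": True},
        {"name": "Mallory", "orders": 400, "revenue": 50.0,   "in_good_standing": False},
    ]

    def biggest_customer(customers):
        # Does exactly what it says: maximum by order count, nothing else.
        return max(customers, key=lambda c: c["orders"])

    print(biggest_customer(customers)["name"])  # "Mallory" - technically correct, clearly not the intent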

comment by ChristianKl · 2017-05-10T19:11:14.458Z · LW(p) · GW(p)

From 2009-2014 (excluding 2010), there were surveys of the LessWrong readership. There were usually about 1,500 responders, which tells you something about the size of the community (note that there are people who read/lurk/comment, but who didn't submit the survey).

We also have a survey for 2016: http://lesswrong.com/lw/nkw/2016_lesswrong_diaspora_survey_results/

comment by TheAncientGeek · 2017-05-08T13:41:54.621Z · LW(p) · GW(p)

There's this guy named Eliezer Yudkowsky. He's really, really smart.

Who told you that?