[Feedback please] New User's Guide to LessWrong

post by Ruby · 2023-04-25T18:54:40.379Z · LW · GW · 18 comments

Contents

    The Core of LessWrong: Rationality
    Philosophical Heritage: The Sequences
    Topics other than Rationality
    Artificial Intelligence
  How to get started
  How to ensure your first post or comment is approved
      Address the LessWrong audience
    Aim for a high standard if you're contributing on the topic of AI
      Don't worry about it too much
  Helpful Tips 
    LessWrong moderators' toolkit
18 comments

The LessWrong team is currently thinking a lot about what happens with new users: the bar their contributions must meet to be accepted, how we deliver feedback on and restrict low-quality contributions, and, most importantly, how we get them onboarded onto the site.

This is a draft of a document we'd present to new users to help them understand what LessWrong is about. I'm interested in early community feedback about whether I'm hitting the right notes here before investing a lot more in it.

This document also references another post, something more like a list of norms, akin to Basics of Rationalist Discourse [LW · GW], though (1) I haven't written that yet, and (2) I'm much less certain about its shape or nature. I'll share a post or draft of that soon, too.

 


This document is aimed at new users but may also be a useful reference for established users. It elaborates on the about page [? · GW].

The Core of LessWrong: Rationality

LessWrong is an online forum and community built around the goal of improving human reasoning and decision-making. The community believes there are ways of thinking such that, if you figure them out and adopt them, you will systematically[1] arrive at true beliefs and good decisions more often than someone who didn't adopt them. Around here, the short word for "systematically arriving at truth, etc." is rationality, and that's at the core of this site.

More than that, the LessWrong community shares a culture that encodes a body of built-up beliefs, opinions, concepts, and values about how to reason better. These give LessWrong a style quite distinct from the rest of the Internet.

Some of the features that set LessWrong apart:

Philosophical Heritage: The Sequences

Between 2006 and 2009, Eliezer Yudkowsky wrote a sequence of blog posts sharing his philosophy, beliefs, and models of rationality (collectively, those blog posts are called The Sequences). In 2009, Eliezer founded LessWrong as a community forum for the people attracted to his ideas and worldview.

While not everyone on the site agrees with everything Eliezer says, The Sequences (also known as Rationality: From AI to Zombies) is the foundational cultural/values document of LessWrong. To understand LessWrong and participate well (and also for the sake of your own reasoning), we strongly encourage you to read the Sequences.

Topics other than Rationality

The eleventh virtue is scholarship. Study many sciences and absorb their powers as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinges upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion. - 12 Virtues of Rationality [LW · GW]

We are interested in rationality not for its own sake alone, but because we care about lots of other things too. LessWrong has rationality as a central focus, but site members are interested in discussing an extremely wide range of topics, albeit through the lens of our rationality toolbox and worldview.

Artificial Intelligence

If you found your way to LessWrong recently, it might be because of your interest in AI. For several reasons, the LessWrong community has a strong interest in AI, and specifically in ensuring that powerful AI systems are safe and beneficial.

Even if you found your way to LessWrong because of your interest in AI, it's important to be aware of the site's focus on rationality, as this shapes the expectations we have of all users in their posting, commenting, etc.

How to get started

<TO-DO>

not necessarily a tonne of this, but if it's your first day on LessWrong, you'll be missing <something>

</TO-DO>

 

How to ensure your first post or comment is approved

This is a hard section to write. The new users who least need to read it are the most likely to spend time worrying about it, while those who need it most are likely to ignore it. Don't stress too much: if you submit something and we don't like it, we'll give you some feedback.

A lot of the below is written for people who aren't putting in much effort at all, so we can at least say "hey, we did give you a heads-up in multiple places".

There are a number of dimensions on which content submissions may be strong or weak. Strength in one place can compensate for weakness in another, but overall the moderators assess each first post/comment from new users on the following. If a first submission is lacking, it might be rejected and you'll get feedback on why.

Your first post or comment is more likely to be approved by moderators (and upvoted by general site users) if:

You demonstrate understanding of LessWrong rationality fundamentals. These are the kinds of things covered in The Sequences, such as probabilistic reasoning [LW · GW], proper use of beliefs[2] [LW · GW], being curious about where you might be wrong, avoiding arguing over definitions, etc.

You write a clear introduction. If your first submission is lengthy, i.e. a long post, it's more likely to be approved quickly if the site moderators can readily understand what you're trying to say, rather than having to delve deep into your post to figure it out. Once you're established on the site and people know that you have good things to say, you can pull off having a "literary" opening that doesn't start with the main point.

Address existing arguments on the topic (if applicable). Many topics have already been discussed at length on LessWrong, or have an answer strongly implied by core content on the site, e.g. from the Sequences (which have considerable relevance to AI questions). Your submission is more likely to be accepted if it's clear you're aware of prior relevant discussion and are building upon it. It's not a big deal if you weren't aware; there's just a chance the moderator team will reject your submission and point you to relevant material.

This doesn't mean that you can't question positions commonly held on LessWrong, just that it's a lot more productive for everyone involved if you're able to respond to or build upon the existing arguments, e.g. by showing why you think they're wrong.

Address the LessWrong audience

A recent trend is more and more people crossposting from their personal blogs, e.g. their Substack or Medium, to LessWrong. There's nothing inherently wrong with that (we welcome good content!), but many of these posts neither strike us as particularly interesting or insightful, nor demonstrate an interest in LessWrong's culture, norms, or audience (as revealed by a very different style and a lack of engagement with anyone on the site).

It's good (though not absolutely necessary) when a post is written for the LessWrong audience and shows it by referencing other discussions on LessWrong (links to other posts are good).

Aim for a high standard if you're contributing on the topic of AI

As AI becomes higher and higher profile in the world, many more people are coming to LessWrong because we host discussion of it. To avoid losing what makes our site uniquely capable of making good intellectual progress, we hold new users showing up to talk about AI to particularly high standards. If we don't think your AI-related contribution is particularly valuable, and it's not clear you've tried to understand the site's culture or values, then it's possible we'll reject it.

A longer list of guidelines on LessWrong can be found here [Link]

Don't worry about it too much

It's okay if we don't like your first submission; we'll just give you feedback. In many ways, the bar isn't that high. As I wrote above, this document exists so that having a first submission rejected doesn't come as a surprise. If you're writing a comment rather than a 5,000-word post, don't stress about it.

If you do want to write something longer, there is a much lower bar in the open threads, e.g. the general one [link] or the AI one [link]. They're a good place to say "I have an idea about X, does LessWrong have anything on that already?"

 

Helpful Tips <to-do>

FAQ

Intercom

Open Threads


LessWrong moderators' toolkit


  1. ^

    This means you won't necessarily do better on every occasion, but that on average you will.

  2. ^

    As opposed to beliefs being for signaling group affiliation and having pleasant feelings.

18 comments

Comments sorted by top scores.

comment by MondSemmel · 2023-04-26T11:33:03.690Z · LW(p) · GW(p)

Documents like these seem among the most important ones to get right.

If this is among the first essays a new user is going to see, then remember that they might have little buy-in to the site's philosophy and don't know any of the jargon. Furthermore, not all users will be native English speakers.

So my recommendations and feedback come from this perspective.

Regarding the writing:

  • Be more concise. Most LW essays are way way way too long, and an essay as important as a site introduction should strive to be exemplary in this regard. It should value its readers' time more than the median essay on this site does. (To be clear, this comment of mine does not satisfy this standard either.)
  • Use simpler language. XKCD made the Simple Writer at one time, which IIRC only uses the 1000 most common English words. That's overkill, but aim for that end of the spectrum, rather than the jargon end.
  • Aim for a tone that's enjoyable to read, rather than sounding dry or technical. Reconsider the title for the same reason; it sounds like a manual.
  • To make the essay more enjoyable to read, consider writing it with personality and character and your quirks as writers and individuals, and signing it with "By the LW Moderation team: X, Y, Z" or some such.

Regarding the content:

  • I have the overall impression that this document reads like "Here's how you can audition for a spot in our prestigious club". But new users assess the site at the same time as the site assesses them. So a better goal for such a document is, in my opinion, to be more invitational. More like courtship, or advertisement. A more reciprocal relationship. "Here's what's cool and unique about this site. If you share these goals or values, then here are some tips so we'll better get along with each other."
  • Also, the initial section makes it seem like LW's rationality discourse is unique, when it's merely rare. How about referencing some other communities which also do this well, communities which the new user might already be familiar with, so they know what to expect? E.g. other Internet communities which aim more in the direction of collaborative and truth-seeking discourse like reddit's ELI5 or Change My View; adjacent communities like Astral Codex Ten; discourse in technical communities like engineers or academics; etc. Also stress that all this stuff is merely aspirational: These standards of discourse are just goals we strive towards, and almost everyone falls short sometimes.
  • Re: the section "How to get started": There must be some way for new users to actively participate that does not require hours or days of prep work.
  • Re: the section "How to ensure your first post or comment is approved": This currently starts "in medias res", without properly explaining the context of content moderation or why new users would be subject to extra scrutiny. I would begin with something like a brief reference to the concepts from Well-Kept Gardens Die By Pacifism [LW · GW]: LW is aiming for a certain standard of discourse, and standards degrade over time unless they're intentionally maintained. So the site requires moderation. And just like a new user might be unfamiliar with LW, so LW is unfamiliar about the new user and whether they're here to participate or to troll or spam (potentially even with AI assistance). Hence the extra scrutiny. "We're genuinely sorry that we have to put new users through hoops and wish it wasn't necessary (moderation takes time and effort which we would rather put somewhere else)." Here's how to get through that initial period of getting to know each other in the quickest way possible.

Missing stuff:

  • Explain the karma system, and what it means for a post to have lots or little karma. Explain agreement karma. Explain that votes by long-time users have more karma power. Explain that highly upvoted posts can still be controversial; I wish we had some <controversial> flag for posts that have tons of upvotes and downvotes. Explain the meaning of downvotes, and how (not) to act when one of your posts or comments has received lots of downvotes.
Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-04-29T20:58:01.927Z · LW(p) · GW(p)

Regarding the writing

Agreed for the most part. However, all of the things you mention are difficult to get right. It would take a good deal of the team's time to improve the writing quality, I presume. If so, the question becomes one of priorities. Is it worth spending that time, or using it on something else? My impression is that it's probably worth spending a week or so on it and then iterating periodically for a few months afterwards in response to feedback.

Be more concise

I think you can actually be both concise and lengthy. Have a "here's the quick version" section and then an "if you want more detail, here's the details" part that follows. Or maybe break it into two separate posts.

XKCD made the Simple Writer at one time

Thanks for pointing me to this. I never saw it before and think it's so cool!

Here's how you can audition for a spot in our prestigious club

I don't get that impression.

Re: the section "How to get started": There must be some way for new users to actively participate that does not require hours or days of prep work.

I don't agree with that. Large requirements will definitely filter more people out, but it's not clear that that's a bad thing. Personally my sense is that it's a good thing on balance.

Explain the karma system

This doesn't seem important enough to spend time on in this post. It seems more appropriate to have those questions addressed in the FAQs and perhaps have the post mention the FAQs as something to refer to.

comment by Shmi (shminux) · 2023-04-25T19:51:29.173Z · LW(p) · GW(p)

I'd make it clear that while most people aspire to be rational here, we fail more often than not. The discourse level is generally significantly better than the average on reddit, twitter or discord, but certainly falls short of the ideals described in this draft. 

I would also give a few examples of good/acceptable/bad posts and comments. Maybe link to some threads where people do it right, and where they do it wrong. I realize this is a lot of work, though.

Replies from: Viliam, habryka4
comment by Viliam · 2023-04-26T09:24:35.892Z · LW(p) · GW(p)

Yeah, I wouldn't call my writing "rational", but it seems like I got rid of some bad habits that are frequent on other parts of the internet, and it is annoying when a new user unknowingly brings them here. I wish I could pinpoint them; that would probably be a useful list of do's and don'ts.

One such example is exaggeration. In many debates, exaggeration is a way to get attention. People are screaming at each other; you need to scream louder than average in order to be noticed. Here, hyperbole will more likely make you seem stupid. We want a calibrated presentation of your case instead. If it is not the most important thing in the world, that is perfectly okay... unless you pretend that it is.

Similarly, humility. If you are not sure about something, that is okay, as long as you admit it. The problem is when you write as if you are 100% sure of something, but make obvious mistakes.

Do not use CAPS LOCK, do not make clickbait titles... okay, this is probably obvious. It just seems to me that what the annoying stuff has in common is fighting for attention (by sacrificing to Moloch). The proper way to get attention is to write good content. -- Perhaps we should remind new users that there are mechanisms that reward this. In the short term, karma. In the long term, selection of the best articles of the year.

(Generally, trying to seem cool can backfire? Or maybe it's just because I am mostly noticing the unsuccessful attempts to seem cool?)

If you make a mistake, do not double down. Your article getting -10 karma should not motivate you to write three more articles on the same topic. You are not going to win that way. More likely, you will get banned.

We are not one convincing article away from joining your religion or your political cause.

Replies from: MondSemmel, ldegrado1@gmail.com
comment by MondSemmel · 2023-04-26T12:34:15.282Z · LW(p) · GW(p)

One of the things I like least in comments is imputing (bad) motives: "Clearly you wrote this to X" or "Your purpose is to Y" etc.

Another thing I don't like is confidently paraphrasing what someone else said, in a way that's inevitably a misunderstanding or strawman: "You're clearly endorsing <politically disfavored concept, e.g. eugenics>. How dare you!" Trying to paraphrase others is good if it's done in a spirit of curiosity: "Correct me if I'm wrong, but I understood you to say X", or "As I understand it, you imply Y."

comment by ldegrado1@gmail.com · 2023-10-07T17:57:56.457Z · LW(p) · GW(p)

Thank you!

comment by habryka (habryka4) · 2023-04-25T21:14:00.322Z · LW(p) · GW(p)

I like the idea of linking to concrete examples. If we go far enough into the archives, we presumably also aren't really making anyone particularly defensive by spotlighting some 10-year-old bad comment of theirs (and we should probably just quote it without linking to it).

Replies from: shminux
comment by Shmi (shminux) · 2023-04-25T21:23:16.885Z · LW(p) · GW(p)

Or even have mock thread examples...

comment by Vladimir_Nesov · 2023-04-26T01:27:14.794Z · LW(p) · GW(p)

The community believes there are ways of thinking such that, if you figure them out and adopt them, you will systematically arrive at true beliefs and good decisions more often than someone who didn't adopt them.

What does it matter what the community believes? This phrasing is a bit self-defeating; deferring to the community is not a way of thinking that helps with arriving at true beliefs and good decisions.

Also, I think references to what distinguishes rationality [LW · GW] from truth and other good things [LW · GW] are useful in that section (these posts are not even in the original sequences).

Replies from: Ruby
comment by Ruby · 2023-04-26T02:00:35.245Z · LW(p) · GW(p)

If you are joining a community and want to be accepted and welcomed, it matters what they believe, value, and are aiming to do. For that matter, knowing this might determine whether or not you want to be involved. 

Or in other words, that line means to say "hey, this is what we're about".

I do like those posts quite a bit. Will add.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-26T02:14:47.120Z · LW(p) · GW(p)

it matters what they believe

The phrasing is ambiguous between being descriptive of this fact and prescriptive for it, especially for new people joining the community, and the latter is the connotation I'm objecting to. It's bad as an argument or way of thinking in connection with that sentence; the implication of its relevance in that particular sentence is incorrect. It's not bad to know that it's true, and it's not bad that it's true.

comment by Adam Zerner (adamzerner) · 2023-04-29T05:50:04.168Z · LW(p) · GW(p)

Notes:

  • It felt a little abstract and difficult for me to understand. For example, "systematically arrives at true beliefs and good decisions" in the first paragraph. Not that it's easy or that I have a good idea myself, but I think it could be improved if it were explained more plainly and with some helpful examples.
  • I like the idea of talking about our feelings on well kept gardens [LW · GW]. Probably towards the beginning. I think people would empathize with it, respect it, and be more willing to invest the time to onboard.
  • Relatedly, this onboarding takes a very long time. The Sequences are super long. Not that you necessarily have to read them all, but still. There's just so much. I think that is something that we should be upfront about.
  • I really like this Shit Rationalists Say video. I think it captures an impressive number of things about what rationalists are like in a quick, fun, and entertaining way. It may seem like a toy, but I suspect that it'd be very useful for newcomers. (Maybe worth mentioning that it's hyperbolic.)
  • It seems worth pointing to the FAQ. I'd imagine that new users would wonder about various things that are addressed in the FAQ. Although maybe it's enough that the FAQ is discoverable in the side navigation.
  • User research is always important, of course. Getting feedback in the comments section of this post is one thing but it's also important to see how people who are actually new to LessWrong react to this.
  • Maybe it'd be good to mention that we have meetups in X number of cities across the world. I feel like the community aspect was undersold. Although I'm not sure how relevant that is to a new user. It seems nice to know, but I'm not sure.
  • To me it seems like a good idea to call out that we believe in a bunch of things that most people think are wacky. Intelligence explosion, cryonics, transhumanism, polyamory, circling. Better to filter out people who react strongly against those sorts of things from the get-go if you ask me.
  • HPMoR!
Replies from: MondSemmel, Ruby
comment by MondSemmel · 2023-04-29T13:50:41.913Z · LW(p) · GW(p)

To me it seems like a good idea to call out that we believe in a bunch of things that most people think are wacky. Intelligence explosion, cryonics, transhumanism, polyamory, circling. Better to filter out people who react strongly against those sorts of things from the get-go if you ask me.

I think it's good to point out that the LW audience is far more contrarian than the median, and that arguments from conformity or authority or public relations or the absurdity heuristic aren't received well. That said, I would not want to imply that there's a belief litmus test, and also expect that a significant fraction of LW members don't agree with / endorse / believe in at least one of these examples.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-04-29T19:30:27.215Z · LW(p) · GW(p)

Agreed. However, I think you can sort of have your cake and eat it too here. I think you can:

  1. Say that a lot of us believe things that most others see as wacky.
  2. Give examples of those things.
  3. Be clear that a significant number of people on LW don't believe in a lot of that stuff.
  4. Be clear that belief in that stuff isn't expected of new members, nor is it expected that you eventually reach agreement.

I think 4 is a really good point though, and it didn't occur to me when I wrote my initial comment, so thanks for pointing that out. At the same time, I do still endorse the "filter out people who react strongly against it" part. If 1, 2, 3, and 4 are all made clear and someone, seeing that there's a lot of belief in wacky ideas, is still turned off, I expect they wouldn't have been a good fit for the community anyway, so it's better to "fail fast".

comment by Ruby · 2023-04-29T17:14:16.237Z · LW(p) · GW(p)

Thanks, this is really helpful!

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-04-29T19:39:00.286Z · LW(p) · GW(p)

Sure thing!

comment by gilch · 2023-04-26T03:51:10.381Z · LW(p) · GW(p)

I think it hits a lot of good notes, but I'm not sure it hits all the ones we'd need, and at the same time, I'm worried it may be too long to hit a new user with all at once. I'm not sure what I'd cut. What would go in a TL;DR?

I maintain that the 12 Virtues of Rationality is a good summary but a poor introduction. They seemed pretty useless to me until after I had read a lot of the Sequences. Not beginner material.

Inferential distances [LW · GW] and "scout mindset" might be worth mentioning.

I think Raising the Sanity Waterline [LW · GW] (if you follow its links) is a great mini-Sequence on fundamentals. I'm not sure how much that overlaps with the Highlights, but it's probably shorter.

comment by Monkle (Khai) · 2023-05-02T18:28:45.017Z · LW(p) · GW(p)

Hi, I'm a new user who stumbled across this, so I figured it would be worth commenting. I came here via effective altruism and have now read a decent chunk of the Sequences, so LW is not totally new to me as of reading this, but still.

I definitely wish this introduction had been here when I first decided to take a look at LessWrong - it was a little confusing to figure out what the community was even supposed to be. The introductory paragraph is excellent at communicating the core that the community is built around, and the following sections seem super efficient at getting us up to speed on what LW is.

I find the How To Get Started section very confusing. It seems at first like a list of things you need to do before participating on the forum, but I guess it's supposed to be a rough progression of things you can do to become more of a LessWronger, considering it has attend a meet-up on there? The paragraph afterwards also doesn't make any sense to me - it says there's not a tonne, but you'll probably be missing something on your first day… but it seems to me like it IS a tonne (the Sequences alone are really long!) and on your first day you won't have done ANY of them (except general reading). Maybe you meant to say that it's a list of possible things to get yourself clued up, but you don't need to do a tonne of it?

Finally, I already commented with no idea it would be moderated so heavily, so including that info is definitely helpful - plus the information about standards of content is just generally super useful to know from the start anyway.

Overall this seems really good and gets the important questions answered quickly. Honestly there’s not anything I wish was there that isn’t, or anything that is there that seems unnecessary. Great work 👍