New User's Guide to LessWrong

post by Ruby · 2023-05-17T00:55:49.814Z · LW · GW · 52 comments

Contents

  Why a new user guide?
        You don't have to read this to get started on LessWrong, but we encourage you to!
    Contents of this page/email
  What LessWrong is about: "Rationality"
    Is LessWrong for you?
    Okay, what are some examples of what makes LessWrong different?
    Philosophical Heritage: The Sequences
    Topics other than Rationality
    Artificial Intelligence
  How to get started
        Foundational reading
        Exploring your interests
        Participate in welcome threads
        Attend a local meetup
  Helpful Tips
  How to ensure your first post or comment is well-received
      Don't worry about it too hard.
  In conclusion, welcome!
  Appendices
    The Voting System
      Strong Votes and Vote Strength
      Two-Axis System
    LessWrong moderator's toolkit
        Initial user/content review
        Moderator actions
        Rules to be aware of
The road to wisdom? Well, it's plain
and simple to express:

Err
and err
and err again
but less
and less
and less.

– Piet Hein

Why a new user guide?

You don't have to read this to get started on LessWrong, but we encourage you to!

LessWrong is a pretty particular place. We strive to maintain a culture that's uncommon for web forums[1] and to stay true to our values. Recently, many more people have been finding their way here, so I (lead admin and moderator) put together this intro to what we're about.

My hope is that if LessWrong resonates with your values and interests, this guide will help you become a valued member of the community. And if LessWrong isn't the place for you, this guide will help you have a good "visit" or simply seek other pastures.

Contents of this page/email

 

If you arrived here out of interest in AI, make sure to read the section on LessWrong and Artificial Intelligence [LW · GW].

What LessWrong is about: "Rationality"

LessWrong is an online forum and community that was founded with the purpose of perfecting the art of human[2] rationality.

While truth is a property of beliefs, rationality is a property of reasoning processes. Our definition[3] of rationality is that a more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process. For example, a reasoning process that responds to evidence is more likely to lead to true beliefs than one that just goes with what's convenient to believe. An aspiring rationalist[4] is someone who aspires to improve their own reasoning process so as to arrive at truth more often.
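To make "responds to evidence" concrete, here is a minimal worked example of a Bayesian update, the style of probabilistic reasoning The Sequences teach. All the numbers are made up for illustration:

```python
# Hypothetical example: updating a credence in response to evidence.
# Prior: you think there's a 30% chance it will rain today.
prior = 0.30

# Evidence: you see dark clouds. Suppose clouds precede 80% of rainy
# days but only 20% of dry days (made-up likelihoods).
p_clouds_given_rain = 0.80
p_clouds_given_dry = 0.20

# Bayes' rule: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
p_clouds = p_clouds_given_rain * prior + p_clouds_given_dry * (1 - prior)
posterior = p_clouds_given_rain * prior / p_clouds

print(f"Credence in rain after seeing clouds: {posterior:.0%}")  # ~63%
```

The point isn't the arithmetic but the habit: your credence moves in proportion to how strongly the evidence favors one hypothesis over another.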

...a rationalist isn't just somebody who respects the Truth... All too many people respect the Truth. A rationalist is somebody who respects the processes of finding truth. – Rationality: Appreciating Cognitive Algorithms [LW · GW]

 

[Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because they're sitting atop a pile of utility. – Rationality is Systematized Winning [? · GW]
 

 

The Art [of rationality] must have a purpose other than itself, or it collapses into infinite recursion. – the 11th virtue of rationality [LW · GW]

On LessWrong we attempt (though don't always succeed) to apply the rationality lessons we've accumulated to any topic that interests us [? · GW], and especially to topics that seem important, like how to make the world a better place. We don't just care about truth in the abstract; we care about having true beliefs [LW · GW] about the things that matter to us, so that we can make better and more successful decisions.

Right now, AI seems like one of the most (or the most) important topics for humanity. It involves many tricky questions, high stakes, and uncertainty in an unprecedented situation. On LessWrong, many users are attempting to apply their best thinking to ensure that the advent of increasingly powerful AI goes well for humanity.[5]

Is LessWrong for you?

LessWrong is a good place for someone who:

If many of these apply to you, then LessWrong might be a good place for you.

LessWrong has been getting more attention (e.g. we get linked in major news articles somewhat regularly these days), and so many more people have been showing up on the site. We, the site moderators, don't take for granted that what makes our community special will survive without intentional effort, so we are putting more effort into tending to our well-kept garden [LW · GW].

If you're on board with our program and will help make our community more successful at its goals, then welcome!

Okay, what are some examples of what makes LessWrong different?

I just had a crazy experience. I think I saw someone on the internet have a productive conversation. 

I was browsing this website (lesswrong.com, from the guy who wrote that Harry Potter fanfiction I've been into), and two people were arguing back and forth about economics, and after like 6 back and forths one of them just said "Ok, you've convinced me, I've changed my mind". 

Has this ever happened on the internet before?

– paraphrased and translated chatlog (from German) by Habryka [LW · GW] to a friend of his, circa 2013-2014

The LessWrong community shares a culture that encodes a bunch of built-up beliefs, opinions, concepts, and values about how to reason better. These give LessWrong a pretty distinct style from the rest of the Internet.

Some of the features that set LessWrong apart:

Philosophical Heritage: The Sequences

“I re-read the Sequences”, they tell me, “and everything in them seems so obvious. But I have this intense memory of considering them revelatory at the time.”

This is my memory as well. They look like an extremely well-written, cleverly presented version of Philosophy 101. And yet I distinctly remember reading them after I had gotten a bachelor’s degree magna cum laude in Philosophy and being shocked and excited by them. – Scott Alexander in Five Years and One Week of Less Wrong

Between 2006 and 2009, Eliezer Yudkowsky wrote a sequence of blog posts sharing his philosophy, beliefs, and models about rationality[7]; collectively those blog posts are called The Sequences. In 2009, Eliezer founded LessWrong as a community forum for the people who'd liked that writing and wanted to have discussions inspired by the ways of thinking he'd described and demonstrated.

If you go to a math conference, people will assume familiarity with calculus; the literature club likely expects you've read a few Shakespeare plays; the baseball enthusiasts club assumes knowledge of the standard rules. On LessWrong people expect knowledge of concepts like Conservation of Expected Evidence [LW · GW], Making Beliefs Pay Rent [LW · GW], and Adaptation-Executers, not Fitness-Maximizers [LW · GW].

Not all the most commonly referenced ideas come from The Sequences, but enough of them do that we strongly encourage people to read The Sequences. (See the Foundational reading section below for ways to get started.)

Much of the spirit of LessWrong can also be gleaned from Harry Potter and the Methods of Rationality [? · GW] (a fanfic by the same author as The Sequences). Many people found their way to LessWrong via reading it.

Don't worry! You don't have to know every idea ever discussed on LessWrong to get started; this is just a heads-up about the kind of place this is.

Topics other than Rationality

The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinge upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion. - 12 Virtues of Rationality [LW · GW]

We are interested in rationality not for the sake of rationality alone, but because we care about lots of other things too. LessWrong has rationality as a central focus, but site members are interested in discussing an extremely wide range of topics, albeit using our rationality toolbox and worldview.

Artificial Intelligence

If you found your way to LessWrong recently, it might be because of your interest in AI. For several reasons, the LessWrong community has a strong interest in AI, and specifically in ensuring that increasingly powerful AI systems are safe and beneficial.

Even if you found your way to LessWrong because of your interest in AI, it's important for you to be aware of the site's focus on rationality, as this shapes expectations we have of all users in their posting, commenting, etc. 

How to get started

Because LessWrong is a pretty unusual place, it's usually a good idea for users to have spent some time on the site before writing their own posts or getting deep into comment discussions – doing so helps ensure you'll write something well-received.

Here's the reading we recommend:

Foundational reading

LessWrong grew from the people who read Eliezer Yudkowsky's writing on a shared blog, overcomingbias.com, and then migrated to a newly founded community blog in 2009. To better understand the culture and shared assumptions on LessWrong, read The Sequences [? · GW].

The full Sequences are pretty long, so we also have The Sequences Highlights [? · GW] for an initial taste. The Codex [? · GW], a collection of writing by Scott Alexander (author of Slate Star Codex/Astral Codex Ten), is also a good place to start, as is Harry Potter and the Methods of Rationality [? · GW].

Exploring your interests

The Concepts Page shows a very long list of topics on which LessWrong has posts. You can use that page to find posts on topics that interest you, and to get a feel for the style of writing on LessWrong.

Participate in welcome threads

The monthly general Open and Welcome thread [? · GW] is a good place to introduce yourself and ask questions, e.g. requesting reading recommendations or floating your post ideas. There are also frequent "all questions welcome" AI Open Threads [? · GW] if AI is what you'd like to discuss.

Attend a local meetup

There are local LessWrong (and SSC/ACX) meetups in cities around the world. Find one (or register for notifications) on our event page [? · GW].

Helpful Tips

If you have questions about the site, here are a few places where you can get answers:

How to ensure your first post or comment is well-received

This is a hard section to write. The new users who need it least are the most likely to worry about the advice below, and those who need it most are the most likely to ignore it. Don't stress too hard: if you submit something and we don't like it, we'll give you some feedback.

A lot of the below is written for the people who aren't putting in much effort at all, so we can at least say "hey, we did give you a heads-up in multiple places".

There are a number of dimensions along which content submissions may be strong or weak. Strength in one place can compensate for weakness in another, but overall the moderators assess each new user's first post or comment on the following dimensions. If a first submission is lacking, it might be rejected, and you'll get feedback on why.

Your first post or comment is more likely to be approved by moderators (and upvoted by general site users) if you:

Demonstrate understanding of LessWrong rationality fundamentals, or at least don't contravene them. These are the kinds of things covered in The Sequences, such as probabilistic reasoning [LW · GW], proper use of beliefs [LW · GW], being curious about where you might be wrong, and avoiding arguing over definitions. See the Foundational Reading [LW · GW] section above.

Write a clear introduction. If your first submission is lengthy, i.e. a long post, it's more likely to be approved quickly if the site moderators can readily understand what you're trying to say, rather than having to delve deep into your post to figure it out. Once you're established on the site and people know that you have good things to say, you can pull off a "literary" opening that doesn't start with the main point.

Address existing arguments on the topic (if applicable). Many topics have been discussed at length already on LessWrong, or have an answer strongly implied by core content on the site, e.g. from the Sequences (much of which is relevant to AI questions). Your submission is more likely to be accepted if it's clear you're aware of prior relevant discussion and are building upon it. It's not a big deal if you weren't aware; there's just a chance the moderator team will reject your submission and point you to relevant material.

This doesn't mean that you can't question positions commonly held on LessWrong, just that it's a lot more productive for everyone involved if you're able to respond to or build upon the existing arguments, e.g. showing why they're wrong.

Address the LessWrong audience. A recent trend is more and more people crossposting from their personal blogs, e.g. their Substack or Medium, to LessWrong. There's nothing inherently wrong with that (we welcome good content!), but many of these posts neither strike us as particularly interesting or insightful, nor demonstrate an interest in LessWrong's culture, norms, or audience (as revealed by a very different style and a lack of engagement with anyone on the site).

It's good (though not absolutely necessary) when a post is written for the LessWrong audience and shows that by referencing other discussions on LessWrong (links to other posts are good). 

Aim for a high standard if you're contributing on the topic of AI. As AI becomes higher and higher profile in the world, many more people are coming to LessWrong because we host discussions of it. In order not to lose what makes our site uniquely capable of making good intellectual progress, we have particularly high standards for new users showing up to talk about AI. If we don't think your AI-related contribution is particularly valuable and it's not clear you've tried to understand the site's culture or values, then it's possible we'll reject it.

Don't worry about it too hard.

It's okay if we don't like your first submission; we will give you feedback. In many ways, the bar isn't that high. As I wrote above, this document exists so that not being approved on your first submission doesn't come as a surprise. If you're writing a comment and not a 5,000-word post, don't stress about it.

If you do want to write something longer, there is a much lower bar for open threads, e.g. the general one [? · GW] or AI one [? · GW]. That's a good place to say "I have an idea about X, does LessWrong have anything on that already?"

In conclusion, welcome!

And that's it! Hopefully this intro sets you up for good reading and good engagement with LessWrong.

Appendices

The Voting System

The voting or "karma" system is pretty integral to how LessWrong promotes (or hides) content. The standard advice for how to vote is: upvote if you want to see more of something, downvote if you want to see less. 

Strong Votes and Vote Strength

LessWrong also has strong votes, for when you feel particularly strongly about something. Different users have different vote strengths: the more karma a user has accumulated from upvotes on their own contributions, the more points their votes are worth.
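As a rough sketch of the mechanism (the cutoffs and strengths below are hypothetical, not LessWrong's actual numbers):

```python
# Illustrative sketch only: the real thresholds and vote strengths differ.
def vote_strength(karma: int, strong: bool) -> int:
    """Points a user's vote is worth, growing with their karma."""
    if not strong:
        return 1 if karma < 1000 else 2   # hypothetical cutoff
    # Strong votes scale further with karma (hypothetical tiers).
    for cutoff, strength in [(100, 2), (1000, 4), (10000, 6)]:
        if karma < cutoff:
            return strength
    return 8
```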

Two-Axis System

Sometimes you want to see more of something (e.g. an interesting argument) even if you disagree with it, or you think an argument is weak even though it supports a conclusion you agree with. On comments, LessWrong therefore lets you express whether you want to see more or less of something separately from whether you agree or disagree with it: votes on the main axis express judgments of quality, while votes on the agreement axis express agreement. Posts currently have only the main axis, but the same spirit applies to them too.
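In other words, each comment carries two independent tallies. A minimal sketch of the data model (the field names are assumptions, not LessWrong's actual schema):

```python
from dataclasses import dataclass

@dataclass
class CommentVotes:
    """Two independent tallies per comment (hypothetical field names)."""
    karma: int = 0      # main axis: "I want to see more/less of this"
    agreement: int = 0  # agreement axis: "I think this is right/wrong"

# A well-argued comment you disagree with: upvote quality, downvote agreement.
c = CommentVotes()
c.karma += 1
c.agreement -= 1
```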

LessWrong moderator's toolkit

The LessWrong mod team likes to be transparent about our moderation process. We take tending the garden [LW · GW] seriously, and we are continuously improving our tools for maintaining a well-kept site. Here are some of our tools and processes.

Initial user/content review

Moderator actions

When content seems to make the site worse, we'll apply the following measures, in order of severity:

Rules to be aware of

 

  1. ^

    I won't claim that we're entirely unique, but I don't think our site is typical of the internet.

    Some people have pointed out to me that other Internet communities also aim in the direction of collaborative, truth-seeking discourse: for example, Reddit's ELI5 and Change My View, adjacent communities like Astral Codex Ten, and discourse in technical communities of engineers or academics.

  2. ^

    We say "human" rationality, because we're most interested in how us humans can perform best given how our brains work (as opposed to the general rationality that'd apply to AIs and aliens too).

  3. ^

    The definition of "rationality" on LessWrong isn't universally agreed upon, though this one is the most standard.

  4. ^

    This is ideally what we'd call ourselves all the time, but since it's a bit of a mouthful, people tend to just say "rationalist" without qualification. Nonetheless, we do not claim to have definitely attained that much rationality; we're aiming for it.

  5. ^

    In fact, one of the ulterior motives [LW · GW] Eliezer Yudkowsky (LessWrong's founder) had for founding LessWrong in 2009 was that rationality would help people think about AI. Back then, it took more perception, and more willingness to entertain weird ideas, to discern that AI might become powerful and dangerous in the nearish future.

  6. ^

    As opposed to beliefs being for signaling group affiliation and having pleasant feelings.

  7. ^

    In a 2014 comment, Eliezer described the Sequences as containing 60% standard positions, 25% ideas you could find elsewhere with some hard looking, and 15% original ideas. He says that the non-boring tone might have fooled people into thinking more of the content is original than actually is, but also that curating which ideas to include, and fitting them together into a single package, was itself a form of originality.

52 comments

Comments sorted by top scores.

comment by David Gross (David_Gross) · 2023-05-17T02:10:17.757Z · LW(p) · GW(p)

LessWrong is a good place for:

Each of the following bullet points begins with "who", so this should probably be something like "LessWrong is a good place for people:"

Replies from: gilch
comment by gilch · 2023-11-29T23:35:46.459Z · LW(p) · GW(p)

Or "good place for those".

comment by MondSemmel · 2023-08-01T19:37:12.488Z · LW(p) · GW(p)

This is much much better than the draft version. In particular, I no longer have the same impression from my draft feedback, that it read like "Here's how you can audition for a spot in our prestigious club".

So kudos for listening to feedback <3, and apologies for my exhausting style of ultra-detailed feedback.

Anyway, you made the mistake (?) of asking for more feedback, so I have more of it T_T. I've split it into three separate comments: typos, language, and substantial feedback.

Substantial feedback (incl. disagreements)

Excessive demands on first contributions by new users

  • "Don't worry! You don't have to know every idea ever discussed on LessWrong to get started, this is just a heads up on the kind of place this is." -> I'm confused who this kind of phrasing is addressed at, and wonder whether the current version would have the desired effect. After all, "Don't worry" often means "Do worry".
  • "Even if you found your way to LessWrong because of your interest in AI, it's important for you to be aware of the site's focus on rationality, as this shapes expectations we have of all users in their posting, commenting, etc." -> Once again I'm skeptical about these vague end-of-section paragraphs.
  • "How to ensure your first post or comment is well-received" -> Once again I don't like any sections which imply that you have to write a Bachelor's Thesis before you can begin participating on the site.
    • I would reconsider the motivation for that section and cut it entirely, or substantially rewrite and shorten it, or just spin it off into a separate post.
    • Who is this section even written for? "A lot of the below is written for the people who aren't putting in much effort at all, so we can at least say "hey, we did give you a heads up in multiple places"." -> That seems like a bad reason for something to be part of the New User's Guide. Brevity is a virtue; here you're displaying text to people in the expectation that those who should read it won't, and those who don't need to read it will.
    • Another indication for why this section seems dubious to me is that it once again ends on something like "Don't worry about it too hard.". If you don't want new users to worry about something too hard, don't put it into the New User's Guide in the first place.
  • Re: the section "Initial user/content review" -> See my comments on "How to ensure your first post or comment is well-received".

Excessive reading material for new users

  • The "How to get started" section begins with "Because LessWrong is a pretty unusual place, it's usually a good idea for users to have spent some time on the site before writing their own posts or getting deep into comment discussions – doing so ensures you'll write something well received." -> This section is drowning new users in potential reading material, and I'm skeptical of that approach.
  • Also, part of the advice in "How to get started" boils down to "read the Sequences", which is a ridiculously huge ask. That's not "how to get started", at best that's "how to get much more involved". (As an example, IIRC I read the Sequences back in 2013 as a university student, and reading them took me two full months during summer vacation.)

Suggestions for how to welcome new users instead

  • "Participate in welcome threads" -> This kind of suggestion should be at the top of the "How to get started" section, not (to paraphrase) "read several million words". That said, I don't know to which extent questions in those threads are currently answered. But since the mods already take the time to review any comments by new users, I think responding to questions in these threads would be a comparatively good use of mod time, to the point that you could even make it an unspoken rule that in Welcome threads, all requests for further reading will be answered.
  • "The monthly general Open and Welcome thread [? · GW] ... "all questions welcome" AI Open Threads [? · GW]" -> These tag pages are currently sorted by Most Relevant for me, which is to say, Not Relevant At All. If the site infrastructure allows this, I'd suggest setting these two tags to default to sort by New whenever they're linked without a preferred sorting method. If not, I suggest replacing all links to the open threads such that the sorting is part of the link. Like this: Open Threads [? · GW] (sorted by New) and AI Open Threads [? · GW] (sorted by New).

On the FAQ

  • The "Helpful Tips" section mentions that the FAQ is outdated. If the FAQ is outdated, either don't link to it, or maybe actually update it?
  • Alternatively, consider turning the FAQ from a post into a tag page; then other LW power users could update it rather than just the LW team. (This seems like a good rule of thumb for all "living documents" related to LW, i.e. ones which are meant to be kept up-to-date; blogposts aren't really the right format for documents which are meant to be continuously edited, whereas the tag pages are. Also, what if the FAQ essay is replaced by a new one in the future? Then you'd have to update all links to the old FAQ.)
  • In fact, I suspect that if you turned the FAQ into any format which the community can continuously edit, and then wrote a post à la "Request: Help Us Update our FAQ", then I expect that this problem might just "solve itself".

On the Length

  • Shorter is better. Approximately all LW posts, including my comments here, are way too long.
  • Here are some parts I think could be cut or spun off:
    • Footnote 2 (on why human rationality) seems superfluous. I don't think this footnote pulls its weight in this intro.
    • The section "How to ensure your first post or comment is well-received". See my "Substantial feedback" section for why I don't like it.
comment by trevor (TrevorWiesinger) · 2023-05-17T04:11:02.422Z · LW(p) · GW(p)

Could the CFAR handbook [LW · GW] or Tuning your cognitive strategies [LW · GW] be put in the foundational reading section, alongside the Sequences and Codex and HPMOR? 

Cognitive tuning isn't very foundational, and possibly not even safe (although people worried about the safety seem to be mistaken). But if enough people try it, then it has significant potential to become its own entire field of successful human intelligence augmentation. AFAIK it offers a more nuanced approach to intelligence-augmenting habit formation than anything I've seen from any other source.

Replies from: Ruby
comment by Ruby · 2023-05-17T05:36:04.938Z · LW(p) · GW(p)

The CFAR handbook is good stuff that gets at important aspects of rationality, but I don't think it counts either as something the core LessWrong userbase has mostly read, or as material that gets used regularly in conversations here. Among other things, the PDF of it wasn't generally available until 2020, and it wasn't a nicely formatted sequence until a year ago.

comment by the gears to ascension (lahwran) · 2023-05-17T01:16:19.204Z · LW(p) · GW(p)

A bunch of the intro feels quite molochpilled to me, e.g. "stay true to our values" and the entire "systematized winning" that we still seem to bring up here (concerning in the sense of implying conflict games). Since the negative interpretations aren't the intended ones, I suspect that we're a low edit distance from avoiding the implication. Unfortunately, it's late and I post this without any fixes in mind; just thought I'd express the viewpoint.

Sorry to have missed this while it was in draft form!

Replies from: Ruby, Ruby
comment by Ruby · 2023-05-17T05:41:24.256Z · LW(p) · GW(p)

Can you clarify the molochy-ness?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-05-17T09:53:49.987Z · LW(p) · GW(p)

short answer: apparently I'm not sure how to clarify it.

Before this change, which I feel fixes the main issue I was worried about:

rationalists should win [at life, their goals, etc]

it sounded to a large subset of my predictor of how my friends would react if I shared this to invite them to participate here, that I should predict that they would read it as "win at the zero sum game of life". this still has some ambiguity in that direction; by not clearly implying that life isn't zero sum, an implication that a certain kind of friend is worried anyone who thinks themselves smarter or more rational than others is likely to also believe, that sort of easily spooked friend will be turned away by this phrasing. I don't say this to claim this friend is correct; I say this because I want to invite more of this sort of friend to participate here. I also recognize that accommodating the large number of easily spooked humans out there can be a chore, which is why I phrase the criticism by describing in detail how the critique is based on a prediction of those who won't comment about it. Those who do believe life is zero sum, and those who spend their day angry at the previous group who believe life is zero sum, should, in my opinion, both be able to read this and get excited that this rational viewpoint has a shot at improving on their own viewpoint; the conflict between these types of friend should be visibly "third door"ed here. To do this needs a subtlety that I write out this long meta paragraph because I am actually not really sure how to manage; a subtlety that I am failing to encode. So I just want to write out a more detailed overview of my meta take and let it sit here. Perhaps this is because the post is already at the pareto frontier of what my level of intelligence and rationality can achieve, and this feedback is therefore nearly useless!

In other words: nothing actually specifically endorses moloch. But there's a specific kind of vibe that is common around here, which I think a good intro should help onramp people into understanding, and which presently is an easier vibe to get started with for the type of friend who believes life is zero sum and would like to win against others.

Btw, I unvoted my starting comment, based on a hunch about how I'd like comments to be ordered here.

Replies from: TAG
comment by TAG · 2023-05-17T11:24:54.363Z · LW(p) · GW(p)

The question of whether truth-seeking (epistemic) rationality is actually the same as winning (instrumental) rationality has never been settled. In the interests of epistemic rationality, it might have been better to phrase this as "we are interested in seeking both truth and usefulness".

comment by Ruby · 2023-05-17T01:25:52.529Z · LW(p) · GW(p)

Some of that changed from the last draft. I just made a change to clarify in the case of "winning" since that seemed easy.

comment by MondSemmel · 2023-08-01T19:07:29.688Z · LW(p) · GW(p)

Feedback on language, style, and phrasing

  • The table of contents at the top is currently not synced with the actual headings, and is missing most of the subheadings.
  • "My hope is that if LessWrong resonates with your values and interests, this guide will help you become a valued member of community. And if LessWrong isn't the place for you, this guide will help you have a good "visit" or simply seek other pastures." -> Is the second sentence really necessary?
  • "We strive to maintain a culture that's uncommon for web forums[1] [LW(p) · GW(p)] and to stay true to our values." -> The "stay true to our values" part of the sentence seems rather empty because the values aren't actually listed until a later section. How about "We strive to main a culture and values which are uncommon for web forums" or some such?
  • Re: "Our definition of rationality" in the section 'What LessWrong is about: "Rationality"': Instead of the current footnote, I'd prefer to see a brief disambiguation on what similar-sounding concepts LW-style rationality is not equivalent to, namely philosophical rationalism. And even most of the criticisms on the Wikipedia page on rationality don't refer to the LW concept of rationality, but something different and much older.
  • "If you're on board with our program and will help make our community more successful at its goals, then welcome!" -> I know what you're going for here, but this currently sounds like "if you're not with us, you're against us", even though a hypothetical entirely passive lurker (who doesn't interact with the site at all) would be completely fine. In any case, I think this section warrants a much weaker-sounding conclusion. After all, aren't we fine with anyone who (to keep the metaphor) doesn't burn or trash the garden?
  • "We treat beliefs as being about shaping your anticipations of what you'll observe[6]" -> I currently don't understand the point of this sentence. Maybe something like "We consider the purpose of beliefs that they shape your anticipations of what you'll observe[6]"? That still sounds weird. I'm genuinely not sure, and thus in any case recommend rewriting this sentence.
  • "LessWrong is also integrated with the Alignment Forum" -> If you're going to mention the Aligment Forum, then I suggest also explaining what it is in one short sentence.
  • A significant chunk of the section "Foundational reading" is a redundant repetition of the section "Philosophical Heritage: The Sequences".
  • Throughout the essay, there are several instances of writing of the form "A/B/C", and in all cases they would read better as an actual sentence with commas etc.
  • "The standard advice for how to vote is: upvote if you want to see more of something, downvote if you want to see less." -> Isn't the actual advice to upvote if you want yourself and others to see more of something? Or phrased differently, "Upvote if you want LW to feature more of X".
  • "Different users have different vote strengths based on how many upvotes/downvotes they've received." -> This phrasing seems needlessly roundabout. Long-term community members with higher karma have stronger votes, that's it.
  • "we will soon be experimenting with automatic rate limits: users with very low or negative karma will be automatically restricted in how frequently they can post and comment. For example, someone who's quickly posted several negative-karma posts will need to wait before being allowed to post the next one." -> This entire paragraph is no longer up-to-date.

Nitpicky language feedback

  • "Why a new user guide?" (first heading) -> This might be clearer as "Why a guide for new users?"
  • "Our definition[3] of rationality is that a more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process." -> I know what you're going for here, but as written this sounds like you're presupposing your conclusion.
  • "If many of these apply to you, then LessWrong might be the place for you." -> "might be a good place for you"
  • Pretty much all bullet points after "Some of the features that set LessWrong apart:" look like full sentences and should therefore end on a period.
  • "Rather than treating belief as binary, we use probabilistic credences to express our certainty/uncertainty." -> would be shorter as "express our (un)certainty"
  • "examples here [LW(p) · GW(p)]" -> You can find some examples here [LW(p) · GW(p)]."
  • "Between 2006 and 2009, Eliezer Yudkowsky spent two years writing a sequence of blog posts" -> That sounds like a confusing contradiction, unless it's a puzzle whose gotcha answer is "In 2007 and 2008". Were the sequence written in 2 years or in 3-4 years?
  • "blog posts that shared his philosophy/beliefs/models about rationality" -> philosophy, beliefs, and models"
  • "The Concepts Page shows a very long list of topics on which LessWrong has posts. You can use that page to find posts that cover topics interesting to you, and see what the style is on LessWrong" -> This reads a bit weirdly and could be rephrased.
  • The "Helpful Tips" section is unpolished, with inconsistent phrasing etc.
  • "Two-Axis System" -> "The Two-Axis Voting System"
  • "It's possible to want to see more of something (e.g. interesting arguments) even if you disagree with them, or to think an argument is weak even though it's for a conclusion you agree with. LessWrong makes it possible to express to see more/less of something separately from whether you agree/disagree with it. (Currently only comments.) This means that upvotes and downvotes on the main axis can be used to express judgments of quality separate from agreement. But the same spirit applies to posts too." -> Suggested phrasing: "Sometimes you might want to see more of something (like interesting arguments), even if you disagree with it, or to think an argument is weak even though it's for a conclusion you agree with. On LessWrong you can express your desire to see more (or less) of something separately from whether you (dis)agree with it. (Currently only comments.) So with this voting system, you can express judgments of quality separate from agreement."
  • "That page that exists so people can double-check our decisions." -> "That page exists so users can hold the LW mods accountable for their moderation decisions."
  • "If we don't like your submission, we mark it as rejected" -> Weird phrasing. How about: "If we reject your submission as not being a good fit for LW"
  • "When there's stuff that seems to make the site worse, in order of severity, we'll apply the following:" -> "stuff" seems too vague.

Sections with weird phrasing

  • "As I wrote above, this document is so not being approved on your first submission doesn't come as a surprise." -> Weird phrasing.
  • "hopefully this intro sets you up for good reading and good engagement with LessWrong!" -> Weird phrasing.
  • "The LessWrong mod team like to be transparent about our moderation process." -> Weird phrasing.
  • "Back in 2009, it took more perception and willingness to discern the truth of weird ideas like AIs being powerful and dangerous in the nearish future." -> Weird phrasing.
comment by Vladimir_Nesov · 2023-05-17T15:18:11.096Z · LW(p) · GW(p)

it's a lot more productive for everyone involved if you're able to respond to or build upon the existing arguments, e.g. showing why you think they're wrong

Good opportunity to say "showing why they're wrong" instead (without "you think"), to avoid connotation of "it's just your opinion" rather than possibility of actually correct bug reports.

Replies from: Ruby
comment by Ruby · 2023-05-17T18:20:30.378Z · LW(p) · GW(p)

Edited!

comment by David Gross (David_Gross) · 2023-05-17T02:09:12.860Z · LW(p) · GW(p)

A more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process.

It's not clear from this or what immediately follows in this section whether you intend this statement as a tautological definition of a process (a process that "tends to arrive at true beliefs and good decisions more often" is what we call a "more rational reasoning process") or as an empirically verifiable prediction about a yet-to-be-defined process (if you use a TBD "more rational reasoning process" then you will "tend[] to arrive at true beliefs and good decisions more often"). I could see people drawing either conclusion from what's said in this section.

Replies from: Ruby
comment by Ruby · 2023-05-17T05:40:52.525Z · LW(p) · GW(p)

Good point. I've edited to make this clearer.

Replies from: David_Gross
comment by David Gross (David_Gross) · 2023-05-17T15:43:56.368Z · LW(p) · GW(p)

Since you've gone with the definition, are you sure that definition is solid? A reasoning process like "spend your waking moments deriving mathematical truths using rigorous methods; leave all practical matters to curated recipes and outside experts" may tend to arrive at true beliefs and good decisions more often than "attempt to wrestle as rationally as you can with all of the strange and uncertain reality you encounter, and learn to navigate toward worthy goals by pushing the limits of your competence in ways that seem most promising and prudent" but the latter seems to me a "more rational reasoning process."

The conflation of rationality with utility-accumulation/winning also strikes me as questionable. These seem to me to be different things that sometimes cooperate but that might also be expected to go their separate ways on occasion. (This, unless you define winning/utility in terms of alignment with what is true, but a phrase like "sitting atop a pile of utility" doesn't suggest that to me.)

If you thought you were a shoe-in to win the lottery, and in fact you do win, does that retrospectively convert your decision to buy a lottery ticket into a rational one in addition to being a fortunate one? (Your belief turned out to be true, your decision turned out to be good, you got a pile of utility and can call yourself a winner.)

Replies from: Ruby
comment by Ruby · 2023-05-17T19:20:59.393Z · LW(p) · GW(p)

A thing I should likely include is something like the definition gets disputed, but what I present is the most standard one.

comment by Ruby · 2023-05-17T00:56:50.442Z · LW(p) · GW(p)

Thanks to everyone who posted feedback on the draft of this [LW · GW].

comment by MondSemmel · 2023-08-01T18:57:24.257Z · LW(p) · GW(p)

Typo feedback:

If you arrived here out of interested in AI

"out of interest"

LessWrong is online forum/community

"is an online forum and community"

a reasoning process that responds to evidence is more likely to believe true things than one that just goes with what's convenient to believe."

"more likely to lead to true beliefs" (a reasoning process doesn't believe anything)

Rationality is systematized winning [? · GW]

a) The original article is capitalized as "Rationality is Systematized Winning"

b) After this line in the essay, there's an empty line inside the quote which can be removed.

- the 11th virtue of rationality [LW · GW]

For consistency, the dash here should be an em-dash: –

LessWrong is a good place for:

In all the following list of bullet points, the grammar doesn't work.

a) Currently they read as "LessWrong is a good place for who wants to work collaboratively" etc., so obviously a word like "someone" or "people" is missing. And the entire structure might work better if it was instead phrased as "LessWrong is a good place for people who..." or "LessWrong is a good place for you if you", with each bullet point beginning with "... <verb>".

b) The sentences also currently mix up two ways of address, namely "someone who" and "you". E.g. look at this sentence: "who likes acknowledging... to your reasoning"

We, the site moderators, don't take for granted that what makes our community special won't stay that way without intentional effort.

I'm not entirely sure, but I think the "won't" here might be a wrong negation. How about something like the following:

"We, the site moderators, don't take for granted what makes our community special, and that preserving it will require intentional effort."

– paraphrased and translated chatlog (from german)

"German"

These give LessWrong a pretty distinct style from the rest of Internet.

"of the Internet"

Rather than say that is "extremely unlikely", we'd say "I think there's a 1% chance or lower of it happening".

"Rather than say that X is... that X happens."

that seem to make conversation worse

"conversations"

these are not official LessWrong site guidelines, but suggestive of the culture around here:

"These"

for the people who'd liked that writing and wanted to have discussion inspired by the ways of thinking he described and demonstrated

"wanted to have discussions"

"he'd described"

Ways to get started

"started:"

Also, some of the bullet points immediately after this are in past tense for some reason.

Rationality: A-Z was an edited and distilled version compiled in 2015 of ~400 posts.

"consisting of ~400 posts"

Highlights from the Sequences is 50 top posts from the Sequences. They're a good place to start.

"consists of 50 top posts"

this is just a heads up

heads-up

LessWrong is also integrated with the Alignment Forum

"Forum."

doing so ensures you'll write something well received

"well-received"

The full Sequences is pretty long

"are pretty long"

and see what the style is on LessWrong

"and see what the style is on LessWrong."

If you have questions about the site, here are few places you can get answers:

"here are a few places where"

many more people are flowing to LessWrong because we have discussion of it

I find the current phrasing a bit weird. Maybe "because we host discussions of it"?

It's possible to want to see more of something (e.g. interesting arguments) even if you disagree with them

", even if you disagree with it"

it's okay if your first submission or several don't meet the bar, we'll give you feedback on what to change if something's not good

All other bullet points here are phrased as full sentences with a period at the end.

Rules to be aware of

All bullet points following this are missing periods at the end.

comment by panos · 2023-06-17T19:59:11.682Z · LW(p) · GW(p)

[Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because because they're sitting atop a pile of utility. – Rationality is systematized winning [? · GW]

"because because" should probably be "because"

We, the site moderators, don't take for granted that what makes our community special won't stay that way without intentional effort.

"won't stay that way" should probably be "would stay that way"

comment by Thomas Sepulchre · 2023-05-19T12:15:05.352Z · LW(p) · GW(p)

What LessWrong is about: "Rationality"

I don't know how to phrase the question but, basically, "what does that mean"?

Assume a new user comes to LW, reads the New User's Guide to LessWrong first, then starts browsing the latest posts/recommendations: they will quickly notice that, in practice, LW is mostly about AI or, at least, that most posts are about AI, and that this has been the case for a while already.

And that is despite the positive karma bias towards Rationality and World modeling by default, which I assume is an effort from you (the LW team) to make LW about rationality, and not about AI (I appreciate the effort).

So, the sentence "What LW is about: "Rationality"" – is it meant to describe the website, in which case it seems like a fairly inaccurate description; or is it meant to be a promise made to new users, that is, "we know that, right now, discussions are focused on AI, but we, the LW team, know that they will come back to rationality / are committed to making them come back to rationality"?

I don't want to criticize the actions of the LW team, I understand that you are aware of this situation, and that there might not exist a better equilibrium between wanting LW to be about rationality, not wanting to shut down AI discussions because they have some value, and not wanting to prevent users from posting about anything (including AI) as long as some quality standards are met. Still, I am worried about the gap a new user would observe between the description of LW written here and what they will find on the site.

Replies from: MondSemmel
comment by MondSemmel · 2023-05-23T13:59:34.827Z · LW(p) · GW(p)

A few points.

  1. This might be conflating "what this site is about" with "what is currently discussed". The way I see it, LW is primarily its humungous and curated archives, and only secondarily or tertiarily its feed. The New User experience includes stuff like the Sequence Highlights [? · GW], for example. If there's too much AI content for someone's taste (there certainly is for mine), then a simple solution is to a) focus on the enduring archives rather than the ephemeral feed, and b) further downweight the AI tag (-25 karma is nowhere near enough).
    1. That said, it might be warranted for the LW team to adjust the default tag weights for new users, going forward.
  2. Rationality is closely related to cognition and intelligence, so I don't think it's as far or distinct from AI as would be implied by your comment. AI features prominently in the original Sequences, for example.
  3. You registered in 2020. Back then, a new user might have asked whether the site is supposed to be about rationality, or rather about Covid.
Replies from: Thomas Sepulchre
comment by Thomas Sepulchre · 2023-05-23T15:29:24.598Z · LW(p) · GW(p)

Good points

  1. I'm not sure I share your view; I believe that new users care more about active discussions than about reading already-established content. I may very much be wrong here.
  2. I agree with you
  3. I think there are more posts about AI now than there were posts about Covid back then, but I see your point. There were indeed a lot of posts about Covid.

Thank you

Replies from: MondSemmel
comment by MondSemmel · 2023-05-23T15:49:21.350Z · LW(p) · GW(p)

You may be right regarding what new users care about (usually one registers on a site to comment on a discussion, for example), but the problem is that from that perspective, LW is definitely about AI, no matter what the New User's Guide or the mods or the long-term users say. After all, AI-related news is the primary reason behind the increased influx of new users to LW, so those users are presumably here for AI content.

One way in which the guide and mod team try to counteract that impression is by showing new users curated stuff from the archives, but it might also be warranted to further deemphasize the feed.

Replies from: NeroWolfe
comment by NeroWolfe · 2023-08-31T18:40:35.175Z · LW(p) · GW(p)

I'm a new member here and curious about the site's view on responding to really old threads. My first comment was on a post that turned out to be four years old. It was a post by Wei Dai and appeared at the top of the page today, so I assumed it was new. I found the content to be relevant, but I'd like to know if there is a shared notion of "don't reply to posts that are more than X amount in the past."

Replies from: Zack_M_Davis, adamzerner, Raemon
comment by Zack_M_Davis · 2023-08-31T19:05:49.957Z · LW(p) · GW(p)

I love getting comments on old posts! (There would be less reason to write if all writing were doomed to be ephemera; the reverse-chronological format of blogs shouldn't be a straitjacket or death sentence for ideas.)

Replies from: MondSemmel
comment by MondSemmel · 2023-08-31T19:36:29.183Z · LW(p) · GW(p)

Absolutely. I've just gotten a 30-day trial for Matt Yglesias' SlowBoring substack, and figured I'd look through the archives... But then I immediately realized that Substack, just like reddit etc., practically doesn't care about preserving, curating or resurfacing old content. Gwern has a point here on internet communities prioritizing content on different timescales by design, and in that context, LessWrong's attempts to preserve old content are extremely rare.

comment by Adam Zerner (adamzerner) · 2023-09-09T07:48:44.813Z · LW(p) · GW(p)

I'm very confident that there is no norm of pushing people away from posting on old threads. I'm generally confident that most people appreciate comments on old posts. However, I think it is also true that comments on old posts are unlikely to be seen, voted on, or responded to.

Replies from: niplav
comment by niplav · 2023-09-09T15:07:42.971Z · LW(p) · GW(p)

I agree that, if anything, there is a counternorm to that, and also with the observation that such comments are often (sadly) ignored.

comment by Raemon · 2023-08-31T19:20:42.718Z · LW(p) · GW(p)

It's totally normal to comment on old posts. We deliberately design the forum to make it easier to do, and for people to see that you have.

Replies from: Raemon
comment by Raemon · 2023-08-31T19:23:31.980Z · LW(p) · GW(p)

(actually your comment here makes me realize we should probably somehow indicate when there are new comments on the top-of-the-page spotlight post, so people can more easily see and continue the convo)

Replies from: TAG
comment by TAG · 2023-08-31T21:37:10.490Z · LW(p) · GW(p)

GreaterWrong shows new comments regardless.

Replies from: Raemon
comment by Raemon · 2023-08-31T21:38:54.572Z · LW(p) · GW(p)

So does LessWrong, but they quickly disappear (because there's a high volume of comments). GreaterWrong doesn't have Spotlight Items so the point is a bit moot, but the idea here is that everyone is nudged more to see new comments on the current Spotlight Item on LessWrong.

(i.e. the spotlight item shown at the top of the page; screenshot not preserved)

comment by RomanHauksson (r) · 2023-05-18T15:04:58.689Z · LW(p) · GW(p)

We take tending the garden seriously

Ironic typo: the link includes the proceeding space.

comment by niplav · 2023-05-17T07:44:26.428Z · LW(p) · GW(p)

Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.

He usually describes himself as a decision theorist if asked for a description of his job.

comment by Causal Chain (causal-chain) · 2023-05-17T02:52:48.327Z · LW(p) · GW(p)

Some typos:

rationality lessons we've accumulated and made part of our to our thinking

Seems like some duplicated words here.

weird idea like AIs being power and dangerous in the nearish future.

 Perhaps: "weird ideas like AIs being powerful and dangerous"

comment by Measure · 2023-05-17T13:44:59.834Z · LW(p) · GW(p)

We, the site moderators, don't take for granted that what makes our community special won't stay that way without intentional effort.

 

The double negative here distorts the meaning of this sentence.

comment by Ruby · 2023-05-17T05:42:56.655Z · LW(p) · GW(p)

Thanks @David Gross [LW · GW] for the many suggestions and fixes! Much appreciated. Clearly should have gotten this more carefully proofread before posting.

Replies from: MondSemmel, ksv
comment by MondSemmel · 2023-05-23T14:01:08.867Z · LW(p) · GW(p)

All the typo comments are great, but the resolved typos are mixed in with open feedback. Is it possible to hide those or bundle them together, somehow, so they don't clutter the comments here?

Replies from: Ruby
comment by Ruby · 2023-05-23T17:36:55.473Z · LW(p) · GW(p)

I agree it's not great, though I don't have any easy/quick solution for it.

Replies from: MondSemmel
comment by MondSemmel · 2023-05-23T19:53:41.647Z · LW(p) · GW(p)

I also frequently make typo comments, and this problem is why I've begun neutral-voting my own typo comments, so they start on 0 karma. If others upvote them, the problem is that the upvote is meant to say "thanks for reporting this problem", but it also means "I think more people should see this". And once the typo is fixed, the comment is suddenly pointless, but still being promoted to others to see.

Alternatively, I think a site norm would be good where post authors are allowed and encouraged to just delete resolved typo comments and threads. I don't know, however, if that would also delete the karma points the user has gained via reporting the typos. And it might feel discouraging for the typo reporters, knowing that their contribution is suddenly "erased" as if it had never happened.

A technical alternative would be an archival feature, where you or a post author can mark a comment as archived to indicate that it's no longer relevant. Once archived, a comment is either moved to some separate comments tab, or auto-collapsed and sorted below all other comments, or something.

comment by simple_name (ksv) · 2023-05-17T07:42:50.972Z · LW(p) · GW(p)

The concepts page link in the "Exploring your interests" section seems wrong.

comment by David Gross (David_Gross) · 2023-05-17T01:45:58.612Z · LW(p) · GW(p)

Although encouraged, you don't have to read this to get started on LessWrong! 

This is grammatically ambiguous. The "encouraged" shows up out of nowhere without much indication of who is doing the encouraging or what they are encouraging. ("Although [something is] encouraged [to someone by someone], you don't have to read this...")

Maybe "I encourage you to read this before getting started on LessWrong, but you do not have to!" or "You don't have to read this before you get started on LessWrong, but I encourage you to do so!"

comment by pathfinder · 2024-10-26T02:00:35.483Z · LW(p) · GW(p)

who

redundant "who"s in bullets

Replies from: Ruby
comment by Ruby · 2024-10-26T18:37:38.606Z · LW(p) · GW(p)

Thanks! Fixed

comment by Crazy philosopher (commissar Yarrick) · 2024-08-13T18:00:38.211Z · LW(p) · GW(p)

I realized something important about psychology that is not yet publicly available, or that is very little known compared to its importance (60%). I don't want to publish this as a regular post, because it may greatly help in the development of GAI (40% that it helps and 15% that it greatly helps), and I would like to help only those who are trying to create an aligned GAI. What should I do?

Replies from: Ruby
comment by Ruby · 2024-08-13T18:04:07.958Z · LW(p) · GW(p)

I'd ask in the Open Thread [LW · GW] rather than here. I don't know of a canonical answer but would be good if someone wrote one.

Replies from: commissar Yarrick
comment by Crazy philosopher (commissar Yarrick) · 2024-06-03T18:03:16.349Z · LW(p) · GW(p)

what exactly do users lose and receive karma for?

Replies from: habryka4
comment by habryka (habryka4) · 2024-06-03T18:04:56.047Z · LW(p) · GW(p)

Karma is just the sum of votes from other users on your posts, comments and wiki-edit contributions.

comment by mocny-chlapik · 2023-07-03T19:54:19.966Z · LW(p) · GW(p)

Hey, I wonder what's your policy on linking blog posts? I have some texts that might be interesting to this community, but I don't really feel like copying everything from HTML here and duplicating the content. At the same time I know that some communities don't like people promoting their content. What are the best practices here?

comment by Pretentious Penguin (dylan-mahoney) · 2023-07-31T04:22:17.857Z · LW(p) · GW(p)

Typo: "If you arrived here out of interested in AI" instead of "If you arrived here out of interest in AI".