How Do You Interpret the Goal of LessWrong and Its Community?
post by ashen8461 · 2025-01-16T19:08:43.749Z · LW · GW · No comments
This is a question post.
I lurk on LessWrong and am grappling with a perceived misalignment between its stated goals [? · GW]—improving reasoning and decision-making—and the type of content often shared. I am not referring to content that I disagree with, or content that I think is poorly written, nor am I asking people to show me their hero license [LW · GW]. I'm referring to a style of writing that is common in the rationalist blogosphere: it often has a surprising conclusion and draws from multiple domains to answer questions. Popular examples of people who write posts in this way include Scott Alexander, Robin Hanson, johnswentworth, gwern, etc.[1] While this style of writing is fascinating and often enlightening, I wonder how much it genuinely improves reasoning or helps one be less wrong about the world. The primary goal of these kinds of posts does not seem to be to help you achieve these goals, or at the very least, they seem less efficient than other methods. Is there an implicit divide between "fun" posts on LessWrong and more productive ones?
I suspect there's a broader discourse that I may have missed despite my efforts to answer my own question before asking. If this post is repetitive or misaligned with community norms, I apologize. Thank you to those who respond for the sanity check.
- ^ The authors in this small sample obviously have very different styles and interests, not to mention that many of their posts could be considered to belong to a completely different category than "rationalist blogosphere." My grouping of this kind of writing and philosophy is based on vibes; take that how you will.
Answers
answer by Ruby
I'm curious why the section on "Applying Rationality" in the About page you cited doesn't feel like an answer.
Applying Rationality
You might value Rationality for its own sake, however, many people want to be better reasoners so they can have more accurate beliefs about topics they care about, and make better decisions.
Using LessWrong-style reasoning, contributors to LessWrong have written essays on an immense variety of topics on LessWrong, each time approaching the topic with a desire to know what's actually true (not just what's convenient or pleasant to believe), being deliberate about processing the evidence, and avoiding common pitfalls of human reason.
Beyond that, The Twelve Virtues of Rationality [LW · GW] includes "scholarship" as the 11th virtue, and I think that's a deep part of LessWrong's culture and aims:
The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinge upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion.
I would think it strange, though, if one could get better at reasoning and believing true things without actually trying to do that on specific cases. Maybe you could sketch out more of what you expect LW content to look like.
↑ comment by ashen8461 · 2025-01-16T21:13:20.048Z · LW(p) · GW(p)
Thank you for your response. On reflection, I realize my original question was unclear. At its core is an intuition about the limits of critical thinking for the average person. If this intuition is valid, I believe some members of the community should, rationally, behave differently. While this kind of perspective doesn't seem [LW · GW] uncommon [LW · GW], I feel its implications may not be fully considered. I also didn’t realize how much this intuition influenced my thinking when writing the question. My thoughts on this are still unclear, and I remain uncertain about some of the underlying assumptions, so I won’t argue for it here.
Apologies for the confusion. I no longer endorse my question.
No comments.