What are you looking for in a Less Wrong post?
post by adamShimi · 2020-08-01T18:00:04.738Z · LW · GW · No comments
This is a question post.
Contents
Answers: TurnTrout (19) · adamzerner (14) · bvbvbvbvbvbvbvbvbvbvbv (10) · Alexei (8) · wearsshoes (7) · curi (6) · Антон Желтоухов (1)
No comments
My model of "What LW as a community considers a good post" is not accurate enough for my taste. Sometimes I don't understand the karma or number of comments (or lack thereof) of some posts, and it seems rude or arrogant to ask (whether it's someone else's post or one of mine). Among other things, this might help me decide whether to spend the hours necessary to write a post, or whether it's not the kind of idea people are interested in here.
So I'm asking you, yes you, what makes a LW post that you want to read. That you want to upvote or strong upvote. That you might comment on. A good post, basically.
Answers
answer by TurnTrout
Usually I strong-upvote when I feel like a post made something click for me, or that it's very important and deserves more eyeballs. I weak-upvote well-written posts which taught me something new in a non-boring way.
As an author, my model of this is also impoverished. I'm frequently surprised by posts getting more or less attention than I expected.
answer by adamzerner
For me, it boils down to being useful.
For something to be useful, it first has to be true. From there, there's a bunch of different ways for a post to close the gap and be something that I find useful. Maybe it teaches me how to be happy [LW · GW]. Maybe it teaches something about rationality. Maybe it teaches me something about how the world works.
↑ comment by adamShimi · 2020-08-02T19:45:44.897Z · LW(p) · GW(p)
Thanks for the answer!
It makes me think of this great essay by Paul Graham.
↑ comment by Adam Zerner (adamzerner) · 2020-08-02T19:52:51.615Z · LW(p) · GW(p)
Yeah. Also Write To Say Stuff Worth Knowing by Robin Hanson.
answer by bvbvbvbvbvbvbvbvbvbvbv
I am mostly interested in the meta. By that I mean how people get their points across, how they develop their ideas, etc.
LW is full of very interesting writing styles and approaches to questions. That usually interests me more than the subject of the question itself.
Otherwise: anything related to AI or how to think ends up in my reading queue.
↑ comment by adamShimi · 2020-08-02T19:44:45.988Z · LW(p) · GW(p)
Thanks for the answer!
It is pretty interesting: do you really not care at all about the ideas themselves (except for the two topics mentioned)? A related question might be "how do you decide whether to go read a post from its title alone, if you only care about the meta?"
↑ comment by bvbvbvbvbvbvbvbvbvbvbv · 2020-08-03T19:07:57.361Z · LW(p) · GW(p)
I am a student in a field completely unrelated to any of this, and I tend to let my hobbies take too much of my time compared to what my uni asks of me.
The consequence is that I try to optimize the time I spend here. So I add what feels like future high value to my DIY reading queue [LW(p) · GW(p)]. De facto, almost anything that touches on AI and doesn't seem too easy ends up in this queue. Then whatever I think will be useful to me (e.g. the "how to be happy" essay that I discovered thanks to your comment above).
But I never add a post to my queue (or ignore it) without skimming it first. While skimming I pay close attention to the structure, the epistemic status, the way each thought was processed to be explained to others, how accessible the notions were made, whether there was a conclusion, how quickly people seem to agree or disagree, etc.
The funny thing is that I compulsively glance over every comment and every post, and I always seem to get a thought out of it. Like I never thought these things could have suspensions, or that it could be enough to use the fabric of the whole thing [LW · GW]. Out of any and all posts, there always seems to be something for me.
answer by Alexei
My bar for a normal upvote is pretty low. Sometimes I even upvote somewhat low-quality posts from new authors if I think they actually tried and it seems like they could get better with some encouragement.
Strong upvote is for a post that hits most of: useful, interesting, well written, credible, and surprising.
answer by wearsshoes
A bulleted list of answers others have written:
- Generates a new insight (TurnTrout)
- Is good for something (adamzerner)
- Shows its work (bvxn)
- Ties up its loose ends (curi)
- Resolves a disagreement (curi)
- Shows effort (Alexei)
- Well-written (Alexei)
- Surprising (Alexei)
- Credible (Alexei)
- Summarizes work (me)
And certain topical interests which LW is a topos for:
- Cognition
- AI
- Self-improvement
I'm throwing in that I like posts and comments that compress knowledge (such as this).
My further two cents are that what people answer here will be somewhat unrepresentative. The answers will be a certain set of ideal practices which your answerers may not actually implement, and even if they did, they might not represent the community at large. The honest answer to your question is probably data-driven: by scraping the site you could build a better predictive model of what content actually gets upvoted than anything people will tell you here (a rough sketch of this follows below).
But nevertheless there is value to your question. The idealized picture you'll get is in fact the picture of the ideal you want. If you take on board people's best-case answers, you'll make stuff that the most engaged people want the most of, and that will contribute to making a better community overall.
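A minimal sketch of what that scraping idea could look like, assuming LessWrong's public GraphQL endpoint at https://www.lesswrong.com/graphql and field names such as baseScore and wordCount; the query shape and fields are assumptions to check against the live schema, and the correlation is only a stand-in for a real predictive model:

```python
# Rough sketch of the "scrape the site and model upvotes" idea above.
# Assumptions: the public GraphQL endpoint and the field names below
# (title, baseScore, wordCount, commentCount); verify against the live schema.
import requests

GRAPHQL_URL = "https://www.lesswrong.com/graphql"  # assumed public endpoint

QUERY = """
{
  posts(input: {terms: {view: "top", limit: 200}}) {
    results {
      title
      baseScore
      wordCount
      commentCount
    }
  }
}
"""

def fetch_posts():
    """Fetch a batch of highly ranked posts with a few candidate features."""
    resp = requests.post(GRAPHQL_URL, json={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["posts"]["results"]

def correlation(xs, ys):
    """Pearson correlation: a crude first look at how a feature tracks karma."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

if __name__ == "__main__":
    posts = [p for p in fetch_posts() if p.get("wordCount") and p.get("baseScore") is not None]
    words = [p["wordCount"] for p in posts]
    karma = [p["baseScore"] for p in posts]
    print(f"{len(posts)} posts; corr(word count, karma) = {correlation(words, karma):.2f}")
```

Word count is just a placeholder feature; topic, title, author track record, and posting time would all be candidates for an actual model.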
answer by curi
The main issue for me, when deciding whether to write comments, is whether I think discussion to a conclusion is available. Rationalists can't just agree to disagree, but in practice almost all discussions end without agreement, and without the party choosing to end the discussion explaining their reasons for ending it. Just like at most other forums, most conversations seem to have short time limits which are very hard to override regardless of the content of the discussion.
I'm interested in things like finding and addressing double cruxes and otherwise getting some disagreements resolved. I want conversations where at least one of us learns something significant. I don't like for us each to give a few initial arguments and then stop talking. Generally I've already heard the first few things that other people say (and often vice versa too), so the value in the conversation mostly comes later. (The initial part of the discussion where you briefly say your position mostly isn't skippable. There are too many common positions, that I've heard before, for me to just guess what you think and jump straight into the new stuff.)
I occasionally write comments even without an expectation of substantive discussion. That's mostly because I'm interested in the topic and can use writing to help improve my own thoughts.
↑ comment by TAG · 2020-08-03T12:18:45.967Z · LW(p) · GW(p)
Rationalists can't just agree to disagree
If you read all the way through the RationalWiki article on Aumann's Theorem, there is a clear explanation of why it cannot apply in practice.
↑ comment by curi · 2020-08-03T20:30:04.606Z · LW(p) · GW(p)
He said, “Well, um, I guess we may have to agree to disagree on this.”
I [Yudkowsky] said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”
...
Robert Aumann’s Agreement Theorem shows that honest Bayesians cannot agree to disagree
...
Regardless of our various disputes, we [Yudkowsky and Hanson] both agree that Aumann’s Agreement Theorem extends to imply that common knowledge of a factual disagreement shows someone must be irrational.
...
Nobel laureate Robert Aumann—who first proved that Bayesian agents with similar priors cannot agree to disagree
Do you think I'm misunderstanding the sequences or do you disagree with them?
Just because it's not fully proven in practice by math doesn't mean it isn't a broadly true and useful idea.
↑ comment by TAG · 2020-08-03T21:11:43.033Z · LW(p) · GW(p)
It is fully proven by the math, but it requires a set of stringent conditions about honesty and shared information which are unlikely to obtain in real-world situations, as explained in the RationalWiki article. Did you read it?
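For reference, a compact statement of the theorem that makes those conditions explicit (a standard textbook formulation, not quoted from either comment): if agents 1 and 2 update a common prior $P$ on their private information $\mathcal{I}_1$ and $\mathcal{I}_2$, and their posterior probabilities $q_1 = P(A \mid \mathcal{I}_1)$ and $q_2 = P(A \mid \mathcal{I}_2)$ for an event $A$ are common knowledge between them, then

$$q_1 = q_2.$$

Honest reporting, a genuinely common prior, and common knowledge of the posteriors are exactly the conditions that rarely hold in practice.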
↑ comment by TAG · 2020-08-04T10:38:43.424Z · LW(p) · GW(p)
It's not that you misunderstood the summary versions, it's that the summary versions are inaccurate. In general, you should summarise something as it operates under the prevalent, realistic conditions. So "you can't use Bayes for everything" and "people aren't suddenly going to start agreeing, even if they are rational".
answer by Антон Желтоухов
For me, it's all about getting more posts on a given topic. A post could be bad in terms of text quality. It could be "false" or badly reasoned. But if I consider the topic underrated, I will upvote it.