Duncan Sabien on Moderating LessWrong
post by Davis_Kingsley · 2018-05-24T10:12:26.996Z · LW · GW · 110 comments
This is a link post for https://medium.com/@ThingMaker/moderating-lesswrong-4de028808def
Another strong post from Duncan Sabien (aka Conor_Moreton).
110 comments
Comments sorted by top scores.
comment by jimrandomh · 2018-05-25T16:21:14.189Z · LW(p) · GW(p)
Meta-meta point: the fact that you used your own post as an example, rather than a post where you weren't involved, made a very large difference in how I engaged with this article. It prevented me from finishing it. Specifically, when I got to the point where you quoted Benquo, I felt honor-bound to stop reading and go check the source material, which turned out to be voluminous and ambiguous enough that I ran out of time before I came back. This is because, long ago, I made it a bright-line rule to never read someone quoting someone they disagree with in a potentially-hostile manner without checking the original context. That wouldn't have felt necessary if the examples were drawn from discussions where you weren't involved.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T16:34:27.749Z · LW(p) · GW(p)
Upvoted; I think this is an entirely reasonable heuristic, and that it probably validly protects you in somewhere between three-out-of-four and ninety-nine-out-of-a-hundred instances.
However, I don't think that it's impossible to do the thing correctly/in a principled manner (obviously, since I attempted it). I'd be interested in a follow-up from you later in which you share your impression of something like "this is an example of why I have that heuristic in the first place" versus "my heuristic fired correctly, given starting conditions, but this ended up not being an example of why I have that policy."
(Including detail, if you feel like offering it.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T18:20:10.398Z · LW(p) · GW(p)
Although, it occurs to me that if you're holding to this heuristic in all cases, this also would've prevented you from reading a significant chunk of benquo's comments (which also were quoting someone they disagreed with in a potentially-hostile manner) without first making sure you had my original post fresh in your mind.
↑ comment by jimrandomh · 2018-05-26T01:36:19.020Z · LW(p) · GW(p)
I don't think I have it nailed down quite precisely what triggers the TAP--in practice it's actually more like a pattern-match with things I was seeing and thinking about when I established the TAP years ago, and descriptively speaking I think the trigger condition involves my information about a narrative having been filtered through a single author. I will say that, when I later made a serious effort to sort out what happened in detail (partly written up, not yet posted), it involved a lot of switching back and forth between tabs and little to no reliance on quotations.
comment by Qiaochu_Yuan · 2018-05-25T20:22:36.525Z · LW(p) · GW(p)
(Meta, in case it's relevant to anyone: it's felt to me like LW has been deluged in the last few weeks by people saying things that seem very clearly wrong to me (this is not at all specific to this post / discussion), but in such a way that it would take a lot of effort on my part to explain clearly what seems wrong to me in most of the cases. I'm not willing to spend the time necessary to address every such thing or even most of them, but I also don't have a clear sense of how to prioritize, so I approximately haven't been commenting at all as a result because it just feels exhausting. I'm making an exception for this post more or less on a whim.)
There's a lot I like about this post, and I agree that a lot of benquo's comments were problematic. That said, when I put on my circling hat, it seems pretty clear to me that benquo was speaking from a place of being triggered, and I would have confidently predicted that the conversation would not have improved unless this was addressed in some way. I have some sense of how I would address this in person but less of a sense of how to reasonably address it on LW.
There's something in Duncan's proposed norms in the direction of "be responsible for your own triggered-ness." And there's something I like about that in principle, but also I think in practice almost nobody on LW can do this reliably, including Duncan, and I would want norms that fail more gracefully in the presence of multiple people being triggered and not handling it ideally. At the very least, I want this concept of being triggered to be in common knowledge (I think it's not even in mutual knowledge at the moment) so we can talk about it when it's relevant, and ideally I'd want norms that make it okay to say things like "hey, based on X, Y, and Z I suspect you're currently a little triggered, do you want to slow down this conversation in A, B, or C way?" without this being taken as, like, a horrendous overreaching accusation.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T22:37:43.443Z · LW(p) · GW(p)
Endorsed as well, although I think I might have a disagreement re: how reasonable it is for the site to expect people to conform to certain very concrete and grokkable standards even while triggered. I would claim, for instance, that even in the Dragon Army posts, where I was definitely triggered, you can't find any examples of egregious or outright violations of the principles laid out in this essay, and that fewer than 15% of my comments would contain even moderate, marginal violations of them.
I note that this is a prediction that sticks its neck out, if anybody wants to try. I have PDFs of both threads, including all comments, that I share with anyone who requests them.
↑ comment by Vaniver · 2018-05-28T20:45:58.343Z · LW(p) · GW(p)
I have PDFs of both threads, including all comments, that I share with anyone who requests them.
I am curious about the trivial inconvenience, here; why not just share the PDFs with a link, instead of requiring people to ask you for them first?
↑ comment by Raemon · 2018-05-25T20:35:48.115Z · LW(p) · GW(p)
Highly endorse this. And this in fact might be the entirety of my crux of disagreement.
(expressing wariness about delving into the details here. I am not willing to delve into details that focus on the recent Benquo thread until I've had a chance to talk to Ben in more detail. Interested in diving into details re: past discussions I've had with Duncan, but would probably prefer to do that in a non-public setting because the nature-of-the-thing makes it harder)
[edit: I think I have cruxes separate from this, but they might be similar/entangled]
↑ comment by Benquo · 2018-05-31T08:38:58.037Z · LW(p) · GW(p)
I was definitely something like triggered, but as far as I can tell, specific bounded attempts to tell me what I was doing wrong were easy for me to evaluate, and comments that attempted to take my perspective seemed pretty far off and substantially exacerbated this. Notably, I found it comparatively easy to engage with Duncan's criticisms.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-26T00:38:27.656Z · LW(p) · GW(p)
… ideally I’d want norms that make it okay to say things like “hey, based on X, Y, and Z I suspect you’re currently a little triggered, do you want to slow down this conversation in A, B, or C way?” without this being taken as, like, a horrendous overreaching accusation.
That does seem pretty overreaching to me, though. I mean, it would certainly have the effect of “slowing down the conversation”, in that I wouldn’t want to converse with someone who used this sort of rhetorical move!
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-26T02:38:37.026Z · LW(p) · GW(p)
There has to be some way to add the hypothesis that someone is triggered into the conversation. Like, sure, maybe the given example doesn't cut it, and maybe it's hard/tricky/subtle/we won't get it on the first five tries/no one solution will fit all situations. And maybe you're pointing at something like, this isn't really a hypothesis, but is an assertion clothed so as to be sneakily defensible.
But people do get triggered, and LessWrong has got to be the kind of place where that, itself, can be taken as object—if not by the person who's currently in the middle of a triggered state, then at least by the people around them.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-26T05:03:39.169Z · LW(p) · GW(p)
There has to be some way to add the hypothesis that someone is triggered into the conversation. … But people do get triggered, and LessWrong has got to be the kind of place where that, itself, can be taken as object—if not by the person who’s currently in the middle of a triggered state, then at least by the people around them.
I don’t agree with this at all. In fact, I’d say precisely the opposite: Less Wrong has got to be exactly the kind of place where “someone is triggered” should not be added into the conversation—neither by the “triggered” person, nor by others.
My emotional state is my own business. If we are having a conversation on Less Wrong, and I do something which violates some norm, by all means confront me about that violation. I will not use being “triggered” as an excuse, a justification, or even a thing that you have any obligation at all to consider; in return, you will not psychoanalyze me and ask things like whether I am “triggered”. That is—or, rather, that absolutely should be—the social contract.
On Less Wrong (or any similar forum), the interface that we implement is “person who does not, in the context of a conversation/debate/discussion/argument, have any emotional states that have any bearing on how the interaction proceeds”. You can have all the emotional states you want; but they are implementation details, which I should not see. In another context—perhaps even another discussion, here on Less Wrong—we can discuss those emotional states just like we can discuss anything else. But implementation details must not ever affect the integrity of the interface. Likewise, my implementation details are none of your business. Obey the Law of Demeter.
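(A minimal sketch of the interface/implementation analogy and the Law of Demeter invoked above, in Python; the class and names are invented purely for illustration and are not anything from the thread:)

```python
class Commenter:
    """The 'person interface' that a forum interaction is supposed to see."""

    def __init__(self) -> None:
        # Implementation detail: private by convention, not part of the interface.
        self._emotional_state = "possibly triggered"

    def reply(self, prompt: str) -> str:
        # The interface exposes only the chosen output, never the state behind it.
        return f"My considered response to {prompt!r}."


def converse(other: Commenter, prompt: str) -> str:
    # Law of Demeter: talk only to the object you were handed, via its public
    # methods. Reaching into other._emotional_state to decide how to treat the
    # reply would mean depending on implementation details rather than the interface.
    return other.reply(prompt)


if __name__ == "__main__":
    print(converse(Commenter(), "your post seems wrong"))
```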
↑ comment by Unreal · 2018-05-26T09:16:35.064Z · LW(p) · GW(p)
Maybe this "social contract" is a fine thing for LessWrong to uphold.
But rationalists should not uphold it, in all places, at all times. In fact, in places where active truth-seeking occurs, this contract should be deliberately (consensually) dropped.
Double Cruxing often involves showing each other the implementation details. I open up my compartments and show you their inner workings. This means sharing emotional states and reactions. My cruxes are here, here, and here, and they're not straightforward, System-2-based propositions. They are fuzzy and emotionally-laden expectations, movies in my head, urges in my body, visceral taste reactions.
The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots? What tree nourishes us without fruit? If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” Though they argue, one saying “Yes”, and one saying “No”, the two do not anticipate any different experience of the forest. Do not ask which beliefs to profess, but which experiences to anticipate. Always know which difference of experience you argue about. Do not let the argument wander and become about something else, such as someone’s virtue as a rationalist. Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.” Do not be blinded by words. When words are subtracted, anticipation remains.
That is, ultimately, about implementation details (and sharing them). It's about phenomenology. And that extends to the subjective experience of not only the five senses, but emotions, thoughts, and unnamed aspects of experience.
If you don't want to open up your implementation details to me, that is cool. But we're not going to go to the depths of truth-seeking together without it. Which, again, might be fine for this forum, but I don't think that makes this place "better" for truth-seeking; I think it makes it worse.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-26T16:10:57.948Z · LW(p) · GW(p)
Double Cruxing often involves showing each other the implementation details.
Then chalk up another reason to disdain Double Cruxing.
If you don’t want to open up your implementation details to me, that is cool. But we’re not going to go to the depths of truth-seeking together without it.
This (“go to the depths of truth-seeking together”) is certainly an attitude that I would not like to see become prevalent on Less Wrong.
↑ comment by habryka (habryka4) · 2018-05-26T18:41:01.754Z · LW(p) · GW(p)
(Noting disagreement with your position here, and willingness to expand on that at some other point in time when there are not 5 discussions going on on LW that I feel like I want to participate in more. Rough model outline is something like: "I think close friendships and other environments with high barriers to entry can indeed strongly benefit from people modeling each other's implementation details. Most of the time when environments with a lower standard of selection or trust try to do this, it ends badly, though it's not obvious to me that they always have to go badly, or whether there are hypothetical procedures or norms that allow this conversation to go well even in lower-trust environments, though I haven't yet seen such a set of norms robustly in action.")
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-26T18:48:48.131Z · LW(p) · GW(p)
Sure; this is certainly a conversation I’m open to having, and I do understand the limitations of time and attention. What you just outlined is also helpful; I look forward to when you have the chance to expand on it.
↑ comment by Raemon · 2018-05-26T18:41:47.495Z · LW(p) · GW(p)
Something that I guess I've never quite gotten is, in your view Said, what is Less Wrong for? In 20 years if everything on LW went exactly the way you think is ideal, what are the good things that would have happened along the way, and how would we know that we made the right call?
(I have my own answers to this, which I think I've explained before but if I haven't done a clear enough job, I can try to spell them out)
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-26T19:20:37.352Z · LW(p) · GW(p)
That’s a good question, but a tricky one to answer directly. I hope you’ll forgive me if (à la Turing) I substitute a different one, which I think sheds light on the one you asked:
In the time Less Wrong has existed, what has come out of it, what has happened as a result, that is good and positive; and, contrariwise, what has happened that is unfortunate or undesirable?
Here’s my list, which I expect does not match anyone else’s in every detail, but the broad outlines of which seem to me to be clearly reasonable. These lists are in no particular order, and include great and small things alike:
Pros
- The Sequences
- Certain other posts or sequences, such as many of Scott’s posts, Alicorn’s “luminosity” sequence, and a small handful of others
- Interesting/useful new results in, e.g., decision theories
- The recruitment of talented mathematicians/etc. to MIRI, and their resulting work on the alignment problem and related topics
- The elevation of the AI alignment problem into mainstream consciousness
- “Three Worlds Collide” (and, more broadly, the genre of “rationalist fiction”, which certainly includes a lot of dross, but also some gems, and breaks some fruitful new ground in fiction-space)
- Certain elements of the online “rationalist diaspora”, such as the truly wonderful Slate Star Codex and a tiny handful of other worthy blogs, and certain chat rooms and similar online spaces
- The Kocherga anticafe (which has no analogues in the United States—itself a fact which deserves serious consideration!) and the surrounding social activities; also, similar (though, to my knowledge, all smaller-scale) endeavors elsewhere
- The concept of effective altruism
Cons
- Almost everything which (to my knowledge) takes place in both the Bay Area and the New York rationalist communities (meaning no offense to many people involved in the latter, at least some of whom are—at least in my limited experience—genuinely decent folks)
- Almost all “rationalist communities” in the United States in general
- The concept of the “rationalist community”, period
- The vast mass of sheer nonsense that deluged Less Wrong for the half-decade (at least!) leading up to the creation of Less Wrong 2.0
- Assorted absurdities and scandals involving various mythical reptiles
- The promotion and increasing acceptance of certain reckless and harmful behaviors
- The promotion and increasing acceptance of certain anti-epistemologies
- Almost everything that CFAR does and has done
- The Effective Altruism movement
It’s hard to say what Less Wrong is for; but what I, personally, would want out of this site, is more of the things on the former list, and no more of the things on the latter. If, in 20 years, the list of Pros has expanded greatly, with more of the same; and the list of Cons has not only not been added to, but the current entries on it forgotten and passed into misty memory, left far behind and overshadowed by the Pros—well, I, at least, will say that you made all the right calls.
↑ comment by ESRogs · 2018-05-29T17:47:45.850Z · LW(p) · GW(p)
Pros
4. The recruitment of talented mathematicians/etc. to MIRI, and their resulting work on the alignment problem and related topics
5. The elevation of the AI alignment problem into mainstream consciousness
Cons
1. 2. rationalist communities
8. Almost everything that CFAR does and has done
9. The Effective Altruism movement
Just want to note that I think you may be underestimating the extent to which these things on your Cons list have contributed to these things on your Pros list.
For example:
- The EA movement funds MIRI and other AI Safety efforts.
- CFAR co-runs the AI Summer Fellows Program, which has directly led to several MIRI hires.
- More generally, CFAR and the rationalist community have served as a funnel for MIRI.
- The Future of Life Institute (which has promoted and helped fund work on the alignment problem) was founded by CFAR alumni who knew each other from the Boston rationalist community.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-29T20:05:36.260Z · LW(p) · GW(p)
If the point is “the Cons are not all bad; they are partly good, to the extent that they contribute to (or are perhaps even necessary for?) the Pros”, then—granted.
If the point is “the Cons are not bad at all, and the reasons for considering them to be bad do not exist, because of the fact that they also contribute to the Pros”, then that is revealed to be manifestly incoherent as soon as it’s made explicit.
If the point is something else entirely, then I reserve judgment until clarification.
↑ comment by Unreal · 2018-05-29T20:47:35.704Z · LW(p) · GW(p)
If you found out some of those cons (or some close version of them) were necessary in order to achieve those pros, would anything shift for you?
For instance, if you see people acting to work on/improve/increase the cons... would you see those people as acting badly/negatively if you knew it was the only realistic way to achieve the pros?
(This is just in the hypothetical world where this is true. I do not know if it is.)
Like, what if we just live in a "tragic world" where you can't achieve things like your pros list without... basically feeding people's desire for community and connection? And what if people's desire for connection often ends up taking the form of wanting to live/work/interact together? Would anything shift for you?
(If my hypothetical does nothing, then could you come up with a hypothetical that does?)
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-29T21:35:07.685Z · LW(p) · GW(p)
If you found out some of those cons (or some close version of them) were necessary in order to achieve those pros, would anything shift for you?
This question is incomplete. The corrected version would read:
“If you found out some of those cons (or some close version of them) were necessary in order to achieve some of those pros, would anything shift for you?”
Given this, the answer is “of course, and it would depend on which Cons were necessary in order to achieve which Pros”.
Now, you said that your question is a mere hypothetical, but let’s not obfuscate: clearly, if not you, then at least other folks here, think that your hypothetical scenario describes reality. But as Ray commented elsethread, this is hardly the ideal context to hash out the details of this topic. So I won’t. I will, however, ask you this:
Do you think that some of the Cons on my list are necessary in order to achieve some of the Pros? (No need to provide details on which, etc.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-29T20:51:41.126Z · LW(p) · GW(p)
(shucks, now I'm kind of ashamed of my own reply to Said above, which is not nearly as skillful as this)
↑ comment by ESRogs · 2018-05-29T22:41:34.115Z · LW(p) · GW(p)
If the point is “the Cons are not all bad; they are partly good, to the extent that they contribute to (or are perhaps even necessary for?) the Pros”, then—granted.
Yes, this is the point. (I wouldn't personally put it quite that way, since by my own evaluation the things I mentioned -- EA, CFAR, rationalist communities -- are much better than "not all bad" makes it sound. But yes, it seems like someone who values the things on your pros list should at least think that those things are not all bad.)
then—granted.
For clarity -- when you say, "granted", do you mean, "Yes, I already believed that, and I stand by my pros and cons list, as written." Or do you mean, "Good point. You've given me an update, and I would no longer endorse the statement, 'Almost everything CFAR has done belongs on the con side of a pros-and-cons list.'"?
If the former (such that you would still endorse the "Almost everything..." statement), I would challenge whether that position is consistent with both 1) highly valuing the things on your pros list, and also 2) having an accurate view of the facts on the ground of what CFAR is trying to accomplish and has actually accomplished.
I could see that position being consistent if you thought CFAR's other actions were highly negative. But my guess is that you see them being closer to useless (and widely overvalued), rather than so negative as to make their positive contributions a rounding error.
In any case, I'm happy to table that debate if you'd like, as has been suggested in other comments.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-30T01:18:00.945Z · LW(p) · GW(p)
For clarity—when you say, “granted”, do you mean, “Yes, I already believed that, and I stand by my pros and cons list, as written.” Or do you mean, “Good point. You’ve given me an update, and I would no longer endorse the statement, ‘Almost everything CFAR has done belongs on the con side of a pros-and-cons list.’”?
The middle way, viz.:
Good point. You’ve given me an update, and I would still endorse the statement, ‘Almost everything CFAR has done belongs on the con side of a pros-and-cons list.’
… I would challenge whether that position is consistent with both 1) highly valuing the things on your pros list, and also 2) having an accurate view of the facts on the ground of what CFAR is trying to accomplish and has actually accomplished.
A reasonable challenge—or, rather, half of one; after all, what CFAR “is trying to accomplish” is of no consequence. What they have accomplished, of course, is of great consequence. I allow that I may have an inaccurate view of their accomplishments. I would love to see an overview, written by a neutral third party, that summarizes everything that CFAR has ever done.
I could see that position being consistent if you thought CFAR’s other actions were highly negative. But my guess is that you see them being closer to useless (and widely overvalued), rather than so negative as to make their positive contributions a rounding error.
I’m afraid your guess is mistaken (though I would quibble with the “rounding error” phrasing—that is a stronger claim than any I have made).
↑ comment by ESRogs · 2018-05-30T17:28:59.314Z · LW(p) · GW(p)
A reasonable challenge—or, rather, half of one; after all, what CFAR “is trying to accomplish” is of no consequence. What they have accomplished, of course, is of great consequence.
That's fair. I include the "trying" part because it is some evidence about the value of activities that, to outsiders, don't obviously directly cause the desired outcome.
(If someone says their goal is to cause X, and in fact they do actually cause X, but along the way they do some seemingly unrelated activity Y, that is some evidence that Y is necessary or useful for X, relative to if they had done Y and also happened to cause X, but didn't have causing X as a primary goal.
In other words, independently of how much someone is actually accomplishing X, the more they are trying to cause X, the more one should expect them to be attempting to filter their activities for accomplishing X. And the more they are actually accomplishing X, the more one should update on the filter being accurate.)
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-30T17:51:25.762Z · LW(p) · GW(p)
I don’t think that I agree with this framing. (Consider the following to be a sort of thinking-out-loud.)
Suppose that activity Y is, to an outsider (e.g., me), neutral in value—neither beneficial nor harmful. You come to me and say: “I have done thing X, which you take to be beneficial; as you have observed, I have also been engaging in activity Y. I claim that Y is necessary for the accomplishment of X. Will you now update your evaluation of Y, and judge it no longer as neutral, but in fact as positive (on account of the fact—which you may take my word, and my accomplishment of X, as evidence—that Y is necessary for X)?”
My answer can only be “No”. No, because whatever may or may not be necessary for you to accomplish outcome X, nonetheless it is only X which is valuable to me. How you bring X about is your business. It is an implementation detail; I am not interested in implementation details, when it comes to evaluating your output (i.e., the sum total of the consequences of all your actions).[1]
Now suppose that Y is not neutral in my eyes, but rather, of negative value. I tally up your output, and note: you have caused X—this is to your credit! But, at the same time, you have done Y—this I write down in red ink. And again you come to me and say: “I see you take X to be positive, but Y to be negative; but consider that Y is necessary for X [which, we once again assume, I may have good reason to trust is the case]! Will you now move Y over to the other side of the ledger, seeing as how Y is a sine qua non of X?”
And once again my answer is “No”. Whatever contribution Y has made to the accomplishment of X, I have already counted them—they are included in the value I place on X! To credit you again for doing Y, would be double-counting.[2] But the direct negative value of Y to me—that part has not already been included in my evaluation of X; so indeed I am correct in debiting Y’s value from your account.
And so, in the final analysis, all questions about what you may or may not have been “trying” to do—and any other implementation details, any other facts about how you came by the outcomes of your efforts—simply factor out.
Of course your implementation details may very well be of interest to me when it comes to predicting your future output; but that is a different matter altogether! ↩︎
Note, by the way, that this formulation entirely removes the need for me to consider the truth of your claim that Y is necessary for X. Once again we see the correctness of ignoring implementation details, and looking only at outcomes. ↩︎
↑ comment by Raemon · 2018-05-30T18:06:40.041Z · LW(p) · GW(p)
It seems like there's a consistent disagreement here about how much implementation details matter.
And I think it's useful to remember that things _are_ just implementation details. Sometimes you're burning coal to produce energy, and if you wrap up your entire thought process around "coal is necessary to produce energy" you might not consider wind or nuclear power.
But realistically I think implementation details do matter, and if the best way to get X is with Y... no, that shouldn't lead you to think Y is good in-and-of-itself, but it should affect your model of how everything fits together.
Understanding the mechanics of how the world works is how you improve how the world works. If you abstract away all the lower level details you lose the ability to reconfigure them.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-30T18:25:37.229Z · LW(p) · GW(p)
I don’t disagree with what you say, but I’m not sure that it’s responsive to my comments. I never said, after all, that implementation details “don’t matter”, in some absolute sense—only (here) that they don’t matter as far as evaluation of outcomes goes! (Did you miss the first footnote of the grandparent comment…?)
Understanding the mechanics of how the world works is how you improve how the world works. If you abstract away all the lower level details you lose the ability to reconfigure them.
Yes, of course. But I am not the one doing any reconfiguring of, say, CFAR, nor am I interested in doing so! It is of course right and proper that CFAR employees (and/or anyone else in a position to, and with a motivation to, improve or modify CFAR’s affairs) understand the implementation details of how CFAR does the things they do. But what is that to me? Of academic or general interest—yes, of course. But for the purpose of evaluation…?
↑ comment by Raemon · 2018-05-30T19:07:05.522Z · LW(p) · GW(p)
It seemed like it mattered with regard to the original context of this discussion, where the thing I was asking was "what would LW output if it were going well, according to you?" (I realize this question perhaps unfairly implies you cared about my particular frame in which I asked the question)
If LessWrong's job was to produce energy, and we did it by burning coal, pollution and other downsides might be a cost that we weigh, but if we thought about "how would we tell things had gone well in another 20 years?", unless we had a plan for switching the entire plant over to solar panels, we should probably expect roughly similar levels of whatever the costs were (maybe with some reduction based on efficiency), rather than those downsides disappearing into the mists of time.
[edit: mild update to first paragraph]
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-30T19:23:19.022Z · LW(p) · GW(p)
(I realize this question perhaps unfairly implies you cared about my particular frame in which I asked the question)
Sure, but more importantly, what you asked was this:
In 20 years if everything on LW went exactly the way you think is ideal, what are the good things that would have happened along the way, and how would we know that we made the right call?
[emphasis mine]
Producing energy by burning coal is hardly ideal. As you say upthread, it’s well and good to be realistic about what can be accomplished and how it can be accomplished, but we shouldn’t lose track of what our goals (i.e., our ideals) actually are.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-29T20:20:03.268Z · LW(p) · GW(p)
There may need to be more buckets than "pro" and "con."
I propose "negative," "neutral," "positive," "instrumental," and "detrimental."
Thus you can get things like "negative and yet instrumental" or "positive and yet detrimental," where the first word is the thing taken reasonably in isolation and judged against a standard of virtue or quality, and the second word is the ramifications of the thing's existence in the world in a long-term consequentialist sense.
(So returning to my favorite local controversy, punching people is Negative, but it's possible that punch bug might consequentially be Instrumental for societies filled with good people that are overall on board with nonviolence and personal sovereignty.)
Other categorizations might do better to clarify cruxes ... this was my attempt to create a paradigm that would allow you to zero in on the actual substance of disagreement.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-29T20:30:40.865Z · LW(p) · GW(p)
Let’s not reinvent the wheel here.
You’re talking about means and ends (which, in a consequentialist framework, are, of course, just “ends” and “other ends”).
(Your example may thus be translated as “punching people is negative ceteris paribus, as it has direct, immediate, negative effects; however, the knock-on effects, etc., may result in consequences which, when all aggregated and integrated over some suitable future period, are net positive”. Of course this gets us into the usual difficulties with aggregation, both intra-personally and interpersonally, but these may probably be safely bracketed… at least, provisionally.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-29T20:50:34.614Z · LW(p) · GW(p)
I'm talking about you and ESRogs zeroing in on where you disagree, because at least one of you is wrong and has a productive opportunity to update. Sorry if the example of punch bug was distracting, but I suspect fairly strongly that it is inappropriate and oversimplified to just have a pros-and-cons list in the case of these large evaluations you're making—not least because in a black-or-white dichotomy, you lose resolution on the places where your assumptions actually differ.
↑ comment by tcheasdfjkl · 2018-05-28T01:16:28.947Z · LW(p) · GW(p)
Confused and curious about why you put Kocherga in positives and all other rationalist social/community/meatspace things in negatives. I don't think the difference between the two is that large. (I'm a Bay Area rationalist-type person who has been to a couple of things at Kocherga)
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-28T01:29:36.227Z · LW(p) · GW(p)
The thing is that “rationalist social/community/meatspace things” is a wrong category. It lumps together things that are very different.
In general, the way that “rationalists” use the word (and concept) “community” leads them into error, of exactly this sort. That makes it difficult to discuss this sort of thing productively.
Unfortunately, untangling this is beyond the scope of a comment thread, especially a borderline-off-topic one like this. At some point I may attempt it, in top-level-post form.
↑ comment by Wei Dai (Wei_Dai) · 2018-05-27T00:24:03.547Z · LW(p) · GW(p)
I'm really curious about the cons (even 5 where I'm only aware of one scandal). Can you link to some existing explanations, provide a summary of why you think each item on your cons list is bad, and/or write up your thoughts in detail at some point?
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-27T01:10:28.470Z · LW(p) · GW(p)
Hmm. I’m not sure that would be a great idea. For all that I disagree strongly (to say the least) with much of what I listed, still it seems to me that most of the folks involved aren’t bad people; it doesn’t quite feel right to write up a “this is everything I think is bad about Less Wrong and everyone involved with it” sort of essay. Part of the reason I hesitate is that I don’t have all that much skin in the game; I am not really a member of any of these “rationalist communities”, so in a sense my criticisms will be those of an outsider. How much value is there, in such a thing? I don’t know. Certainly many of those who are closer to the things than I am, have had harsh enough things to say, on all the subjects I listed. It hardly seems necessary for me to add to that.
I wrote the grandparent comment in order to communicate, to Ray (and the rest of the LW team, and any others who may be in a position to affect the future of Less Wrong), what my views on this matter are. It seems to me that I’ve succeeded in that. So, as to your question… meaning no disrespect at all, I’d prefer, if possible, not to turn this discussion into an airing of dirty laundry.
↑ comment by Wei Dai (Wei_Dai) · 2018-05-27T07:06:12.497Z · LW(p) · GW(p)
Certainly many of those who are closer to the things than I am, have had harsh enough things to say, on all the subjects I listed. It hardly seems necessary for me to add to that.
Can you (or anyone else) link to such complaints? (For example I don't think I've ever seen a complaint that almost all “rationalist communities” in the United States or the concept of “rationalist community” is harmful on net.)
I wrote the grandparent comment in order to communicate, to Ray (and the rest of the LW team, and any others who may be in a position to affect the future of Less Wrong), what my views on this matter are.
My model of the LW team is that they would disagree with a lot of your cons prior to seeing your views, and seeing your views (without knowing the reasoning behind them) wouldn't cause them to make a large update in your direction. Would you agree with this, and if so how do you plan to cause them to update or otherwise to accomplish the goal of "the list of Cons has not only not been added to, but the current entries on it forgotten and passed into misty memory, left far behind and overshadowed by the Pros"?
I’d prefer, if possible, not to turn this discussion into an airing of dirty laundry.
Why not? You mentioned "most of the folks involved aren’t bad people" but if they are actually doing bad things surely it doesn't make sense to let them keep doing those things just to spare their feelings?
↑ comment by Raemon · 2018-05-27T09:35:06.816Z · LW(p) · GW(p)
FWIW, I appreciated Said giving a response that was a succinct but comprehensive answer – I think further details might make sense as a top-level post but would probably take this thread in too many different directions. I think there's something useful for people with really different worldviews being able to do a quick exchange of the high level stuff without immediately diving into the weeds.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-27T07:52:39.410Z · LW(p) · GW(p)
My model of the LW team is that they would disagree with a lot of your cons prior to seeing your views, and seeing your views (without knowing the reasoning behind them) wouldn’t cause them to make a large update in your direction.
To a first approximation, nothing that anyone ever says to anyone, on any topic on which the target already has any sort of opinion, causes them to make a large update in the speaker’s direction. All we can hope for is small updates (assuming we do not discard the “update” model altogether—which I rather think it’s time we did; but that is a separate discussion).
Why not? You mentioned “most of the folks involved aren’t bad people” but if they are actually doing bad things surely it doesn’t make sense to let them keep doing those things just to spare their feelings?
If others, who are closer to the matter, have already spoken, and more specifically, more critically, and more plainly, then what hope do I have of stopping anyone from doing any bad things? My intent is to do what I can to nudge Less Wrong toward the direction in which I think it should go. That is all. I have no greater ambitions for this discussion.
↑ comment by DanielFilan · 2018-05-26T19:35:31.284Z · LW(p) · GW(p)
The Kocherga anticafe (which has no analogues in the United States—itself a fact which deserves serious consideration!) and the surrounding social activities
Is there any chance you could let those of us who speak English but not Russian know what that is?
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-26T06:02:38.434Z · LW(p) · GW(p)
Okay, largely convinced, given stronger norms around what sorts of behavior are okay and what aren't. I think I was thinking that openly addressing triggered-ness as object would be helpful in finding the best way to return a conversation to normalcy.
I still don't like the idea of having a particular set of hypotheses being taboo; I can buy an instrumental argument that we might want to make an exception around triggeredness that's similar to the exceptions around positing that someone might have a lot of unacknowledged racist biases—
(I think we stay away from that on consequentialist grounds and not because we don't form and check those hypotheses in subtle ways)
—but in general I think LW should be a place where there's always a correct, dispassionate, epistemically careful, and socially neutral way to say pretty much anything.
But overall, I like your ... Turing test? ... argument. "If it's posting like a LWer, and replying like a LWer, and acting like a LWer, then it's a LWer; doesn't matter what its internal state is." I'd be willing to give up a small swath of conversational space, to purchase that. Indeed, other people misreading me as triggered and playing status games (when that wasn't my experience and the comments I'm making aren't evidence for that hypothesis except circumstantially/via pattern-matching) has been a big headache for me.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-26T07:08:16.249Z · LW(p) · GW(p)
“If it’s posting like a LWer, and replying like a LWer, and acting like a LWer, then it’s a LWer; doesn’t matter what its internal state is.” I’d be willing to give up a small swath of conversational space, to purchase that.
Indeed.
I still don’t like the idea of having a particular set of hypotheses being taboo; I can buy an instrumental argument that we might want to make an exception around triggeredness that’s similar to the exceptions around positing that someone might have a lot of unacknowledged racist biases—
Exactly. We can make it even more stark:
“Have you considered that maybe you only think that because you’re just really stupid? What’s your IQ?”
“Have you considered that maybe you’re a really terrible person and a sociopath or maybe just evil?”
[to a woman] “You seem angry, is it that time of the month for you?”
etc.
We don’t say these sorts of things. Any of them might be true. But we don’t say them, because even if they are true, it’s none of our business. Really, the only hypothesis that needs to be examined for “why person X is saying thing Y” is “they think that it’s a good idea to say thing Y”.
Note that this is a very broad class of hypotheses. It’s much broader, in particular, than merely “person X thinks that thing Y is [insofar as it constitutes any sort of proposition(s)] true”. It excludes only things where you say something, not because you’re consciously choosing to say it in the service of some conversational (or other) goal, but because you’re compelled to say it, by forces outside of your control.
And maybe you are. But to the extent that you do not choose to say a thing, but are compelled to say it, we—your interlocutors—are not interacting with you. Rather, we are interacting with the abstract person-interface which “you” are implementing, which—by specification—chooses to say and do things, and is not compelled to do anything.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-26T07:36:23.239Z · LW(p) · GW(p)
“Have you considered that maybe you’re a really terrible person and a sociopath or maybe just evil?”
I'll note that, empirically, we do say these things. Or at least, people say them to me, and they're net upvoted, and no one takes a public stance against it, mods included. And I'm not just referring to benquo or to the overt troll in the original Dragon thread, either.
(There's a BIG difference between, e.g., Ray silently private messaging Benquo, and Ray saying out loud in the thread "I'm privately messaging Benquo about this.")
↑ comment by Said Achmiz (SaidAchmiz) · 2018-05-26T07:41:04.115Z · LW(p) · GW(p)
Well, empirically, we also say the stuff about being triggered. I’m saying that we shouldn’t say either sort of thing.
↑ comment by habryka (habryka4) · 2018-05-26T17:24:24.921Z · LW(p) · GW(p)
(I am curious about examples of this, either here or via PM. I think I basically agree with you that there were multiple periods in which we haven't been able to reliably moderate all content on LW, but I also care about setting the historical record straight, and right now we have a bunch more resources for moderation available than we had over the last few weeks, so it might still be the correct call to add mod annotations to those threads, saying that these things are over the line. I have somewhat complicated feelings about writing publicly that we are in a private conversation with someone, since that does tend to warp expectations a bunch, but I am still pretty open to making it a policy that when we ping users about infractions in private, we also make a relevant note on the thread, and the benefits might just reliably outweigh the costs here.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-26T17:34:25.634Z · LW(p) · GW(p)
The vast majority of the examples are in the two Dragon Army threads, one from LW1 and the other from LW2, which are now Gone. I am willing to share the PDFs with you (Ray already has them), but in this case there's no useful retroactive action.
(The rest are in the thread quoted in this essay, and at my last check (last night) are still un-addressed.)
↑ comment by habryka (habryka4) · 2018-05-26T18:33:12.287Z · LW(p) · GW(p)
I don't remember anything in the second Dragon Army thread that fit this pattern, but it's been a while and I was pretty busy at the time and don't think I was able to read everything before the thread got removed, so I would be curious about the pdf.
Agree that there are things unresolved in the thread quoted here. I definitely plan to address them, but currently want to wait until the private conversations we are having with people come to a natural stop.
↑ comment by Unreal · 2018-05-26T21:08:08.895Z · LW(p) · GW(p)
add mod annotations to those threads, saying that these things are over the line
i find this idea very distasteful
↑ comment by habryka (habryka4) · 2018-05-26T21:34:55.234Z · LW(p) · GW(p)
Interested in more input on this. It seems obvious to me that future readers of the original Dragon Army thread should not come away thinking that writing stuff like the numbers guy did will not result in a ban or punishment. And since I want LessWrong to be a timeless archive, it's important for historical discussion to be curated to a similar standard as present discussion.
↑ comment by Unreal · 2018-05-26T21:56:14.323Z · LW(p) · GW(p)
If you only plan on annotating past discussions that have long-since died, I mind a lot less. But for a discussion that is still live or potentially live, it feels like standing on a platform and shouting through a loudspeaker. I'd advocate for only annotating comments without any activity within the past X months.
↑ comment by habryka (habryka4) · 2018-05-26T22:01:10.545Z · LW(p) · GW(p)
Ah, yes. I was thinking of all the old stuff that is much older than that (such as the original DA thread). Anything that's still active should have different policies.
↑ comment by Benquo · 2018-05-27T06:18:05.678Z · LW(p) · GW(p)
Can you give an example of where I said this?
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-28T01:46:59.577Z · LW(p) · GW(p)
I suspect all mods would prefer that you and I not directly engage just yet, until there's structure in place for a facilitated and non-weaponized conversation.
↑ comment by Benquo · 2018-05-28T07:20:33.562Z · LW(p) · GW(p)
The comment I was responding to was attributing an opinion to me. A norm (even a temporary one) in which you can do that, but I can't ask for evidence, seems like it ends up allowing whichever of us is more interested in the exercise to snipe at the other unchallenged pretty much indefinitely.
I'm not interested in sniping at you right now; I'm just interested in people parsing the literal content of my comments (and your posts) and not attributing to me things that I did not in fact say.
↑ comment by Vaniver · 2018-05-28T20:52:20.134Z · LW(p) · GW(p)
A norm (even a temporary one) in which you can do that, but I can't ask for evidence, seems like it ends up allowing whichever of us is more interested in the exercise to snipe at the other unchallenged pretty much indefinitely.
To be clear on my view (as a mod), it is fine for you to ask for evidence (note that habryka did as well, earlier), and also fine for Duncan to disengage. I suspect that the world where he disengages is better than the one where he responds, primarily because it seems to me like handling things in a de-escalatory way often requires not settling smaller issues until more fundamental ones are addressed.
I do note some unpleasantness here around the question of who gets "the last word" before things are handled a different way, where any call to change methods while a particular person is "up" is like that person attempting to score a point, and I frown on people making attempts to score points if they expect the type of conversation to change shortly.
As a last point, the word "indefinitely" stuck out to me because of the combination with "temporary" earlier, and I note that the party who is more interested in repeatedly doing the 'disengage until facilitated conversation' move is also opening themselves up to sniping in this way.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-28T21:23:18.550Z · LW(p) · GW(p)
In particular, there is something happening here that I notice myself wanting to narrativize as weaponized disingenuousness (which is probably not Ben's intention) that's like ...
I'm just interested in people ... not attributing to me things that I did not in fact say.
... politely following the rules, over here in this thread, and by example of virtuous action making me seem unreasonable for not wanting to reply ...
... whereas over in the other thread, I get the impression that this exact rule is the one he was breaking (e.g. when he explicitly asserted that I want to ghettoize people, when what I said was that we could treat people who found punch bug norms highly costly in a manner analogous to how we treat people with peanut allergies (to the best of my knowledge, there is no ghetto in which we confine people with peanut allergies)).
It reminds me of the phrase peace treaties are not suicide pacts. In fact the norm Ben is pushing for here is one I already follow, the vast majority of the time, except in cases where I see the other person as having already repeatedly demonstrated that they don't hold themselves to the same standard. I don't like being made to look bad for having a superseding principle prevent me from proving, in this case, that I am in fact principled in this way, too.
My favorite world would be one in which someone else would reliably make points such as this one, and so I could disengage in this particular likely-to-be-on-tilt case, while also feeling that all the things which "need" to be said will be taken care of.
↑ comment by Benquo · 2018-05-31T08:20:15.757Z · LW(p) · GW(p)
Duncan's comment here persuaded me to go search for cases where my use of "ghetto" was ambiguous between quoting Duncan and making a claim about what his proposal implied. I've added clarifying notes in the cases that seemed possibly ambiguous to me. If anyone (including but not limited to Duncan) points out cases I've missed, and I agree that they're potentially ambiguous, I'll be happy to correct those as well.
I still stand by the claim, but it's important to distinguish that claim from a false impression that Duncan said that he envisioned ghettoes for people who don't want to play punchbug. He didn't say that.
One thing that makes Duncan's criticisms comparatively easy to evaluate here is that he's grounding things in the object-level text with a fairly high degree of precision. I don't always agree with the criticisms, of course, and sometimes strongly dispute his characterization of what I meant (though that's at least evidence that something I wrote was unclear).
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-31T09:05:46.083Z · LW(p) · GW(p)
Upvoted, and appreciated on a visceral, emotional level.
comment by Raemon · 2018-05-25T20:21:32.879Z · LW(p) · GW(p)
There's a set of moderation-challenges that the post doesn't delve into, which are the ones I struggle most with – I don't have a clear model of what it'd mean to solve these, whereas the challenges pointed to in the OP seem comprehensible, just hard. I'm interested in thoughts on this.
1. Difficulty with moderating just-over-the-line comments, or non-legibly-over-the-line comments
The most common pattern I run into, where I'm not sure what to do, is patterns of comments from a given user that are either just barely over the line, or where each given comment is under the line, but so close to a line that repetition of it adds up to serious damage – making LW either not fun, or not safe feeling.
The two underlying generators I'm pointing at here seem to be:
- Not actually approaching a discussion collaboratively. Sometimes this is legit bad faith, and sometimes it's just... not enough willingness to do interpretive labor, or a vague underlying sense that you're not trying to be on the same team.
- Not being up-to-speed enough to contribute to a discussion. This feels harshest to take action on, even when I'm putting on my maximally friendly hat. Sometimes someone shows up and clearly hasn't read the sequences. Sometimes they have read all the relevant background info but just don't seem to be getting what the conversation is about, or are derailing it into a more basic version of itself (when it was aiming to be a higher level conversation)
These involve the most misunderstandings.
2. Negative Space
Sometimes, it's not what a commenter has said – it's what they haven't said. They're making some points that seem reasonable, but they're ignoring other points.
Sometimes this is because they're actually ignoring points that are uncomfortable, or they don't have access to a mental skill that allows them to notice or process something.
Other times, they respond to a bunch of comments with another full-blown essay that addresses everything at once (but take weeks/months/years to get around to it).
↑ comment by namespace (ingres) · 2018-05-26T09:14:08.249Z · LW(p) · GW(p)
The most common pattern I run into, where I’m not sure what to do, is patterns of comments from a given user that are either just barely over the line, or where each given comment is under the line, but so close to a line that repetition of it adds up to serious damage – making LW either not fun, or not safe feeling.
What I used to do on the #lesswrong IRC was, every time I saw someone make a comment like this, put it into a journal, and then once I found myself really annoyed with them I'd open the journal to help establish the pattern. I'd also look at people's individual chat history to see if there's a consistent pattern of them doing the thing routinely, or if it's a thing they just sometimes happen to do.
I definitely agree this is one of the hardest challenges of moderation, and I pretty much always see folks fail it. IMO, it's actually more important than dealing with the egregious violations, since those are usually fairly legible and just require having a spine.
My most important advice would be don't ignore it. Do not just shrug it off and say "well nothing I can do, it's not like I can tell someone off for being annoying". You most certainly can and should for many kinds of 'annoying'. The alternative is that the vigor of a space slowly gets sucked out by not-quite-bad-actors.
↑ comment by Qiaochu_Yuan · 2018-05-26T00:19:27.717Z · LW(p) · GW(p)
Not actually approaching a discussion collaboratively.
Not being up-to-speed enough to contribute to a discussion.
Yeah, these are two of the things that have been turning me off from trying to keep up with comments the most. I don't really have any ideas short of incredibly aggressive moderation under a much higher bar for comments and users than has been set so far.
Replies from: Zvi↑ comment by Zvi · 2018-05-31T13:38:00.163Z · LW(p) · GW(p)
Worth noting that we currently have a lot of discussion going on, and the more we have, the more inclined I am to support moderation raising its standards for what is acceptable, and getting more aggressive. A shift from 'not generating discussion' to 'generating lots of discussion some of which may be bad' changes the situation a lot.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T22:38:33.207Z · LW(p) · GW(p)
+5
comment by Raemon · 2018-05-25T20:01:13.320Z · LW(p) · GW(p)
This article gave me a bunch of food for thought. I don't think it addresses my main cruxes re: previous disagreements I've had with Duncan, but it definitely gave me some new ideas and new vantage points to view old ones.
(Note 1: I won't be commenting on Duncan's comments on Benquo's comments because I'm still in the process of chatting with Benquo about it. I have a number of relevant disagreements with both Ben and Duncan, hope to resolve those disagreements at some point, but meanwhile don't have the bandwidth I'd require to engage with both of them at once.)
Some thoughts so far:
1. Hierarchy of Goals
The hierarchy of "purposes of LessWrong" that Duncan describes is roughly the same one I'd describe. A concern or difference in framing I have here is that several of the stages reinforce each other in a cyclical fashion.
I'm not quite sure you can cleanly prioritize truth over truthseeking culture.
If our culture isn't outputting useful accumulation of knowledge, then it's failing at our core mission. Definitely. But in the situations where truthseeking-culture vs truth seem to be in conflict, I think it's often because there's a hard problem that defies easy answers. (see "Decoupled vs Contextual [LW · GW]" and "Tensions in Truthseeking [LW · GW]" for rough examples).
2. "Every Comment Gets Read, and acted on in some fashion"
I agree that this is an important goal worth striving for. I don't think it's achievable for the immediate future due to resource constraints, but useful to have in mind as the bar to measure against.
I think we've actually recently made a lot of progress on 80/20ing this – the new moderator sidebar forces someone to engage with all new posts, and all "at-risk" comments – those created by new users, or which have been downvoted or reported.
I liked Duncan's proposed UI of "all un-read-by-moderators comments appear highlighted to mods". I think it might be worth implementing something similar so that we gain a visceral sense of how close or far we currently are to "100% comment-check-coverage."
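(To make the "how close are we to 100% comment-check-coverage" question concrete, here is a minimal sketch of the bookkeeping such a UI might sit on top of. The SiteComment and ModeratorReview records, the karma threshold, and the coverage function are all hypothetical, not part of the actual LW codebase:)

```typescript
// Minimal sketch: track which comments a moderator has read, so the UI can
// highlight everything still awaiting review. All names and thresholds here
// are hypothetical.

interface SiteComment {
  id: string;
  authorKarma: number;
  score: number;
  reported: boolean;
}

interface ModeratorReview {
  commentId: string;
  moderatorId: string;
  reviewedAt: Date;
}

// Comments worth surfacing first: new users, downvoted, or reported.
function isAtRisk(c: SiteComment): boolean {
  return c.authorKarma < 10 || c.score < 0 || c.reported;
}

// Everything no moderator has marked as read yet.
function unreviewed(comments: SiteComment[], reviews: ModeratorReview[]): SiteComment[] {
  const seen = new Set(reviews.map(r => r.commentId));
  return comments.filter(c => !seen.has(c.id));
}

// A single number answering "how close are we to 100% comment-check-coverage?"
function coverage(comments: SiteComment[], reviews: ModeratorReview[]): number {
  if (comments.length === 0) return 1;
  return 1 - unreviewed(comments, reviews).length / comments.length;
}
```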
3. Limited Resources, tool building
We do have pretty limited resources, and a fairly high bar for who we trust as a moderator (esp. at the level that Duncan is describing here). This is exacerbated by half the mods also being developers, so there is a direct tradeoff between building tools and engaging in high-bandwidth communication (as well as putting organizational capacity into finding more moderators we trust vs., say, building out open source documentation).
So, for the immediate future I do lean towards solutions like "build tools that help make moderation as easy as possible", rather than "make sure to fully engage with every comment that doesn't uphold our standards." (For example, right now there isn't actually a tool that, with one click, locks an entire subthread – I can block replies to an individual comment, and I can lock all comments on a post, but I can't lock all replies to all children of a comment, and this crimps the ability to put things in stasis until a time when I have enough energy to moderate thoroughly.)
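(A minimal sketch of what a one-click subtree lock might look like, assuming each comment stores a parentId and a repliesLocked flag; the names are hypothetical rather than the real moderation tooling:)

```typescript
// Sketch of a one-click "lock this whole subthread" action, assuming each
// comment records its parent. Names and storage are hypothetical.

interface ThreadComment {
  id: string;
  parentId: string | null;
  repliesLocked: boolean;
}

// Lock replies to the given comment and to every comment beneath it.
function lockSubthread(rootId: string, comments: ThreadComment[]): void {
  // Index children by parent so we can walk the tree top-down.
  const childrenOf = new Map<string, ThreadComment[]>();
  for (const c of comments) {
    if (c.parentId !== null) {
      const siblings = childrenOf.get(c.parentId) ?? [];
      siblings.push(c);
      childrenOf.set(c.parentId, siblings);
    }
  }

  const byId = new Map(comments.map(c => [c.id, c] as const));
  const stack: string[] = [rootId];
  while (stack.length > 0) {
    const id = stack.pop()!;
    const comment = byId.get(id);
    if (comment) comment.repliesLocked = true;
    for (const child of childrenOf.get(id) ?? []) stack.push(child.id);
  }
}
```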
4. Collegial Culture
I do think all things being equal, high-bandwidth communication is ideal. I like the general thrust of the approach described in the initial example:
Notice the details in the example above—they’re not random; most of them were put there deliberately and are doing important work. The phrase “appears to me to be” serves to highlight the critic’s awareness of uncertainty, that they may have misinterpreted things or missed detail. The framing of “thing we don’t do around here” is boundary-enforcing but not morally charged—it’s not that the behavior is fundamentally bad or wrong, just that it’s not part of our specific subcultural palette.
The phrase “if I’m understanding you correctly” foreshadows a crux [LW · GW]—if I’m understanding you, then I believe X, but if my understanding changes, I might not believe X anymore. The invocation of a “standard rationality move” (such as applying reductionism, checking the inverse of the hypothesis, or setting a five-minute timer) reinforces the shared culture of the site and models better behavior for newcomers. And the “thoughts?” at the end—especially if it occurs within a context where such invitations are demonstrably genuine, and not just lip service—actively draws the other person back into the conversation. The sum total of all of these little touches turns what might otherwise be the beginning of a fight into a cooperative, collaborative dynamic.
This is basically how I'd prefer to engage with most comments that don't live up to a standard, and insofar as I don't communicate that way it's (usually) because I'm either under time-pressure/stress/triggered, or, perhaps more generally, because I haven't processed the overall strategy into an S1 response that flows smoothly in high-stakes situations.
I think Vaniver does this sort of thing better than me. It was helpful to me to have some of the details of this spelled out so that I could pay more attention to them.
The main potential disagreement I have here is the scale of intervention within a single conversation that Duncan is suggesting. I can definitely imagine this turning out to be important, but I can also easily imagine it derailing most conversations into meta commentary on themselves.
5. Operationalization...
The part where I periodically disagree with Duncan is when it comes to the nuts and bolts of what sort of culture we're actually trying to build. I have a lot to say on this, but at least some of it is stuff that I'd like to chat with Duncan about in a separate format from this, because I think the internet is uniquely bad for hashing out the details.
Replies from: Duncan_Sabien, Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T20:21:01.667Z · LW(p) · GW(p)
I should note that I do think this post contains all of my cruxes re: disagreement; i.e. that in every case where you've strongly disagreed with me about norms or policies or how-a-given-conversation-should-go, the principle I was acting on was among those laid out here.
(Most specifically: when it's justified to punch back, what counts as not-meeting-the-standard-of-rationality, and how much leadership is obligated to actively defend the right-but-unpopular.)
From my own past experience as a mod and admin, I predict that scale concerns are miscalibrated. It's a lot at first, but just as some teachers wage an unending losing battle against student misbehavior while others basically see no problems at all, ever ... once the standard is set and enforcement is clear, consistent, and credible, problems 90% stop occurring.
(I acknowledge that I have not fully responded to all of your points; I wanted to register these things quickly but other stuff is probably worth responding to later in other comments.)
Replies from: Raemon, Raemon↑ comment by Raemon · 2018-05-25T20:32:42.304Z · LW(p) · GW(p)
From my own past experience as a mod and admin, I predict that scale concerns are miscalibrated. It's a lot at first, but just as some teachers wage an unending losing battle against student misbehavior while others basically see no problems at all, ever ... once the standard is set and enforcement is clear, consistent, and credible, problems 90% stop occurring.
Yeah, I can definitely imagine this being the case. I don't have strong opinions on this concept, although I'm in part worried that doing it right involves a lot of skill, and failing to do it right may make things worse.
↑ comment by Raemon · 2018-05-25T20:34:44.952Z · LW(p) · GW(p)
I should note that I do think this post contains all of my cruxes re: disagreement;
Yeah, it makes sense that these are the parts that seem like the most salient points of disagreement. But I think it's fairly important (and has been the last couple times we talked about this) that I agree with most of the cruxes listed here and yet disagree with the conclusion. So it's important that whatever is causing the disagreement isn't actually covered here (or at least, not covered sufficiently).
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-28T02:18:12.593Z · LW(p) · GW(p)
Note 1: I won't be commenting on Duncan's comments on Benquo's comments because I'm still in the process of chatting with Benquo about it.
... I note that there's a point in the near future at which continued lack of any public action by the LW team stops being "we want to take our time and get this right and not add fuel to the fire" and starts being a de facto endorsement and a taking of sides (since the comments I claim are objectionable remain visible to all, net upvoted, more than a week later, sans any moderating influence or perspective).
Separately, I also note that benquo's made a comment here that I really really really want to reply to directly, but that my model of the LW leadership prefers no engagement until there's facilitation in place. I'm not clear on whether or not there's a plan to make that happen.
comment by Jan_Kulveit · 2018-05-24T18:41:07.611Z · LW(p) · GW(p)
To me it seems there is a certain tension between In Defense of Punch Bug and this post.
As I understand it, while "In Defense of Punch Bug" in some parts argues that people should not spend huge amounts of attention on basically random noise, this post calls for very high attention to detail on the part of moderators, like
"be attentive enough to be the one to catch the slipped-in hidden assumption in the middle of the giant paragraph, and to point out the uncharitable summary even when it’s carefully and politely phrased, and to follow the subthread all the way down to its twentieth reply".
This sounds surprisingly similar to a call for people to be diligently watching for microaggressions, so I would just point to In Defense of Punch Bug for a reasonable counterargument.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-24T18:48:35.638Z · LW(p) · GW(p)
The difference is, In Defense of Punch Bug is a prescription for general behavioral norms in society at large, whereas the moderator post is about how to correctly execute a very specific leadership role within a very specific subculture. I don't think there's any contradiction; they're advice for different domains.
Sort of like if one person wrote an essay saying "don't sweat the small stuff" but then also wrote a memoir about their work as a neurosurgeon and spent a LOT of time talking about attention to detail. I think this is not incoherent.
(Also note that the inherent chillness of what I labeled collegiate culture requires the relaxed approach to micro advocated in the punch bug post, rather than an easily-offended or highly-charged or knee-jerk narrativized reaction to perceived transgressions.)
↑ comment by Jan_Kulveit · 2018-05-25T04:11:41.186Z · LW(p) · GW(p)
I also don't think it is incoherent, just that it seems to me there is some tension.
I understand both the general norms and the moderation as bounded-optimization problems of a sort. In the case of general norms, as I see it (which may be different from DPB), a big reason not to care about micro-things is that people are limited in attention and cognitive capacity. If they were orders of magnitude less limited, maybe they should care about subtler effects.
I assume the current LessWrong moderators are also attention- and time-constrained. What I quoted seems to me like a call to invest a lot of effort into improving details, mostly by creating negative feedback loops. I would agree this may be what should be done, given an order-of-magnitude or two increase in moderation resources. With the current capabilities, IMO what is more important is high-level steering and creating positive feedback loops.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T04:29:20.835Z · LW(p) · GW(p)
Would you argue that current moderation is ideal (or at least very close), given constraints and tradeoffs?
Replies from: Jan_Kulveit↑ comment by Jan_Kulveit · 2018-05-29T03:56:42.965Z · LW(p) · GW(p)
I think the current moderation is within 30% of the constrained optimum I can imagine, and I also think the cheaper room for improvement is in different directions than the ones you point toward.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-29T04:03:44.914Z · LW(p) · GW(p)
Cruxes and/or models? The assertion is good on its own, since it allows people to vote their agreement or disagreement, but it doesn't really move us toward convergence.
Replies from: Jan_Kulveit↑ comment by Jan_Kulveit · 2018-05-29T18:34:32.418Z · LW(p) · GW(p)
I'll try to write something explicit, but 1) it may take some time, and 2) I'm afraid part of my models is now in "intuition" [LW(p) · GW(p)] black-box form. Generally, my background in this is my past self participating in translating, and trying to design and implement, norms in the cs.wikipedia community [LW(p) · GW(p)], a long time ago.
What is my baseline? There actually is a place rather unlike Facebook or 4chan or the other places you mention, which was quite successful in its stated purpose - building a highly usable body of aggregated NPOV knowledge, and a community around it: Wikipedia and its community. IMO there is actually a lot to learn from their rules and norms, so whenever the community norms of WP and LW differ, it's worth looking into the details. Some differences arise because of different purposes, some because of rationality, some are random broken-symmetry effects, but I suspect a significant part is just that LW is orders of magnitude smaller and did not have time to develop something, or that LW is getting it wrong. (Part of WP's norms and culture is in turn based on knowledge developed previously on MeatBall.)
Ok... back to the original question. One of the important directions where LW is IMO suboptimal is protecting the newcomers and would-be contributors who get "bitten" and quietly bounce.
The reason this is neglected is that if you are wronged on LW, you e.g. write a blogpost, talk to people, etc. (and people notice, because it's you!), whereas if a random, bright, somewhat argumentative mathematician is "bitten" in her first interaction with the LW community, she just bounces, without leaving much trace. The counterfactual damage is much less visible.
In contrast, things like one very experienced, high-status person writing something wrong and harming another experienced, high-status person (which is my picture of the cause of the current debate - with the caveat that I read just about 1/3 of the discussion and may not understand it) ... actually, in my view, point toward the need to have some "conflict resolution process".
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-29T19:57:39.181Z · LW(p) · GW(p)
I like this frame, and agree that there's a lot of value in comparing LW and WP.
I think having more defense for the bitten or bite-vulnerable is a big part of my emotional motivation here, though it's because of empathy rather than game theory (it seems justifiable under either).
comment by tcheasdfjkl · 2018-05-25T02:09:35.075Z · LW(p) · GW(p)
Hm. I am in favor of high standards of discourse but I am remarkably resistant to Duncan imposing his high standards of discourse because he has remarkably different social/discourse intuitions from me.
Replies from: tcheasdfjkl↑ comment by tcheasdfjkl · 2018-05-25T02:11:08.531Z · LW(p) · GW(p)
Also worried about selective/biased policing and missing out on important points because the person making the point had trouble phrasing it well. (That's sometimes the right tradeoff, but not always. Based on the examples in the post I don't like where Duncan draws the line.)
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T04:07:49.518Z · LW(p) · GW(p)
I'm curious if you predict that I would endorse your description/summary of where I draw the line (i.e. whether you predict that you can pass my ITT).
I'm also curious for your cruxes, and/or your description of how you would recommend the example comments be moderated (including not at all if that's your view).
(It seems useful to simply make the assertion, especially since it provides a flag for other people to vote on, giving a rough sense of how popular your stance is. But I think it would be even more useful if it led toward a reciprocal sharing of your model.)
Replies from: tcheasdfjkl↑ comment by tcheasdfjkl · 2018-05-28T01:20:30.335Z · LW(p) · GW(p)
Apologies for the lack of response here; a good and full response would take a fair bit of time and energy and I'm rather short on both. I'm not sure if I'll get around to responding at all, though I will if I can. I understand that what I've written so far is vague and arguably hostile which is not a great combination, I just think it's probably better than not saying the thing at all.
In the meantime, I would invite people who upvoted my above comments and have more spoons than me to answer Duncan's questions in my place.
comment by Vaniver · 2018-05-25T00:33:49.571Z · LW(p) · GW(p)
I think it's helpful to have some sort of system to make sure that every comment gets read, but I think the ownership checkbox is potentially a bad way to do it. I'm mostly thinking of the incentives for moderators, here; it seems highly plausible that someone comes across a comment that feels off but they don't really know how to handle it; this means they don't want to say "yep, this one's mine" (because they don't want to handle it) but also feel that not checking it would be wrong somehow.
One of the things that I had considered when proposing the Sunshine Regiment was a 'report' button on every comment, available to all users, which was basically a "something about this should be handled by someone with more time and energy"--not necessarily "this post should get removed" but "oh man, I really want to see how Vaniver would respond to this comment," or something.
I also suspect there's something like Stack Exchange's edit queue that could be good, where several of the important pieces are 1) multiple eyes on any particular thing and 2) tracking how much people have eyes on things and 3) tracking when people's judgments disagree.
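(A rough sketch of how such a review queue might track reports, multiple-reviewer coverage, and reviewer disagreement together. The Report and ReviewVote records, the verdict labels, and the quorum parameter are assumptions for illustration, not an existing LW or Stack Exchange feature:)

```typescript
// Sketch of a Stack-Exchange-style review queue: every report can be looked at
// by several moderators, and both coverage and disagreement are tracked.
// The Report/ReviewVote records and the verdict names are hypothetical.

type Verdict = "fine" | "needs-response" | "remove";

interface Report {
  commentId: string;
  reporterId: string;
  note: string; // e.g. "I really want to see how a mod would respond to this"
}

interface ReviewVote {
  commentId: string;
  moderatorId: string;
  verdict: Verdict;
}

// Reported comments that fewer than `quorum` moderators have looked at.
function needsMoreEyes(reports: Report[], votes: ReviewVote[], quorum: number): string[] {
  const counts = new Map<string, number>();
  for (const v of votes) counts.set(v.commentId, (counts.get(v.commentId) ?? 0) + 1);
  return [...new Set(reports.map(r => r.commentId))].filter(id => (counts.get(id) ?? 0) < quorum);
}

// Comments where the moderators who did look came to different verdicts,
// which are usually the ones worth discussing as a team.
function contested(votes: ReviewVote[]): string[] {
  const verdictsById = new Map<string, Set<Verdict>>();
  for (const v of votes) {
    const set = verdictsById.get(v.commentId) ?? new Set<Verdict>();
    set.add(v.verdict);
    verdictsById.set(v.commentId, set);
  }
  return [...verdictsById.entries()]
    .filter(([, verdicts]) => verdicts.size > 1)
    .map(([id]) => id);
}
```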
Replies from: zulupineapple↑ comment by zulupineapple · 2018-05-26T12:17:43.491Z · LW(p) · GW(p)
a 'report' button on every comment, available to all users, which was basically a "something about this should be handled by someone with more time and energy"
Why not just make a comment about it, as a reply to the offending post? E.g. "There seem to be some problems in this thread that a third-party could help with, but I'm not willing to do it, so don't reply to this". If that was an established norm, then I think it's reasonably likely that such a comment would be noticed by someone (especially since we have "recent discussion" on the home page).
Whether that would actually solve problems or just create more, is a separate question. But trying things is generally a good idea.
Replies from: Vaniver↑ comment by Vaniver · 2018-05-28T20:56:28.907Z · LW(p) · GW(p)
Why not just make a comment about it, as a reply to the offending post? E.g. "There seem to be some problems in this thread that a third-party could help with, but I'm not willing to do it, so don't reply to this". If that was an established norm, then I think it's reasonably likely that such a comment would be noticed by someone (especially since we have "recent discussion" on the home page).
It is generally wise to solve social problems with tech, when possible. I also think it is important here to have the someone who does the noticing be someone who actually has the relevant skills, rather than thinking they have the relevant skills (both because of overconfident bystanders and because of underconfident skilled people, who won't feel licensed to point out such problems unless handed a literal license to do so).
Replies from: zulupineapple↑ comment by zulupineapple · 2018-05-29T07:20:09.445Z · LW(p) · GW(p)
I also think it is important here to have the someone who does the noticing be someone who actually has the relevant skills, <...> who won't feel licensed to point out such problems unless handed a literal license to do so).
Yes, but giving people licenses is pretty easy. I'd be fine with you having one, for example, though I guess I don't have the power to give it to you myself.
It is generally wise to solve social problems with tech, when possible.
The problem is that tech takes time and effort to write, so writing tech to solve problems that it may not actually solve is unwise. What I'm proposing is a temporary prototype of some sort. If that worked out, then I agree, a proper tech solution would be nice.
comment by Vaniver · 2018-05-25T00:25:28.378Z · LW(p) · GW(p)
I think one of the main things I updated on is that a model I had of discussions with users (that it's important to hug the most important part of the query, i.e. chasing and focusing on cruxes) is not applicable to mod actions (where it's important to catch each of the norm violations, instead of simply the most serious violation).
There are a handful of reasons for this, the most serious of which is "that which is not punished is allowed"--Raemon made a comment on Benquo's top comment that pointed out what Raemon saw as a defect in Benquo's presentation, and separated that from [LW(p) · GW(p)] a criticism of Benquo's argument. But that meant that later, when I criticized the argument, Benquo responded with [LW(p) · GW(p)] 'but Raemon wasn't criticizing my argument.' If I only point out what seems to me like the most serious error in a post, then there's a (reasonable!) argument that all the things that went unsaid weren't at the threshold of correction, and if I only point out the error that's easier to respond to, the same sort of thing goes through.
Replies from: Zvi↑ comment by Zvi · 2018-05-25T13:11:57.088Z · LW(p) · GW(p)
It's worth pointing out that in this case, Raemon explicitly said he wasn't criticizing the argument itself, and he wrote a distinct comment to that effect.
I do think the general point is valid and a serious problem, though; if I criticize X it can be read as giving a pass for Y even though it shouldn't be (and the even worse parallel problem is when I give reason/argument X for my point, and people then assume that means there's no additional good reason/argument Y or Z, or otherwise don't actually add together evidence from diverse angles, especially when they're relatively illegible).
Obvious brainstorm is, would it be sufficient to say 'The largest issue with' or something like that, when you suspect there are additional problems? Or would this create a worse burden; e.g. now if I don't say that, it is real evidence that there are no additional problems, so now the burden to say anything goes up, and oh no...
comment by Ben Pace (Benito) · 2018-05-24T11:27:21.690Z · LW(p) · GW(p)
I really like when people put effort into providing alternatives-to/critiques-of how I work, so thanks for this.
It doesn’t mean I can always reply promptly, alas. Right now I’m in the process of taking all my finals (so I can get a visa and move to the Bay), and this will continue for a few weeks :(. Oli’s just coming back from the same, while Ray’s holding the fort.
It’s also regularly the case that doing detailed public write-ups of considerations around a decision/approach isn’t the right use of effort relative to just *building the product*, and that applies to things like detailed comments on this too. So far with LW I and Oli have spent much less time writing about where we’re going than just going there, which I think has worked well. (Ray has actually done a lot of writing of his thinking, somehow. Well done Ray.)
Then again, write-ups are a big part of building moderation, so I’ll see in a few weeks when I come to this.
Replies from: habryka4↑ comment by habryka (habryka4) · 2018-05-24T17:16:08.842Z · LW(p) · GW(p)
I have been properly back for a few days now, so I will probably write up my thoughts on this sometime soon. I am quite excited about this post, and am looking forward to the discussion around it.
comment by Kaj_Sotala · 2018-05-24T10:27:45.580Z · LW(p) · GW(p)
Moved to meta.
comment by Benquo · 2018-05-31T19:02:17.608Z · LW(p) · GW(p)
This seems like it's pointing at a good thing. As a data point, the proposed responses to my comments would all have seemed friendly and helpful to me, and I'd have had an easy time engaging with the criticism.
His draft response to my VW comment would probably have motivated me to add a note to my initial comment deprecating that reference, or edit it out entirely (with a note clarifying that an edit had been made).
The asymmetry comment he pointed to as helpful (thanks, Vaniver!) actually would have been helpful, if SilentCal hadn't already taken the initiative (thanks, SilentCal!) and clarified what he took the term to mean.
comment by jbeshir · 2018-05-25T08:35:57.867Z · LW(p) · GW(p)
I'm concerned that the described examples of holding individual comments to high epistemic standards don't seem to necessarily apply to top-level posts, or linked content – one reason I think this is bad is that it is hard to precisely critique something which is not in itself precise, or which contains metaphor, or which contains example-but-actually-pointing-at-a-class writing where the class can be construed in various different ways.
Critique of fuzzy intuitions and impressions and feelings often involves fuzzy intuitions and impressions and feelings, I think- and if this stuff is restricted in critique but not in top level content it makes top level content involving fuzzy intuitions and impressions and feelings hard to critique, despite I think being exactly the content which needs critiquing the most.
Strong comment standards seem like they would be good for a space (no strong opinion on whether LW should be that space), but it would probably want to also have high standards in top level posts, possibly review and feedback prior to publication, to keep them up to the same epistemic standards. Otherwise I think moderation argument over which interpretations of vague content were reasonable would dominate.
Additionally, strong disagree on "weaken the stigma around defensiveness" as an objective of moderation. One should post arguments because one believes they are valid, and clarify misunderstandings because they are wrong, not argue or post or moderate to try to save personal status. It may be desirable to post and act with the objective of making it easier to not be defensive, but we still want people in themselves to try to avoid taking it as a referendum on their person. In terms of fairness, I'm not sure how you'd judge it- it is valid for the part people have most concerns about to not be the part which is desired to be given the most attention, I think, in even formal peer review. It's also valid for most people to disagree with and have critiques of a piece of content. The top level post author (or the link post's author) doesn't have a right to "win"- it is permissible for the community to just not think a post's object level content is all that good. If there was to be a fairness standard that justified anything, it'd certainly want to be spelled out in more detail and checked by someone other than the person feeling they were treated unfairly.
Replies from: Duncan_Sabien, Duncan_Sabien, Zvi↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T15:58:52.992Z · LW(p) · GW(p)
Critique of fuzzy intuitions and impressions and feelings often involves fuzzy intuitions and impressions and feelings, I think- and if this stuff is restricted in critique but not in top level content it makes top level content involving fuzzy intuitions and impressions and feelings hard to critique, despite I think being exactly the content which needs critiquing the most.
If I've given the impression that I object to and want to restrict fuzziness itself, then I'd like to clarify and reverse that impression ASAP. I think it's entirely possible to do fuzzy intuitions and impressions and feelings in a high-quality, high-epistemic way, and I think it's a critical part of the overall ecosystem of ideas; as an example I'd point to ~the entire corpus of work written by Anna Salamon.
I don't think there's a conflict between freedom of fuzziness and high standards; the key is to flag things. If it's an intuition, note that it's an intuition; if there's no formal argumentation behind it, note that; if you think it would be resistant to update and also think that's correct, just be open about that up front. That way, people who want to engage know what they're dealing with, and know which lines of approach will be useful versus which will be futile.
A sort of cheesy, empty example, since I'm trying to come up with it on the fly and there's no actual topic at hand:
I don't know, I don't really have arguments to back this up, but I'm getting the sense that there's a very large and powerful dynamic in play in this situation that no one has pointed at or addressed. Like, when I imagine a world where we've fully solved all four of the problems that USERFACE has pointed out, I still get a deep, sinking, doomy feeling in my stomach. This is just intuition, but it's pretty strong, and I'm wondering if anybody else has thoughts in that direction.
I'd upvote a comment like that in a heartbeat.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T16:07:43.448Z · LW(p) · GW(p)
One should post arguments because one believes they are valid, and clarify misunderstandings because they are wrong, not argue or post or moderate to try to save personal status
Strong agree. But I claim that I have seen myself and others attempt to do exactly that in response to damaging falsehoods, and that the response has been criticism of defensiveness.
Like, the point wasn't to salvage personal status; the point was the original attack contained falsehoods, and this was dismissed as if the only activity taking place was a status fight. There was a bucket error going on, a bucket error that I think the ideal LW would have incentive slopes against, and encourage people to grow out of being vulnerable to.
A quote that I believe I cited at the time, which feels relevant here, too:
Draco knew, he knew what he'd done wrong. He'd been so tired after casting twenty-seven Locking Charms for all the other Dragon Warriors. Less than a minute wasn't enough time to recover after each spell. And so he'd just cast Colloportus on his own padlocked glove, just cast the spell, not put in all his strength to bind it stronger than Harry Potter or Hermione Granger could undo.
But nobody was going to believe that, even if it was true. Even in Slytherin, nobody would believe that. It sounded like an excuse, and an excuse was all that anyone would hear.
... I believe that one of the foremost goals of LessWrong, at least from a social norms standpoint, is to become the sort of place where Draco could say "I think you should seriously consider the possibility that I didn't cast Colloportus as strongly as I could have, and that therefore Hermione counterspelling it isn't conclusive re: our relative magical strengths" and not get laughed off stage. And I think that mods who care about that and are unified in their commitment to it are a crucial ingredient.
(Edit: have updated the specific quote in the post to more clearly point at what I'm actually advocating.)
↑ comment by Zvi · 2018-05-25T13:19:44.101Z · LW(p) · GW(p)
One thing I noticed by writing a blog slash posting here, is that the rules for posts and comments are importantly different. In comments you have discussion norms in a way you don't for main posts, and for main posts you have an on-the-record property that comments don't have. So when writing comments, one needs to be very careful about certain types of norms of fairness and politeness, and forms of argumentation, and be even more careful about rhetorical flourish.
But in exchange there's an understanding that you're not held to your statements and positions (outside of the comment section itself, at least) the way a post would be. Thus, I draw a distinction that if I put something in a post it is 'fair game' to be quoted and held as my position in an outside context, whereas a comment doesn't do that, it's on the record but it's exploratory, and in several cases I've used it to say things in comments that I didn't feel comfortable saying in full posts.
Right now I believe the way we handle issues with top-level posts is to not promote them to front page if there are problems, and only curate them if they're excellent, combined with lots of voting, which seems pretty good. When I do things that are against LW norms (which I occasionally do since all my blog posts are auto-posted here) I have no problem getting the message that I did that (whether or not I already knew that I'd done that) through these systems.
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T22:46:09.287Z · LW(p) · GW(p)
There is a potential discussion to be had someday about whether upvotes and downvotes themselves can be considered to be "in error" given the specific milieu of LW, or whether upvotes and downvotes are sacred à la free speech. Looking at votes on this and other threads, I have frequently had the sense that people were Doing It (Objectively) Wrong, not fundamentally as in wrong-opinions, but culturally as in participation-in-this-culture-means-precommitting-to-supporting-X-and-othering-Y.
I'm aware that there is a strongly-felt libertarian argument against this (a sort of red-versus-white disagreement, in MTG terms) and it'd be interesting to see whether LW wants to have a standard in common knowledge around this question.
Replies from: Raemon↑ comment by Raemon · 2018-05-26T04:23:58.372Z · LW(p) · GW(p)
This question and its subcomponents are type erroring for me or something.
I think people can downvote/upvote in ways that are not game theoretically optimal given their values.
I think criticizing people's upvotes/downvotes probably tends to lead to bad outcomes. (Citation needed, but this is what I currently believe)
The notion that you shouldn't criticize people's votes because it leads to bad outcomes feels white to me rather than red (which may have actually been what you meant but my impression was you meant Red to be the color of "vote however you want")
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-26T04:56:23.649Z · LW(p) · GW(p)
I think criticizing people's upvotes/downvotes probably tends to lead to bad outcomes. (Citation needed, but this is what I currently believe)
Agreed. But as long as I'm personally forming lists of things that LW culture should get right that most internet culture fails at abysmally ... I don't know whether "users can criticize other users' votes without the conversation being derailed into status fights" should be on that list or not.
There are certainly comments where the fact of their very high or very low karma scores seems to me to be meaningful evidence re: whether it makes sense to be hopeful about the LessWrong project in general.
Am gathering a little informal data on this on FB now; will report back later. (EDIT: 70 votes, 91% in favor of the position that votes themselves can be considered to be in error; no data collected on how to operationalize that in the social space.)
comment by TAG · 2018-05-25T11:13:27.619Z · LW(p) · GW(p)
In short, what makes LessWrong different from everywhere else on the internet is that it’s a place where truth comes first,
Because all those science, logic, maths and philosophy forums are just full of lies.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-25T15:49:06.675Z · LW(p) · GW(p)
Here's a comment that I think should be defended, on the ideal LW (which maybe it would've anyway in the natural course of affairs; it's only been up for a few hours and it's only been downvoted by a few people). It's a bit snide/aggressive in its framing, but it raises a completely valid point, and if the snideness is a problem it can be objected to separately from the content.
In response, I'd say yeah—I clearly overreached in my phrasing. What I actually meant to convey was something like, different from the majority of "typical" places on the internet. I wanted to convey the sense that LW ought to be something like 98th percentile on this axis, but I was incorrect to imply that the ideal LW would be at the absolute far right tail of all websites and discussion fora, and didn't actually mean that in the first place.
Replies from: Raemon↑ comment by Raemon · 2018-05-26T07:51:52.370Z · LW(p) · GW(p)
FYI, AFAICT I am the only person who downvoted it (bringing it from 3 to -3). As long as we're having the meta-est of conversations, I wanted to talk about why.
The single most common criticism I hear from good writers on LW, is that the comments feel nitpicky in a way that isn't actually helpful, and that you have to defend your points against a hostile-seeming crowd rather than collaboratively building something.
This comment seemed to be doing two things in that direction: a) it was a bit aggressive, as you noted, and b) the point it was making just... didn't seem very relevant. Yes, there are places on the internet aspiring to similar things as LW. But a reasonable read on your statement was "most of the internet isn't trying to do this much at all, LW is different." While some humility is nice, I don't really understand your point any better now that you've rephrased it as the 98th-percentile thing.
So the main impact of this sort of comment is both to spend time on things that don't matter much, and to increase the latent hostility of the thread, and I think people don't appreciate enough [LW · GW] how bad that is for discourse. Both of those seem to me like things that are better silently downvoted than engaged with.
Replies from: Duncan_Sabien, zulupineapple↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-26T08:08:47.341Z · LW(p) · GW(p)
Oh, yeah, there's maybe another crux between us—while I buy the argument that a policy of silently downvoting this class of comment may be more sustainable in the long run, I don't share your instincts re: demon threads (to the point that I'm relatively dismayed that that label for them is starting to stick, because it implies hopelessness or helplessness or external-locus-of-control that I don't feel). My approach to threads like those is maybe best summed up by "Let the Light win, and if trouble comes of it ... let the Light win again."
EDIT: Also, I'm pretty sure other up- and downvotes are in the mix by now, because I've seen it at a lot of other scores in the interim. I think -4 and 0 and -6.
↑ comment by zulupineapple · 2018-05-26T13:26:27.489Z · LW(p) · GW(p)
the comments feel nitpicky in a way that isn't actually helpful
If you see a comment that is technically correct but nitpicky and unhelpful, you could reply "this is technically correct, but nitpicky and unhelpful". Downvoting correct statements just looks bad.
the point it was making just... didn't seem very relevant.
I think there is a more charitable reading of TAG's comment. Not only are there places on the internet aspiring to find the truth, there are, in fact, very few places that are not aspiring to find it. The point isn't that there are more places like LW. The point is that "truth seeking" isn't the distinguishing characteristic of LW.
and that you have to defend your points against a hostile-seeming crowd rather than collaboratively building something.
I honestly believe that attacking people's points is a good way to learn something. I don't know what you mean by "collaboratively building something", I'd appreciate examples where that has happened in the past. I suspect that you're overestimating how valuable or persistent this "something" is.
increase the latent hostility of the thread, and I think people don't appreciate enough [LW · GW] how bad that is for discourse.
I don't think you've provided strong arguments that it actually is bad for discourse. Yes, demon threads don't usually go anywhere, but regular threads don't usually go anywhere either. And people can actually learn from demon threads, even if they're not willing to admit it right away. I certainly have.
Replies from: Vaniver↑ comment by Vaniver · 2018-05-28T19:16:09.150Z · LW(p) · GW(p)
Not only are there places on the internet aspiring to find the truth, there are, in fact, very few places that are not aspiring to find it.
?? If you look at the Alexa top 50 sites for the US, how many of them are about aspiring to find the truth? I count between 3 and 4 (Google, Wikipedia, and Bing for sure, Wikia maybe).
Replies from: Duncan_Sabien, zulupineapple↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2018-05-28T20:39:49.915Z · LW(p) · GW(p)
You forgot amazon.com and walmart.com, which have the tightest instrumental rationality feedback loops of them all.
</banter>
↑ comment by zulupineapple · 2018-05-29T07:32:51.256Z · LW(p) · GW(p)
I think you're confusing "aspiring to find truth" with "finding truth". Your crackpot uncle who writes facebook posts about how Trump eats babies isn't doing it because he loves lies and hates truth, he does it because he has poor epistemic hygiene.
So in this view almost every discussion forum and almost every newspaper is doing their best to find the truth, even if they have some other goals as well.
Also, of course, I'm only counting places that deal with anything like propositions at all, and excluding things like jokes, memes, porn, shopping, etc, which is a large fraction of the internet.