post by [deleted]

comment by Raemon · 2019-06-21T18:23:59.474Z · LW(p) · GW(p)

Meta Thread (for making observations or suggested changes to the format)

Replies from: Raemon
comment by Raemon · 2019-06-22T07:53:50.584Z · LW(p) · GW(p)

I've found it somewhat hard so far to treat this differently from a LW comment thread. It's possible a PM setup would have a different effect. This doesn't lead me to want to change anything about the experiment, just noting it.

comment by Raemon · 2019-05-24T23:01:22.016Z · LW(p) · GW(p)

Question to get things started:

LessWrong currently disincentivizes project announcements, social-momentum-building, and resolving interpersonal controversies. Is that bad?

Social-momentum-esque discussions are left on the personal blog. The intent here is more of a "soft" disincentive, although people seem to treat it as a harder ban than we meant.

The main reason for this is "it's really hard to get people to just talk about ideas without bringing social momentum into it, and it's easier to establish a site culture focused on that if we're not also trying to figure out how to have good-epistemology-around-group-coordination at the same time."

I do think it's probably important to eventually provide a platform that more strongly incentivizes new projects. I'm currently not sure whether LessWrong will ever be the right format for handling social controversies (I could be persuaded either way).

Replies from: Raemon
comment by Raemon · 2019-05-24T23:07:54.005Z · LW(p) · GW(p)

I was in the process of writing up a post that distinguished between a few things:

Calls to Clarity (i.e. "hey everyone, I think it'd be good if we all understood X principle" [without necessarily agreeing with X]). This is totally fine and good for the LW frontpage.

Calls to Action ("hey everyone I'm starting a project and I think you should support it" or "hey everyone I think it'd be great if we adopted this norm"). The intended goal is that this is fine for personal blogs, and I see the problem as something like "make sure that people actually believe they are welcome to write about this on their personal blog."

Calls to Conflict ("so and so is a bad person and should be punished"). This seems most likely to put people on the defensive, and to make it particularly hard to think rationally. My current best guess is "this is *usually* okay for your personal blog, sometimes not even there, and sometimes we might want to employ stronger moderation tools (such as not allowing conflict-heavy threads to show up on the frontpage)."

My model of Benquo (in particular after a recent comment thread) is somewhat skeptical that it's a good idea to treat conflict and action asymmetrically. Certainly, this punishes a certain kind of thinking more than others.

My current best guess is that conflict is in fact worse for rationality than action. I'm also generally pro-archipelago as a strategy: I want people to be incentivized to go off and build hard things, in an environment where mutually-exclusive hard things don't have to be in conflict (and where it seems more possible to grow the pie than to have to fight over it).

The notion of "calls to conflict being maybe even more strongly disincentivized" is meant to enable something like basic civilization with rule of law, where you can trust that you can build a private enterprise without getting robbed or chased off your property.

Curious what you think of all that.

Replies from: Benquo, Benquo
comment by Benquo · 2019-06-21T15:09:06.358Z · LW(p) · GW(p)

This schema seems like it has some very important gaps. As we've discussed elsewhere, there's a need for criticism that isn't mainly about one person being bad - for instance, a call to action might be based on wrong ideas or factual errors. Even if this is a call to action around which some people have built their identities or social standing, criticizing it is not intrinsically the same thing as the kind of call to conflict you defined here.

If these are in practice the same thing, then that's a huge problem.

There's another legitimate type of criticism, which is "so and so has violated community standards" - a claim both about their behavior and about what the community standards are and ought to be. It's not obvious that "punishment" should follow in all cases, even if they actually did violate community standards, if those standards were unclear. In any case, a step we have to pass through before enforcement - if we want to have standards at all and not just mob rule - is clarifying specific cases, and it might make sense to be much more lenient in cases where the mods aren't already on board.

There's a third class of criticism that's not about being "bad" - though it overlaps a bit with the first two - which is a specific sort of epistemic defense: pointing out a pattern of communication that is seeking to induce errors. Obviously if we can't talk about that as prominently as we can talk about any other given thing, that's a huge security vulnerability.

Replies from: Raemon
comment by Raemon · 2019-06-22T07:51:01.633Z · LW(p) · GW(p)

Agreed that all three types of criticism are quite important – and yes, I intended for "criticizing an error entangled with someone's identity" to be quite different from what I mean by "call to conflict" here. ("Call to Conflict" maybe fits more into the second bullet point here, and the way I was intending to use it was for particularly extreme instances.)

I have more thoughts but they're taking a while to get in order.

comment by Benquo · 2019-06-21T15:16:40.489Z · LW(p) · GW(p)

My model of Benquo (in particular after a recent comment thread) is somewhat skeptical that it's a good idea to treat conflict and action asymmetrically.

I strongly believe it's wrong to apply a higher burden to criticism of calls to action (or arguments offered in that context), than to the calls to action themselves. The frame in which we're lumping everything someone feels personally attacked by together as "conflict" basically gives everyone proposing something an unprincipled veto, letting them reclassify any criticism as "conflict" by framing the criticism as an attack on them or their allies.

I agree that people have a justified expectation that criticism actually is meant as an attack, but that just means we have to solve a hard problem. If we bounce off it instead, then this isn't really a rationality site, it's just a weird social club with shared rationality-related applause lights.

Replies from: Raemon, Benquo, Raemon
comment by Raemon · 2019-06-22T07:52:56.465Z · LW(p) · GW(p)

Noticing:

somewhat skeptical

vs

strongly believe it's wrong

And realizing I think I basically knew that "somewhat skeptical" was not an accurate way to describe your beliefs, and I think the algorithm that led me to write it that way was running through some sort of modesty or conflict-mediation filter that I don't endorse. Mostly noting for my own reference.

comment by Benquo · 2019-06-21T15:28:03.469Z · LW(p) · GW(p)

What sort of solutions might work?

Duncan's suggestion here seems like it has the right mood - treating discussion of things someone might feel attacked by as an important enough class to commit resources to, and including the point of view of the people who feel attacked. Third parties are needed in such cases. Putting all the work on a small fixed class of moderators, though, seems like it imposes a high burden on a few people.

One thing I've had occasion to want a couple times is something like an epistemic court. I have within the past several months felt a strong need for shared institutions that allow me to sue or be sued for being knowably wrong. Unlike state courts, I don't see any need for a body that can award damages, just one that can make judgments. Without this, if someone claims I have a blind spot, it's very hard for me to know when to actually terminate my own attempt to find it, since "no, YOU have a blind spot!" is sometimes true, but very hard to be subjectively confident of.

In any case, my intuition that courts would be helpful I think has something important in common with Duncan's intuition that more active moderation would be helpful. There's something wrong with the sort of debate club norms we have now. We're focused more on making valid arguments than finding the truth, which leaves us vulnerable to large classes of trolling.

I think there's been an implicit procedural-liberal bias to much discussion of moderation, where it's assumed that we can agree on rules in lieu of a shared perspective. But this doesn't actually work for getting to the truth, because it's vulnerable to both manufactured spurious grievances, and illegible attacks that evade the detection of legible rules, without any real mechanism for adjudicating when we want to classify conflicts as one or the other (or both, or some third thing).

A lot of why I've been skeptical of the idea of a generic forum over the last few years, is that it seems to me like people who are trying to figure something specific out - who have a perspective which in some concrete interested way wants to be made more correct - are going to have a huge advantage at filtering constructive from unconstructive comments, vs people who are trying to comply with the rules of good thinking. Cf. Something to Protect [LW · GW].

Replies from: Raemon
comment by Raemon · 2019-06-22T07:20:00.008Z · LW(p) · GW(p)

I think I agree with most of the basic concepts here, and disagreements are mostly of the form "given current resources, what goals are practical to set and achieve?"

I think both more active moderation of the type Duncan describes and an epistemic court would be good, and the only argument I have against them is that they're expensive. An epistemic court seems potentially more viable because it doesn't necessarily need to be used all the time – it's expensive, but if only used on the most important cases it might be affordable.

The sorts of systems that I think LW is exploring right now are ones that "solve problems with technology, rather than cognitive effort, when possible." Competent people are busy and the world is big, so it makes more sense to do things like nudges that require minimal effort from moderators to maintain. (The parts of Duncan's suggestions that we've come closest to implementing are things that make it easy for moderators to at least skim each new post and take a few quick actions.)

This does mean there are limits on what sort of place LessWrong can be.

A lot of why I've been skeptical of the idea of a generic forum over the last few years, is that it seems to me like people who are trying to figure something specific out - who have a perspective which in some concrete interested way wants to be made more correct - are going to have a huge advantage at filtering constructive from unconstructive comments, vs people who are trying to comply with the rules of good thinking

This does sound like a good description of the problem.

I agree that people have a justified expectation that criticism actually is meant as an attack, but that just means we have to solve a hard problem. If we bounce off it instead, then this isn't really a rationality site, it's just a weird social club with shared rationality-related applause lights.

I definitely think of solving this as part of my long-term goal. But a major disagreement of mine is with the claim that "if you can't solve this, you're just left with a weird social club." (This was also a major disagreement of mine with Duncan.)

I think there are lots of things you can achieve that are massive improvements over the status quo, that don't require solving this problem. There are probably around 20 major characteristics I wish each LW user had (such as "be able to think in probabilities" and "be able to generate hypotheses for confusing phenomena"), and most of them can be improved with "regular learning and practice", and nudges, rather than overcoming weird adversarial anti-inductive dynamics.

LessWrong isn't as good as many small, private, heavily filtered spaces, but a) its present form still seems like a significant improvement over most public-forum alternatives in the same reference class, and b) I think there's a bunch of room for further improvement.

A major example the team is exploring is the Open Questions feature. An important aspect of it is that it sort of forces people to focus on the object level, and on actually figuring things out. It's harder to have a demon thread when the frame is "help answer this question." And meanwhile it can start to redirect people's default behavior from "sort of just hang out on the internet" to "actually do intellectual labor that solves a problem."

Replies from: Benquo, Raemon
comment by Benquo · 2019-06-26T13:53:08.362Z · LW(p) · GW(p)

There are probably around 20 major characteristics I wish each LW user had (such as "be able to think in probabilities" and "be able to generate hypotheses for confusing phenomena"), and most of them can be improved with "regular learning and practice", and nudges, rather than overcoming weird adversarial anti-inductive dynamics.

Why would this matter at all for any purpose that might be related to the use of rivalrous goods in an environment where there's no solution to adversarial epistemics? What's your model for how that could work?

Replies from: Raemon
comment by Raemon · 2019-06-27T04:33:06.424Z · LW(p) · GW(p)

I'm not sure what you mean. I agree solving adversarial epistemics is quite important and among the top priorities for the rationality project. But why would that be necessary to get any value out of empiricism/scholarship/etc?

Capitalism is built out of adversarial epistemics, which often results in waste, but it has still generated tremendous value, as have science and academia. I wouldn't consider the typical company or research department a "weird social club" just because they hadn't solved that yet.

Does that comparison seem wrong? Do you in fact consider most businesses weird social clubs? I'm not sure what you're trying to get at here.

comment by Raemon · 2019-06-23T07:09:59.186Z · LW(p) · GW(p)

One thing re: missing moods is that while I think there's room for improvement on the "be able to make criticisms without them being attacks" front, I think solving this looks quite different from the way you (and Duncan of 1.5 years ago) were trying to solve it.

There are fundamental limitations of a public forum, and of sprawling, heated discussions in particular. I think it will always require costly demonstrations of good faith to make strong criticisms in public without being perceived as attacking. I think if you attempt to do this, you are just laying down norms that enable and incentivize politicians, resulting in less clarity, not more.

But there are two options that both seem relatively straightforward to me:

1. Make criticisms, and employ a lot of costly signaling that you are arguing in good faith.

2. Have a norm wherein people discuss criticism in private, and then afterwards publish a public document that they both endorse. (This may in some cases require a counterfactual willingness to write critiques that are attacks.)

I generally prefer the latter once a conversation has begun to branch and get heated. Once a conversation has become multithreaded and involves serious disagreements, maintaining good faith becomes exponentially more expensive.

Replies from: Raemon
comment by Raemon · 2019-06-23T18:30:11.479Z · LW(p) · GW(p)

(I also think it's just sort of okay for there to be a mutual understanding and clarity that some classes of feedback need to be treated as indistinguishable from attacks, which means they need to be somewhat socially punished to disincentivize coalition politics, but that doesn't mean they don't also get listened to.)

comment by Raemon · 2019-07-15T20:21:29.882Z · LW(p) · GW(p)

[meta note, replying to you because we don't yet have a good process for notifications that don't rely on replying to a person:

I don't know that this went anywhere important enough to publish, but fwiw, since my model of you places at least some value on things being public and I don't personally object, if you wanted me to turn this from a draft into a public post, that'd be fine.]