Speaking of Stag Hunts

post by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T08:20:34.967Z · LW · GW · 373 comments

Contents

    rationality has reliable concentration of force.
  Terrible Ideas

This is an essay about the current state of the LessWrong community, and the broader EA/rationalist/longtermist communities that it overlaps and bridges, inspired mostly by the dynamics around these [LW · GW] three [LW · GW] posts [LW · GW].  The concepts and claims laid out in Concentration of Force [LW · GW], which was originally written as part one of this essay, are important context for the thoughts below.


Summary/thesis, mostly cribbed from user anon03's comment below: In many high-importance and high-emotion discussions on LessWrong, the comments and vote distribution seem very soldier-mindset instead of scout-mindset, and the overall soundness and carefulness of reasoning and discourse seems to me to be much lower than baseline, which already felt a smidge too low.  This seems to indicate a failure of the LW community further up the chain (i.e. is a result of a problem, not the problem itself) and I think we should put forth real effort to fix it, and I think the most likely target is something like a more-consistent embrace and enforcement of some very basic rationality discourse norms.


(And somewhere in the back of his mind was a small, small note of confusion, a sense of something wrong about that story; and it should have been a part of Harry's art to notice that tiny note, but he was distracted. For it is a sad rule that whenever you are most in need of your art as a rationalist, that is when you are most likely to forget it.)


I claim that something has gone a little bit wrong.

And as readers of many of my other essays [LW · GW] know, I claim that things going a little bit wrong is often actually quite a big problem.

I am not alone in thinking that the small scale matters.  Tiny mental flinches, itty bitty little incentives, things thrown ever so slightly off course (and then never brought back). That small things often have outsized or cumulative effects is a popular view, either explicitly stated or discernible as an underlying assumption in the writings of Eliezer Yudkowsky [LW · GW], Nate Soares, Logan Brienne Strohl, Scott Alexander, Anna Salamon [LW · GW], and Andrew Critch [LW · GW], just to name a few.

Yet I nevertheless feel that I encounter resistance of various forms when attempting to point at small things as if they are important.  Resistance rather than cooperative disagreement—impatience, dismissal, often condescension or sneering, sometimes projection and strawmanning.

This is absolutely at least in part due to my own clumsiness and confusion.  A better version of me, more skilled at communication and empathy and bridging inferential gaps, would undoubtedly run into these problems less.  Would better be able to recruit people's general enthusiasm for even rather dull and tedious and unsexy work, on that split-second level.

But it seems to me that I can't locate the problem entirely within myself.  That there's something out there that's Actually Broken, and that it fights back, at least a little bit, when I try to point at it and fix it.

Here's to taking another shot at it.


Below is a non-exhaustive list of things which my brain will tend to do, if I don't put forth strategic effort to stop it:

I actually tend not to do these things.  I do them fairly rarely, and ever more rarely as time goes on and I improve my cognitive software and add to my catalogue of mental checks and balances and discover more of my brain's loopholes and close them up one by one.

But it's work.  It's hard work.  It takes up a nontrivial portion of my available spoons every single day.

And more—it requires me to leave value on the table.  To not-win fights that I could have won, had I been more willing to kick below the belt.  There's a reason human brains slide toward those shortcuts, and it's because those shortcuts tend to work.

But the cost—

My brain does not understand the costs, most of which are distant and abstract.  My brain was not evolved to understand, on an intuitive, reflexive level, things like:

All it sees is a chance to win.

"Harry," whispered Dumbledore, "phoenixes do not understand how winning a battle can lose a war." Tears were streaming down the old wizard's cheeks, dripping into his silver beard. "The battle is all they know. They are good, but not wise. That is why they choose wizards to be their masters."

My brain is a phoenix.  It sees ways to win the immediate, local confrontation, and does not understand what it would be sacrificing to secure that victory.


I spend a lot of time around people who are not as smart as me.

(This is a rude statement, but it's one I'm willing to spend the points to make.  It's right that we generally dock people points for rudeness; rudeness tracks a real and important set of things and our heuristics for dealing with it are actually pretty decent.  I hereby acknowledge and accept the regrettable cost of my action.)

I spend a lot of time around people who are not as smart as me, and I also spend a lot of time around people who are as smart as me (or smarter), but who are not as conscientious, and I also spend a lot of time around people who are as smart or smarter and as conscientious or conscientiouser, but who do not have my particular pseudo-autistic special interest and have therefore not spent the better part of the past two decades enthusiastically gathering observations and spinning up models of what happens when you collide a bunch of monkey brains under various conditions.

(Repetition is a hell of a drug [LW · GW].)

All of which is to say that I spend a decent chunk of the time being the guy in the room who is most aware of the fuckery swirling around me, and therefore the guy who is most bothered by it. It's like being a native French speaker and dropping in on a high school French class in a South Carolina public school, or like being someone who just learned how to tell good kerning from bad keming.  I spend a lot of time wincing, and I spend a lot of time not being able to fix The Thing That's Happening because the inferential gaps are so large that I'd have to lay down an hour's worth of context just to give the other people the capacity to notice that something is going sideways.

(Note: often, what it feels like from the inside when you are incapable of parsing some particular distinction is that the other person has a baffling and nonsensical preference between two things that are essentially indistinguishable. To someone with colorblindness, there's just no difference between those two shades.  Sometimes, when you think someone is making a mountain out of a molehill, they are in fact making a mountain out of a molehill.  But sometimes there's a mountain there, and it's kind of wild that you can't see it.  It's wise to keep this possibility in mind.)


I don't like the fact that my brain undermines my ability to see and think clearly, if I lose focus for a minute.

I don't like the fact that my brain undermines other people's ability to see and think clearly, if I lose focus for a minute.

I don't like the fact that, much of the time, I'm all on my own to maintain focus, and keep my eye on these problems, and notice them and nip them in the bud.

I'd really like it if I were embedded in a supportive ecosystem.  If there were clear, immediate, and reliable incentives for doing it right, and clear, immediate, and reliable disincentives for doing it wrong.  If there were actual norms (as opposed to nominal ones, norms-in-name-only) that gave me hints and guidance and encouragement.  If there were dozens or even hundreds of people around, such that I could be confident that, when I lose focus for a minute, someone else will catch me.

Catch me, and set me straight.

Because I want to be set straight.

Because I actually care about what's real, and what's true, and what's justified, and what's rational, even though my brain is only kinda-sorta halfway on board, and keeps thinking that the right thing to do is Win.

Sometimes, when people catch me, I wince, and sometimes, I get grumpy, because I'm working with a pretty crappy OS, here.  But I try to get past the wince as quickly as possible, and I try to say "thank you," and I try to make it clear that I mean it, because honestly, the people that catch me are on my side.  They are helping me live up to a value that I hold in my own heart, even though I don't always succeed in embodying it.

I like it when people save me from the mistakes I listed above.  I genuinely like it, even if sometimes it takes my brain a moment to catch up.


I've got a handful of metaphors that are trying to triangulate something important.

One of them is "herd immunity."  In particular, those nifty side-by-side time lapses that show the progression of virulent illness in populations with different rates of vaccination or immunity.  The way that the badness will spread and spread and spread when only half the population is inoculated, but fizzle almost instantly when 90+% is.

If it's safe to assume that most people's brains are throwing up the bad stuff at least as often as mine does, then it seems to matter a lot how infect-able the people around you are.  How quickly their immune systems kick in, before the falsehoods take root and replicate and spread.

And speaking of immune systems, another metaphor is "epistemic hygiene."  There's a reason that phrase exists.  It exists because washing your hands and wearing a mask and using disinfectant and coughing into your elbow makes a difference.  Cleaner people get sick less, and propagate sickness less, and cleanliness is made up of a bunch of tiny, pre-emptive actions.  

I have no doubt that you would be bored senseless by therapy, the same way I'm bored when I brush my teeth and wipe my ass, because the thing about repairing, maintaining, and cleaning is: it's not an adventure. There's no way to do it so wrong you might die. It's just work, and the bottom line is, some people are okay going to work, and some people—

Well, some people would rather die. Each of us gets to choose.

(There was a decent chance that there was going to be someone in the comments using the fact that this essay contains a Rick & Morty quote to delegitimize me and the point that I'm making, but then I wrote this sentence and that became a harder trick to pull off. Not impossible, though.)

Another metaphor is that of a garden.

You know what makes a garden?

Weeding.

Gardens aren't just about the thriving of the desired plants.  They're also about the non-thriving of the non-desired plants.

And weeding is hard work, and it's boring, and it's tedious, and it's unsexy.


What I'm getting out of LessWrong these days is readership.  It's a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn't have ever come to on my own.

That's valuable.

But it's not what I really want from LessWrong.

What I really want from LessWrong is to make my own thinking better, moment to moment.  To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers.  To be embedded in a context that anti-evokes all those things my brain keeps trying to do, the way being in a church anti-evokes coarse language.

I'd like an environment that takes seriously the fact that the little things matter, and that understands that standards and principles that are only enforced 90% of the time aren't actually enforced.

I think LessWrong actually does a pretty good job of toeing the rationality line, and following its own advice, if you take the sum total of all of its conversations.

But if you look at the conversations that matter—the times when a dose of discipline is most sorely needed, and when its absence will do the most damage—

In the big, important conversations, the ones with big stakes, the ones where emotions run high—

I don't think LessWrong, as a community, does very well in those conversations at all. When the going gets tough, the number of people who are steadfastly unwilling to let their brains do the things, and steadfastly insistent that others not get away with it either, feels like it dwindles to almost nothing, and as a result, the entirely predictable thing happens: people start using symmetric weapons, and they work.

(I set aside a few minutes to go grab some examples—not an exhaustive search, just a quick skim.  There's the total vote count on this comment [LW(p) · GW(p)] compared to these [LW(p) · GW(p)] two [LW(p) · GW(p)], and the fact that it took nearly three weeks for a comment like this one [LW(p) · GW(p)] to appear, and the fact that this [LW(p) · GW(p)] is in negative territory, and this comment chain [LW(p) · GW(p)] which I discussed in detail in another recent post, and this [LW(p) · GW(p)] and its child being positive while this [LW(p) · GW(p)] and this [LW(p) · GW(p)] hover around zero, and this [LW · GW] still not having incorporated the extremely relevant context provided in this [LW(p) · GW(p)], and therefore still being misleading to anyone who doesn't get around to the comments, and the lack of concrete substantiation of the most radioactive parts of this [LW(p) · GW(p)], and so on and so forth.)

To be clear: there are also many examples of the thing going well.  If you count up from nothing, and just note all the places where LessWrong handled these conversations better than genpop, there are many!  More, even, than what I'm highlighting as the bad stuff.

But gardens aren't just about the thriving of the desired plants.  They're also about the non-thriving of the non-desired plants.

There's a difference between "there are many black ravens" and "we've successfully built an environment with no white ravens."  There's a difference between "this place substantially rewards black ravens" and "this place does not reward white ravens; it imposes costs upon them."  It should be possible—no, it should be easy to have a conversation about whether the incidence of white ravens has been sufficiently reduced, separate from the question of the total incidence of black ravens, and to debate what the ratio of white ravens to black ravens needs to be, and how long a white raven should hang out before being chased away, and what it would cost to do things differently, and whether that's worth it, and I notice that this very sentence is becoming pretty defensive, and is emerging in response to past experiences, and a strong expectation that my attempt at nuance and specificity is likely to fail, because the culture does not sufficiently disincentivize projection and strawmanning and misrepresentation, and so attempts-to-be-clear cannot simply be offhand but must be preemptively fortified and made proof against adversarial interpretation and geez, this kind of sucks, no?

In Concentration of Force [LW · GW], which was originally part one of this essay, I mention the process of evaporative cooling, and I want to ask: who is being evaporatively cooled out of LessWrong these days, and is that the feedback loop we want to set up?

I think it isn't.  I think that a certain kind of person—

(one who buys that it's important to stick to the rationality 101 basics even when it's inconvenient, and that even a small percentage of slips along this axis is a pretty big deal)

—is becoming less prevalent on LessWrong, and a certain other kind of person—

(one who doesn't buy the claim that consistency-on-the-small-stuff matters a lot, and/or thinks that there are other higher goals that supersede approximately-never-letting-the-standards-slip)

—is becoming more prevalent, and while I have nothing against the latter in general, I really thought LessWrong was for the former.


Here's my vision of LessWrong:

LessWrong should be a place where rationality has reliable concentration of force.

Where rhetorical trickery does not work.  Where supposition does not get mistaken for fact.  Where people's words are treated as if they mean what they say, and if there seems to be another layer of implication or inference, that is immediately surfaced and made explicit so the hypothesis can be checked, rather than the assumption run with.  Where we are both capable of distinguishing, and careful to distinguish, our interpretations from our observations, and our plausible hypotheses from our justified conclusions.  Where we hold each other to that standard, and receive others holding us to that standard as prosocial and cooperative, because we want help holding the line. Where bad commentary is not highly upvoted just because our monkey brains are cheering, and good commentary is not downvoted or ignored just because our monkey brains boo or are bored.

Perhaps most importantly, where none of the above is left on the level of "c'mon, we all know."  Where bad stuff doesn't go unmentioned because it's just assumed that everyone knows it's bad.  That just results in newcomers not knowing the deal, and ultimately means the standards erode over time.

(A standard people are hesitant or embarrassed or tentative about supporting, or that isn't seen as cool or sophisticated to underline, is not one that endures for very long.)

"Professor Quirrell," said Harry gravely, "all the Muggle-raised students in Hogwarts need a safety lecture in which they are told the things so ridiculously obvious that no wizardborn would ever think to mention them. Don't cast curses if you don't know what they do, if you discover something dangerous don't tell the world about it, don't brew high-level potions without supervision in a bathroom, the reason why there are underage magic laws, all the basics."

I spend a decent chunk of my time doing stuff like upvoting comments that are mostly good, but noting in reply to them specific places in which I think they were bad or confused or norm-violating.  I do this so that I don't accidentally create a social motte-and-bailey, and erode the median user's ability to tell good from bad.

This is effortful work.  I wish more people pitched in, more of the time, the way this user did here [LW(p) · GW(p)] and here [LW(p) · GW(p)] and here [LW(p) · GW(p)] and here [LW(p) · GW(p)].

In my opinion, the archetype of the Most Dangerous Comment is something like this one [LW(p) · GW(p)]:

One of the things that can feel like gaslighting in a community that attracts highly scrupulous people is when posting about your interpretation of your experience is treated as a contractual obligation to defend the claims and discuss any possible misinterpretations or consequences of what is a challenging thing to write in the first place.

This is a bad comment (in context, given what it's replying to).  It's the kind of thing my brain produces, when I lose focus for a minute.

But it sounds good.  It makes you Feel Like You're On The Right Team as you read it, so long as you're willing to overlook the textbook strawmanning it does, of the comment it's replying to.

It's a Trojan horse.  It's just such good-thoughts-wrapped-in-bad-forms that people give a pass to, which has the net effect of normalizing bad forms.

It's when the people we agree with are doing it wrong that we are most in need of standards, firmly held. 

(I have a few theories about why people are abandoning or dismissing or undermining the standards, in each of a few categories.  Some people, I think, believe that it's okay to take up banned weapons as long as the person you're striking at is in the outgroup. Some people seem to think that suffering provides a justification for otherwise unacceptable behavior.  Some people seem to think that you can skip steps as long as you're obviously a good guy, and others seem to think that nuance and detail are themselves signs of some kind of anti-epistemic persuadery.  These hypotheses do not exhaust the space of possibility.)

It is an almost trivial claim that there are not enough reasonable people in the world. There literally never will be, from the position of a group that's pushing for sanity—if the quality of thought and discourse in the general population suddenly rose to match the best of LessWrong, the best of the LessWrongers would immediately set their sights on the next high-water mark, because this sure ain't enough.

What that means is that, out there in the broader society, rationality will approximately always lose the local confrontations.  Battles must be chosen with great care, and the forces of reason meticulously prepared—there will be occasional moments of serendipity when things go well, and the rare hero that successfully speaks a word of sanity and escapes unscathed, but for the most part those victories won't come by accident.

Here, though—here, within the walls of the garden—

A part of me wants to ask "what's the garden for, if not that?  What precisely are the walls trying to keep out?"

In my post on moderating LessWrong, I set forth the following principle:

In no small part, the duty of the moderation team is to ensure that no LessWronger who’s trying to adhere to the site’s principles is ever alone, when standing their ground against another user (or a mob of users) who isn’t.

I no longer think that's sufficient.  There aren't enough moderators for reliable concentration of force.  I think it's the case that LessWrongers trying to adhere to the site's principles are often alone—and furthermore, they have no real reason, given the current state of affairs, to expect not to be alone.

Sometimes, people show up.  Often, if you look on a timescale of days or weeks.  But not always.  Not quickly.  Not reliably.

(And damage done in the meantime is rarely fully repaired.  If someone has broadcast falsehoods for a week and they've been strongly upvoted, it's not enough to just say "Oops, sorry, I was wrong."  That comes nowhere close to fixing what they broke.)

Looking at the commentary in the threads of the last month—looking at the upvotes and the downvotes—looking at what was said, and by whom, and when—

It's not promising.

It's not promising in the sense that the people in the parking lot consider it their responsibility to stand in defense of the parent who left their kids in the car.

That they do so reliably, and enthusiastically.  That they show up in force.  That they show up in force because they expect to be backed up, the way that people in a city expect to be backed up if they push back against someone shouting racist slurs.  That they consider themselves obligated to push back, rather than considering it not-their-problem.

It is a defining characteristic of stag hunts that when everybody actually buys in, the payoff is pretty huge.

It is also a defining characteristic of stag hunts that when critical mass fails to cohere, those who chose stag get burned, and feel cheated, and lose big.

This post is nowhere close to being a sufficient coordination mechanism to cohere a stag hunt.  No one should change their behavior in response to this alone.

But it's a call for a stag hunt.


The elephant in the room, which deserves its own full section but I wasn't able to pull it together:

Standards are not really popular.  Most people don't like them.  Or rather, most people like them in the abstract, but chafe when they get in the way, and it's pretty rare for someone to not think that their personal exception to the standard is more justified than others' violations.  Half the people here, I think, don't even see the problem that I'm trying to point at.  Or they see it, but they don't see it as a problem.

I think a good chunk of LW's current membership would leave or go quiet if we actually succeeded at ratcheting the standards up.

I don't think that's a bad thing.  I'd like to be surrounded by people who are actually trying.  And if LW isn't going to be that place, and it knows that it isn't, I'd like to know that, so I can go off and found it (or just give up).


Terrible Ideas

... because I don't have better ones, yet. 

The target is "rationality has reliable concentration of force."

The current assessment is "rationality does not have reliable concentration of force."

The vector, then, is things which either increase the number of people showing up in true rationalist style, relative to those who are not, or things which increase the power of people adhering to rationalist norms, relative to those who are not.

More of the good thing, and/or less of the bad thing.

Here are some terrible ideas for moving in that direction—for either increasing or empowering the people who are interested in the idea of a rationality subculture, and decreasing or depowering those who are just here to cargo cult a little.

These are all terrible ideas.

These are all

terrible

ideas.

I'm going to say it a third time, because LessWrong is not yet a place where I can rely on my reputation for saying what I actually mean and then expect to be treated as if I meant the thing that I actually said: I recognize that these are terrible ideas.

But you have to start somewhere, if you're going to get anywhere, and I would like LessWrong to get somewhere other than where it was over the past month.  To be the sort of place where doing the depressingly usual human thing doesn't pay off.  Where it's more costly to do it wrong than to do it right.

Clearly, it's not going to "just happen."  Clearly, we need something to riff off of.

The guiding light in front of all of those terrible ideas—the thing that each of them is a clumsy and doomed attempt to reach for—is making the thing that makes LessWrong different be "LessWrong is a place where rationality has reliable concentration of force."

Where rationality is the-thing-that-has-local-superiority-in-most-conflicts.  Where the people wielding good discourse norms and good reasoning norms always outnumber the people who aren't—or, if they can't outnumber them, we at least equip them well enough that they always outgun them.

Not some crazy high-tower thing.  Just the basics, consistently done.

Distinguish inference from observation.

Distinguish feeling from fact.

Expose cruxes, or acknowledge up front that you haven't found them yet, and that this is kind of a shame.

Don't weaponize motte-and-bailey equivocation.

Start from a position of charity and good faith, or explain why you can't in concrete and legible detail.  Cooperate past the first apparent "defect" from your interlocutor, because people have bad days and the typical mind fallacy is a hell of a drug, as is the double illusion of transparency.

Don't respond to someone's assertion of [A] with "But [B] is abhorrent!"  Don't gloss over the part where your argument depends on the assumption that [A→B].

And most importantly of all: don't actively resist the attempts of others to do these things, or to remind others to do them.  Don't sneer, don't belittle, don't dismiss, don't take-it-as-an-attack.  Act in the fashion of someone who wants to be reminded of such things, even when it's inconvenient or triggers a negative emotional reaction.

Until users doing the above (and similar) consistently win against users who aren't, LessWrong is going to miss out on a thing that clearly a lot of us kind of want, and kind of think might actually matter to some other pretty important goals.

Maybe that's fine.  Maybe all we really need is the low-effort rough draft.  Maybe the 80/20 is actually the right balance.  In fact, we're honestly well past the 80/20—LessWrong is at least an 85/37 by this point.

But it's not actually doing the thing, and as far as I can tell it's not really trying to do the thing, either—not on the level of "approximately every individual feels called to put forth a little extra effort, and approximately every individual feels some personal stake when they see the standards being degraded."

Instead, as a collective, we've got one foot on the gas and the other on the brake, and that probably isn't the best strategy for any worthwhile goal [LW · GW].  One way or another, I think we should actually make up our minds, here, and either go out and hunt stag, or split up and catch rabbits.


Author's note: this essay is not as good as I wished it would be.  In particular, it's falling somewhat short of the very standard it's pulling for, in a way that I silver-line as "reaching downward across the inferential gap" but which is actually just the result of me not having the spoons to do this [LW(p) · GW(p)] or this [LW(p) · GW(p)] kind of analysis on each of a dozen different examples. Having spent the past six days improving it, "as good as I wish it would be" is starting to look like an asymptote, so I chose now over never.

373 comments

Comments sorted by top scores.

comment by johnswentworth · 2021-11-06T19:29:30.362Z · LW(p) · GW(p)

There's a vision here of what LessWrong could/should be, and what a rationalist community could/should be more generally. I want to push back against that vision, and offer a sketch of an alternative frame.

The post summarizes the vision I want to push back against as something like this:

What I really want from LessWrong is to make my own thinking better, moment to moment.  To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers.  To be embedded in a context that anti-evokes all those things my brain keeps trying to do, the way being in a church anti-evokes coarse language.

Now, I do think that's a great piece to have in a vision for the LessWrong or the rationalist community. But I don't think it's the central piece, at least not in my preferred vision.

What's missing? What is the central piece?

Fundamentally, the problem with this vision is that it isn't built for a high-dimensional world [LW · GW]. In a high-dimensional world, the hard part of reaching an optimum isn't going-uphill-rather-than-downhill; it's figuring out which direction is best, out of millions of possible directions. Half the directions are marginally-good, half are marginally-bad, but the more important fact is that the vast majority of directions matter very little.

In a high-dimensional world, getting buffeted in random directions mostly just doesn't matter. Only one-part-in-a-million of the random buffeting in a million-dimensional space will be along the one direction that matters; a push along the direction that matters can be one-hundred-thousandth as strong as the random noise and still overwhelm it.

Figuring out the right direction, and directing at least some of our effort that way, is vastly more important than directing 100% of our effort in that direction (rather than a random direction).
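
A minimal numerical sketch of that scaling claim (not part of the original comment; it assumes each unit of buffeting is an isotropic, unit-magnitude random step in d dimensions, and the parameter values below are illustrative):

```python
import numpy as np

# In d dimensions, a unit-magnitude random step has a component of roughly
# N(0, 1/d) along any single fixed direction, so we can simulate just that
# scalar projection instead of full d-dimensional vectors.
rng = np.random.default_rng(0)
d = 1_000_000        # dimensionality of the "world"
steps = 1_000_000    # rounds of buffeting / pushing

# Random buffeting: drift along the direction that matters is a random walk
# with step size ~1/sqrt(d), so it ends up on the order of sqrt(steps/d) = 1.
noise_along_axis = rng.normal(0.0, 1.0 / np.sqrt(d), size=steps).sum()

# Deliberate push: one-hundred-thousandth the magnitude of each noise step,
# but always pointed the same way, so it accumulates linearly to 10.
push_strength = 1e-5
push_along_axis = push_strength * steps

print(f"noise drift along the important axis: {noise_along_axis:+.2f}")
print(f"push displacement along that axis:    {push_along_axis:+.2f}")
```

The noise drift typically lands within a unit or so of zero while the weak-but-consistent push accumulates to 10, and the gap only widens with more steps, since the push grows linearly while the random drift grows as the square root.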

Moving from the abstraction back to the issue at hand... fundamentally, questionable epistemics in this episode of Drama just don't matter all that much. They're the random noise, buffeting us about on a high-dimensional landscape. Maybe finding and fixing organizational problems will lead to marginally more researcher time/effort on alignment, or maybe the drama itself will lead to a net loss of researcher attention to alignment. But these are both mechanisms of going marginally faster or marginally slower along the direction we're already pointed. In a high-dimensional world, that's not the sort of thing which matters much.

If we'd had higher standards for discussion around the Drama, maybe we'd have been more likely to figure out which way was "uphill" along the drama-salient directions - what the best changes were in response to the issues raised. But it seems wildly unlikely that any of the dimensions salient to that discussion were the actual most important dimensions. Even the best possible changes in response to the issues raised don't matter much, when the issues raised are not the actual most important issues.

And that's how Drama goes: rarely are the most important dimensions the most Drama-inducing. Raising site standards is the sort of thing which would help a lot in high-drama discussions, but it wouldn't much help us figure out the most important dimensions.

Another framing: in a babble-and-prune model, obviously raising community standards corresponds to pruning more aggressively. But in a high-dimensional world, the performance of babble-and-prune depends mostly on how good the babble is - random babble will progress very slowly, no matter how good the pruning. It's all about figuring out the right direction in the first place, without having to try every random direction to do so. It fundamentally needs to be a positive process, figuring out techniques to systematically pursue better directions, not just a process of avoiding bad or useless directions. Nearly all the directions are useless; avoiding them is like sweeping sand from a beach.

Replies from: Vaniver, jimmy, Ruby, Duncan_Sabien, tailcalled, tailcalled
comment by Vaniver · 2021-11-07T16:32:08.283Z · LW(p) · GW(p)

I think I agree with the models here, and also want to add a complicating factor that I think impacts the relevance of this.

I think running a site like this in a fully consequentialist way is bad. When you're public and a seed of something, you want to have an easily-understandable interface with the world; you want it to be the case that other people who reason about you (of which there will be many, and who are crucial to your plan's success!) can easily reason about you. Something more like deontology or virtue ethics ("these are the rules I will follow" or "these are the virtues we will seek to embody") makes it much easier for other agents to reason about you.

And so the more that I as a mod (or the mod team in general) rely on our individual prudence or models or so on, the more difficult it becomes for users to predict what will happen, and that has costs. (I still think that it ultimately comes down to our prudence--the virtues that we're trying to embody do in fact conflict sometimes, and it's not obvious how to resolve those conflicts--but one of the things my prudence is considering are those legibility costs.)

And when we try to figure out what virtues we should embody on Less Wrong, I feel much better about Rationality: Common Interest of Many Causes [LW · GW] than I do about "whatever promotes AI safety", even tho I think the 'common interest' dream didn't turn out as well as one might have hoped, looking forward from 2009, and I think AI safety is much closer to 'the only game in town' than it might seem on first glance. Like, I want us to be able to recover if in fact it turns out AI safety isn't that big a deal. I also want LessWrong to survive as a concern even if someone figures out AI safety, in a way that I might not for something like the AI Alignment Forum. I would like people who aren't in tune with x-risk to still be around here (so long as they make the place better).

That said, as pointed out in my other comment [LW(p) · GW(p)], I care more about reaching the heights than I do about raising the sanity waterline or w/e, and I suspect that lines up with "better babble" more than it does "better prune".

Replies from: johnswentworth
comment by johnswentworth · 2021-11-07T16:49:29.979Z · LW(p) · GW(p)

+1 to all this, and in particular I'm very strongly on board with rationality going beyond AI safety. I'm a big fan of LessWrong's current nominal mission to "accelerate intellectual progress", and when I'm thinking about making progress in a high-dimensional world, that's usually the kind of progress I'm thinking about. (... Which, in turn, is largely because intellectual/scientific/engineering progress seem to be the "directions" which matter most for everything else.)

comment by jimmy · 2021-11-08T17:26:50.828Z · LW(p) · GW(p)

I think your main point here is wrong.

Your analysis rests on a lot of assumptions:

1) It's possible to choose a basis which does a good job separating the slope from the level

2) Our perturbations are all small relative to the curvature of the terrain, such that we can model things as an n-dimensional plane

3) "Known" errors can be easily avoided, even in many dimensional space, such that the main remaining question is what the right answers are

4) Maintenance of higher standards doesn't help distinguish between better and worse directions.

5) Drama pushes in random directions, rather than directions selected for being important and easy to fuck up.


1) In a high dimensional space, almost all bases have the slope distributed among many basis vectors. If you can find a basis that has a basis vector pointing right down the gradient and the rest normal to it, that's great. If your bridge has one weak strut, fix it. However, there's no reason to suspect we can always or even usually do this. If you had to describe the direction of improvement from a rotting log to a nice cable stayed bridge, there's no way you could do it simply. You could name the direction "more better", but in order to actually point at it or build a bridge, many many design choices will have to be made. In most real world problems, you need to look in many individual directions and decide whether it's an improvement or not and how far to go. Real world value is built on many "marginal" improvements.

2) The fact that we're even breathing at all means that we've stacked up a lot of them. Almost every configuration is completely non-functional, and being in any way coherent requires getting a lot of things right. We are balanced near optima on many dimensions, even though there is plenty left to go. While almost all "small" deviations have even smaller impact, almost all "large" deviations cause a regression to the mean or at least have more potential loss than gain. The question is whether all perturbations can be assumed small, and the answer is clear from looking at the estimated curvature. On a bad day you can easily exhibit half the tolerance that you do on a good day. Different social settings can change the tolerance by *much* more than that. I could be pretty easily convinced that I'm averaging 10% too tolerant or 10% too intolerant, but a factor of two either way is pretty clearly bad in expectation. In other words, the terrain can *not* be taken as planar.

3) Going uphill, even when you know which way is up, is *hard*, and there is a tendency to downslide. Try losing weight, if you have any to lose. Try exercising as much as you think you should. Or just hiking up a real mountain. Gusts of wind don't blow you up the mountain as often as they push you down; gusts of wind cause you to lose your footing, and when you lose your footing you inevitably degenerate into a high entropy mess that is further from the top. Getting too little sleep, or being yelled at too much, doesn't cause people to do better as often as it causes them to do worse. It causes people to lose track of longer term consequences, and short term gradient following leads to bad long term results. This is because so many problems are non-minimum phase. Bike riding requires counter-steering. Strength training requires weight lifting, and accepting temporary weakening. Getting rewarded for clear thinking requires first confronting the mistakes you've been making. "Knowing which way to go" is an important part of the problem too, and it does become limiting once you get your other stuff in order, but "consistently performs as well as they could, given what they know" is a damn high bar, and we're not there yet. "Do the damn things you know you're supposed to do, and don't rationalize excuses" is a really important part of it, and not as easy as it sounds.

4) Our progress on one dimension is not independent of our ability to progress on the others. Eat unhealthy foods despite knowing better, and you might lose a day of good mental performance that you could have used to figure out "which direction?". Let yourself believe a comforting belief, and that little deviation from the truth can lead to much larger problems in the future. One of the coolest things about LW, in my view, is that people here are epistemically careful enough that they don't shoot themselves in the foot *immediately*. Most people reason themselves into traps so quickly that you either have to be extremely careful with the order and manner in which you present things, or else you have to cultivate an unusual amount of respect so they'll listen for long enough to notice their confusion. LW is *better* at this. LW is not *perfect* at this. More is better. We don't have clear thinking to burn. So much of clear thinking has to do with having room to countersteer that doing anything but maximizing it to the best of our ability is a huge loss in future improvement.

5) Drama is not unimportant, and it is not separable. We are social creatures, and the health and direction of our social structures is a big deal. If you want to get anything done as a community, whether it be personal rationality improvement or collective efforts, the community has to function or that ain't gonna happen. That involves a lot of discussing which norms and beliefs should be adopted, as well as meta-norms and beliefs about how disagreement should be handled, and applying them to relevant cases. Problems with bad thinking become exposed and that makes such discussions both more difficult and more risky, but also more valuable to get right. Hubris that gets you in trouble when talking to others doesn't just go away when making private plans and decisions, but in those cases you do lack someone to call you on it and therefore can't so easily find which direction(s) you are erring in. Drama isn't a "random distraction", it's an error signal showing that something is wrong with your/your community's sense-making organs, and you need those things in order to find the right directions and then take them. It's not the *only* thing, and there are plenty of ways to screw it up while thinking you're doing the right thing (non-minimum phase again), but it is selected (if imperfectly) for being centered around the most important disagreements, or else it wouldn't command the attention that it does.
 

Replies from: johnswentworth, Duncan_Sabien, SaidAchmiz
comment by johnswentworth · 2021-11-08T19:24:06.534Z · LW(p) · GW(p)

This is a great comment. There are some parts which I think are outright wrong (e.g. drama selects for most-important-disagreements), but for the most part it correctly identifies a bunch of shortcomings of the linear model from my comment.

I do think these shortcomings can generally be patched; the linear model is just one way to explain the core idea, and other models lead to the same place. The main idea is something like "in a high dimensional space, choosing the right places to explore is way more important than speed of exploration", and that generalizes well beyond linearity.

I'm not going to flesh that out more right now. This all deserves a better explanation than I'm currently ready to write.

Replies from: jimmy
comment by jimmy · 2021-11-10T19:38:30.571Z · LW(p) · GW(p)

Yeah, I anticipated that the "Drama is actually kinda important" bit would be somewhat controversial. I did qualify that it was selected "(if imperfectly)" :p

Most things are like "Do we buy our scratch paper from walmart or kinkos?", and there are few messes of people so bad that it'd make me want to say "Hey, I know you think what you're fighting about is important, but it's literally less important than where we buy our scratch paper, whether we name our log files .log or .txt, and literally any other random thing you can think of".

(Actually, now that I say this, I realize that it can fairly often look that way and that's why "bikeshedding" is a term. I think those are complicated by factors like "What they appear to be fighting about isn't really what they're fighting about", "Their goals aren't aligned with the goal you're measuring them relative to", and "The relevant metric isn't how well they can select on an absolute scale or relative to your ability, but relative to their own relatively meager abilities".)

In one extreme, you say "Look, you're fighting about this for a reason, it's clearly the most important thing, or at least top five, ignore anyone arguing otherwise".

In another, you say "Drama can be treated as random noise, and the actual things motivating conflict aren't in any way significantly more important than any other randomly selected thing one could attend to, so the correct advice is just to ignore those impulses and plow forward"

I don't think either are very good ways of doing it, to understate it a bit. "Is this really what's important here?" is an important question to keep in mind (which people sometimes forget, hence point 3), but it cannot be treated as a rhetorical question and must be asked in earnest because the answer can very well be "Yes, to the best of my ability to tell" -- especially within groups of higher functioning individuals.

I think we do have a real substantive disagreement in that I think the ability to handle drama skillfully is more important and also more directly tied into more generalized rationality skills than you do, but that's a big topic to get into.

I am, however, in full agreement on the main idea of "in a high dimensional space, choosing the right places to explore is way more important than speed of exploration", and that it generalizes well and is a very important concept. It's actually pretty amusing that I find myself arguing "the other side" here, given that so much of what I do for work (and otherwise) involves face palming about people working really hard to optimize the wrong part of the pie chart, instead of realizing to make a pie chart and work only on the biggest piece or few.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T17:58:36.083Z · LW(p) · GW(p)

If you had to describe the direction of improvement from a rotting log to a nice cable stayed bridge, there's no way you could do it simply. You could name the direction "more better", but in order to actually point at it or build a bridge, many many design choices will have to be made. In most real world problems, you need to look in many individual directions and decide whether it's an improvement or not and how far to go. Real world value is built on many "marginal" improvements.

This was an outstandingly useful mental image for me, and one I suspect I will incorporate into a lot of thoughts and explanations.  Thanks.

EDIT: finished reading the rest of this, and it's tied (with Vaniver's) for my favorite comment on this post (at least as far as the object level is concerned; there are some really good comments about the discussions themselves).

comment by Said Achmiz (SaidAchmiz) · 2021-11-09T04:25:38.875Z · LW(p) · GW(p)

Outstanding comment. (Easily the best I’ve read on Less Wrong in the last month, top five in the last year.)

comment by Ruby · 2021-11-07T01:39:06.113Z · LW(p) · GW(p)

Drama

I object to describing recent community discussions as "drama". Figuring out what happened within community organizations and holding them accountable is essential for us to have a functioning community. [I leave it unargued that we should have community.]

Replies from: johnswentworth
comment by johnswentworth · 2021-11-07T16:43:21.472Z · LW(p) · GW(p)

I agree that figuring out what happened and holding people/orgs accountable is important. That doesn't make the process (at least the process as it worked this time) not drama. I certainly don't think that the massive amount of attention the recent posts achieved can be attributed to thousands of people having a deeply-held passion for building effective organizations.

Replies from: Ruby
comment by Ruby · 2021-11-07T17:08:09.002Z · LW(p) · GW(p)

Not sure if this is what you're getting at. My estimate is that only a few dozen people participated and that I would ascribe to most of them either a desire for good organizations, a desire to protect people or a desire for truth and good process to be followed. I'd put entertainment seeking as a non-trivial motivation for many, and to be responsible for certain parts of the conversation, but not the overall driver.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T17:18:20.884Z · LW(p) · GW(p)

For me personally, they're multiplied terms in the Fermi.  Like, engagement = [desire for good]*["entertainment"]*[several other things].

I wouldn't have been there at all just for the drama.  But also if there was zero something-like-pull, zero something-like-excitement, I probably wouldn't have been there either.  

I don't feel great about this.

Replies from: johnswentworth
comment by johnswentworth · 2021-11-07T17:24:44.479Z · LW(p) · GW(p)

This sounds right, I think it generalizes to a lot of other people too.

Replies from: Self_Optimization
comment by Self_Optimization · 2021-11-27T01:48:14.809Z · LW(p) · GW(p)

To expand on this (though I only participated in the sense of reading the posts and a large portion of the comments), my reflective preference was to read through enough to have a satisfactorily-reliable view of the evidence presented and how it related to the reliability of data and analyses from the communities in question. And I succeeded in doing so (according to my model of my current self’s upper limitations regarding understanding of a complex sociological situation without any personally-observed data).

But I could feel that the above preference was being enforced by willpower which had to compete against a constantly (though slowly) growing/reinforced sense of boredom from the monotony of staying on the same topic(s) in the same community with the same broad strokes of argument far beyond what is required to understand simpler subjects. If there had been less drama, I would have read far less into the comments, and missed a few informative discussions regarding the two situations in question (CFAR/MIRI and Leverage 1.0).

So basically, the “misaligned subagent-like-mental-structures” manifestation of akrasia is messing things up again.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T20:17:48.664Z · LW(p) · GW(p)

(I like the above and agree with most of it and am mulling and hope to be able to reply substantively, but in the meantime I wanted to highlight one little nitpick that might be more than a nitpick.)

Maybe finding and fixing organizational problems will lead to marginally more researcher time/effort on alignment, or maybe the drama itself will lead to a net loss of researcher attention to alignment. But these are both mechanisms of going marginally faster or marginally slower along the direction we're already pointed. In a high-dimensional world, that's not the sort of thing which matters much.

I think this leaves out a thing which is an important part of most people's values (mine included), which is that there's something bad about people being hurt, and there's something good about not hurting people, and that's relevant to a lot of people (me included) separate from questions of how it impacts progress on AI alignment.  Like, on the alignment forum, I get subordinating people's pain/suffering/mistreatment to questions of mission progress (maybe), but I think that's not true of a more general place like LessWrong.

Put another way, I think there might be a gap between the importance you reflectively assign to the Drama, and the importance many others reflectively assign to it.  A genuine values difference.

I do think that on LessWrong, even people's pain/suffering/mistreatment shouldn't trump questions of truth and accuracy, though.  Shouldn't encourage us to abandon truth and accuracy.

Replies from: johnswentworth
comment by johnswentworth · 2021-11-07T17:55:46.661Z · LW(p) · GW(p)

Addendum to the quoted claim:

Maybe finding and fixing organizational problems will lead to marginally more researcher time/effort on alignment, or maybe the drama itself will lead to a net loss of researcher attention to alignment. But these are both mechanisms of going marginally faster or marginally slower along the direction we're already pointed. In a high-dimensional world, that's not the sort of thing which matters much.

... and it's also not the sort of thing which matters much for reduction of overall pain/suffering/mistreatment, even within the community. (Though it may be the sort of thing which matters a lot for public perceptions of pain/suffering/mistreatment.) This is a basic tenet of EA: the causes which elicit great public drama are not highly correlated with the causes which have lots of low-hanging fruit for improvement. Even within the rationalist community, our hardcoded lizard-brain drama instincts remain basically similar, and so I expect the same heuristic to apply: public drama is not a good predictor of the best ways to reduce pain/suffering/mistreatment within the community.

But that's a post-hoc explanation. My actual gut-level response to this comment was an aesthetic feeling of danger/mistrust/mild ickiness, like it's a marker for some kind of outgroup membership. Like, this sounds like what (a somewhat less cartoonish and more intelligent version of) Captain America would say, and my brain automatically tags anything Captain-America-esque as very likely to be mistaken in a way that actively highjacks moral/social intuitions. That's an aesthetic I actively cultivate [LW · GW], to catch exactly this sort of argument. I recommend it.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T20:09:20.250Z · LW(p) · GW(p)

FWIW, I agree with this (to the extent that I've actually understood you).  Like, I think this is compatible with the OP, and do not necessarily disagree with a heuristic of flagging Captain America statements.  If 80% of them are bad, then the 20% that are good should indeed have to undergo scrutiny.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-07T23:32:00.263Z · LW(p) · GW(p)

What is this “Captain America” business (in this context)? Would you mind explaining, for those of us who aren’t hip with the teen culture or what have you?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T00:02:54.357Z · LW(p) · GW(p)

My guess is that it's something like: Captain America makes bold claims with sharp boundaries that contain a lot of applause-light spirit, and tend to implicitly deny nuance.  They are usually in the right direction, but "sidesy" and push people more toward being in disjoint armed camps.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-08T00:58:41.398Z · LW(p) · GW(p)

Any chance of getting an example of such bold claims? (And, ideally, confirmation from johnswentworth that this is what’s meant?)

(I ask only because I really have no knowledge of the relevant comic books on which to base any kind of interpretation of this part of the discussion…)

Replies from: johnswentworth, Duncan_Sabien
comment by johnswentworth · 2021-11-08T19:28:10.122Z · LW(p) · GW(p)

I explain a bit more of what I mean here: http://seekingquestions.blogspot.com/2017/06/be-more-evil.html

(Disclaimer: that's an old essay which isn't great by my current standards, and certainly doesn't make much attempt to justify the core model. I think it's pointing to the right thing, though.)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T01:02:44.025Z · LW(p) · GW(p)

http://www.ldssmile.com/wp-content/uploads/2014/09/3779149-no+you+move+cap+says.jpg 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-08T05:06:30.673Z · LW(p) · GW(p)

Hmm, I see.

But I am fairly sure that I endorse this sentiment. Or do you think there is a non-obvious interpretation where he’s wrong?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T05:43:54.730Z · LW(p) · GW(p)

I endorse this one myself (have used it in an essay before).  But it's definitely ... er, well, it emboldens people who are wrong (but unaware of it) just as much as it emboldens people who are right?

I dunno.  I can't pass John's ITT here; just trying to help.  =)

Replies from: Viliam, Raemon
comment by Viliam · 2021-11-08T11:19:21.373Z · LW(p) · GW(p)

It also encourages nitpicking about details where people disagree [LW · GW], which means that if you have several people like this on the same team, the arguing probably never stops.

comment by Raemon · 2021-11-08T06:31:41.795Z · LW(p) · GW(p)

John’s linked article went into it in detail:

http://seekingquestions.blogspot.com/2017/06/be-more-evil.html

comment by tailcalled · 2021-11-07T08:28:52.019Z · LW(p) · GW(p)

I also think another problem here is that you are doing a Taylor expansion of the value of the community with respect to the various parameters it could have. This only really works if the proposed change is small or if the value is relatively globally linear. However, there can be many necessary-but-not-sufficient parameters, in which case the function isn't linear globally, but instead has a small peak surrounded by many directions of flatness.

It seems to me that rationality with regards to local deductions, ingroup/outgroup effects, etc. could be necessary-but-not-sufficient. Without these, it's much easier to get thrown off course to some entirely misguided direction - but as you point out, having it does not necessarily provide the right guidance to make progress.
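
A minimal numerical sketch of that point (purely illustrative; the toy value function and the parameter values are invented, not anything tailcalled specified): treat community value as a product of capped, necessary factors, and compare the true value against a first-order Taylor estimate for a small change versus a large one.

```python
import numpy as np

def community_value(params):
    # Toy model: every parameter is necessary but none is sufficient,
    # so value is the product of factors that are each capped at 1.
    return np.prod(np.clip(params, 0.0, 1.0))

def linear_estimate(params, baseline, grad):
    # First-order Taylor estimate of the value around the baseline point.
    return community_value(baseline) + grad @ (params - baseline)

baseline = np.array([0.9, 0.9, 0.9])   # a reasonably healthy community
eps = 1e-6
grad = np.array([
    (community_value(baseline + eps * np.eye(3)[i]) - community_value(baseline)) / eps
    for i in range(3)
])                                      # numerical gradient at the baseline

small_change = baseline + np.array([0.05, 0.0, 0.0])
large_change = np.array([5.0, 0.9, 0.0])  # one parameter pushed way up, one necessary parameter lost

for point in (small_change, large_change):
    print(round(community_value(point), 3), round(linear_estimate(point, baseline, grad), 3))
# The two numbers roughly agree for the small change (~0.77 vs ~0.77) and diverge
# badly for the large one (true value 0, linear estimate ~3.3): the expansion
# only works when the proposed change is small or the function is globally linear.
```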

comment by tailcalled · 2021-11-07T07:49:24.263Z · LW(p) · GW(p)

It fundamentally needs to be a positive process, figuring out techniques to systematically pursue better directions, not just a process of avoiding bad or useless directions. Nearly all the directions are useless; avoiding them is like sweeping sand from a beach.

I think this depends on the nature of the bad direction. A usual bad direction might just use up some smallish chunk of a single person's time, which on its own isn't that big of a problem (but it does add up, leading to the importance of the things you mentioned). However, one problem with certain topics like drama (sorry Ruby, I don't have a better word even though I realize it's problematic) is that it is highly motivating. This means that it easily attracts attention from many more people, that these people will spend much more time proportionally engaging with it, and that it has much more lasting consequences on the community. Thus getting it right seems to matter more than it does for the typical topic.

Replies from: johnswentworth
comment by johnswentworth · 2021-11-07T17:07:03.598Z · LW(p) · GW(p)

Yup, I'm glad someone brought this up. I think the right model here is Demons in Imperfect Search [LW · GW]. Transposons are a particularly good analogy - they're genes whose sole function is to copy-and-paste themselves into the genome. If you don't keep the transposons under control somehow, they'll multiply and quickly overrun everything else, killing the cell.

So keeping the metaphorical transposons either contained or subcritical is crucial. I think LessWrong handled that basically-successfully with respect to recent events: keeping demon threads at least contained is exactly what the frontpage policy is supposed to do, and the demon threads indeed stayed off the frontpage. It was probably supercritical for a while, but it was localized, so it died down in the end and the rest of the site and community are still basically intact.

(Again, as I mentioned in response to Ruby's comment, none of this is to say that the recent discussions didn't serve any useful functional role. Even transposons serve a functional role in some organisms. But the discussions certainly seemed to grow in a way decoupled from any plausible estimate of their usefulness.)

Important thing to note from this model: the goal with demons is just to keep them subcritical and/or contained. Pushing them down to zero doesn't add much.
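
For concreteness, here is a tiny simulation of the subcritical-versus-supercritical distinction (an illustrative sketch only; the Poisson reply model and the specific numbers are assumptions, not anything johnswentworth specified):

```python
import numpy as np

def simulate_thread(r_mean, max_generations=30, cap=10_000, seed=0):
    # Toy branching process: each comment in a generation provokes a
    # Poisson(r_mean) number of reply-comments in the next generation.
    rng = np.random.default_rng(seed)
    active, total = 1, 1
    for _ in range(max_generations):
        active = int(rng.poisson(r_mean, size=active).sum()) if active else 0
        total += active
        if active == 0 or total > cap:  # dies out, or something external contains it
            break
    return total

print(simulate_thread(0.8))  # subcritical: the thread fizzles after a handful of comments
print(simulate_thread(1.5))  # supercritical: it keeps growing until it hits the cap
```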

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T17:15:00.967Z · LW(p) · GW(p)

Agreement on the distinction between subcritical/contained and zero, and that there's usually not value in going all the way to zero.

comment by supposedlyfun · 2021-11-06T14:35:46.210Z · LW(p) · GW(p)

Executive summary: I have no idea what you're talking about.

Standards are not really popular.  Most people don't like them.  Half the people here, I think, don't even see the problem that I'm trying to point at.  Or they see it, but they don't see it as a problem.

I gather that you're upset about how the Leverage conversation went, and also Cancel Culture, so I assume your chief proposition is that LessWrong is canceling Geoff Anders; but you haven't actually made that case, just vaguely gestured with words. 

I think that a certain kind of person is becoming less prevalent on LessWrong, and a certain other kind of person is becoming more prevalent, and while I have nothing against the other kind, I really thought LessWrong was for the first group.

What are the two kinds of persons? Really, I honestly do not know what you are claiming here. Repeat: I don't have even a foggy guess as to what your two kinds of person are. Am I "a certain other kind of person"? How can I know?

Distinguish feeling from fact.

This post has virtually no facts. It has short, punchy sentences with italics for emphasis. It is written like a hortatory sermon. Its primary tool is rhetoric. The first quarter of it is essentially restating parts of the sequences. Then you point to some comments and are upset that they got upvoted. Others, you're upset they haven't been upvoted. I have no idea whatsoever why you feel these things, and you don't elaborate. I am apparently one of the people who "don't even see the problem that [you're] trying to point at." 

This comment was much longer in draft, but I've deleted the remainder because I don't want to seem "impatient" or "sneering". I'm just confused: You wrote all these words intending to convince people of something, but you don't specify what it is, and you don't use the tools we typically use to convince (facts, reliable sources, syllogistic reasoning, math, game theory...). Am I just not part of the intended audience? If so, who are they?

Replies from: MondSemmel, Duncan_Sabien, Duncan_Sabien
comment by MondSemmel · 2021-11-06T15:58:39.183Z · LW(p) · GW(p)

Yikes, despite Duncan's best attempts at disclaimers and clarity and ruling out what he doesn't mean, he apparently still didn't manage to communicate the thing he was gesturing at. That's unfortunate. (And it also makes me worry whether I have understood him correctly.)

I will try to explain some of how I understand Duncan.

I have not read the first Leverage post and so cannot comment on those examples, but I have read jessicata's MIRI post.

and this [LW · GW] still not having incorporated the extremely relevant context provided in this [LW(p) · GW(p)], and therefore still being misleading to anyone who doesn't get around to the comments, and the lack of concrete substantiation of the most radioactive parts of this [LW(p) · GW(p)], and so on and so forth.

As I understand it: This post [LW · GW] criticized MIRI and CFAR by drawing parallels to Zoe Curzi's experience of Leverage. Having read the former but not the latter, the former seemed... not very substantive? Making vague parallels rather than object-level arguments? Merely mirroring the structure of the other post? In any case, there's a reason why the post sits at 61 karma with 171 votes and 925 comments, and that's not because it was considered uncontroversially true. Similarly, there's a reason why Scott Alexander's comment in response [LW(p) · GW(p)] has 362 karma (6x that of the original post; I don't recall ever seeing anything remotely like that on the site): the information in the original post is incomplete or misleading without this clarification.

The problem at this point is that this ultra-controversial post on LW does not have something like a disclaimer at the top, nor would a casual reader notice that it has lots of downvotes. All the nuance is in the impenetrable comments. So anyone who just reads that post without wading into the comments will get misinformed.

As for the third link in Duncan's quote, it's pointing at an anonymous comment supposedly by a former CFAR employee, which was strongly negative of CFAR. But multiple CFAR employees replied and did not have the same impressions of their employer. Which would have been a chance for dialogue and truthseeking, except... that anonymous commenter never followed up to reply, so we ended up with a comment thread of 41 comments which started with those anonymous and unsubstantiated claims and never got a proper resolution (and yet that original comment is strongly upvoted).


Does that make things a bit clearer? In all those cases Duncan (as I understand him) is pointing at things where the LW culture fell far short of optimal; he expects us to do better. (EDIT: Specifically, and to circle back on the Leverage stuff: He expects us to be truthseeking period, to have the same standards of rigor both for critics and defenders, etc. I think he worries that the culture here is currently too happy to upvote anything that's critical (e.g. to encourage the brave act of speaking out), without extending the same courtesy to those who would speak out in defense of the thing being criticized. Solve for the equilibrium, and the consequences are not good.)

Personally I'm not so sure to which extent "better culture" is the solution (as I am skeptical of the feasibility of anything which requires time and energy and willpower), but have posted several suggestions [LW(p) · GW(p)] for how "better software" could help in specific situations (e.g. mods being able to put a separate disclaimer above sufficiently controversial / disputed posts).

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T17:38:07.077Z · LW(p) · GW(p)

This comment was much longer in draft, but I've deleted the remainder because I don't want to seem "impatient" or "sneering". I'm just confused: You wrote all these words intending to convince people of something, but you don't specify what it is, and you don't use the tools we typically use to convince (facts, reliable sources, syllogistic reasoning, math, game theory...). Am I just not part of the intended audience? If so, who are they?

Thanks very much for taking the time to include this paragraph; it's doing precisely the good thing.  It helps my brain not e.g. slide into a useless and unnecessary defensiveness or round you off to something you're not trying to convey.

I gather that you're upset about how the Leverage conversation went, and also Cancel Culture, so I assume your chief proposition is that LessWrong is canceling Geoff Anders; but you haven't actually made that case, just vaguely gestured with words.

That's not, in fact, my chief proposition.  I do claim that something-like-the-mass-of-users is doing something-resembling-canceling-Leverage (such that e.g. if I were to propose porting over some specific piece of Leverage tech to LW or an EA org's internal culture, people would panic in roughly the same way people panic about the concept of eugenics). 

But that's an instance of what I was hoping to talk about, not the main point, which is why I decided not to spend a ton of time digging into all of the specific examples.

What are the two kinds of persons?

In short: people who think that it's important to stick to the rationality 101 basics even when it's inconvenient, versus those willing to abandon them (and upvote others abandoning them).

This post has virtually no facts. It has short, punchy sentences with italics for emphasis. It is written like a hortatory sermon. Its primary tool is rhetoric.

Yes.  I'm trying to remind people why they should care.  Note, though, that in combination with Concentration of Force, it's saying a much more tightly defined and specific thing—"here's a concept, and I'd like to apply that concept to this social domain."

EDIT: in the discussion below, some people seem to have taken this as an admission of sorts, as opposed to a "sure, close enough."  The words "exhortatory" and "rhetoric" are labels, each of which can cover a wide range of space; something can be a valid match for one of those labels yet not at all central.

I was acknowledging "sure, there's some degree to which this post could be fairly described as exhortatory or rhetoric."  I was not agreeing with "...and therefore any and all complaints one has about 'exhortation' or 'rhetoric' are fair to apply here."  I don't think supposedlyfun was trying to pull a motte-and-bailey or a fallacy-of-the-grey; that's why I replied cooperatively.  Others, though, do seem to me like they are trying to, and I am not a fan.

I have no idea whatsoever why you feel these things, and you don't elaborate.

I did elaborate on one.  Would you be willing to choose another from the linked examples?  The one that's the most confusing or least apparently objectionable?  I don't want to take hours and hours, but I'm certainly willing to go deep on at least a couple.

Replies from: supposedlyfun, AllAmericanBreakfast
comment by supposedlyfun · 2021-11-07T01:49:57.573Z · LW(p) · GW(p)

I spent 15 minutes re-reading the thread underneath orthonormal's comment to try to put myself in your head. I think maybe I succeeded, so here goes, but from a person whose job involves persuading people, it's Not Optimal For Your Argument that I had to do this to engage with your model here, and it's potentially wasteful if I've failed at modeling you. 

I read both of the comments discussed below, at the time I was following the original post and comments, but did not vote on either.

***

orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others

orthonormal P2 [which I inferred using the Principle of Charity]: Most of the time, people who immediately come across as cult leaders are trying to start a cult 

Duncan P1: It's bad when LW upvotes comments with very thin epistemic rigor

Duncan P2: This comment has very thin epistemic rigor because it's based on a few brief conversations

Gloss: I don't necessarily agree with your P2. It's not robust, but nor is it thin; if true, it's one person's statement that, based on admittedly limited evidence, they had a high degree of confidence that Anders wanted to be a cult leader. I can review orthonormal's post history to conclude that ze is a generally sensible person who writes as though ze buys into LW epistemics, and is also probably known by name to various people on the site, meaning if Anders wanted to sue zir for defamation, Anders could (another social and financial cost that orthonormal is incurring). Conditional on Anders not being a cult leader, I would be mildly surprised if orthonormal thought Anders was a cult leader/wannabe. 

Also, this comment [LW · GW]--which meets your epistemic standards, right? If so, did it cause you to update on the "Leverage is being canceled unfairly" idea?

***

Matt P1: I spent hundreds of hours talking to Anders

Matt P2: If he were a cult leader/wannabe, I would have noticed

Duncan P1: It's bad when LW doesn't upvote comments with good epistemic rigor

Duncan P2: This comment has good epistemic rigor because Matt has way more evidence than orthonormal

Gloss: [Edit: Upon reflection, I have deleted this paragraph. My commentary is not germane to the issue that Duncan and I are debating.]

***

The karma score disparity is currently 48 on 39 votes, to 5 on 26 votes. 

Given my thought process above, which of the comments should I have strongly upvoted, weakly upvoted, done nothing to, weakly downvoted, or strongly downvoted, on your vision of LW?

Or: which parts of my thought process are inimical to your vision of LW?

***

If it helps you calibrate your response, if any, I spent about 45 minutes researching, conceptualizing, drafting, and editing this comment.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T02:37:30.423Z · LW(p) · GW(p)

Thank you for the effort!  Strong upvoted.

Quick point to get out of the way: re: the comment that you thought would likely meet my standards, yes, it does; when I hovered over it I saw that I had already (weak) upvoted it.

Here's my attempt to rewrite orthonormal's first comment; what I would have said in orthonormal's shoes, if I were trying to say what I think orthonormal is trying to say.

All right, here comes some subjective experience.  I'm offering this up because it seems relevant, and it seems like we should be in wide-net data gathering mode.

I met Geoff Anders at our 2012 CFAR workshop, and my overwhelming impression was "this person wants to be a cult leader."  This was based on [specific number of minutes] of conversation.

The impression stuck with me strongly enough that I felt like mentioning it maybe as many as [specific number] of times over the years since, in various conversations.  I was motivated enough on this point that it actually somewhat drove a wedge between me and two increasingly-Leverage-enmeshed friends, in the mid-2010's.

I feel like this is important and relevant because it seems like yet again we're in a situation where a bunch of people are going "gosh, such shock, how could we have known?"  The delta between my wannabe-cult-leader-detectors and everyone else's is large, and I don't know its source, but the same thing happened with [don't name him, don't summon him], who was booted from the Berkeley community for good reason.

I don't think opaque intuition should be blindly followed, but as everyone is reeling from Zoe's account and trying to figure out how to respond, one possibility I want to promote to attention is hey, maybe take a minute to listen to people like me?

Not as anything definitive, but if I do an honest scan over the past decade, I feel like I'm batting ... 3/5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1/5, and that means there's probably something to be learned from me and people like me.

If you're actually looking for ways to make this better in the future, anyway.


orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others (i.e. this wasn't just idle hostility)

orthonormal P2 [which I inferred using the Principle of Charity]: This is relevant because, separate from the question of whether my detectors are accurate in an absolute sense, they're more accurate than whatever it is all of you are doing

Duncan P1: It's bad when LW upvotes comments that aren't transparent about what they're trying to accomplish and via what channels they're trying to accomplish it

Duncan P2: orthonormal's original comment is somewhat bad in this way; it's owning its content on the surface but the implicature is where most of the power lies; the comment does not on its face say why it exists or what it's trying to do in a way that an autistic ten-year-old could parse (evidence: I felt myself becoming sort of fuzzy/foggy and confused, reading it).  As written, I think its main goal is to say "I told you so and also I'm a better judge of things than all of you"?  But it doesn't just come right out and say that and then pay for it, the way that I say in the OP above that I'm often smarter than other people in the room (along with an acknowledgement that there's a cost for saying that sort of thing).

I do think that the original version obfuscated some important stuff (e.g. there's a kind of motte-and-bailey at the heart of "we met at our CFAR workshop"; that could easily imply "we spent fifteen intensely intimate hours in one another's company over four days" or "we spoke for five minutes and then were in the same room for a couple of classes").  That's part of it.

But my concern is more about the delta between the comments' reception.  I honestly don't know how to cause individuals voting in a mass to get comments in the right relative positions, but I think orthonormal's being at 48 while Matt's is at 5 is a sign of something wrong.

I think orthonormal's belongs at something like 20, and Matt's belongs at something like 40.  I voted according to a policy that attempts to cause that outcome, rather than weak upvoting orthonormal's, as I otherwise would have (its strengths outweigh its flaws and I do think it was a positive contribution).

In a world where lots of LessWrongers are tracking the fuzziness and obfuscation thing, orthonormal's comment gets mostly a bunch of small upvotes, and Matt's gets mostly a bunch of strong upvotes, and they both end up in positive territory but with a clear (ugh) "status differential" that signals what types of contributions we want to more strongly reward.


As for Matt's comment:

Matt's comment in part deserves the strong upvote because it's a high-effort, lengthy comment that tries pretty hard to go slowly and tease apart subtle distinctions and own where it's making guesses and so forth; agnostic of its content, my prior a third of the way through was "this will ultimately deserve strong approval."

I don't think most of Matt's comment was on the object level, i.e. comments about Anders and his likelihood of being a cult leader, wannabe or otherwise.

I think that it was misconstrued as just trying to say "pshhh, no!" which is why it hovers so close to zero.

My read of Matt's comment:

Matt P1: It's hard to tell what Ryan and orthonormal are doing

Matt P2: There's a difference between how I infer LWers are reading these comments based on the votes, and how I think LWers ought to interpret them

Matt P3: Here's how I interpret them

Matt P4: Here's a bunch of pre-validation of reasons why I might be wrong about orthonormal, both because I actually might and because I'm worried about being misinterpreted and want to signal some uncertainty/humility here.

Matt P5: Ryan's anecdote seems consistent, to me, with a joke of a form that Geoff Anders makes frequently.

Matt P6: My own personal take is that Geoff is not a cult leader and that the evidence provided by orthonormal and Ryan should be considered lesser than mine (and here's why)

Matt P7-9: [various disclaimers and hedges]

Duncan P1: This comment is good because of the information it presents

Duncan P2: This comment is good because of the way it presents that information, and the way it attempts to make space for and treat well the previous comments in the chain

Duncan P3: This comment is good because it was constructed with substantial effort

Duncan P4: It's bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws (the unclear motive thing).


I don't disagree with either of your glosses, but most notably they missed the above axes.  Like, based on your good-faith best-guess as to what I was thinking, I agree with your disagreements with that; your pushback against hypothesized-Duncan who's dinging orthonormal for epistemic thinness is good pushback.

But I think my version of orthonormal's comment is stronger.  I don't think their original comment was not-worth-writing, and I wouldn't say "don't contribute if you're not going to put forth as much effort as I did in my rewrite," but I do think it was less worth writing than the rewrite.  I think the rewrite gives a lot more, and ... hypnotizes? ... a lot less.

As for your gloss on Matt's comment specifically, I just straightforwardly like it; if it were its own reply and I saw it when revisiting the thread I would weak or strong upvote it.  I think it does exactly the sane-itizing light-shining that I'm pulling for, and which, it feels to me, was only sporadically (and not reliably) present throughout the discussions.

I took however many minutes it's been since you posted your reply to write this.  30-60?

Replies from: orthonormal, supposedlyfun, Duncan_Sabien
comment by orthonormal · 2021-11-07T20:14:34.541Z · LW(p) · GW(p)

Thanks, supposedlyfun, for pointing me to this thread.

I think it's important to distinguish my behavior in writing the comment (which was emotive rather than optimized - it would even have been in my own case's favor to point out that the 2012 workshop was a weeklong experiment with lots of unstructured time, rather than the weekend that CFAR later settled on, or to explain that his CoZE idea was to recruit teens to meddle with the other participants' CoZE) from the behavior of people upvoting the comment.

I expect that many of the upvotes were not of the form "this is a good comment on the meta level" so much as "SOMEBODY ELSE SAW THE THING ALL ALONG, I WORRIED IT WAS JUST ME".

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T20:45:33.472Z · LW(p) · GW(p)

This seems true to me.  I'm also feeling a little bit insecure or something and wanting to reiterate that I think that particular comment was a net-positive addition and in my vision of LessWrong would have been positively upvoted.

Just as it's important to separate the author of a comment from the votes that comment gets (which they have no control over), I want to separate a claim like "this being in positive territory is bad" (which I do not believe) from "the contrast between the total popularity of this and that is bad."

I'm curious whether I actually passed your ITT with the rewrite attempt.

Replies from: orthonormal
comment by orthonormal · 2021-11-10T04:24:06.548Z · LW(p) · GW(p)

Thanks for asking about the ITT. 

I think that if I put a more measured version of myself back into that comment, it has one key difference from your version.

"Pay attention to me and people like me" is a status claim rather than a useful model.

I'd have said "pay attention to a person who incurred social costs by loudly predicting one later-confirmed bad actor, when they incur social costs by loudly predicting another". 

(My denouncing of Geoff drove a wedge between me and several friends, including my then-best friend; my denouncing of the other one drove a wedge between me and my then-wife. Obviously those rifts had much to do with how I handled those relationships, but clearly it wasn't idle talk from me.)

Otherwise, I think the content of your ITT is about right. 

(The emotional tone is off, even after translating from Duncan-speak to me-speak, but that may not be worth going into.)

For the record, I personally count myself 2 for 2.5 on precision. (I got a bad vibe from a third person, but didn't go around loudly making it known; and they've proven to be not a trustworthy person but not nearly as dangerous as I view the other two. I'll accordingly not name them.)

comment by supposedlyfun · 2021-11-07T22:00:16.901Z · LW(p) · GW(p)

I'm going to take a stab at cruxing here.

Whether it's better for the LW community when comments explicitly state a reasonable amount of the epistemic hedging that they're doing.

Out of all the things you would have added to orthonormal's comment, the only one that I didn't read at the time as explicit or implicit in zir comment was, "Not as anything definitive, but if I do an honest scan over the past decade, I feel like I'm batting ... 3/5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1/5". I agree it would be nice if people gave more information about their own calibration where available. I don't know whether it was available to orthonormal.

As for the rest, I'm sticking that at the end of this comment as a sort of appendix.

If I'm right about the crux, that is totally not in the set of Things That I Thought You Might Have Been Saying after reading the original post. Re-reading the original post now, I don't see how I could have figured out that this is what our actual disagreement was.

I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post. Honestly, the intensity of the post seems disproportionate to the size of the disagreement, and also the likelihood that people are going to disagree with you to the point that they want to not be in a community with you anymore. I don't feel like we need to fork anything based on the distance between our positions.

Why do you think the intensity scalars are so different between us? 

***

All right, here comes some subjective experience.  I'm offering this up because it seems relevant, and it seems like we should be in wide-net data gathering mode.

The comment makes it clear that it is subjective experience. I wouldn't expect ortho to add it if ze didn't think it was relevant. People sharing their impressions of a situation to get at the truth, which seemed to be the point of the post and comments, just is wide-net data gathering mode.

I met Geoff Anders at our 2012 CFAR workshop, and my overwhelming impression was "this person wants to be a cult leader."  This was based on [specific number of minutes] of conversation.

I don't expect ortho to remember the number of minutes from nine years ago. 

The impression stuck with me strongly enough that I felt like mentioning it maybe as many as [specific number] of times over the years since, in various conversations.  

I don't expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn't have attached much weight to it for that reason. 

I was motivated enough on this point that it actually somewhat drove a wedge between me and two increasingly-Leverage-enmeshed friends, in the mid-2010's.

This is in there well enough that I don't see any value in saying it with more words. Crux?

I feel like this is important and relevant because it seems like yet again we're in a situation where a bunch of people are going "gosh, such shock, how could we have known?"  

This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn't really matter; to me, zir shared experience was just additional data.

The delta between my wannabe-cult-leader-detectors and everyone else's is large, and I don't know its source, but the same thing happened with [don't name him, don't summon him], who was booted from the Berkeley community for good reason.

This is in there well enough that I don't see any value in saying it with more words. Crux?

I don't think opaque intuition should be blindly followed, but as everyone is reeling from Zoe's account and trying to figure out how to respond, one possibility I want to promote to attention is hey, maybe take a minute to listen to people like me?

"Hey maybe take a minute to listen to people like me" is implicit in the decision to share one's experience. Crux?

Not as anything definitive, but if I do an honest scan over the past decade, I feel like I'm batting ... 3/5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1/5, and that means there's probably something to be learned from me and people like me.

See above.

If you're actually looking for ways to make this better in the future, anyway.

I don't think ortho would have shared zir experience if ze didn't think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me. Crux?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T22:25:57.728Z · LW(p) · GW(p)

I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post.

Hmmm, something has gone wrong.  This is not the case, and I'm not sure what caused you to think it was the case.

"How explicit comments need to be regarding their own epistemic status" is a single star in the constellation of considerations that caused me to write the post.  It's one of the many ways in which I see people doing things that slightly decrease our collective ability to see what's true, in a way that compounds negatively, where people might instead do things that slightly increase our collective ability, in a way that compounds positively.

But it's in no way the central casus belli of the OP.  The constellation is.  So my answer to "Why do you think the intensity scalars are so different between us?" is "maybe they aren't?  I didn't mean the thing you were surprised by."

I don't expect ortho to remember the number of minutes from nine years ago...I don't expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn't have attached much weight to it for that reason.

Here, I was pulling for the virtue of numeric specificity, which I think is generally understood on LW. I'm reminded of the time that some researchers investigated what various people meant by the phrase "a very real chance," and found that at least one of them meant 20% and at least one of them meant 80% (which are opposites).  

It's true that numbers aren't super reliable, but even estimated/ballpark numbers (you'll note I wrote the phrase "as many as" and imagined ortho stating a ceiling) are much better for collective truth-tracking than wide-open vague phrases that allow people with very different interpretations to be equally confident in those interpretations.  The goal, after all, at least in my view, is to help us narrow down the set of possible worlds consistent with observation.  To provide data that distinguishes between possibilities.

The comment makes it clear that it is subjective experience.

True.  (I reiterate, feeling a smidge defensive, that I've said more than once that the comment was net-positive as written, and so don't wish to have to defend a claim like "it absolutely should have been different in this way!"  That's not a claim I'm making.  I'm making the much weaker claim that my rewrite was better.  Not that the original was insufficient.)

The thing that I'm pulling for, with the greater explicitness about its subjectivity ...

Look, there's this thing where sometimes people try to tell each other that something is okay.  Like, "it's okay if you get mad at me."

Which is really weird, if you interpret it as them trying to give the other person permission to be mad.

But I think that's usually not quite what's happening?  Instead, I think the speaker is usually thinking something along the lines of:

Gosh, in this situation, anger feels pretty valid, but there's not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way.  I don't want to do that, and I don't want them to be holding back, out of a fear that I will do that.  So I'm going to signal in advance something like, "I will not resist or punish your anger."  Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.

Similarly, yes, it was obvious that the comment was subjective experience.  But there's nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience.  It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity.  It signals to them that you won't take that as an attack, or an attempt to delegitimize your contribution.  It makes it easier to see and think clearly, and it gives the other person some handles to grab onto.  "I'm not one of those people who's going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth."

Again, I am not claiming, and have not at any point claimed, that ortho's comment needed to do this.  But I think it's clearly stronger if it does.

This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn't really matter; to me, zir shared experience was just additional data.

I validate that.  But I suspect you would not claim that their reason doesn't matter at all, to anyone.  And I suspect you would not claim that a substantial chunk of LWers aren't guessing or intuiting or modeling or projecting reasons, and then responding based on the cardboard cutouts in their minds.  The rewrite included more attempts to rule out everything else [LW · GW] than the original comment did, because I think ruling out everything else is virtuous, and one of those moves that helps us track what's going on, and reduces the fog and confusion and rate of misunderstandings.

"Hey maybe take a minute to listen to people like me" is implicit in the decision to share one's experience.

I don't think that's true at all.  I think that there are several different implications compatible with the act of posting ortho's comment, and that "I'm suggesting that you weight my opinion more heavily based on me being right in this case" is only one such implication, and that it's valuable to be specific about what you're doing and why because other people don't actually just "get" it.  The illusion of transparency is a hell of a drug, and so is the typical mind fallacy.  Both when you're writing, and assume that people will just magically know what you're trying to accomplish, and when you're reading, and assume that everyone else's interpretation will be pretty close to your own.

Again, I am not claiming, and have not at any point claimed, that ortho's comment needed to head off that sort of misunderstanding at the pass.  But I think it's clearly better if it does so.

I don't think ortho would have shared zir experience if ze didn't think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me.

I actually included that sentence because I felt like ortho's original comment was intentionally combative (and a little bizarrely so), and that my rewrite had removed too much of its intentional heat to be a sufficiently accurate restatement.  So I think we're not in disagreement on that.

Replies from: supposedlyfun
comment by supposedlyfun · 2021-11-07T23:31:58.854Z · LW(p) · GW(p)

Understood: the comment-karma-disparity issue is, for you, a glaring example of a larger constellation. 

Also understood: you and I have different preferences for explicitly stating underlying claims. I don't think your position is unreasonable, just that it will lead to much longer comments, possibly at the cost of clarity and engagement.  Striking that balance is Hard.

I think we've drilled as far down as is productive on my concerns with the text of your post. I would like to see your follow-up post on the entire constellation, with the rigor customary here. You could definitely persuade me. I maybe was just not part of the target audience for your post.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T07:01:45.912Z · LW(p) · GW(p)

(Something genuinely amusing, given the context, about the above being at 3 points out of 2 votes after four hours, compared to its parent being at 30 points out of 7 votes after five.)

Replies from: MondSemmel
comment by MondSemmel · 2021-11-07T09:32:20.872Z · LW(p) · GW(p)

It's bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws

I have an alternative and almost orthogonal interpretation for why the karma scores are the way they are.

Both in your orthonormal-Matt example, and now in this meta-example, the shorter original comments require less context to understand and got more upvotes, while the long meandering detail-oriented high-context responses were hardly even read by anyone.

This makes perfect sense to me - there's a maximum comment length after which I get a strong urge to just ignore / skim a comment (which I initially did with your response here; and I never took the time to read Matt's comments, though I also didn't vote on orthonormal's comment one way or another, nor vote in the jessicata post much at all), and I would be astonished if that only happened to me.

Also think about how people see these comments in the first place. Probably a significant chunk comes from people browsing the comment feed on the LW front page, and it makes perfect sense to scroll past a long sub-sub-sub-comment that might not even be relevant, and that you can't understand without context, anyway.

So from my perspective, high-effort, high-context, lengthy sub-comments intrinsically incur a large attention / visibility (and therefore karma) penalty. Things like conciseness are also virtues, and if you don't consider that in your model of "good along three different axes, and bad along none as far as I can see", then that model is incomplete.

(Also consider things like: How much time do you think the average reader spends on LW; what would be a good amount of time, relative to their other options; would you prefer a culture where hundreds of people take the opportunity cost to read sub-sub-sub-comments over one where they don't; also people vary enormously in their reading speed; etc.)

Somewhat related: my post in this thread on some of the effects of the existing LW karma system [LW(p) · GW(p)]. If we grant the above, one remaining problem is that the original orthonormal comment was highly upvoted but looked worse over time:

What if a comment looks correct and receives lots of upvotes, but over time new info indicates that it's substantially incorrect? Past readers might no longer endorse their upvote, but you can't exactly ask them to rescind their upvotes, when they might have long since moved on from the discussion.

Replies from: GWS, MondSemmel
comment by Stephen Bennett (GWS) · 2021-11-08T01:35:23.493Z · LW(p) · GW(p)

First, some off-the-cuff impressions of matt's post (in the interest of data gathering):

In the initial thread I believe that I read the first paragraph of matt's comment, decided I would not get much out of it, and stopped reading without voting.

Upon revisiting the thread and reading matt's comment in full, I find it difficult to understand and do not believe I would be able to summarize or remember its main points now, about 15 minutes after the fact.


This seems somewhat interesting to test, so here is my summary from memory. After this I'll reread matt's post and compare what I thought it said upon first reading with what I think it says upon a second closer reading:

[person who met geoff] is making anecdotal claims about geoff's cult-leader-ish nature based on little data. People who have much more data are making contrary claims, so it is surprising that [person]'s post has so many upvotes. [commenter to person] is using deadpan in a particular way, which could mean multiple things depending on context but I lack that context. I believe that they are using it to communicate that geoff said so in a non-joking manner, but that is also hearsay.

Commentary before re-reading: I expect that I missed a lot, since it was a long post and it did not stick in my mind particularly well. I also remember a lot of hedging that confused me, and points that went into parentheticals within parentheticals. These parentheticals were long enough that I remember losing track of what point was being made. I also may have confabulated arguments in this thread about upvotes and some from matt's post.

I wanted to keep the summary "pure" in the sense that it is a genuine recollection without re-reading, but for clarity [person] is orthonormal and [commenter to person] is RyanCarey.


Second attempt at summarizing while flipping back and forth between editor and matt's comment:

RyanCarey is either mocking orthonormal or providing further weak evidence, but I don't know which.

One reading of orthonormal's comment is that he had a strong first impression, has been engaging in hostile gossip about Geoff, and has failed to update since in the presence of further evidence. Some people might have different readings. Orthonormal's post has lots of karma, they have 15k+ karma in general, and their post is of poor quality, therefore the karma system may be broken.

RyanCarey used deadpan in an unclear way, I believe the best reading of their comment is that Geoff made a joke about being a cult leader. Several other commenters and I, all of whom have much more contact with Geoff than orthonormal, do not think he is or wants to be a cult leader. It is out of character for Geoff to make a deadpan joke about wanting to be a cult leader and RyanCarey didn't give confidence in their recollection of their memory, therefore people should be unimpressed with the anecdote.

I am explicitly calling out orthonormal's comment as hostile gossip, which I will not back up here but will back up in a later post. You are welcome to downvote me because of this, but if you do it means that the discussion norms of LessWrong have corroded. Other reasons for downvotes might be appropriate, such as the length.

How about we ask Geoff? I hereby ask Geoff if he's a cult leader, or if he has any other comment.

I talked with Geoff recently, which some might see as evidence of a conspiracy.

Editing that summary to be much more concise:

Orthonormal has had little contact with Geoff, but has engaged in and continues to engage in hostile gossip. I and others with more substantive contact do not believe he is a cult leader. The people orthonormal has talked with, alluded to by the conversations that have cost orthonormal reputationally, have had much more contact with Geoff. Despite all of this, orthonormal refuses to believe that Geoff is not a cult leader. I believe we should base the likelihood of Geoff being a cult leader on the accounts of those who have had more contact with Geoff, or even on Geoff's own words.

I notice that as I am re-reading matt's post, I suspect that the potential reading of orthonormal's comment that he presents at the beginning (a reading that I find uncharitable) is in fact matt's own reading. But he doesn't actually say this outright. Instead he says "An available interpretation of orthonormal's comment is...". Indeed, I initially had an author's note in the summary that reflected the point that I was unsure if "an available interpretation" was matt's interpretation. It is only much later (inside a parenthetical) that he says "I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as “hostile gossip”..." to indicate that the uncharitable reading is in fact Matt's reading.

Matt's comment also included some comments that I read as sneering:

I wonder whether orthonormal has other evidence, or whether orthonormal will take this opportunity to reduce their confidence in their first impression… or whether orthonormal will continue to be spectacularly confident that they've been right all along.

I would have preferred his comment to start small with some questions about orthonormal's experience rather than immediately accuse them of hostile gossip. For instance, matt might have asked about the extent of orthonormal's contact with Geoff, how confident orthonormal is that Geoff is a cult leader, and whether orthonormal updated against Geoff being a cult leader in light of their friends believing Geoff wasn't a cult leader, etc. Instead, those questions are assumed to have answers that are unsupportive of orthonormal's original point (the answers assumed in matt's comment in order: very little contact, extremely confident, anti-updates in the direction of higher confidence). This seems like a central example of an uncharitable comment.

Overall I find matt's comment difficult to understand after multiple readings and uncharitable toward those he is conversing with, although I do value the data it adds to the conversation. I believe this lack of charity is part of why matt's comment has not done well in terms of karma. I still have not voted on matt's comment and do not believe I will. There are parts of it that are valuable, but it is uncharitable, and charity is a value I hold above most others. In cases like these, where parts of a comment are valuable and other parts are the sort of thing that I would rather see pruned from the gardens I spend my time in, I tend to withhold judgment.


How do my two summaries compare? I'm surprised by how close the first summary I gave was to the "much more concise" summary I gave later. I expected to have missed more, largely due to matt's comment's length. I also remember finding it distasteful, which I omitted from my summaries but likely stemmed from the lack of charity extended to orthonormal.

Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt's comment? How would they react to that much more concise comment, as compared to matt's comment?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T01:50:50.607Z · LW(p) · GW(p)

Strong upvote for doing this process/experiment; this is outstanding and I separately appreciate the effort required.

Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt's comment? How would they react to that much more concise comment, as compared to matt's comment?

I find your summary at least within-bounds, i.e. not fully ruled out [LW · GW] by the words on the page.  I obviously had a different impression, but I don't think that it's invalid to hold the interpretations and hypotheses that you do.

I particularly like and want to upvote the fact that you're being clear and explicit about them being your interpretations and hypotheses; this is another LW-ish norm that is half-reliable and I would like to see fully reliable.  Thanks for doing it.

comment by MondSemmel · 2021-11-07T11:51:43.071Z · LW(p) · GW(p)

To add one point:

When it comes to assessing whether a long comment or post is hard to read, quality and style of writing matter, too. SSC's Nonfiction Writing Advice endlessly hammers home the point of dividing text into ever smaller chunks, and e.g. here's [LW · GW] one very long post by Eliezer that elicited multiple comments of the form "this was too boring to finish" (e.g. this one [LW(p) · GW(p)]); some of those complaints were alleviated merely by adding chapter breaks.

And since LW makes it trivial to add headings even to comments (e.g. I used headings here [LW(p) · GW(p)]), I guess that's one more criterion for me to judge long comments by.

(One could even imagine the LW site nudging long comments towards including stuff like headings. One could imagine a good version of a prompt like this: "This comment / post is >3k chars but consists of only 3 paragraphs and uses no headings. Consider adding some level of hierarchy, e.g. via headings.")
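
A rough sketch of what such a nudge heuristic might look like (hypothetical only: the thresholds, the markdown-style "#" heading check, and the function name are invented for illustration, not an actual LessWrong feature):

```python
import re

def heading_nudge(text, min_chars=3000, max_paragraphs=3):
    # Suggest adding structure when a draft is long but has few paragraphs
    # and no headings; the thresholds are illustrative placeholders.
    paragraphs = [p for p in re.split(r"\n\s*\n", text) if p.strip()]
    has_headings = any(line.lstrip().startswith("#") for line in text.splitlines())
    if len(text) >= min_chars and len(paragraphs) <= max_paragraphs and not has_headings:
        return (f"This draft is {len(text)} characters but has only {len(paragraphs)} "
                f"paragraph(s) and no headings. Consider adding some hierarchy, "
                f"e.g. via headings.")
    return None  # no nudge needed

print(heading_nudge("short draft"))   # None
print(heading_nudge("word " * 1000))  # triggers the nudge: long, one paragraph, no headings
```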

comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-06T22:25:18.046Z · LW(p) · GW(p)

Yes.  I'm trying to remind people why they should care.

You're fighting fire with fire. It's hard for me to imagine a single standard that would permit this post as acceptably LessWrongian and also deem the posts you linked to as unacceptable.

Here's an outline of the tactic that I see as common to both.

  1. You have a goal X.
  2. To achieve X, you need to coordinate people to do Y.
  3. The easiest way to coordinate people to do Y is to use exhortatory rhetoric and pull social strings, while complaining when your opponent does the same thing.
  4. You can justify (3) by appealing to a combination of the importance of X and of your lack of energy or desire not to be perfectionistic, while insisting that your opponents rise to a higher standard, and denying that you're doing any of this - or introspecting for a while and then shrugging and doing it anyway.
  5. If you can convince others to agree with you on the overriding importance of X (using rhetoric and social strings), then suddenly the possibly offensive moral odor associated with the tactic disappears. After all, everybody (who counts) agrees with you, and it's not manipulative to just say what everybody (who counts) was thinking anyway, right?

"Trying to remind people why they should care" is an example of step (3).

This isn't straightforwardly wrong. It's just a way to coordinate people, one with certain advantages and disadvantages relative to other coordination mechanisms, and one that is especially tractable for certain goals in certain contexts.

In this case, it seems like one of your goals is to effect a site culture in which this tactic self-destructs. The site's culture is just so stinkin' rational that step (3) gets nipped in the bud, every time.

This is the tension I feel in reading your post. On the one hand, I recognize that it's allowing itself an exception to the ban it advocates on this 5-step tactic in the service of expunging the 5-step tactic from LessWrong. On the other hand, it's not clear to me whether, if I agreed with you, I would criticize this post, or join forces with it.

A successful characterization of a problem generally suggests a solution. My confusion about the correct response to your characterization therefore leads me to fear your characterization is incorrect. Let me offer an alternative characterization.

Perhaps we are dealing with a problem of market size.

In a very small market, there is little ability to specialize. Poverty is therefore rampant. Everybody has to focus on providing themselves with the basics, and has to do most things themselves. Trade is also rare because the economy lacks the infrastructure to facilitate trades. So nobody has much of anything, and it's very hard to invest.

What if we think about a movement and online community like this as a market? In a nice big rationality market, we'd have plenty of attention to allocate to all the many things that need doing. We'd have proofreaders galore, and lots of post writers. There'd be lots of money sloshing around for bounties on posts, and plenty of people thinking about how to get this just right. There'd be plenty of comments, critical, supportive, creative, and extensive. Comments would be such an important feature of the discourse surrounding a post that there'd be heavy demand for improved commenting infrastructure, for moderation and value-extraction from the comments. There'd be all kinds of curation going on, and ways to allocate rewards and support the development of writers on the website.

In theory, all we'd need to generate a thriving rationality market like this is plenty of time, and a genuine (though not necessarily exclusive) demand for rationality. It would self-organize pretty naturally through some combination of barter, social exchange, and literal cash payments for various research, writing, editing, teaching, and moderation services.

The problem is the slow pace at which this is emerging on its own, and the threat of starvation in the meantime. Let's even get a little bit ecological. A small LW will go through random fluctuations in activity and participation. If it gets too small, it could easily dip into an irrecoverable lack of participation. And the smaller the site is, the harder it will be to attain the market size necessary to permit specialization, since any participant will have to do most everything for themselves.

Under this frame, then, your post is advocating for some things that seem useful and some that seem harmful. You give lots of ideas for jobs that seem helpful (in some form) in a LW economy big enough to support such specialized labor.

On the other hand, you advocate an increase in regulation, which will come with an inevitable shrinking of the population. I fear that this will have the opposite of the effect you intend. Rather than making the site hospitable for a resurgence of "true rationalists," you will create the conditions for starvation by reducing our already-small market still further. Even the truest of rationalists will have a hard time taking care of their rationality requirements when the population of the website has shrunk to that extent.

Posts just won't get written. Comments won't be posted. People won't take risks. People won't improve. They'll find themselves frustrated by nitpicks, and stop participating. A handful of people will remain for a while, glorying in the victory of their purge, and then they'll quit too after a few months or a few years once that gets boring.

I advocate instead that you trust that everybody on this website is an imperfect rationalist with a genuine preference for this elusive thing called "rationality." Allow a thousand seeds to be planted. Some will bloom. Gradually, the rationalist economy will grow, and you'll see the results you desire without needing much in the way of governance or intervention. And when we have need of governance, we'll be able to support it better.

It's always hard, I think, for activists to accept that the people and goals they care about can and will largely take care of themselves without the activist's help.

Replies from: Duncan_Sabien, Yoav Ravid, Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T23:54:02.436Z · LW(p) · GW(p)

All right, a more detailed response.

You're fighting fire with fire.

I am not fighting fire with fire.  I request that you explicitly retract the assertion, given that it is both a) objectively false, and b) part of a class of utterances that are in general false far more often than they are true, and which tend to make it harder to think and see clearly in exactly the way I'm gesturing at with the OP.

Some statements that would not have been false:

"This seems to me like it's basically fighting fire with fire."

"I believe that, in practice, this ends up being fighting fire with fire."

"I'm having a hard time summing this up as anything other than 'fighting fire with fire.'"

...and I reiterate that those subtle differences make a substantial difference in people's general ability to do the collaborative truth-seeking thing, and are in many ways precisely what I'm arguing for above.

I clearly outline what I am identifying as "fire" in the above post.  I have one list which is things brains do wrong, and another list which lays out some "don'ts" that roughly correspond to those problems.

I am violating none of those don'ts, and, in my post, exhibiting none of those wrongbrains.  I in fact worked quite hard to make sure that the wrongbrains did not creep in, and abandoned a draft that was three-quarters complete because it was based on one.

In many ways, the above essay is an explicit appeal that people not fight fire with fire.  It identifies places where people abandon their principles in pursuit of some goal or other, and says "please don't, even if this leads to local victory."

You're fighting fire with fire. It's hard for me to imagine a single standard that would permit this post as acceptably LessWrongian and also deem the posts you linked to as unacceptable.

It's the one that I laid out in my post.  If you find it confusing, you can ask a clarifying question.  If one of the examples seems wrong or backwards, you can challenge it.  I appreciate the fact that you hedged your statement by saying that you have a hard time imagining, which is better than in the previous sentence, where you simply declared that I was doing a thing (which I wasn't), rather than saying that it seemed to you like X or felt like X or you thought it was X for Y and Z reasons.

The standard is: don't violate the straightforward list of rationality 101 principles and practices that we have a giant canon of knowledge and agreement upon.  There's a separate substandard that goes something like "don't use dark-artsy persuasion; don't yank people around by their emotions in ways they can't see and interact with; don't deceive them by saying technically true things which you know will result in a false interpretation, etc."

I'm adhering to that standard, above.

There's fallacy-of-the-grey in your rounding-off of "here's a post where the author acknowledged in their end notes that they weren't quite up to the standard they are advocating" into "you're fighting fire with fire."  There's also fallacy-of-the-grey in pretending that there's only one kind of "fire."

I strongly claim that I am, in general, not frequently in violation of any of the principles that I have explicitly endorsed, and that if it seems I'm holding others to a higher standard than I'm holding myself, it's likely that the standard I'm holding has been misunderstood. I also believe that people who are trying to catch me when I'm actually failing to live up to that standard are on my side and doing me a favor, and though I'm not perfect and sometimes it takes me a second to get past the flinch and access the gratitude, I think I'm credible about acting in accordance with that overall.

  1. You have a goal X.
  2. To achieve X, you need to coordinate people to do Y.
  3. The easiest way to coordinate people to do Y is to use exhortatory rhetoric and pull social strings, while complaining when your opponent does the same thing.
  4. You can justify (3) by appealing to a combination of the importance of X and of your lack of energy or desire not to be perfectionistic, while insisting that your opponents rise to a higher standard, and denying that you're doing any of this - or introspecting for a while and then shrugging and doing it anyway.
  5. If you can convince others to agree with you on the overriding importance of X (using rhetoric and social strings), then suddenly the possibly offensive moral odor associated with the tactic disappears. After all, everybody (who counts) agrees with you, and it's not manipulative to just say what everybody (who counts) was thinking anyway, right?

I did not "use exhortatory rhetoric and pull social strings."  I should walk back my mild "yeah fair" in response to the earlier comment, since you're taking it and adversarially running with it.

If you read the OP and do not choose to let your brain project all over it, what you see is, straightforwardly, a mass of claims about how I feel, how I think, what I believe, and what I think should be the case.

I explicitly underscore that I think little details matter and that second-to-second stuff counts.  So if you're going to dismiss all of the "I" statements as mere window dressing or something (I'm not sure that's what you're doing, but something like that seems necessary in order to pretend they weren't omnipresent in what I wrote), you need to do so explicitly.  You need to argue for their not-mattering; you can't just jump straight to ignoring them and pretending that I was propagandizing.

I also did not complain about other people using exhortatory rhetoric and pulling social strings.  That's a strawman of my point.  I complained about people a) letting their standards on what's sufficiently justified to say slip, when it was convenient, and b) en-masse upvoting and otherwise tolerating other people doing so.

I gave specifics; I gave a model.  Where that model wasn't clear, I offered to go in-depth on more examples (an offer that I haven't yet seen anyone take me up on, though I'm postponing looking at some other comments while I reply to this one).

I thoroughly and categorically reject (3) as being anywhere near a summary of what I'm doing above, and (4) is ... well, I would say "you're being an uncharitable asshole, here," except that what's actually true and defensible and prosocial is to note that I am having a strongly negative emotional reaction to it, and to separately note that you're not passing my ITT, you're impugning my motives, and in general you're hand-waving away the part where you would need actual reasons for the attempt to delegitimize and undermine both me and my points.

In this case, it seems like one of your goals is to effect a site culture in which this tactic self-destructs. The site's culture is just so stinkin' rational that step (3) gets nipped in the bud, every time.

I recognize that it's allowing itself an exception to the ban it advocates on this 5-step tactic in the service of expunging the 5-step tactic from LessWrong.

No.  You've failed to pass my ITT, you've failed to understand my point, and as you drift further and further from what I was actually trying to say, it gets harder and harder to address it line-by-line because I keep being unable to bring things back around.

I'm not trying to cause appeals-to-emotion to disappear.  I'm not trying to cause strong feelings oriented on one's values to be outlawed.  I'm trying to cause people to run checks, and to not sacrifice their long-term goals for the sake of short-term point-scoring.

I definitely believe that this post, as written, would survive and belong on the better version of LessWrong I'm envisioning (setting aside the fact that it wouldn't be necessary there).  I'm not trying to effect a site culture where the tactic of the OP self-destructs, and I'm not sure where that belief came from. I just believe that, in the steel LW, this post would qualify as mediocre, instead of decent.

The place where I'm most able to engage with you is:

On the other hand, you advocate an increase in regulation, which will come with an inevitable shrinking of the population. I fear that this will have the opposite of the effect you intend. Rather than making the site hospitable for a resurgence of "true rationalists," you will create the conditions for starvation by reducing our already-small market still further. Even the truest of rationalists will have a hard time taking care of their rationality requirements when the population of the website has shrunk to that extent.

Posts just won't get written. Comments won't be posted. People won't take risks. People won't improve. They'll find themselves frustrated by nitpicks, and stop participating. A handful of people will remain for a while, glorying in the victory of their purge, and then they'll quit too after a few months or a few years once that gets boring.

Here, you assert some things that are, in fact, only hypotheses.  They're certainly valid hypotheses, to be clear.  But it seems to me that you're trying to shift the conversation onto the level of competing stories, as if what's true is either "Duncan's optimistic frame, in which the bad people leave and the good people stay" or "the pessimistic frame, in which the optimistic frame is naive and the site just dies."

This is an antisocial move, on my post where I'm specifically trying to get people to stop pulling this kind of crap.

Raise your hypothesis.  Argue that it's another possible outcome.  Propose tests or lines of reasoning that help us to start figuring out which model is a better match for the territory, and what each is made of, and how we might synthesize them.

I wrote several hundred words on a model of evaporative cooling, and how it drives social change.  Your response boils down to "no u."  It's full of bald assertions.  It's lacking in epistemic humility.  It's exhausting in all the ways that you seem to be referring to when you point at "frustrated by nitpicks, and stop participating."  The only reason I engaged with it to this degree is that it's an excellent example of the problem.

Replies from: dxu
comment by dxu · 2021-11-07T00:15:16.850Z · LW(p) · GW(p)

I would like to register that I think this is an excellent comment, one that in fact caused me to downvote the grandparent where I would otherwise have stayed neutral or upvoted. (This is not the sort of observation I would ordinarily feel the need to point out, but in this case it seemed rather appropriate to do so, given the context.)

Replies from: lionhearted
comment by lionhearted (Sebastian Marshall) (lionhearted) · 2021-11-09T12:34:30.186Z · LW(p) · GW(p)

Huh. Interesting.

I had literally the exact same experience before I read your comment, dxu.

I imagine it's likely that Duncan could sort of burn out on being able to do this [1] since it's pretty thankless, difficult cognitive work. [2]

But it's really insightful to watch. I do think he could potentially tune up [3] the diplomatic savvy a bit [4], since while his arguments are quite sound [5], I think he probably sometimes makes people feel a little bit stupid via his tone. [6]

Nevertheless, it's really fascinating to read and observe. I feel vaguely like I'm getting smarter.

###

Rigor for the hell of it [7]:

[1] Hedged hypothesis.

[2] Two-premise assertion with a slightly subjective basis, but I think a true one.

[3] Elaborated on a slightly different but related point in my comment to him below, with an example.

[4] Vague, but I think acceptably so. To elaborate, I mean making one's ideas palatable to the person one is disagreeing with, even while disagreeing. Note: I'm aware this doesn't acknowledge the cost of doing so and of running that filter. Note also: I think, with skill and practice, this can be done without sacrificing the content of the message. It is almost always more time-consuming, though, in my experience.

[5] There are some subjective judgments and utility-function stuff going on (subjective by nature), but his core factual arguments, premises, and analyses basically all look correct to me.

[6] Hedged hypothesis. Note: doesn't make a judgment either way as to whether it's worth it or not. 

[7] Added after writing to double-check I'm playing by the rules and clear up ambiguity. "For the hell of it" is just random stylishness and can be safely mentally deleted. 

(Or perhaps, if I introspect closely, a way to not be committed to this level of rigor all the time. As stated below though, minor stylistic details aside, I'm always grateful whenever a member of a community attempts to encourage raising and preserving high standards.)

comment by Yoav Ravid · 2021-11-07T05:06:47.265Z · LW(p) · GW(p)

Upvoted for the market analogy.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T07:36:29.438Z · LW(p) · GW(p)

(Thanks for being specific; this is a micro-norm I want to applaud.)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T23:29:44.394Z · LW(p) · GW(p)

You're fighting fire with fire.

Nope.  False, and furthermore Kafkaesque; there is no defensible reading of either the post or my subsequent commentary that justifies this line, and the fact that it stands alone up front, framing the rest of what you have to say, is extremely bad and a straightforward example of the problem.

It is a nuance-destroying move, a rounding-off move, a making-it-harder-for-people-to-see-and-think-clearly move, an implanting-falsehoods move.  Strong downvote as I compose a response to the rest.

Replies from: habryka4, AllAmericanBreakfast
comment by habryka (habryka4) · 2021-11-08T23:00:18.223Z · LW(p) · GW(p)

Given that there is lots of "let's comment on what things about a comment are good and which things are bad" going on in this thread, I will make more explicit a thing that I would have usually left implicit: 

My current sense is that writing this comment was maybe better than writing no comment, given the dynamics of the situation, but I think the outcome would have been better if you had waited and only written your longer comment. This comment felt like it kicked up the heat a bunch, and while I think that was better than leaving things without a response, my sense is the discussion overall would have gone better with just the longer comment.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T23:01:46.828Z · LW(p) · GW(p)

In response to this, I'll bow out (from this subthread) for a minimum period of 3 days.  (This is in accordance with a generally wise policy I'm trying to adopt.)

EDIT: I thought Oli was responding to a different thing (I replied to this from the sidebar).  I was already planning not to add anything substantive here for a few days.  I do note, though, that even if two people both unproductively turn up the heat, one after the other, in my culture it still makes a difference which one broke peace first.

Replies from: lionhearted
comment by lionhearted (Sebastian Marshall) (lionhearted) · 2021-11-09T12:45:56.932Z · LW(p) · GW(p)

broke peace first.

Have you read "Metaphors We Live By" by Lakoff?

The first 20 pages or so are almost a must-read in my opinion.

Highly recommended, for you in particular.

A Google search with filetype:pdf will find you a copy. You can skim it fast (no need to close-read it) and you'll get the gems.

Edit for exhortation: I think you'll get a whole lot out of it such that I'd stake some "Sebastian has good judgment" points on it that you can subtract from my good judgment rep if I'm wrong. Seriously please check it out. It's fast and worth it.

comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-06T23:53:47.860Z · LW(p) · GW(p)

This response I would characterize as steps (3) and (4) of the 5-step tactic I described. You are using more fiery rhetoric ("Kafkaesque," "extremely bad," "implanting falsehoods"), while denying that this is what you are doing.

I am not going to up-vote or down-vote you. I will read and consider your next response here, but only that response, and only once. I will read no other comments on this post, and will not re-read the post itself unless it becomes necessary.

I infer from your response that from your perspective, my comment here, and me by extension, are in the bin of content and participants you'd like to see less or none of on this website. I want to assure you that your response here in no way will affect my participation on the rest of this website.

Your strategy of concentration of force only works if other people are impacted by that force. As far as your critical comment here, as the Black Knight said, I've known worse.

If you should continue this project and attack me outside of this post, I am precommitting now to simply ignoring you, while also not engaging in any sort of comment or attack on your character to others. I will evaluate your non-activist posts the same way I evaluate anything else on this website. So just be aware that from now on, any comment of yours that strikes me as having a tone similar to this one of yours will meet with stony silence from me. I will take steps to mitigate any effect it might have on my participation via its emotional effect. Once I notice that it has a similar rhetorical character, I will stop reading it. I am specifically neutralizing the effect of this particular activist campaign of yours on my thoughts and behavior.

Replies from: Beckeck
comment by Beckeck · 2021-11-07T00:10:04.916Z · LW(p) · GW(p)

Jumping in here in what I hope is a prosocial way. I assert as a hypothesis that the two of you currently disagree about what level of meta the conversation is (or should be) at, that each feels the other has an obligation to meet them at their level, and that this has turned up the heat a lot.

Maybe there is a more oblique angle than this currently heated one?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T00:23:23.856Z · LW(p) · GW(p)

It's prosocial.  For starters, AllAmericanBreakfast's "let's not engage," though itself stated in a kind of hot way, is good advice for me, too.  I'm going to step aside from this thread for at least three days, and if there's something good to come back to, I will try to do so.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T19:54:57.918Z · LW(p) · GW(p)

BTW this inspired an edit in the "two kinds of persons" spot specifically, and I think the essay is much stronger for it, and I strongly appreciate you for highlighting your confusion there.

EDIT: and also in the author's note at the bottom.

comment by Vaniver · 2021-11-07T06:39:32.651Z · LW(p) · GW(p)

I'll have more to say on this in the future, but for now I just want to ramble about something.

I've been reading through some of the early General Semantics works. Partially to see if there are any gaps in my understanding they can fill, partially as a historical curiosity (how much of rationality did they have figured out, and what could they do with it?), partially because it might be good fodder for posts on LW (write a thousand posts to Rome [LW · GW]).

And somehow a thing that keeps coming into my mind while reading them is the pre-rigorous/rigorous/post-rigorous split Terence Tao talks about, where mathematicians start off just doing calculation and not really understanding proofs, and then they understand proofs through careful diligence, and then they intuitively understand proofs and discard many of the formalisms in their actions and speech.

Like, the early General Semantics writers pay careful attention to many things that I feel like I have intuitively incorporated; they're trying to be rigorous about scientific thinking (in the sense that they mean) in a way that I think I can be something closer to post-rigorous. Rather than this just being "I'm sloppier than they were", I think I see at least one place where they're tripping up (tho maybe they're just trying to bridge effectively to their audience?), of which the first that comes to mind is when an author, in a discussion of the faults of binarism, makes their case using a surprisingly binarist approach (instead of the more scientific quantitative language).

And so when I see something that seems to say "good math is all about correct deductions", there's a part of me that says "well, but... that's not actually where good math comes from, if you ask the good mathematicians." There's a disagreement going on at the moment between Zvi and Elizabeth [LW(p) · GW(p)] about what inferences to draw from the limited data we have about the container stacking story. It's easy for me to tell a story about how Zvi is confused and biased and following the wrong policies, and it's easy for me to tell a story about how Elizabeth is confused and biased and following the wrong policies. But, for me at least, doing either of those things would be about trying to Win instead of trying to Understand.

And, like, I don't know; the reason this is a ramble instead of a clear point is because I think I'm saying "don't bother the professors who talk in intuitions" to someone who is saying "we really need to be telling undergraduates when they make math errors", and yet somehow I'm seeing in this a vision that's something more like "people become rational by carefully avoiding errors" instead of something that's more like "people become rational by trying to cleave as closely as possible to the Way".

You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory.” But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.

Replies from: Duncan_Sabien, Benito, SaidAchmiz
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T06:56:25.378Z · LW(p) · GW(p)

For me, what this resonates most clearly with is the interaction I just had with Ben Pace.

Ben was like "X"

And I was like "mostly yes to X, but also no to the implicature that I think surrounds X which is pretty bad."

And Ben was like "oh, definitely not that!  Heck no!  Thanks for pointing it out, but no, and also I think I'll change nothing in response to finding out that many people might draw that from X!"

And my response was basically "yeah, I don't think that Ben Pace on the ideal LessWrong should do anything different."

Because the thing Ben said was fine, and the implicature is easy to discard/get past.  Like "almost certainly Ben Pace isn't trying to imply [that crap]; I don't really need to feel defensive about it; I can just offhandedly say 'by the way, not [implication],' and it's fine for Ben to not have ruled that out, just like it's fine for Ben to not also actively rule out 'by the way, don't murder people' in every comment."

But that's because Ben and I have a high-trust, high-bandwidth thing going.

The more that LW as a whole is clean-and-trustworthy in the way that the Ben-Duncan line segment is clean-and-trustworthy, the less that those implications are flying around all over the place and actually real in the sense that they're being read into things by large numbers of people who then vote and comment as if they were clearly just intended.

I had a similar interaction with Romeo, about the comment I highlighted in the essay; his response was basically "I knew Aella and I were unlikely to fall into that attractor, and didn't even pause to consider the heathen masses."

(Nowhere near an exact quote; don't blame Romeo.)

And Ben was also advocating not even pausing to consider the heathen masses.

And maybe I should update in that direction, and just ignore a constant background shrieking.

But I feel like "ignoring the heathen masses" has gotten me in trouble before on LessWrong specifically, which is why I'm hesitant to just pretend they don't exist.

EDIT: also last time you made a comment on one of my posts and I answered back, I never heard back from you and I was a little Sad so could you at least leave me a "seen" or something

Replies from: Vaniver, Vaniver
comment by Vaniver · 2021-11-07T07:07:41.523Z · LW(p) · GW(p)

And maybe I should update in that direction, and just ignore a constant background shrieking.

I'm not sure about this; there is some value in teaching undergrads rigor, and you seem more motivated to do so than I am. And, like, I did like Logan's comment about rumor, and I think more people observing things like that sooner is better. I think my main hope with the grandparent was to check whether you're thinking the rigor is the moon or the finger, or something.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T07:12:26.724Z · LW(p) · GW(p)

My views here aren't fully clarified, but I'm more saying "the pendulum needs to swing this way for LessWrong to be good" than saying "LessWrong being good is the pendulum being all the way over there."

Or, to the extent that I understood you and am accurately representing Ben Pace, I agree with you both.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T07:34:04.746Z · LW(p) · GW(p)

Some strings of wisdom that seem related:

"You have to know the rules before you can break them."

There has to be some sense that you're riffing deliberately and not just wrong about the defaults.

The ability to depart from The Standard Forms is dependent on both the level of trust and the number of bystanders who will get the wrong idea (see my and Critch's [LW · GW] related posts, or my essay on the social motte-and-bailey).

"Level three players can't distinguish level two players from level four players."

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-08T03:30:52.975Z · LW(p) · GW(p)

This suggests to me a different idea on how to improve LessWrong: make an automated "basics of disagreement" test. This involves recognizing a couple of basic concepts like cruxes and common knowledge, and looking at some comment threads and correctly diagnosing "what's going on" in them (e.g. where the participants are talking past each other), and noticing a bunch of useful ways to intervene.

Then if you pass, your username on comments gets a little badge next to it, and your strong vote strength gets moved up to +4 (if you're not already there).

The idea is to make it clearer who is breaking the rules that they know, versus who is breaking the rules that they don't know.
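
(A minimal sketch of the gating mechanics, purely as illustration; the field names, the badge marker, and the pass threshold are invented here, not an actual LessWrong feature:)

```python
from dataclasses import dataclass

# Illustrative sketch only: passing a hypothetical "basics of disagreement"
# test grants a badge and bumps strong-vote strength up to at least +4.

@dataclass
class User:
    name: str
    strong_vote_strength: int = 2
    has_disagreement_badge: bool = False

def record_test_result(user: User, score: float, pass_threshold: float = 0.8) -> None:
    """If the user passes the test, grant the badge and raise strong-vote strength."""
    if score >= pass_threshold:
        user.has_disagreement_badge = True
        user.strong_vote_strength = max(user.strong_vote_strength, 4)

def comment_byline(user: User) -> str:
    """Render the username with a little badge marker next to it."""
    return user.name + (" [✓]" if user.has_disagreement_badge else "")

# Example: a user who passes ends up with the badge and a +4 strong vote.
alice = User("alice")
record_test_result(alice, score=0.9)
assert comment_byline(alice) == "alice [✓]" and alice.strong_vote_strength == 4
```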

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T04:33:31.838Z · LW(p) · GW(p)

Interestingly, my next planned essay is an exploration of a single basic of disagreement.

comment by Vaniver · 2021-11-07T07:02:08.193Z · LW(p) · GW(p)

EDIT: also last time you made a comment on one of my posts and I answered back, I never heard back from you and I was a little Sad so could you at least leave me a "seen" or something

Seen. Also, which are you thinking of? I might have had nothing to say, or I might have just been busy when I saw the response and wasn't tracking that I should respond to it.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T07:07:19.010Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/57sq9qA3wurjres4K/ruling-out-everything-else?commentId=fdapBc7dnKEhJ4no4

comment by Ben Pace (Benito) · 2021-11-07T07:04:16.772Z · LW(p) · GW(p)

This comment is surprising to me in how important I think this point is.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T07:06:03.017Z · LW(p) · GW(p)

Not surprising to me given my recent interactions with you and Romeo, but I agree it's quite important and I wouldn't mind a world where it became the main frontier of this discussion.

comment by Said Achmiz (SaidAchmiz) · 2021-11-07T17:16:05.265Z · LW(p) · GW(p)

And somehow a thing that keeps coming into my mind while reading them is the pre-rigorous/rigorous/post-rigorous split Terence Tao talks about, where mathematicians start off just doing calculation and not really understanding proofs, and then they understand proofs through careful diligence, and then they intuitively understand proofs and discard many of the formalisms in their actions and speech.

This is an instance of three levels of mastery.

comment by JenniferRM · 2021-11-08T09:05:58.214Z · LW(p) · GW(p)

This post makes me kind of uncomfortable and I feel like the locus is in... bad boundaries maybe? Maybe an orientation towards conflict, essentializing, and incentive design?

Here's an example where it jumped out at me:

Another metaphor is that of a garden.

You know what makes a garden?

Weeding.

Gardens aren't just about the thriving of the desired plants.  They're also about the non-thriving of the non-desired plants.

And weeding is hard work, and it's boring, and it's tedious, and it's unsexy.

Here's another:

But gardens aren't just about the thriving of the desired plants.  They're also about the non-thriving of the non-desired plants.

There's a difference between "there are many black ravens" and "we've successfully built an environment with no white ravens."  There's a difference between "this place substantially rewards black ravens" and "this place does not reward white ravens; it imposes costs upon them."

Like... this is literally black and white thinking? 

And why would a good and sane person ever want to impose costs on third parties ever except like in... revenge because we live in an anarchic horror world, or (better) as punishment after a wise and just proceeding where rehabilitation would probably fail but deterrence might work? 

And what the fuck with "weeds" and "weeding" where the bad species is locally genocided? 

Just because a plant is "non-desired" doesn't actually mean you need to make it not thrive. It might be mostly harmless. It might be non-obviously commensal. Maybe your initial desires are improper? Have some humility.

And like in real life agriculture the removal of weeds is often counter-productive. Weeding is the job you give the kids so that they can feel like they are contributing and learn to recognize plants. The real goal is to maximize yield while minimizing costs without causing too much damage to the soil (or preferably while building the soil up even better for next year), and the important parts are planting the right seeds at the right time in the right place, and making sure that adequate quantities of cheap nutrients and water are bio-available to your actual crop.

Just because voting is wrong, here and there...  like... so what? Some of my best comments have gotten negative votes and some of the ones I'm most ashamed of go to the top. This means that the voters are sometimes dumb. That's OK. That's life. Maybe educate them? Here are some heuristics I follow:

Scroll to the bottom of the comments first to find the N comments that have 1 point, read them all, then upvote the better N/2. Then look at the M with 2 votes and upvote the better M/2. And so on. Anything lower but more useful to have read first should have relatively higher karma.

If something has a good link, that's better than no links. (Click through and verify, however.) If the link sucks, it is worse than no links.

Don't upvote things that are already on top unless there are other reasons to do so. If something really clever is written later on, that extra karma will make it harder for later voters to push the new comment up to where it properly belongs.

At least don't read the first comment then mash the upvote when it ends on an applause light. Check the next one below that and so on, and think about stuff.

If a comment is addressed to someone and they respond, even shitty responses often deserve an upvote, because that person's response should usually come instantly after, unless someone else has a short, sweet, and much better comment that works as a good prologue. Linear sequences of discussion by two good-faith communicators are wonderful to read, and anyone horning in on the conversation should probably come later.

If a sequence of discussion leads to a super amazing insight 3 or 4 ply in... perhaps someone actually changed their mind... reward all the comments that led to that (similarly to how direct two-person back and forth is interesting). There's an analogy here to backpropagation.

In an easy to read debate, if one person is making better points, you're allowed to vote them higher than their interlocutor to show who is "winning" but you should feel guilty about this, because it is dirty dirty fun <3

The more comments a comment accumulates, the lower the net utility of the entire collection. Writing things that stand alone is good. Writing things that attract quibbles is less good. The debates should be a ways down the page.

And so on. This is all common sense, right? <3
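
(If it helps to see the intent of the first heuristic concretely, here's a rough sketch; the "quality" number just stands in for your own judgment and isn't a real field anywhere:)

```python
from collections import defaultdict

def tiered_upvotes(comments):
    """Rough sketch of the first heuristic: within each karma tier, upvote the
    better half, working upward from the lowest-scored comments.

    comments: list of dicts with "id", "karma", and a subjective "quality"
    score supplied by your own judgment."""
    tiers = defaultdict(list)
    for c in comments:
        tiers[c["karma"]].append(c)

    to_upvote = []
    for karma in sorted(tiers):  # start at the bottom of the page
        ranked = sorted(tiers[karma], key=lambda c: c["quality"], reverse=True)
        to_upvote.extend(c["id"] for c in ranked[: len(ranked) // 2])  # better half
    return to_upvote

# Example: among four 1-point comments, the best two get the upvotes.
example = [{"id": i, "karma": 1, "quality": q} for i, q in enumerate([3, 9, 5, 7])]
assert tiered_upvotes(example) == [1, 3]
```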

(It isn't common sense.) You can just look [LW · GW] and see that people don't understand this stuff, but that's why it is good to spell out, I think? Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don't make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.

We don't need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn't too much work to do so. The rest might be mulch, but mulch is good too <3

Replies from: Duncan_Sabien, Dweomite, Dweomite, Slider, Ruby
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T10:10:49.609Z · LW(p) · GW(p)

Like... this is literally black and white thinking?

Yes, because there is in fact a difference between "stuff that promotes individuals' and groups' ability to see and think clearly" and stuff that does not, and while we have nowhere near a solid grasp on it, we do know some of it really well at this point.  There are some things that just do not belong in a subculture that's trying to figure out what's true.

Some things are, in fact, better than others, especially when you have a goal/value set/utility function. 

And why would a good and sane person ever want to impose costs on third parties ever except like in... revenge because we live in an anarchic horror world, or (better) as punishment after a wise and just proceeding where rehabilitation would probably fail but deterrence might work?

Note the equivocation here between "I can't think of a reason why someone would want to impose costs on third parties except X" and "therefore there probably aren't non-X reasons."  This is an example of the thing; it makes it harder to see the answer to the question "why?"  Harder than it has to be, by making the "why" seem rhetorical, and as-if-there-couldn't-be-a-good-answer.

I'd like to impose some costs on statements like the above, because they tend to confuse things.

Replies from: JenniferRM
comment by JenniferRM · 2021-11-08T19:00:15.257Z · LW(p) · GW(p)

On epistemic grounds: The thing you should be objecting to, in my mind, is not the part where I said that "because I can't think of a reason for X, that implies that there might not be a reason for X". 

(This isn't great reasoning, but it is the start of something coherent. (Also, it is an invitation to defend X coherently and directly. (A way you could have engaged is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just... like... living and letting live, and trying to learn from things you initially can't appreciate?)))

On human decency and normative grounds: The thing you should be objecting to is that I directly implied that you personally might not be "sane and good" because your advice seemed to be violating ideas about conflict and economics that seem normative to me.

This accusation could also have an epistemic component (which would be an ad hominem) if I were saying "you are saying X and are not sane and good and therefore not-X".  But I'm not saying this.

I'm saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented... probably... but not certainly. 

This is another instance of the whole "weed/conflict/fighting" frame to me, and my claim is that the whole frame is broken for any kind of communal/cooperative truth-seeking enterprise:

There are some things that just do not belong in a subculture that's trying to figure out what's true.

...and I'd like to know what those are, how they can be detected in people or conversations or whatever??

If you think I'm irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like. I'm sure I have flaws, I'm just not sure which of my many flaws you think is a problem here. Perhaps you could explain "epistemic hygiene" to me in mechanistic detail, and show how I'm messing it up?

But, there is a difference between being irrational and being impolite.

If you think I'm being impolite to you personally, feel free to say how and why (with nuance, etc) and demand an apology. I would probably offer one. I try to mostly make peace, because I believe conflict and "intent to harm" is very very costly.

However, I "poked you" someone on purpose, because you strongly seem to me to be advocating a general strategy of "all of us being pokey at each other in general for <points at moon> reasons that might be summarized as a natural and normal failure to live up to potentially pragmatically impossible ideals".

You're sad about the world. I'm sad about it too. I think a major cause is too much poking. You're saying the cause is too little poking. So I poked you. Now what?

If we really need to start banning the weeds, for sure and for true... because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector... then I might propose that you be banned?

And obviously this is inimical to your selfish interests. Obviously you would argue against it for this reason if you shared the core frame of "people can't grow, errors are defection, ban the defectors" because you would also think that you can't grow, and I can't grow, and if we're calling for each other's banning based on "essentializing pro-conflict social logic" because we both think the other is a "weed"... well... I guess it's a fight then?

But I don't think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.

Debate is fun for kids. When I taught a debate team, I tried to make sure it stayed fun, and we won a lot, and years later I heard how the private prep schools tried to share research against us, with all this grinding and library time. (I think maybe they didn't realize that the important part is just a good skeleton of "what an actual good argument looks like" and hitting people at the center of their argument based on prima facie logical/policy problems.) People can be good sports about disagreements and it helps with educational processes, but it is important to tolerate missteps and focus on incremental improvement in an environment of quick clear feedback <3

The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.

Proposing to pro-actively harm people for pre-systematic or post-systematic reasons is bad because unsystematic negative incentive systems don't scale. "I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it" is a bad plan for making the world good. That's a formula for the social equivalent of an autoimmune disorder :-(

The specific problem: what's the inter-rater reliability like for "decisions to weed"? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people... it's a recipe for disaster.
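
(Back-of-the-envelope, taking the 95% figure at face value and treating each call as independent; a deliberately crude illustration:)

```python
# Crude illustration: at 95% inter-rater agreement, roughly 1 in 20
# "weeding" calls is contested. Population sizes are the ones named above.
error_rate = 0.05
for population in (20, 2000):
    expected_contested = population * error_rate
    print(f"{population} people -> ~{expected_contested:.0f} contested call(s)")
# Prints "20 people -> ~1 contested call(s)" and "2000 people -> ~100 contested call(s)"
```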

You didn't mention the word "Dunbar", for example, that I can tell? You don't seem to have a theory of governance? You don't seem to have a theory of local normative validity (other than epistemic hygiene)? You didn't mention "rights" or "elections" or "prices"? You haven't talked about virtue epistemology or the principle of charity? You don't seem to be citing studies in organizational psychology? It seems to all route through the "stag hunt" idea (and perhaps an implicit (and as yet largely unrealized in practice) sense that more is possible), and that's almost all there is? And based on that you seem to be calling for "weeding" and conflict against imperfectly rational people, which... frankly... seems unwise to me.

Do you see how I'm trying to respond to a gestalt posture you've adopted here that I think leads to lower utility for individuals in little scuffles where each thinks the other is a white raven (I assume albinism is the unnatural, rare, presumptively deleterious phenotype?) and is trying to "weed them", and then ultimately (maybe) it could be very bad for the larger community if "conflict-of-interest based fighting (as distinct from epistemic disagreement)" escalates (R0>1.0) instead of decaying (R0<1.0)?

Replies from: Duncan_Sabien, dxu
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T21:25:11.512Z · LW(p) · GW(p)

If you think I'm irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like.

I'm having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing.  But I'm also reasonably confident that my menu of options looks like:

  • Don't respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don't have one that's grounded in truth
  • Respond in brief, and the very culture that I'm saying currently isn't trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say.  This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
  • Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
  • Respond at length to all such comments, even though it's easier to produce bullshit than to refute bullshit, meaning that I'm basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written.  "People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet."

Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that "things just get better and better!  Relax and just let the weeds and the plants duke it out, and surely the plants will win!"

Completely ignoring the assertion I made, with substantial effort and detail, that it's bad right now, and not getting better.  Refusing to engage with it at all.  Refusing to grant it even the dignity of a hypothesis.

That seems bad.

And it doesn't matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn't read it and doesn't care, and there aren't enough other people chiming in.  I've answered the call that you're making here half a dozen times, elsewhere.  More than once on this very post.  But that doesn't count for anything in your book, and the audience doesn't see it or care about it.  From the audience's perspective, you made a pretty good comment and I didn't substantively respond, and that's not a good look, eh?

I don't want to keep falling prey to this dynamic.  But here, since you asked.  I don't have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I'll at least provide a short pointer.

Like... this is literally black and white thinking?

Fallacy of the grey, ironic in this case.  "Black and white thinking" is not always bad or inappropriate; some things are in fact more or less binary and using the label "black and white thinking" to delegitimize something without checking to what degree it's actually right to be thinking in binaries is disingenuous and sloppy.

And why would a good and sane person ever want

I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem unbelievable.  Trying to win the argument without actually having it.  And, as I noted, implicitly conflating your inability to imagine a reason with there not being one, which has the general effect of nudging readers toward a belief that anything they don't already see must not be real.

And what the fuck with "weeds" and "weeding" where the bad species is locally genocided?

Just because a plant is "non-desired" doesn't actually mean you need to make it not thrive. It might be mostly harmless. It might be non-obviously commensal. Maybe your initial desires are improper? Have some humility.

Abusing the metaphor.  Seizing on one of multiple metaphors, which were headlined explicitly as being attempts to clumsily gesture at or triangulate a thing, and importing a bunch of emotion on an irrelevant axis.  Trying to paint the position you're disagreeing with as genocide.  A social "gotcha."  An applause light.  At the end, a hypocritical call for humility, right after not having humility yourself about whether or not weeding is good or necessary.  Black and white thinking, right after using the label "black and white" as a rhetorical weapon.  You later go on to talk about a property of actual weeds, but don't even try to establish any way in which it's relevantly analogous.

Maybe your initial desires are improper?

"Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I'm going to do the Carlson/Shapiro thing of 'just asking a question' and then not settling it, because I can score points with the implication and then fade into the mists.  I don't have to stick my neck out or put any skin in the game."

Just because voting is wrong, here and there... like... so what? Some of my best comments have gotten negative votes and some of the ones I'm most ashamed of go to the top. This means that the voters are sometimes dumb. That's OK. That's life. Maybe educate them?

Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things.  Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP.  Also, infuriatingly smug and dismissive with "maybe educate them?" as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?

Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don't make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.

"I failed at this, so I'm going to undermine other people trying to do a similar thing, and call it savviness.  Also, here, have some strawmanning of your point."

We don't need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn't too much work to do so. The rest might be mulch, but mulch is good too <3

Assertion with no justification and no detail and no model.  Ignoring the entire claim of the OP, which is that the current thing is observably not working.  And again, a fraction of the effort required to refute, so offering me the choice of "let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent."

A way you could have engaged is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just... like... living and letting live, and trying to learn from things you initially can't appreciate?

Isolated demand for rigor.  Putting the burden of proof on my position instead of yours, rather than cooperatively asking hey, can we talk about where the burden of proof lies?  Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources.  Instead of noting confusion about that ("I think you think you've made a case here, but I didn't follow it; can you expand on X?") just pretending like I hadn't done the work.  Same thing happening with "I'm saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented... probably... but not certainly."

...and I'd like to know what those are, how they can be detected in people or conversations or whatever??

Literally listed in the essay.  Literally listed in the essay.

Perhaps you could explain "epistemic hygiene" to me in mechanistic detail, and show how I'm messing it up?

Again the trap; "just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you've already done?" Framing things such that non-response will seem like I'm being uncooperative and unreasonable, when in fact you're just refusing to meet me halfway.  And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is "we've already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards."

But anyway, I'm doing it (a little) here.  For the hundredth time, even though it won't actually help much and you'll still be upvoted and I'll still be downvoted and I'll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.  

You don't wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn't have a preference between [martial arts actions] and [everything else] and certainly shouldn't enforce that people limit themselves to [martial arts actions] while participating in the class, that's black-and-white thinking, just let everyone put their ideas into a free marketplace!

Well-kept gardens die by pacifism. [LW · GW]  If you don't think that a garden being well-kept is a good thing, that's fine.  Go live in a messy garden.  Don't actively undermine someone trying to clean up a garden that's trying to be neat.

Alternately, "we used to feel comfortable telling users that they needed to just go read the Sequences.  Why did that become less fashionable, again?"

I try to mostly make peace, because I believe conflict and "intent to harm" is very very costly.

Except that you're actively undermining a thing which is either crucial to this site's goals, or at least plausibly so (hence my flagging it for debate).  The veneer of cooperation is not the same thing as actually not doing damage.

If we really need to start banning the weeds, for sure and for true... because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector... then I might propose that you be banned?

Strawmanning.  Strawmanning.

But I don't think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.

Except that you're actively undermining my attempt to pre-establish boundaries here.  To enshrine, in a place called "LessWrong," that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites. 

The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.

"The thing I want to do is strawman what you're arguing for as 'proactively harming people for failing to live up to an ideal,' such that I can gently condescend to you about how it's costly and cascades and leads to vaguely undefined bad outcomes.  This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays."

"I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it" is a bad plan for making the world good.

STRAWMANNING.  "You said [A].  Rather than engage with [A], I'm going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief."

The specific problem: what's the inter-rater reliability like for "decisions to weed"? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people... it's a recipe for disaster.

"I bet it is low, but rather than proposing a test, I'm going to just declare it impossible on the scale of this site."

I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn't get a grip on "where to begin."

Replies from: dxu, habryka4, GWS, GWS
comment by dxu · 2021-11-08T21:40:44.229Z · LW(p) · GW(p)
  • Don't respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don't have one that's grounded in truth
  • Respond in brief, and the very culture that I'm saying currently isn't trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
  • Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
  • Respond at length to all such comments, even though it's easier to produce bullshit than to refute bullshit, meaning that I'm basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. "People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet."

I am less confident than you are in your points, and I am also of the opinion that both of Jennifer's comments were posted in good faith. I wanted to say, however, that I strongly appreciate your highlighting of this dynamic, which I myself have observed play out too many times to count. I want to reinforce the norm of pointing out fucky dynamics when they occur, since I think the failure to do this is one of the primary routes through which "not enough concentration of force" can corrode discussion; that alone would have been enough to merit a strong upvote of the parent comment.

(Separately I would also like to offer commiseration, since I perceive that you are Feeling Bad at the moment. It's not clear to me what the best way is to do this, so I settled for adding this parenthetical note.)

Replies from: Dweomite, Taran, JenniferRM
comment by Dweomite · 2021-11-15T05:25:12.980Z · LW(p) · GW(p)

I'd contend that a post can be "in good faith" in the sense of being a sincere attempt to communicate your actual beliefs and your actual reasons for them, while nonetheless containing harmful patterns such as logical fallacies, misleading rhetorical tricks, excessive verbosity, and low effort to understand your conversational partner.  Accusing someone of perpetuating harmful dynamics doesn't necessarily imply bad faith.

In fact, I see this distinction as being central to the OP.  Duncan talks about how his brain does bad things on autopilot when his focus slips, and he wants to be called on them so that he can get better at avoiding them.

comment by Taran · 2021-11-08T22:25:52.038Z · LW(p) · GW(p)

I want to reinforce the norm of pointing out fucky dynamics when they occur...

Calling this subthread part of a fucky dynamic is begging the question a bit, I think.

If I post something that's wrong, I'll get a lot of replies pushing back.  It'll be hard for me to write persuasive responses, since I'll have to work around the holes in my post and won't be able to engage the strongest counterarguments directly.  I'll face the exact quadrilemma you quoted, and if I don't admit my mistake, it'll be unpleasant for me!  But, there's nothing fucky happening: that's just how it goes when you're wrong in a place where lots of bored people can see.

When the replies are arrant, bad faith nonsense, it becomes fucky.  But the structure is the same either way: if you were reading a thread you knew nothing about on an object level, you wouldn't be able to tell whether you were looking at a good dynamic or a bad one.

So, calling this "fucky" is calling JenniferRM's post "bullshit".  Maybe that's your model of JenniferRM's post, in which case I guess I just wasted your time, sorry about that.  If not, I hope this was a helpful refinement.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T22:27:33.349Z · LW(p) · GW(p)

(My sense is that dxu is not referring to JenniferRM's post, so much as the broader dynamic of how disagreement and engagement unfold, and what incentives that creates.)

Replies from: dxu
comment by dxu · 2021-11-08T22:29:51.272Z · LW(p) · GW(p)

Endorsed.

Replies from: Taran
comment by Taran · 2021-11-08T22:57:24.439Z · LW(p) · GW(p)

Fair enough!  My claim is that you zoomed out too far: the quadrilemma you quoted is neither good nor evil, and it occurs in both healthy threads and unhealthy ones.  

(Which means that, if you want to have a norm about calling out fucky dynamics, you also need a norm in which people can call each other's posts "bullshit" without getting too worked up or disrupting the overall social order.  I've been in communities that worked that way, but it seemed to just be a founder effect; I'm not sure how you'd create that norm in a group with a strong existing culture).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-11-09T00:20:08.451Z · LW(p) · GW(p)

It's often useful to have possibly false things pointed out to keep them in mind as hypotheses or even raw material for new hypotheses. When these things are confidently asserted as obviously correct, or given irredeemably faulty justifications, that doesn't diminish their value in this respect, it just creates a separate problem.

A healthy framing for this activity is to explain theories without claiming their truth or relevance. Here, judging what's true acts as a "solution" for the problem, while understanding available theories of what might plausibly be true is the phase of discussing the problem [LW · GW]. So when others do propose solutions (do claim what's true), a useful process is to ignore that aspect at first.

Only once there is saturation, and more claims don't help new hypotheses to become thinkable, does this become counterproductive and possibly mostly manipulation of popular opinion.

comment by JenniferRM · 2021-11-14T18:27:46.597Z · LW(p) · GW(p)

This word "fucky" is not native to my idiolect, but I've heard it from Berkeley folks in the last year or two. Some of the "fuckiness" of the dynamic might be reduced if tapping out [? · GW] as a respectable move in a conversation.

I'm trying not to tap out of this conversation, but I have limited minutes and so my responses are likely to be delayed by hours or days. 

I see Duncan as suffering, and confused, and I fear that in his confusion (to try to reduce his suffering), he might damage virtues of lesswrong that I appreciate, but he might not. 

If I get voted down, or not upvoted, I don't care. My goal is to somehow help Duncan and maybe be less confused and not suffer, and also not be interested in "damaging lesswrong".

I think Duncan is strongly attached to his attempt to normatively move LW, and I admire the energy he is willing to bring to these efforts. He cares, and he gives because he cares, I think? Probably?

Maybe he's trying to respond to every response as a potential "cost of doing the great work" which he is willing to shoulder?  But... I would expect him to get a sore shoulder though, eventually :-(

If "the general audience" is the causal locus through which a person's speech act might accomplish something (rather than really actually wanting primarily to change your direct interlocutor's mind (who you are speaking to "in front of the audience")) then tapping out of a conversation might "make the original thesis seem to the audience to have less justification" and then, if the audience's brains were the thing truly of value to you, you might refuse to tap out?

This is a real stress. It can take lots and lots of minutes to respond to everything.

Sometimes problems are so constrained that the solution set is empty, and in this case it might be that "the minutes being too few" is the ultimate constraint? This is one of the reasons that I like high bandwidth stuff, like "being in the same room with a whiteboard nearby". It is hard for me to math very well in the absence of shared scratchspace for diagrams.

Other options (that sometimes work) include PMs, or phone calls, or IRC-then-post-the-logs as a mutually endorsed summary. I'm coming in 6 days late here, and skipped breakfast to compose this (and several other responses), and my next ping might not be for another couple days. C'est la vie <3

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-14T18:33:45.070Z · LW(p) · GW(p)

My goal is to somehow help Duncan

If your goal is to somehow help Duncan, you could start by ceasing to relentlessly and overconfidently proceed with wrong models of me.

comment by habryka (habryka4) · 2021-11-08T22:29:16.386Z · LW(p) · GW(p)

I liked the effort put into this comment, and found it worth reading, but disagree with it very substantially. I also think I expect it to overall have bad consequences on the discussion, mostly via something like "illusion of transparency" and "trying to force the discussion to happen that you want to happen, and making it hard for people to come in with a different frame", but am not confident. 

I think the first one is sad, and something I expect would be resolved after some more rounds of comments or conversations. I don't actually really know what to do about the second one, like, on a deeper level. I feel like "people wanting to have a different type of discussion than the OP wants to have" is a common problem on LW that causes people to have bad experiences, and I would like to fix it. I have some guesses for fixes, but none that seem super promising. I am also not totally confident it's a huge problem and worth focussing on at the margin.

comment by Stephen Bennett (GWS) · 2021-12-06T02:59:51.849Z · LW(p) · GW(p)

In light of your recent post on trying to establish a set of norms and guidelines for LessWrong (I think you accidentally posted it before it was finished, since some chunks of it were still missing, but it seemed to elaborate on things you put forth in stag hunt), it seems worthwhile to revisit this comment you made about a month ago that I commented on [LW(p) · GW(p)]. In my comment I focused on the heat of your comment, and how that heat could lead to misunderstandings. In that context, I was worried that a more incisive critique would be counterproductive. Among other things, it would be increasing the heat in a conversation that I believed to be too heated. The other worries were that I expected that you would interpret the critique as an attack that needed defending, I intuited that you were feeling bad and that taking a very critical lens to your words would worsen your mood, and that this comment is going to take me a bunch of work (Author's note: I've finished writing it. It took about 6 hours to compose, although that includes some breaks). In this comment, I'm going to provide that more incisive critique.

My goal is to engender a greater degree of empathy in you when you engage with commenters that disagree with you. This higher empathy would probably result in lower heat, which would allow you to come closer to the truth since you would receive higher quality criticism. This is related to what habryka says here [LW(p) · GW(p)], where they say that "...I think the outcome would have been better if you had waited to write your long comment. This comment felt like it kicked up the heat a bunch...", and Elizabeth says here [LW(p) · GW(p)] that "I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average." In order to do this, I'm going to reread your Stag Hunt post, reread the comment chain leading up to your comment, and then do a line-by-line analysis of that comment looking for violations of the guidelines to rationalist discourse that you set in Stag Hunt.

My goal is twofold: to provide evidence that you would be helped by greater empathy (and lower heat) directed towards your critics, and to echo what I see as the meat of Jennifer's comment; that if I were to adopt the framing I see in Stag Hunt, it would be on net detrimental to the LessWrong community.

Before all that, I want to reiterate: I like the beginning of your comment. Pointing out the rock-and-a-hard-place dilemma that you feel after reading her comment is a valuable insight, but I think that for the most part your comment would be stronger without the heated line-by-line critique of her comment. She gave you an invitation to do this, and so the line-by-line focus on flaws in her comment is appropriate, but the heat you brought and your apparent confidence in assessing her mental state seem unwarranted. While you did not give such permission in that comment of yours, in the post itself you said:

I'd really like it if I were embedded in a supportive ecosystem. If there were clear, immediate, and reliable incentives for doing it right, and clear, immediate, and reliable disincentives for doing it wrong. If there were actual norms (as opposed to nominal ones, norms-in-name-only) that gave me hints and guidance and encouragement. If there were dozens or even hundreds of people around, such that I could be confident that, when I lose focus for a minute, someone else will catch me.

Catch me, and set me straight.

Because I want to be set straight.

Because I actually care about what's real, and what's true, and what's justified, and what's rational, even though my brain is only kinda-sorta halfway on board, and keeps thinking that the right thing to do is Win.

Sometimes, when people catch me, I wince, and sometimes, I get grumpy, because I'm working with a pretty crappy OS, here. But I try to get past the wince as quickly as possible, and I try to say "thank you," and I try to make it clear that I mean it, because honestly, the people that catch me are on my side. They are helping me live up to a value that I hold in my own heart, even though I don't always succeed in embodying it.

I like it when people save me from the mistakes I listed above. I genuinely like it, even if sometimes it takes my brain a moment to catch up.

I think that Jennifer's comment was, in part, doing this. I agree that her comment was highly flawed, and many of the critiques in your line-by-line are valid, but I expect that the net effect of your comment is to discourage both comments like hers (which it seems to me you think are a net negative contribution to the discussion), and also comments like this one. I should note here a great irony in the fact that this particular comment of yours has garnered the most analysis of this sort by me compared to any of your others. I think this is simply because I take great joy in pointing out what I see as hypocrisies, and so I would be surprised if it generalized to a similar comment to this one that was made in a different context. The rubric I'll be using to evaluate your comments is going to be the degree to which the comment falls into the mistakes you outline in Stag Hunt:

1 Make no attempt to distinguish between what it feels is true and what is reasonable to believe.
2 Make no attempt to distinguish between what it feels is good and what is actually good.
3 Make wildly overconfident assertions that it doesn't even believe (that it will e.g. abandon immediately if forced to make a bet).
4 Weaponize equivocation and maximize plausible deniability à la motte-and-bailey, squeezing the maximum amount of wiggle room out of words and phrases. Say things that it knows will be interpreted a certain way, while knowing that they can be defended as if they meant something more innocent.
5 Neglect the difference between what things look like and what they actually are; fail to retain any skepticism on behalf of the possibility that I might be deceived by surface resemblance.
6 Treat a 70% probability of innocence and a 30% probability of guilt as a 100% chance that the person is 30% guilty (i.e. kinda guilty).
7 Wantonly project or otherwise read into people's actions and statements; evaluate those actions and statements by asking "what would have to be true inside my head, for me to output this behavior?" and then just assume that that's what's going on for them.
8 Pretend that it is speaking directly to a specific person while secretly spending the majority of its attention and optimization power on playing to some imagined larger audience.
9 Generate interventions that will make me feel better, regardless of whether or not they'll solve the problem (and regardless of whether or not there even is a real problem to be solved, versus an ungrounded anxiety/imaginary injury).

I added the numbers because that makes them easier to reference. I am sufficiently confused by 1, 2, and 9 that I don't think I'd be able to identify them if I saw them, so I'll ignore those. The rest I'll summarize in one-or-two word phrases, which will make them easier to reference throughout in a way that is more legible to readers.

3: Overconfidence
4: Motte-and-bailey
5: [blank] (In the process of making this list, I couldn't figure out a short handle for this that wasn't just "Overconfidence" or "Strawmanning", although there does seem to be a difference between this and those. I'm a bit stuck and confused here, presumably I'm lacking some understanding of what this is that would let me compress it.)
6: Failure to track uncertainty. (I'm not sure if this point is intended to be an instance of the broader class of not tracking uncertainty or specific to tracking guilt).
7: Failure of empathy.
8: Playing to the crowd.

You also accuse Jennifer of strawmanning throughout, which I'll add to the argumentative tactics that you would like pointed out to you. I take strawmanning to mean "The act of presenting a weaker version of someone's argument to argue against. This is most noticeable when paraphrasing their statement in words they would not endorse, and then putting those words in quotation marks".

Before any analysis of your comment, I'd like to summarize Jennifer's comment in my own words (from memory, I read her comment for the second time about 2 hours ago and I'm doing this while about 1/4 of the way through analyzing your comment):

You seem to be advocating for a more conflict-oriented framing of lesswrong discourse than I'm comfortable with. You keep coming back to a weed/weeding framing and a stag hunt, but I don't think that the rate of comments that violate an unstated set of rationalist norms has a substantive impact on our ability to engage in good discussions. When you propose that weeds be pruned from our garden, I take you to mean that users who violate those norms ought to be banned, and I wonder what metric will be used to do the banning. I suspect it will be on net destructive towards the goal of a prosperous garden for rationalist discourse. Indeed, if people who violate those norms ought to be banned, I suspect that I would advocate for your banning because you do those very things. I'm being critical of your post ("pokey"), and it seems to me that you find it unpleasant. Do we really want the levels of criticality to increase?

This is presumably quite different from what she actually said, but that's the essence of what I understood her to mean.

Anyways, enough exposition. I'll be quoting everything you say, line by line, and doing my best to describe the degree to which it lapses into any of the fallacies outlined above. I'll also provide running commentary to stitch everything together into a cohesive mass. Some lines won't have any commentary, which I'll denote with ".". If I interrupt a paragraph, I'll end the quote with "..." and begin the next quote with "...". I'm aiming for either a dispassionate or an empathetic tone throughout; wish me great skill:


If you think I'm irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like.

I'm having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing. But I'm also reasonably confident that my menu of options looks like:

  • Don't respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don't have one that's grounded in truth
  • Respond in brief, and the very culture that I'm saying currently isn't trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.

This makes it easier for me to model you and improves my sense of clarity surrounding the disagreement since I read it as a description of how you see yourself and how you see the disagreement between yourself and Jennifer. This is far and away my favorite part of your post.

In my view the individual points take an overly negative view of the outcomes of your potential options. If you hadn't responded, I think you would be overestimating the degree to which I and other commenters would think that Jennifer is right (relative to how "right" I think she is now, having read your response several times). If you had responded in brief, it's harder for me to guess how I would have viewed your comment, because you did not respond in brief. Had you only included the part quoted above, for instance, I would have flagged Stag Hunt and Jennifer's comments as likely rooted in an unstated disagreement about something more fundamental than what the two of you are explicitly talking about, but I wouldn't know what it was (although it's hard to say how much of that is my current view intruding).

  • Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)

This comment supposes in a parenthetical that there are many things wrong with Jennifer's comment, but has not yet fortified that claim. From a rhetorical standpoint, I see this as justifying the subsequent line-by-line analysis of Jennifer's comment. It's also not clear to me why the existence of essays that describe the issues with Jennifer's comment makes the citation of those essays in refuting her comment sensation-of-doom inducing. I'm guessing it's because you believe that if an essay exists that describes the problematic outcomes of a rhetorical/argumentative device you are about to use, you should never use that device?

There might be some Overconfidence in here, since I suspect that (had people not read your comment) Jennifer's comment would score less-than-the-mean in terms of its violation of site norms, although I don't know how we would measure this (and therefore turn it into a bet, which would let you examine the degree to which your comment engages in Overconfidence for yourself).

  • Respond at length to all such comments, even though it's easier to produce bullshit than to refute bullshit, meaning that I'm basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. "People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet."

I notice that this implies, but does not quite state, that Jennifer's comment is bullshit.

Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that "things just get better and better! Relax and just let the weeds and the plants duke it out, and surely the plants will win!"

Strawmanning. Jennifer's comment seems closer to "while weeds may indeed exist, they are hard to differentiate from the plants the garden is intended to cultivate and may have no negative effects on those plants".

Completely ignoring the assertion I made, with substantial effort and detail, that it's bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.

I took Jennifer's comment as disagreeing with that state of affairs, proposing that weeds might not be easily differentiable from non-weeds, and challenging the weeding/garden framing entirely. I think that Jennifer's comment would be stronger if she spoke to the specific instances you highlighted in the parenthetical of commenting/upvotes-gone-awry, although I should note that I found the comments that did that elsewhere somewhat confusing.

That seems bad.

And it doesn't matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn't read it and doesn't care, and there aren't enough other people chiming in. I've answered the call that you're making here half a dozen times, elsewhere. More than once on this very post. But that doesn't count for anything in your book, and the audience doesn't see it or care about it. From the audience's perspective, you made a pretty good comment and I didn't substantively respond, and that's not a good look, eh?

This reads to me as a mixture of several things:

  • A statement about your own mind (i.e. that you feel you are losing a social war), which you are the true authority on.
  • A statement about the state of LessWrong norms (i.e. that you feel that LessWrong norms are bad, and that your current attempts to improve them have no impact)
  • A statement about me and others who are reading this exchange between you and Jennifer (that we have not noticed that Jennifer violates some discourse norms in her comment because she is upvoted: a Failure of empathy)

I also have a couple points I'd like to respond to:

  • When you say "I've answered the call that you're making here...", I don't know what call you're referencing.
  • You say that "there aren't enough other people chiming in" in reference to "in-depth analysis of all the ways that a bad comment was bad". I think that's what I'm doing here (although I don't endorse it phrased in those terms). I also feel discouraged w.r.t. making comments like these when I read that, although I'm not sure why. Perhaps I don't like being told I'm on the losing side of a war. Perhaps I don't like anticipating that this comment is futile.

I don't want to keep falling prey to this dynamic. But here, since you asked. I don't have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I'll at least provide a short pointer.

Like... this is literally black and white thinking?

Fallacy of the grey, ironic in this case. "Black and white thinking" is not always bad or inappropriate; some things are in fact more or less binary and using the label "black and white thinking" to delegitimize something without checking to what degree it's actually right to be thinking in binaries is disingenuous and sloppy.

And why would a good and sane person ever want

I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem implausible. Trying to win the argument without actually having it. And, as I noted, implicitly conflating your inability to imagine a reason with there not being one—...

This seems like a good critique.

...having the general effect of nudging readers toward a belief that anything they don't already see must not be real.

That isn't the effect that her rhetoric had on me, so I disagree with you on the object level.

I also think that normatively people ought to be cautious about reasoning about the consequences that other people's comments might have on an imagined audience, since it seems like the sort of thing that can be leveraged to disparage many comments that are on net beneficial to the platform.

Maybe your initial desires are improper?

"Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I'm going to do the Carlson/Shapiro thing of 'just asking a question' and then not settling it, because I can score points with the implication and then fade into the mists. I don't have to stick my neck out or put any skin in the game."

Strawmanning, playing to the crowd.

Just because voting is wrong, here and there... like... so what? Some of my best comments have gotten negative votes and some of the ones I'm most ashamed of go to the top. This means that the voters are sometimes dumb. That's OK. That's life. Maybe educate them?

Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things. Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP. Also, infuriatingly smug and dismissive with "maybe educate them?" as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?

Failure of empathy. It seems to me that Jennifer's dismissal of the importance of the relative scoring of a couple of comments stemmed from not seeing it tied to the point that the little things matter. There are 2173 words between the paragraph that begins "Yet I nevertheless feel that I encounter resistance of various forms when attempting to point at small things as if they are important..." and the paragraph in which you identify comments that had bad outcomes as measured by upvotes in your view (which begins "(I set aside a few minutes to go grab some examples...)"). That's a fair bit of time to track that particular point. Do you expect everyone to track your arguments with that level of fidelity? Do you track others' arguments that well? I'll remark that I typically don't, although I might manage to when it comes to pointing out hypocrisy because it's something that I have a proclivity for.

I'll also remark that I read this response as smug and dismissive, although my hypocrisy detector is rather highly tuned right now, and so I'm more likely to read hypocrisy when it isn't present.

Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don't make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.

"I failed at this, so I'm going to undermine other people trying to do a similar thing, and call it savviness. Also, here, have some strawmanning of your point."

Strawmanning of the hypocritical variety.

I take Jennifer to be talking about the fact that the community does not agree with her with respect to voting norms (as measured by the behavior that she observes on LessWrong).

We don't need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn't too much work to do so. The rest might be mulch, but mulch is good too <3

Assertion with no justification and no detail and no model. Ignoring the entire claim of the OP, which is that the current thing is observably not working...

Her statement here seems to follow from her elsewhere stating that the goal of gardening is to grow the desired plants, and that weeding is largely immaterial to that goal. I agree that she has not provided a causal mechanism by which weeding, when brought back to the state of LessWrong comment culture, is immaterial to thriving plant life. However, I don't recall you making the other argument in your OP. You gestured towards that fact and it rested as a background assumption in much of your post, but it's not one that I remember you arguing or providing evidence for (beyond the claim that you are better than average at detecting the degree to which such things are problematic). I'm not going to re-re-read your OP to check this, but if you did make this claim I would like to hear it.

... And again, a fraction of the effort required to refute, so offering me the choice of "let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent."

I did not read her comment as a zinger. Also playing to the audience.

A way you could have engaged with this is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just... like... living and letting live, and trying to learn from things you initially can't appreciate?

Isolated demand for rigor. Putting the burden of proof on my position instead of yours, rather than cooperatively asking hey, can we talk about where the burden of proof lies? Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources. Instead of noting confusion about that ("I think you think you've made a case here, but I didn't follow it; can you expand on X?") just pretending like I hadn't done the work...

Hmm, it looks like I also missed your argument in favor of the cost effectiveness of adversarial attacks on the weeds. I recall that your previous essay discussed the value of a concentration of force, which is a reason to support such attacks, but is not an argument about their cost effectiveness (you say "a valuable use of resources", and I say "cost effective". If there's a material difference there, let me know).

Same thing happening with "I'm saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented... probably... but not certainly."

Strawmanning.

...and I'd like to know what those are, how they can be detected in people or conversations or whatever??

Literally listed in the essay. Literally listed in the essay.

From memory, you listed fallacies that you yourself tended to fall into, but when it came to evidence taken from other commenters it was a list of links without much context. There's also a difference between having a list of fallacies and having a mechanism by which those fallacies can be detected and corrected. Perhaps you're referring to the list of ideas that you list as "bad ideas" at the end, but then I'm confused about the degree to which you actually believe they're bad ideas. If she is saying that a strategy for selecting weeds from among the desirable plants is necessary before the call to action (she is saying something probably importantly different, but tracking points of view is getting exhausting), and you have preemptively agreed that you do not have a good mechanism to do this, then I don't understand why you disagree with her disagreement here.

Perhaps you could explain "epistemic hygiene" to me in mechanistic detail, and show how I'm messing it up?

Again the trap

I feel I've talked about this particular phrase enough.

..."just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you've already done?"...

Strawmanning

...Framing things such that non-response will seem like I'm being uncooperative and unreasonable, when in fact you're just refusing to meet me halfway. And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is "we've already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards."

Failure of empathy, and possibly playing to the audience (to the extent that you are accusing her of playing to the audience without outright saying it).

But anyway, I'm doing it (a little) here...

Good!

...For the hundredth time, even though it won't actually help much and you'll still be upvoted and I'll still be downvoted and I'll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.

Overconfidence.

You don't wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn't have a preference between [martial arts actions] and [everything else] and certainly shouldn't enforce that people limit themselves to [martial arts actions] while participating in the class, that's black-and-white thinking, just let everyone put their ideas into a free marketplace!

To the extent that you're accusing Jennifer of sneering about you caring about rationalist discourse norms on LessWrong, this is a failure of empathy.

Well-kept gardens die by pacifism. If you don't think that a garden being well-kept is a good thing, that's fine. Go live in a messy garden. Don't actively undermine someone trying to clean up a garden that's trying to be neat.

My understanding of Jennifer's comment is that she believes you will make the garden messier with the arguments you are putting forth in Stag Hunt.

Alternately, "we used to feel comfortable telling users that they needed to just go read the Sequences. Why did that become less fashionable, again?"

I don't know the extent to which this is a rhetorical question, but to answer it earnestly I would expect that telling a user to read the sequences is an act that takes several orders of magnitude less effort than actually reading the sequences. I'm not confident about what the relative orders of magnitude should be between the critique-er and the critique-ee, but 1:2 (for a total of 1:10 effort) is where my intuition places the ratio. Reading a comment, deciding that it is unworthy of LessWrong discourse norms, and typing "read the sequences" is probably closer to a 1:5 ratio between the orders of magnitude of effort (i.e. it takes 100,000 times as much effort to read the entirety of the sequences as it does to make such a comment).

I try to mostly make peace, because I believe conflict and "intent to harm" is very very costly.

Except that you're actively undermining a thing which is either crucial to this site's goals, or at least plausibly so (hence my flagging it for debate). The veneer of cooperation is not the same thing as actually not doing damage.

This read to me as Jennifer stating her desire for cooperation, which is a signal that doesn't come free! It cost her something, at a minimum the effort to type it.

Your response reads to me as throwing that request for cooperation back in her face and using her intent to cooperate as evidence that she is somehow even less cooperative than you expected prior to this statement. It's possible that you just intended to disagree with her on the material fact that she intends cooperation, or that you were observing that her actions do not align with her words.

If we really need to start banning the weeds, for sure and for true... because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector... then I might propose that you be banned?

Strawmanning. Strawmanning.

I agree that the beginning of that statement is strawmanning.

The core of that statement, in my eyes, is its final clause: that if she agreed with the argument you put forth in Stag Hunt as she understands it, she would advocate for your banning.

To avoid further illusions of transparency, I'll analyze how I would act if I based my actions on what I understand you to argue in Stag Hunt: If I were to suspend my own judgment and base my actions solely on my best attempt to interpret what you advocate for in Stag Hunt, I would strong downvote your comment because I see it as much much more "weed-like" than the average comment on LessWrong. It is a violation of the point of view you put forth in Stag Hunt because it normalizes bad forms (I suspect it succeeds despite this because it is prefaced with a valuable insight). I believe it normalizes bad forms because I see it as strawmanning, projecting statements and actions into others' minds, pretending to speak to Jennifer while actually speaking mostly to the LessWrong community at large, and failing to retain skepticism that you might have deceived yourself w.r.t. the extent of Jennifer's violations of rationalist discourse.

Instead, I weakly upvoted it because the first part of it is very useful, and responded to what I saw as the primary fault with the rest of it; that you engaged with Jennifer's comment from a very conflict-centric point of view which led to high heat. As a result of this framing, you misunderstood most of her comment.

But I don't think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.

Except that you're actively undermining my attempt to pre-establish boundaries here. To enshrine, in a place called "LessWrong," that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites.

The boundaries that Jennifer is referring to here are boundaries on the extent of the conflict. What you advocate for in Stag Hunt is an expanding of those boundaries, and it was not clear to me upon reading it where those boundaries would end.

The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.

"The thing I want to do is strawman what you're arguing for as 'proactively harming people for failing to live up to an ideal,' such that I can gently condescend to you about how it's costly and cascades and leads to vaguely undefined bad outcomes. This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays."

While I agree that Jennifer is strawmanning here, this is the second instance of accusing Jennifer of strawmanning while strawmanning.

"I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it" is a bad plan for making the world good.

STRAWMANNING. "You said [A]. Rather than engage with [A], I'm going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief."

Same as above.

The specific problem: what's the inter-rater reliability like for "decisions to weed"? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people... it's a recipe for disaster.

"I bet it is low, but rather than proposing a test, I'm going to just declare it impossible on the scale of this site."

Strawmanning. I take Jennifer as reiterating one of her central points here: if we take it as true that there are good comments and bad comments, and that we want to do something about the bad comments, then through what policy are we going to identify those bad comments (leaving aside what we then do about those bad comments)?

You offered what you yourself remarked were very bad ideas. Jennifer's argument rests on the claim that such methods are rare, costly, or do not exist (but does not make that claim explicit).

I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn't get a grip on "where to begin."

This seems mean to me. You already don't quote everything she says; you don't have to remark on those last two paragraphs.


I'm not sure that going line by line was the most effective way to achieve my goals. It was costly, but I didn't see another way to get you to internalize the fact that people are regularly taking costly measures to try to improve your model of the world, and I see you as largely ignoring them or accusing them of wrongdoing. Not all critiques of your work can be as comprehensive as mine is here, since as you pointed out, "it's easier to produce bullshit than to refute bullshit" (I granted myself this one zinger as motivation for finishing this comment; if others remain in the text, they are not intended).


Meta-question: Is this the sort of thing that's appropriate to post as a top-level post? It seems fairly specific, but I worked hard on it and I imagine it as encapsulating the virtues that you put forth in Stag Hunt and your hopefully-soon-to-be-posted guidelines for rationalist discourse.

Edited for clarity on the 1:5 point and a few typos.

comment by Stephen Bennett (GWS) · 2021-11-10T02:09:01.857Z · LW(p) · GW(p)

I'm glad you took the time to respond here, and there is a lot I like about this comment. In particular, I appreciate this comment for:

  • Being specific without losing sight of the general message of the parent comment.
  • Sharing how you see your situation at the outset, which puts the tone of the comment in context.
  • Identifying clear points of disagreement where possible.

There are, however, some points of disagreement I'd like to raise and some possible deleterious consequences I'd like to flag.

I share the concern raised by habryka about the illusion of transparency, which may be increasing your confidence that you are correctly interpreting the intended meaning (and intended consequences) of Jennifer's words. I'll go into (possibly too much) detail on one very short example of what you've written and how it may involve some misreading of Jennifer's comment. You quote Jennifer:

Perhaps you could explain "epistemic hygiene" to me in mechanistic detail, and show how I'm messing it up?

and respond:

Again the trap; ...

I was also confused about what you meant by epistemic hygiene when finishing the essays. Elsewhere someone asked whether they were one of the ones doing the bad thing you were gesturing towards, which is another question/insecurity I shared (I do not recall how you responded to that question). It is hopefully clear that when I say this here, in this way, it is not a trap for you. It's a statement of my confusion embedded in a broader point, and I hope you feel no obligation to respond. The point of this exposition isn't to get clarity on that point; it's to (hopefully) inspire a shift of perspective. Your comment struck me as very high heat; that heat reflects a particular perspective. I don't know exactly what that perspective is, but it seems to me that you saw Jennifer's comments as threats. To the extent that you see a comment as a threat, the individual components of the comment take on more sinister airs. I tend to post in a calm tone, so most people have difficulty maintaining perspectives that see me as a threat. The perspective I'm hoping to affect in you is one of collaboration. I am hoping to leverage my nonthreatening way of raising the same confusion as Jennifer so that it is more natural to see that question of Jennifer's in a nonthreatening light. In doing so, I'm hoping to provide a method by which her comment as a whole takes on a less threatening tone. (Again, I expect this characterization of your perspective to be wrong in important ways - you may not see her comment as precisely "threatening".)

Framing her question as a trap also implies that it was "set", i.e. that putting you in a weakened position was part of her intent (although you might not have intended to imply this). It's possible that Jennifer had this intention, but I don't know and I suspect that you don't either. Perhaps you meant that it was a trap in the normative sense, i.e. that because Jennifer included that question you are placed (whether Jennifer intends it or not) in a no-win situation; that it's a statement about you (i.e. you have been trapped even if no one is a hunter setting traps). In the context of your high-heat comment, however, I as a reader expect that you believe Jennifer intended it as a trap.

I mentioned that I was trying to shift your perspective to one of collaboration, but I never gave the motivation for why. What are some of the negative consequences of the high-heat framing? I expect that you will get less of the kind of feedback you want on your posts. I tend to avoid social conflict - particularly social conflict that is high in heat. This neuroticism makes me disinclined to converse with people who adopt high-heat tones, in part because I worry that I will get a high-heat reaction. I do not think I would attempt to convey a broad-scope confusion/disagreement with you of the type that Jennifer did here. I would probably choose to nitpick or simply not respond instead, letting the general confusion remain (in part I do this here; quibbling over tone instead of trying to resolve the major points of confusion with your post. I might try to figure out how to describe my confusion with your post and ask you later). Now, I don't think you should be optimizing solely to get broad-scope-disagreement/confusion responses from neurotic people like me, but I expect you to want to know how your responses are received. The high heat from this comment, even though it is not directed at me, makes me (very slightly) afraid of you.

This relates back to Elizabeth's comment elsewhere, where she says

I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.

I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question-the-validity-of-your-thesis variety). Mostly this is a fault of mine, but high heat responses are part of what I fear when I do not respond (there are lots of other things too, so please do not update strongly on times when I do not respond).


It's likely that this comment should have contained (or simply been entirely composed of) questions, since it instead relied on a fair bit of speculation on my part (although I tried to make most of my statements about my reading of your comment rather than your comment itself). I'm including some of those questions here instead of doing the hard work of rewriting my comment to include them in more natural places (along with some other questions I have). I also don't think it would be productive to respond to all of these at once, so respond only to the ones that you feel like:

  • Did you find my response nonthreatening?
  • Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?
  • Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)
  • Do you find my characterization of your perspective, where Jennifer's comment is/was a threat, accurate?
  • Is a more collaborative perspective available to you at this moment?
  • If it is, do you find it changes your emotional reaction to Jennifer's comment?
  • Do you feel that your comment was high heat?
  • If so, what goals did the high heat accomplish for you?
  • And, do you believe they were worth the costs?
  • Did you find my comment welcome?

I share dxu's perception that you are Feeling Bad and want to extend you some sympathy (my expectation is that you'll enjoy a parenthetical here - all the more if I go meta and reference dxu's parenthetical - so here it is with reference and all).


EDIT: jessica -> Jennifer. Thanks localdeity.

Replies from: Duncan_Sabien, localdeity
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T03:08:16.448Z · LW(p) · GW(p)

I was also confused about what you meant by epistemic hygiene when finishing the essays.

In part, this is because a major claim of the OP is "LessWrong has a canon; there's an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts)."  I didn't set out to describe and define epistemic hygiene within the essay, because one of my foundational assumptions is "this work has already been done; we're just not holding each other to the available existing standards found in all the highly upvoted common memes."

It is hopefully clear that when I say this here, in this way, it is not a trap for you.

This is evidence I wasn't sufficiently clear.  The "trap" I was referring to was the bulleted dynamic, whereby I either cede the argument or have to put forth infinite effort.  I agree that it wasn't at all likely deliberately set by Jennifer, but also there are ways to avoid accidentally setting such traps, such as not strawmanning your conversational partner.

(Strawmanning being, basically, redefining what they're saying in the eyes of the audience.  Which they then either tacitly accept or have to actively overturn.)

I think that, in the context of an essay specifically highlighting "people on this site often behave in ways that make it harder to think," doing a bunch of the stuff Jennifer did is reasonably less forgivable than usual.  It's one thing to, I dunno, use coarse and foul language; it's another thing to use it in response to somebody who's just asked that we maybe swear a little less.  Especially if the locale for the discussion is named LessSwearing (i.e. the person isn't randomly bidding for the adoption of some out-of-the-blue standard).

Your comment struck me as very high heat; that heat reflects a particular perspective. I don't know exactly what that perspective is, but it seems to me that you saw Jennifer's comments as threats.

Yes.  I do not think it was a genuine attempt to engage or converge with me (the way that Said, Elizabeth, johnswentsworth, supposedlyfun, and even agrippa were clearly doing or willing to do), so much as an attempt to condescend, lecture, and belittle, and the crowd of upvotes seemed to indicate either general endorsement of those actions, or a belief that it's fine/doesn't matter/isn't a dealbreaker.  This impression has not shifted much on rereads, and is reminiscent of exactly the prior experiences on LW that caused me to feel the need to write the OP in the first place.

  • Did you find my response nonthreatening?

Yes.

  • Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?

Yes.

  • Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)

It was trust-increasing and felt cooperative throughout.

  • Do you find my characterization of your perspective, where Jennifer's comment is/was a threat, accurate?

For the most part, yes.

  • Is a more collaborative perspective available to you at this moment?

I'm not quite sure what you're asking, here.  I can certainly access a desire to collaborate that is zero percent contingent on agreement with my claims.

  • If it is, do you find it changes your emotional reaction to Jennifer's comment?

No, or at least not yet.  supposedlyfun, for example, seems at least as "hostile" as Jennifer on the level of agreement, but at least bothered to cut out paragraphs they estimated would be likely to be triggering, and mention that fact.  That's a costly signal of "look, I'm really trying to establish a handshake, here," and it engendered substantial desire to reciprocate.  You, too, are making such costly signals.  If Jennifer chose to, that would reframe things somewhat, but in Jennifer's second comment there was a lot of doubling down.

  • Do you feel that your comment was high heat?

Yes. 

  • If so, what goals did the high heat accomplish for you?

This presupposes that it was ... sufficiently strategic, or something?

Goals that were not necessarily well-achieved by the reply:

  • Putting object-level critique in a public place, so the norm violations didn't go unnoticed (I'm not confident anyone else would have objected to the objectionable stuff)
  • Demonstrating that at least one person will in fact push back if someone does the epistemically sloppy bullying thing (I regularly receive messages thanking me for this service)

  • And, do you believe they were worth the costs?

I don't actively believe this, no.  It seems like it could still go either way.  I would be slightly more surprised by it turning out worth it, than by it turning out not worth it.

  • Did you find my comment welcome?

Yes.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-11-10T20:59:08.079Z · LW(p) · GW(p)

epistemic hygiene

This is an example of the illusion of transparency issue. Many salient interpretations of what this means [? · GW] (informed by the popular [LW · GW] posts [LW · GW] on the topic, that are actually not explicitly on this topic) motivate actions that I consider deleterious overall, like punishing half-baked/wild/probably-wrong hypotheses or things that are not obsequiously disclaimed as such, in a way that's insensitive to the actual level of danger of being misleading. A more salient cost is nonsense hogging attention, but that doesn't distinguish it from well-reasoned clear points that don't add insight hogging attention.

The actually serious problem is when this is a symptom of not distinguishing epistemic status of ideas on part of the author, but then it's not at all clear that punishing publication of such thoughts helps the author fix the problem. The personal skill of tagging epistemic status of ideas in one's own mind correctly is what I think of as epistemic hygiene, but I don't expect this to be canon, and I'm not sure that there is no serious disagreement on this point with people who also thought about this. For one, the interpretation I have doesn't specify community norms, and I don't know what epistemic-hygiene-the-norm should be.

comment by dxu · 2021-11-08T21:07:21.151Z · LW(p) · GW(p)

[Obvious disclaimer: I am not Duncan, my views are not necessarily his views, etc.]


It seems to me that your comment is [doing something like] rounding off Duncan's position to [something like] conflict theory, and contrasting it to the alternative of a mistake-oriented approach. This impression mostly comes from passages like the following:

You're sad about the world. I'm sad about it too. I think a major cause is too much poking. You're saying the cause is too little poking. So I poked you. Now what?

If we really need to start banning the weeds, for sure and for true... because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector... then I might propose that you be banned?

And obviously this is inimical to your selfish interests. Obviously you would argue against it for this reason if you shared the core frame of "people can't grow, errors are defection, ban the defectors" because you would also think that you can't grow, and I can't grow, and if we're calling for each other's banning based on "essentializing pro-conflict social logic" because we both think the other is a "weed"... well... I guess its a fight then?

But I don't think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.

To the extent that this impression is accurate, I suspect you and Duncan are (at least somewhat) talking past each other. I don't want to claim I have a strong model of Duncan's stance on this topic, but the model I do have predicts that he would not endorse summaries of his positions along the lines of "people can't grow, errors are defection, ban the defectors"; nor do I think he would endorse a summary of his prescriptions as "more poking", "more fighting", or "more conflict".

Why is this an important clarification, in my view? Well, firstly, on the meta-level I should note that I don't find the "conflict versus mistake" lens particularly convincing; my feeling is that it fails to carve reality at the joints in at least some important ways, in at least some important situations. This makes me in general suspicious of arguments that [seem to me to] depend on this lens (in the sense of containing steps that route substantially through the lens in question). Of course, this is not necessarily an indictment of that lens' applicability in any specific case, but I think it's worth mentioning nonetheless, just to give an idea of the kind of intuitions I'm starting with.

In terms of the argument as it applies to this specific case: I don't think my model of Duncan particularly cares about the inherent motivations behind [what he would consider] violations of epistemic hygiene. Insofar as he does care about those motivations, I think it is only indirectly, in that he predicts different motivations will cause different reactions to pushback, and perhaps "better" motivations (to use a somewhat value-loaded term) will result in "better" reactions.

Of course, this is all very abstract, so let me be more specific: my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist, and also that there are some people on LW whose presence here is only negligibly motivated by that particular desire, if at all. My model of Duncan further predicts that both of these groups, sharing the common vice of being human, will at least occasionally produce epistemic violations; but model!Duncan predicts that the first group, when called out for this, is more likely to make an attempt to shift their thinking towards the epistemic ideal, whereas the second group's likelihood of doing this is significantly lower.

Model!Duncan then argues that, if the ambient level of pushback crosses a certain threshold, this will make being a perennial member of the second group unpleasant enough to be psychologically unsustainable; either they will self-modify into a member of the first group, or (more likely) they will simply leave. Model!Duncan's view is that the departure of such members is not a great loss to LW, and that LW should therefore strive to increase its level of ambient pushback, which (if done in a good way) translates to increasing epistemic standards on a site level.

Note that at no point does this model necessitate the frequent banning of users. Bans (or other forms of moderator action) may be one way to achieve the desired outcome, but model!Duncan thinks that the ideal process ought to be much more organic than this--which is why model!Duncan thinks the real Duncan kept gesturing to karma and voting patterns in his original post, despite there being a frame (which I read you, Jennifer, as endorsing) where karma is simply a number.

Note also that this model makes no assumption that epistemic violations ("errors") are in any way equivalent to "defection", intentional or otherwise. Assuming intent is not necessary; epistemic violations occur by default across the whole population, so there is no need to make additional assumptions about intent. And, on the flipside of that coin, it is not so strange to imagine that even people who are striving to escape from the default human behavior may still need gentle reminders from time to time.

(And if there are people on this site who do not so strive, and for whom the reminders in question serve no purpose but to annoy and frustrate, to the point of making them leave--well, says model!Duncan, so much the worse for them, and so much the better for LW.)


Finally, note that at no point have I made an attempt to define what, exactly, constitutes "epistemic violations", "epistemic standards", or "epistemic hygiene". This is because this is the point where I am least confident in my model of Duncan, and separately where I also think his argument is at its weakest. It seems plausible to me that, even if [something like] Duncan's vision for LW were to be realized, there would still be substantial remaining disagreement about how to evaluate certain edge cases, and that that lack of consensus could undermine the whole enterprise.

(Though my model of Duncan does interject in response to this, "It's okay if the edge cases remain slightly blurry; those edge cases are not what matter in the vast majority of cases where I would identify a comment as being epistemically unvirtuous. What matters is that the central territory is firmed up, and right now LW is doing extremely poorly at picking even that low-hanging fruit.")

((At which point I would step aside and ask the real Duncan what he thinks of that, and whether he thinks the examples he picked out from the Leverage and CFAR/MIRI threads constitute representative samples of what he would consider "central territory".))

Replies from: JenniferRM
comment by JenniferRM · 2021-11-10T18:14:17.352Z · LW(p) · GW(p)

Thank you for this great comment. I feel bad not engaging with Duncan directly, but maybe I can engage with your model of him? :-)

I agree that Duncan wouldn't agree with my restatement of what he might be saying. 

What I attributed to him was a critical part (that I object to) of the entailment of the gestalt of his stance or frame or whatever. My hope was that his giant list of varying attributes of statements and conversational motivations could be condensed into a concept with a clean intensive definition other than a mushy conflation of "badness" and "irrational". For me these things are very very different and I'll say much more about this below.

One hope I had was that he would vigorously deny that he was advocating anything like what I mentioned by making clear that, say, he wasn't going to wander around (or have large groups of people wander around) saying "I don't like X produced by P and so let's impose costs (ie sanctions (ie punishments)) on P and on all X-like things, and if we do this search-and-punish move super hard, on literally every instance, then next time maybe we won't have to hunt rabbits, and we won't have to cringe and we won't have to feel angry at everyone else for game-theoretically forcing 'me and all of us' to hunt measly rabbits by ourselves because of the presence of a handful of defecting defectors who should... have costs imposed on them... so they evaporate away [LW · GW] to somewhere that doesn't bother me or us".

However, from what I can tell, he did NOT deny any of it? In a sibling comment he says:

Completely ignoring the assertion I made, with substantial effort and detail, that it's bad right now, and not getting better.  Refusing to engage with it at all.  Refusing to grant it even the dignity of a hypothesis.

But the thing is, the reason I'm not engaging with his hypothesis is that I don't even know what his hypothesis is, other than trivially obvious things that have been true, but which it has always been polite to mostly ignore?

Things have never been particularly good, is that really "a hypothesis"? Is there more to it than "things are bad and getting worse"? The hard part isn't saying "things are imperfect". 

The hard part, as I understand it, is figuring out a cheap and efficient solution that actually works, and works systematically, in ways that anyone can use once they "get the trick", like how anyone can use arithmetic. He doesn't propose any specific coherent solution that I can see? It is like he wants to offer an affirmative case, but he's only listing harms (and boy does he stir people up on the harms), and then he doesn't have a causal theory of the systematic cause of the harms in the status quo, and he doesn't have a specific plan to fix them, and he doesn't demonstrate that the plan mechanistically links to the harms in the status quo. So if you just grant the harms... that leaves him with a blank check to write more detailed plans that are consistent with the gestalt frame that he's offered? And I think this gestalt frame is poorly grounded, and likely to authorize much that is bad.

Speaking of models, I like this as the beginning of a thoughtful distinction:

my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist, and also that there are some people on LW whose presence here is only negligibly motivated by that particular desire, if at all.

I'm not sure if Duncan agrees with this, but I agree with it, and relevantly I think that it is likely that neither Duncan nor I consider ourselves in the first category. I think both of us see ourselves as "doctors around these parts" rather than "patients"? Then I take Duncan's advocacy to move in the direction of a prescription, and his prescription sounds to me like bleeding the patient with leeches. It sounds like a recipe for malpractice.

Maybe he thinks of himself as being around here more as a patient or as a student, but, this seems to be his self-reported revealed preference for being here:

What I'm getting out of LessWrong these days is readership.  It's a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn't have ever come to on my own.

(By contrast I'm still taking the temperature of the place, and thinking about whether it is useful to my larger goals, and trying to be mostly friendly and helpful while I do so. My larger goals are in working out a way to effectively professionalize "algorithmic ethics" (which was my last job title) and get the idea of it to be something that can systematically cause pro-social technology to come about, for small groups of technologists, like lab workers and programmers who are very smart, such that an algorithmic ethicist could help them systematically not cause technological catastrophes before they explode/escape/consume or otherwise "do bad things" to the world, and instead cause things like green revolutions, over and over.)

So I think that neither of us (neither me nor Duncan) really expects to "grow as Rationalists" here because of "the curriculum"? Instead we seem to me to both have theories of what a good curriculum looks like, and... his curriculum leaves me aghast, and so I'm trying to just say that, even if it might cut against his presumptively validly selfish goals for and around this website.

Stepping forward, this feels accurate to me:

My model of Duncan further predicts that both of these groups, sharing the common vice of being human, will at least occasionally produce epistemic violations; but model!Duncan predicts that the first group, when called out for this, is more likely to make an attempt to shift their thinking towards the epistemic ideal, whereas the second group's likelihood of doing this is significantly lower.

So my objection here is simply that I don't think that "shifting one's epistemics closer to the ideal" is a universal solvent, nor even a single coherent unique ideal.

The core point is that agency is not simply about beliefs, it is also about values. 

Values can be objective: the objective needs for energy, for atoms to put into shapes to make up the body of the agent, for safety from predators and disease, etc.  Also, as planning becomes more complex, instrumentally valuable things (like capital investments) are subject to laws of value (related to logistics and option pricing and so on) and if you get your values wrong, that's another way to be a dysfunctional agent. 

VNM rationality (which, if it is not in the canon of rationality right now, then the canon of rationality is bad) isn't just about probabilities being Bayesian; it is also about expected values being linearly orderable and having no privileged zero, for example.
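
(A minimal sketch, in Python, illustrating the "no privileged zero" point; the lotteries and numbers below are made up purely for illustration and are not from anyone's comment. The only claim shown is that rescaling utilities by a positive affine map a*u + b leaves every expected-utility comparison unchanged, so the zero point carries no information.)

```python
# Minimal sketch: VNM expected-utility comparisons are invariant under any
# positive affine rescaling a*u + b (a > 0), so utility has no privileged zero.
# The lotteries and numbers below are made up purely for illustration.

lotteries = {
    "A": [(0.5, 10.0), (0.5, 0.0)],  # (probability, utility of outcome)
    "B": [(1.0, 4.0)],
}

def expected_utility(lottery, transform=lambda u: u):
    return sum(p * transform(u) for p, u in lottery)

original = sorted(lotteries, key=lambda k: expected_utility(lotteries[k]), reverse=True)
rescaled = sorted(lotteries, key=lambda k: expected_utility(lotteries[k], lambda u: 2 * u + 7), reverse=True)

print(original)  # ['A', 'B']
print(rescaled)  # ['A', 'B'] -- same ordering; the zero point carried no information
```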

Most of my professional work over the last 4 years has not hinged on having too little Bayes. Most of it has hinged on having too little mechanism design, and too little appreciation for the depths of Coase's theorem, and too little appreciation for the sheer joyous magic of humans being good and happy and healthy humans with each other, who value and care about each other FIRST and then USE epistemology to make our attempts at caring work better.

Over in that other sibling comment, Duncan is yelling at me for committing logical fallacies, and he is ignoring that I implied he was bad and said that if we're banning the bad people maybe we should ban him. That was not nice of me at all. I tried to be clear about this sort of thing here:

On human decency and normative grounds: The thing you should be objecting to is that I directly implied that you personally might not be "sane and good" because your advice seemed to be violating ideas about conflict and economics that seem normative to me.

But he just... ignored it? Why didn't he ask for an apology? Is he OK? Does he not think of people on this website as people who owe each other decent treatment?

My thesis statement, at the outset, such as it was:

This post makes me kind of uncomfortable and I feel like the locus is in... bad boundaries maybe? Maybe an orientation towards conflict, essentializing, and incentive design? 

So like... the lack of an ability to acknowledge his own validly selfish emotional needs... the lack of a request for an apology... these are related parts of what feels weird to me. 

I feel like a lot of people's problems aren't rationality, as such... like knowing how to do modus tollens or knowing how to model and then subtract out the effects of "nuisance variables"... the main problem is that truth is a gift we give to those we care about, and we often don't care about each other enough to give this gift.

To return to your comments on moral judgements:

Note also that this model makes no assumption that epistemic violations ("errors") are in any way equivalent to "defection", intentional or otherwise. Assuming intent is not necessary; epistemic violations occur by default across the whole population, so there is no need to make additional assumptions about intent.

I don't understand why "intent" arises here, except possibly if it is interacting with some folk theory about punishment and concepts like mens rea?

"Defecting" is just "enacting the strategy that causes the net outcome for the participants to be lower than otherwise for reasons partly explainable by locally selfish reasons". You look at the rows you control and find the best for you. Then you look at the columns and worry about what's the best for others. Then maybe you change your row in reaction. Robots can do this without intent. Chessbots are automated zero sum defectors (and the only reason we like them is that the game itself is fun, because it can be fun to practice hating and harming in small local doses (because play is often a safe version of violence)).

People don't have to know that they are doing this to do this. If a person violates quarantine protocols that are selfishly costly, they are probably not intending to spread disease into previously clean areas where mitigation practices could be low cost. They only intend to, like... "get back to their kids who are on the other side of the quarantine barrier" (or whatever). The millions of people whose health in later months they put at risk are probably "incidental" and/or "unintentional" to their violation of quarantine procedures.

People can easily be modeled as "just robots" who "just do things mechanistically" (without imagining alternatives or doing math or running an inner simulator or otherwise trying to take all the likely consequences into account and imagining themselves personally responsible for everything under their causal influence, and so on).

Not having mens rea, in my book, does NOT mean they should be protected, necessarily, if their automatic behaviors hurt others.

I think this is really really important, and that "theories about mens rea" are a kind of thoughtless crux that separates me (who has thought about it a lot) from a lot of naive people who have relatively lower quality theories of justice.  

The less intent there is, the worse it is from an easy/cheap harms reduction perspective. 

At least with a conscious villain you can bribe them to stop. In many cases I would prefer a clean honest villain. "Things" (fools, robots, animals, whatever) running on pure automatic pilot can't be negotiated with :-(

...

Also, Duncan seems very very attached to the game-theory "stag hunt" thing? Like over in a cousin comment he says:

In part, this is because a major claim of the OP is "LessWrong has a canon; there's an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts)."

(I kind of want to drop this, because it involves psychologizing, and even when I privately have detailed psychological theories that make high quality predictions that other people will do bad things, I try not to project them, because maybe I'm wrong, and maybe there's a chance for them to stop being broken, but:

I think of "stag hunt" as a "Duncan thing" strongly linked to the whole Dragon Army experiment and not "a part of the lesswrong canon". 

Double cruxing is something I've been doing for 20 years, but not under that name. I know that CFAR got really into it as a "named technique", but they never put that on LW in a highly formal way that I managed to see, so it is more part of a "CFAR canon" than a "Lesswrong canon" in my mind?

And so far as I'm aware, "strawmanning" isn't even a rationalist thing... it's something from old school "critical thinking and debate and rhetoric" content? The rationalist version is to "steelman" one's opponents, who are assumed to need help making their point, which might actually be good, but so far poorly expressed by one's interlocutor.

I am consciously lowering my steelmanning of Duncan's position. My objection is to his frame in this case. Like I think he's making mistakes, and it would help him to drop some of his current frames, and it would make lesswrong a safer place to think and talk if he didn't try to impose these frames as a justification for meddling with other people, including potentially me and people I admire.)

...

Pivoting a bit, since he is so into the game theory of stag hunts... my understanding is that in a 2-person Stag Hunt a single member of the team playing rabbit causes both to fail to "get the benefit", so it becomes essential to get perfect behavior from literally everyone. The key difference from a prisoner's dilemma is that "non-defection (to get the higher outcome)" is a Nash equilibrium, because playing mismatched moves is even worse for each of the two players than playing matching ones.

A group of 5 playing stag hunt, with a history of all playing stag, loves their equilibrium and wants to protect it and each probably has a detailed mental model of all the others to keep it that way, and this is something humans do instinctively, and it is great.

But what about N>5? Suppose you are in a stag hunt where each of N persons has probability P of failing at the hunt, and "accidentally playing rabbit". Then everyone gets a bad outcome with probability (1-(1-P)^N). So almost any non-trivial value of N causes group failure.

If you see that you're in a stag hunt with 2000 people: you fucking play rabbit! That's it. That's what you do. 

Even if the chances of each person succeeding is 99.9% and you have 2000 in a stag hunt... the hunt succeeds with probability 13.52% and that stag had better be really really really really valuable. Mostly it fails, even with that sort of superhuman success rate. 

But there's practically NOTHING that humans can do with better than maybe a 98% success rate. Once you take a realistic 2% chance of individual human failure into account, with 2000 people in your stag hunt the hunt succeeds with probability about 2.83x10^-18 (roughly 1 in 3.5x10^17).
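
(A minimal sketch, in Python, that just plugs the numbers above into the 1-(1-P)^N formula as a check on the arithmetic; nothing here is from the original comment beyond the stated failure rates and N=2000.)

```python
# Quick check of the stag-hunt arithmetic above: if each of n hunters
# independently fails with probability p, the group fails with probability
# 1 - (1 - p) ** n, i.e. succeeds only when literally everyone succeeds.

def group_failure(p_individual_failure: float, n: int) -> float:
    return 1 - (1 - p_individual_failure) ** n

for p in (0.001, 0.02):
    success = 1 - group_failure(p, 2000)
    print(f"individual failure rate {p:.1%}: group success ~ {success:.3g}")

# individual failure rate 0.1%: group success ~ 0.135      (the ~13.5% above)
# individual failure rate 2.0%: group success ~ 2.83e-18   (about 1 in 3.5e17)
```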

If you are in a stag hunt like this, it is socially and morally and humanistically correct to announce this fact. You don't play rabbit secretly (because that hurts people who didn't get the memo). 

You tell everyone that you're playing rabbit, even if they're going to get angry at you for doing so, because you care about them.

You give them the gift of truth because you care about them, even if it gets you yelled at and causes people with dysfunctional emotional attachments to attack you.

And you teach people rabbit hunting skills, so that they get big rabbits, because you care about them.

And if someone says "we're in a stag hunt that's essentially statistically impossible to win and the right answer is to impose costs on everyone hunting rabbit" that is the act of someone who is either evil or dumb.

And I'd rather have a villain, who knows they are engaged in evil, because at least I can bribe the villain to stop being evil. 

You mostly can't bribe idiots, more's the pity.

Note that at no point does this model necessitate the frequent banning of users. Bans (or other forms of moderator action) may be one way to achieve the desired outcome, but model!Duncan thinks that the ideal process ought to be much more organic than this--which is why model!Duncan thinks the real Duncan kept gesturing to karma and voting patterns in his original post, despite there being a frame (which I read you, Jennifer, as endorsing) where karma is simply a number.

I think maybe your model of Duncan isn't doing the math and reacting to it sanely? 

Maybe by "stag hunt" your model of Duncan means "the thing in his head that 'stag hunt' is a metonym for" and it this phrase does not have a gears level model with numbers (backed by math that one plug-and-chug), driving its conclusions in clear ways, like long division leads clearly to a specific result at the end?

An actual piece of the rationalist canon is "shut up and multiply [? · GW]" and this seems to be something that your model of Duncan is simply not doing about his own conceptual hobby horse?

I might be wrong about the object level math. I might be wrong about what you think Duncan thinks. I might be wrong about Duncan himself. I might be wrong to object to Duncan's frame.

But I currently don't think I am wrong, and I care about you and Duncan and me and humans in general, and so it seemed like the morally correct (and also epistemically hygienic) thing to do was to flag my strong hunch (which seems wildly discrepant compared to Duncan's hunches, as far as I understand them) about how best to make lesswrong a nurturing and safe environment for people to intellectually grow while working on ideas with potentially large pro-social impacts.

Duncan is a special case. I'm not treating him like a student, I'm treating him like an equal who should be able to manage himself and his own emotions and his own valid selfish needs and the maintenance of boundaries for getting these things, and then, to this hoped-for-equal, I'm saying that something he is proposing seems likely to be harmful to a thing that is large and valuable. Because of mens rea, because of Dunbar's Number, because of "the importance of N to stag hunt predictions", and so on.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T19:17:45.191Z · LW(p) · GW(p)

dxu:

my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist,

Jennifer:

I think that it is likely that neither Duncan nor I consider ourselves in the first category.

Duncan, in the OP, which Jennifer I guess skimmed:

What I really want from LessWrong is to make my own thinking better, moment to moment. To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers. To be embedded in a context that anti-evokes all those things my brain keeps trying to do, the way being in a church anti-evokes coarse language.

Replies from: JenniferRM
comment by JenniferRM · 2021-11-14T16:40:00.711Z · LW(p) · GW(p)

I see that you have, in fact, caught me in a simplification that is not consistent with literally everything you said. 

I apologize for over-simplifying, maybe I should have added "primarily" and/or "currently" to make it more literally true.

In my defense, and to potentially advance the conversation, you also did say this, and I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood... maybe looking to score points for unfairness?

What I'm getting out of LessWrong these days is readership.  It's a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn't have ever come to on my own.

My model here is that this is your self-identified "revealed preference" for actually being here right now.

Also, in my experience, revealed preferences are very very very important signals about the reality of situations and the reality of people.

This plausible self-described revealed preference of yours suggests to me that you see yourself as more of a teacher than a student. More of a producer than a consumer. (This would be OK in my book. I explicitly acknowledge that I see my self as more of a teacher than a student round these parts. I'm not accusing you of something bad here, in my own normative frame, though perhaps you feel it as an attack because you have different values and norms than I do?)

It is fully possible, I guess, (and you would be able to say this much better than I) that you would actually rather be a student than a teacher?

And it might be that you see this as being impossible until or unless LW moves from a rabbit equilibrium to a stag equilibrium?

...

There's an interesting possible equivocation here.

(1) "Duncan growing as a rationalist as much and fast as he (can/should/does?) (really?) want does in fact require a rabbit-to-stag nash equilibrium shift among all of lesswrong".

(2) "Duncan growing as a rationalist as much as and fast as he wants does seems to him to require a rabbit-to-stag nash equilibrium shift among all of lesswrong... which might then logically universally require removing literally every rabbit player from the game, either by conversion to playing stag or banning".

These are very similar. I like having them separate so that I can agree and disagree with you <3

Also, consider then a third idea:

(3) A rabbit-to-stag Nash equilibrium shift among all of lesswrong is wildly infeasible because of new arrivals, and the large number of people in-and-around lesswrong, and the complexity of the normative demands that would be made on all these people, and various other reasons.

I think that you probably think 1 and 2 are true and 3 is false.

I think that 2 is true, and 3 is true.

Because I think 3 is true, I think your implicit(?) proposals would likely be very costly up front while having no particularly large benefits on the backend (despite hopes/promises of late arriving large benefits). 

Because I think 2 is true, I think you're motivated to attempt this wildly infeasible plan and thereby cause harm to something I care about.

In my opinion, if 1 is really true, then you should give up on lesswrong as being able to meet this need, and also give up on any group that is similarly large and lacking in modular sub-communities, and lacking in gates, and lacking in an adequate intake curriculum with post-tests that truly measure mastery, and so on. 

If you need growth as a rationalist to be happy, AND its current shape (vis-a-vis stag hunts etc) means this website is a place that can't meet that need, THEN (maybe?) you need to get those needs met somewhere else.

For what it's worth, I think that 1 is false for many many people, and probably it is also false for you.

I don't think you should leave, I just think you should be less interested in a "pro-stag-hunting jihad" and then I think you should get the need (that was prompting your stag hunting call) met in some new way.

I think that lesswrong as it currently exists has a shockingly high discourse level compared to most of the rest of the internet, and I think that this is already sufficient to arm people with the tools they need to read the material, think about it, try it, and start catching really really big rabbits (that is, coming to make truly a part of them [LW · GW] some new and true and very useful ideas), and then give rabbit hunting reports, and share rabbit hunting techniques, and so on. There's a virtuous cycle here potentially!

In my opinion, such a "skill building in rabbit hunting techniques" sort of rationality... is all that can be done in an environment like this.

Also I think this kind of teaching environment is less available in many places, and so it isn't that this place is bad for not offering more, it is more that it is only "better by comparison to many alternatives" while still failing to hit the ideal. (And maybe you just yearn really hard for something more ideal.)

So in my model, where 2 is true, 1 is false for many (and maybe even for you), and 3 is true... your whole stag hunt concept, applied here, suggests to me that you're "low key seeking to gain social permission" from lesswrong to drive out the rabbit hunters and silence the rabbit hunting teachers and make this place wildly different.

I think it would de facto (even if this is not what you intend) become a more normal (and normally bad) "place on the internet" full of people semi-mindlessly shrieking at each other by default.

If I might offer a new idea that builds on the above material: lesswrong is actually a pretty darn good hub for quite a few smaller but similar subcultures.

These subcultures often enable larger quantities of shared normative material, shared at much higher density in that little contextual bubble than is possible in larger and more porous discourse environments.

In my mind, Lesswrong itself has a potential function here as being a place to learn that the other subcultures exist, and/or audition for entry or invitation, and so on. This auditioning/discovery role seems highly compatible to me with the "rabbit hunting rationality improvement" function.

In my model, you could have a more valuable-for-others role here on lesswrong if you were more inclined to teach tolerantly, without demanding the "level" that would be required to meet your particular educational needs.

To restate: if you have needs that are not being met, perhaps you could treat this website as a staging area and audition space for more specific and more demanding subcultures that take lesswrong's canon for granted while also tolerating and even encouraging variations... because it certainly isn't the case that lesswrong is perfect.

(There's a larger moral thing here: to use lesswrong in a pure way like this might harm lesswrong as all the best people sublimate away to better small communities. I think such people should sometimes return and give back so that lesswrong (in pure "smart person mental elbow grease" and also in memetic diversity) stays, over longer periods, on a trajectory of "getting less wrong over time"... though I don't know how to get this to happen for sure in a way that makes it a Pareto improvement for returnees and noobs and so on. The institution design challenge here feels like an interesting thing to talk about maybe? Or maybe not <3)

...

So I think that Dragon Army could have been the place that worked the way you wanted it to work, and I can imagine different Everett branches off in the counter-factual distance where Dragon Army started formalizing itself and maybe doing security work for third parties, and so there might be versions of Earth "out there" where Dragon Army is now a mercenary contracting firm with 1000s of employees who are committed to exactly the stag hunting norms that you personally think are correct.

Personally, I would not join that group, but in the spirit of live-and-let-live I wouldn't complain about it until or unless someone hired that firm to "impose costs" on me... then I would fight back. Also, however, I could imagine sometimes wanting to hire that firm for some things. Violence in service to the maintenance of norms is not always bad... it is just often the "last refuge of the incompetent".

In the meantime, if some of the officers of that mercenary firm that you could have counter-factually started still sometimes hung out on Lesswrong, and were polite and tolerant and helped people build their rabbit hunting skills (or find subcultures that help them develop whatever other skills might only be possible to develop in groups), then that would be fine with me...

...so long as they don't damage the "good hubness" of lesswrong itself while doing so (which in my mind is distinct from not damaging lesswrong's explicit epistemic norms, because having well-ordered values is part of not being wrong, and values are sometimes in conflict, and that is often ok... indeed it might be a critical requirement for positive-sum Pareto-improving cooperation in a world full of conservation laws).

Replies from: SaidAchmiz, Duncan_Sabien
comment by Said Achmiz (SaidAchmiz) · 2021-11-14T21:44:51.014Z · LW(p) · GW(p)

… modular sub-communities …

… a staging area and audition space for more specific and more demanding subcultures …

Here is a thing I wrote some years ago (this is a slightly cleaned up chat log, apologies for the roughness of exposition):

There was an analogue to this in WoW as well, where, as I think I’ve mentioned, there often was such a thing as “within this raid guild, there are multiple raid groups, including some that are more ‘elite’/exclusive than the main one”; such groups usually did not use the EPGP [LW · GW] or other allocation system of the main group, but had their own thing.

(I should note that such smaller, more elite/exclusive groups, typically skewed closer to “managed communism” than to “regulated capitalism” on the spectrum of loot systems, which I do not think is a coincidence.)

[name_redacted]: Fewer people, higher internal trust presumably.

“Higher internal trust” is true, but not where I’d locate the cause. I’d say “higher degree of sublimation of personal interest to group interest”.

[name_redacted]: Ah. … More dedicated?

Yes, and more willing to sacrifice for the good of the raid. Like, if you’re trying to maintain a raiding guild of 100 people, keep it functioning and healthy over the course of months or years, new content, people joining and leaving, schedules and life circumstances changing, different personalities and background, etc., then it’s important to maintain member satisfaction; it’s important to ensure that people feel in control and rewarded and appreciated; that they don’t burn out or develop resentments; that no one feels slighted, and no one feels that anyone is favored; you have to recruit, also...

All of these things are more important than being maximally effective at downing this boss right now and then the next five bosses this week.

If you focus on the latter and ignore the former, your guild will break and explode, and people on WoW-related news websites will place stories about your public meltdowns in the Drama section, and laugh at you.

On the other hand… if you get 10 guys together and you go “ok dudes, we, these particular 10 people, are going to show up every single Sunday for several months, play for 6 hours straight each time, and we will push through absolutely the most challenging content in the game, which only a small handful [or sometimes: none at all] of people in the world have done”… that is a different scenario. There’s no room for “I’m not the tank but I want that piece of tank gear”, because if you do that you will fail.

What a group like that promises (which a larger, more skill-diverse, less elite/exclusive, group cannot promise) is the incredible rush of pushing yourself—your concentration, your skill, your endurance, your coordination, your ingenuity—to the maximum, and succeeding at something really really hard as a result.

That is the intrinsic motivation which takes the place of the extrinsic motivation of getting loot. As a result, the extrinsic motivation is no longer a resource which it is vitally important to allocate.

In that scenario, your needs are the group’s needs; the group’s successes are your successes; there is no separation between you and the group, and consequently the need for equity in loot allocation falls away, and everything is allocated strictly by group-level optimization.

Of course, that sort of thing doesn’t scale, and neither can it last, just as you cannot build a whole country like a kibbutz. But it may be entirely possible, and perfectly healthy, to occasionally cleave off subgroups who follow that model, then to meld back into the overgroup at the completion of a project (and never having really separated from it, their members continuing to participate in the overgroup even as they throw themselves into the subproject).

Replies from: JenniferRM
comment by JenniferRM · 2021-11-19T05:42:55.560Z · LW(p) · GW(p)

Yeah! This is great. This is the kind of detailed grounded cooperative reality that really happens sometimes :-)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-14T18:19:38.707Z · LW(p) · GW(p)

I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood

If a person writes "I currently get A but what I really want is B"

...and then you selectively quote "I currently get A" as justification for summarizing them as being unlikely to want B...

...right after they've objected to you strawmanning and misrepresenting them left and right, and made it very clear to you that you are nowhere near passing their ITT...

...this is not "simplification."

Apologizing for "over-simplifying," under these circumstances, is a cop-out.  The thing you are doing is not over-simplification.  You are [not talking about simpler versions of me and my claim that abstract away some of the detail].  You are outright misrepresenting me, and in a way that's reeeaaalll hard to believe is not adversarial, at this point.

It is at best falling so far short of cooperative discourse as to not even qualify as a member of the set, and at worst deliberate disingenuousness.

If a person wholly misses you once, that's run-of-the-mill miscommunication.

If, after you point out all the ways they missed you, at length, they brush that off and continue confidently arguing with their cardboard cutout of you, that's a bad sign.

If, after you again note that they've misrepresented you in a crucial fashion, they apologize for "over-simplifying," they've demonstrated that there's no point in trying to engage with them.

I explicitly acknowledge that I see my self as more of a teacher than a student round these parts.

I find this unpromising, in light of the above.

Replies from: jimmy
comment by jimmy · 2021-11-14T22:26:49.170Z · LW(p) · GW(p)

I'm torn about getting into this one, since on one hand it doesn't seem like you're really enjoying this conversation or would be excited to continue it, and I don't like the idea of starting conversations that feel like a drain before they even get started. In addition, other than liking my other comment on this post, you don't really know me and therefore I don't really have the respect/trust resources I'd normally lean on for difficult conversations like this (both in the "likely emotionally significant" and also "just large inferential distances with few words" senses).

On the other hand I think there's something very important here, both on the object level and on a meta level about how this conversation is going so far. And if it does turn out to be a conversation you're interested in having (either now, or in a month, or whenever), I do expect it to be actually quite productive.


If you're interested, here's where I'm starting:

Jennifer has explicitly stated that at this point her goal is to help you. This doesn't seem to have happened. While it's important to track possibilities like "Actually, it's been more helpful than it looks", it looks more like her attempt(s) so far have failed, and this implies that she's missing something.

Do you have a model that gives any specific predictions about what it might be? Regardless of whether it's worth the effort or whether doing so would lead to bad consequences in other ways, do you have a model that gives specific predictions of what it would take to convey to her the thing(s) she's missing such that the conversation with her would go much more like you think it should, should you decide it to be worthwhile?

Would you be interested in hearing the predictions my models give?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-14T23:09:05.394Z · LW(p) · GW(p)

I don't have a gearsy model, no.  All I've got is the observations that:

  • Duncan's post objects to a cluster of things X, Y, and Z
  • Jennifer's response seems to me to state that X, Y, and Z are either not worth objecting to or possibly are actually good
  • Jennifer's response exhibits X, Y, and Z in substantial quantity (which, to be fair, is consistent with principled disagreement, i.e. is not a sign of hypocrisy or lack-of-skill or whatever)
  • Duncan's objections to X, Y, and Z within Jennifer's pushback are basically falling on deaf ears, resulting in Jennifer adding more X, Y, and Z in subsequent responses
  • As is to be expected, given that the whole motivation for the OP was "LessWrong keeps indulging in and upvoting X, Y, and Z," Jennifer's being upvoted.

I'm interested in hearing both your model and your predictions.  Perhaps a timescale of days-weeks is better than a timescale of hours-days.

Replies from: jimmy, Benito
comment by jimmy · 2021-12-02T20:09:48.586Z · LW(p) · GW(p)

There's a lot here, and I've put in a lot of work writing and rewriting. After failing for long enough to put things in a way that is both succinct and clear, I'm going to abandon hopes of the latter and go all in on the former. I'm going to use the minimal handles for the concepts I refer to, in a way similar to using LW jargon like "steelman" without the accompanying essays, in hopes that the terms are descriptive enough on their own. If this ends up being too opaque, I can explicate as needed later.

Here's an oversimplified model to play with:

  • Changing minds requires attention, and bigger changes require more attention.
  • Bidding for bigger attention requires bigger respect, or else no reason to follow.
  • Bidding for bigger respect requires bigger security, or else not safe enough to risk following.
  • Bidding for that sense of security requires proof of actual security, or else people react defensively, cooperation isn't attended to, and good things don't happen.

GWS took an approach of offering proof of security and making fairly modest bids for both security and respect. As a result, the message was accepted, but it was fairly restrained in what it attempted to communicate. For example, GWS explicitly says "I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question the validity of your thesis variety)."

Jennifer, on the other hand, went full bore, commanding attention to places which demand lots of respect if they are to be followed, while offering little in return*. As a result, accepting this bid also requires a large degree of security, and she offered no proof that her attacks on Duncan's ideas (it feels weird addressing you in the third person given that I am addressing this primarily to you, but it seems like it's better looked at from an outside perspective?) would be limited to that which wouldn't harm Duncan's social standing here. This makes the whole bid very hard to accept, and so it was not accepted, and Duncan gave high heat responses instead.

Bolder bids like that make for much quicker work when accepted, so there is good reason to be as bold as your credit allows. One complicating factor here is that the audience is mixed, and overbidding for Duncan himself doesn't necessarily mean the message doesn't get through to others, so there is a trade off here between "Stay sufficiently non-threatening to maintain an open channel of cooperation with Duncan" and "Credibly convey the serious problems with Duncan's thesis, as I see them, to all those willing to follow".

Later, she talks about wanting to help Duncan specifically, and doesn't seem to have done so. There are a few possible explanations for this.

1) When she said it, there might have been an implied "[I'm only going to put in a certain level of work to make things easy to hear, and beyond that I'm willing to fail]". In this branch, the conversation between Duncan and Jennifer is going nowhere unless Duncan decides to accept at least the first bid of security. If Duncan responds without heat (and feeling heated but attempting to screen it off doesn't count), the negotiation can pick up on the topic of whether Jennifer is worthy of that level of respect, or further up if that is granted too.

2) It's possible that she lacks a good and salient picture of what it looks like to recover from over-bidding, and just doesn't have a map to follow. In this branch, demonstrating what that might look like would likely result in her doing it and recovering things. In particular, this means pacing Duncan's objections without (necessarily) agreeing with them until Duncan feels that she has passed his ITT and trusts her intent to cooperate and collaborate rather than to tear him down.

3) It could also be that she's got her own little hang up on the issue of "respect", which caused a blind spot here. I put an asterisk there earlier, because she was only showing "little respect" in one sense, while showing a lot in another. If you say to someone "Lol, your ideas are dumb", it's not showing a lot of respect for those ideas of theirs. To the extent that they afford those same ideas a lot of respect, it sounds a lot like not respecting them, since you're also shitting on their idea of how valuable those ideas are and therefore their judgement itself. However, if you say to someone "Lol, your ideas are dumb" because you expect them to be able to handle such overt criticism and either agree or prove you wrong, then it is only tentatively disrespectful of those ideas and exceptionally and unusually respectful of the person themselves.

She explicitly points at this when she says "Duncan is a special case. I'm not treating him like a student, I'm treating him like an equal", and then hints at a blind spot when she says (emphasis her own) "who should be able to manage himself and his own emotions" -- translating to my model, "manage himself and his emotions" means finding security and engaging with the rest of the bids on their own merits unobstructed by defensive heat. "Should" often points at a willful refusal to update one's map to what "is", and instead responding to it by flinching at what isn't as it "should" be. This isn't necessarily a mistake (in the same way that flinching away from a hot stove isn't a mistake), and while she does make other related comments elsewhere in the thread, there's no clear indication of whether this is a mistake or a deliberate decision to limit her level of effort there. If it is a mistake, then it's likely "I don't like having to admit that people don't demonstrate as much security as I think they should, and I don't wanna admit that it's a thing that is going to stay real and problematic even when I flinch at it". Another prediction is that to the extent that it is this, and she reads this comment, this error will go away.

I don't want to confuse my personal impression with the conditional predictions of the model itself, but I do think it's worth noting that I personally would grant the bid for respect. Last time I laughed off something that she didn't agree should be laughed off, it took me about five years to realize that I was wrong. Oops.
 

comment by Ben Pace (Benito) · 2021-11-15T02:59:18.852Z · LW(p) · GW(p)

Just checking, what are X, Y and Z? 

(I'm interested in a concrete answer but would be happy with a brief vague answer too!)

(Added: Please don't feel obliged to write a long explanation here just because I asked, I really just wanted to ask a small question.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-15T03:24:34.922Z · LW(p) · GW(p)

The same stuff that's outlined in the post, both up at the top where I list things my brain tries to do, and down at the bottom where I say "just the basics, consistently done."

Regenerating the list again:

Engaging in, and tolerating/applauding those who engage in:

  • Strawmanning (misrepresenting others' points as weaker or more extreme than they are)
  • Projection (speaking as if you know what's going on inside other people's heads)
  • Putting little to no effort into distinguishing your observations from your inferences/speaking as if things definitely are what they seem to you to be
  • Only having or tracking a single hypothesis/giving no signal that there is more than one explanation possible for what you've observed
  • Overstating the strength of your claims
  • Being much quieter in one's updates and oopses than one was in one's bold wrongness
  • Weaponizing equivocation/doing motte-and-bailey
  • Generally, doing things which make it harder rather than easier for people to see clearly and think clearly and engage with your argument and move toward the truth

This is not an exhaustive list.

Replies from: JenniferRM
comment by JenniferRM · 2021-11-19T05:29:27.651Z · LW(p) · GW(p)

Mechanistically... since stag hunt is in the title of the post... it seems like you're saying that any one person committing "enough of these epistemic sins to count as playing stag" would mean that all of lesswrong fails at the stag hunt, right?

And it might be the case that a single person playing stag could be made up of them failing at even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)

Also, what you're calling "projection" there is not the standard model of projection I think? And my understanding is that the standard model of projection is sort of explicitly something people can't choose not to do, by default. In the standard model of projection it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(

(For myself, I try not to assume I even know what's happening in my own head, because experimentally, it seems like humans in general lack high quality introspective access to their own behavior and cognition [LW · GW].)

The practical upshot here, to me, is that if the models you're advocating here are true, then it seems to me like lesswrong will inevitably fail at "hunting stags".

...

And yet it also seems like you're exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then... maybe we will eventually all play stag and thus eventually, as a group, catch a stag?

So under the models that you seem to me to have offered, the (numerous individual) costs won't buy any (group) benefits? I think? 

There will always inevitably be a fly in the ointment... a grain of sand in the chip fab... a student among the masters... and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?

And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!

And that's (in my book) quite good... even if it means we will always fail at hunting stags.

...

The thing I think that's good about lesswrong has almost nothing to do with bringing down a stag on this actual website.

Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can "do more good thinking" in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.

I'm (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time... Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).

You're against "engaging in, and tolerating/applauding" lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.

Am I missing something? What?

Replies from: tomcatfish, Duncan_Sabien
comment by Alex Vermillion (tomcatfish) · 2021-12-12T19:49:55.384Z · LW(p) · GW(p)

I am confused by a theme in your comments. You have repeatedly chosen to express that the failure of a single person completely destroys all the value of the website, even going so far as to quote ridiculous numbers (at the order of E-18 [1]) in support of this.

The only model I have for your behavior that explains why you would do this, instead of assuming Duncan believes something like "The value of cooperators and defectors is [...]", is that you are trying to make the argument look weak. If there is another reason to do this, I'd appreciate an explanation, because this tactic alone is enough to make me view the argument as likely adversarial.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-19T17:22:07.245Z · LW(p) · GW(p)

Mechanistically... since stag hunt is in the title of the post... it seems like you're saying that any one person committing "enough of these epistemic sins to count as playing stag" would mean that all of lesswrong fails at the stag hunt, right?

No, and if you had stopped there and let me answer rather than going on to write hundreds of words based on your misconception, I would have found it more credible that you actually wanted to engage with me and converge on something, rather than that you just really wanted to keep spamming misrepresentations of my point in the form of questions.

Replies from: Lumpyproletariat
comment by Lumpyproletariat · 2021-11-22T22:18:31.389Z · LW(p) · GW(p)

Epistemic status: socially brusque wild speculation. If they're in the area and it wouldn't be high effort, I'd like JenniferRM's feedback on how close I am.

My model of JenniferRM isn't of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite's comment below, they say:

It was a purposefully pointed and slightly unfair question. I didn't predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).

If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.

My model of the model which outputs words like these is that they're very confident in their own understanding--viewing themself as a "teacher" rather than a student--and are trying to lead someone who they think doesn't understand by the nose through a conversation which has been plotted out in advance.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-23T06:05:42.811Z · LW(p) · GW(p)

Plausible to me.  (Thanks.)

comment by Dweomite · 2021-11-15T01:48:14.534Z · LW(p) · GW(p)

And why would a good and sane person ever want to impose costs on third parties ever except like in... revenge because we live in an anarchic horror world, or (better) as punishment after a wise and just proceeding where rehabilitation would probably fail but deterrence might work? 

This paragraph sounds to me like when you say "costs" you are actually thinking of "punishments", with an implication of moral wrongdoing.  I'm uncertain that Duncan intended that implication (and if he did, I'd like to request that both of you use the more specific term).

 

If you continue to endorse the quoted paragraph as applied to "costs" that are not necessarily punishments, then I further contend that costs are useful in several scenarios where there is no implication of wrongdoing:

The SUPER obvious example is markets, where costs are used as a mechanism to control resource allocation.

Another common scenario is as a filter for seriousness.  Suppose you are holding a contest where you will award a prize to the best foozle.  If entry in the contest is free of charge, you might get a lot of entries from amateur foozle artists who know that they have negligible chance of winning, but who see no downside to entering; you will then be forced to spend resources judging those entries.  If you instead charge a $10 entrance fee, then most of those people will not bother to enter, and you'll only have to judge the foozles from artists who self-evaluate as having a real shot.

There is no implication that entering the contest is an evil thing that no one should ever do.  It's just a mechanism for transferring some of the transactional costs onto the party that is best able to judge the value of the transaction, so that we end up with better decisions about which transactions to perform.

Another use for costs is for creating precommitments.  If you want to avoid a certain future action, attaching a cost to that action can be helpful both for generating the willpower to stick to your decision and also for convincing other people that you will stick to it.  There exist services people voluntarily sign up for where you agree to pay money if you break a precommitment.

 

Additionally, I feel you are unfairly maligning deterrence.  You imply it should only be used where rehabilitation would probably fail, but rehabilitation only prevents that offender from repeating the offense, whereas deterrence discourages anyone from repeating the offense; this creates many scenarios where deterrence might be desirable in addition to rehabilitation (or where rehabilitation is irrelevant, e.g. because that particular offender will never have a similar opportunity again).

You also imply deterrence should only be used after meeting an extremely high standard of evidence; most people only consider this necessary for extreme forms of deterrence (e.g. jail) but permit a much weaker standard of evidence for mild forms (e.g. verbal chastisement; leaving a negative review).  I think this common view is probably correct on cost/benefit grounds (less caution is required in situations where a mistake causes less harm).

comment by Dweomite · 2021-11-15T01:02:50.211Z · LW(p) · GW(p)

Please don't make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.

I wish this statement explaining what goal your advice is designed to optimize had appeared at the top of the advice, rather than the bottom.

 

My current world-model predicts that this is not what most people believe points are for, and that getting people to treat points in this way would require a high-effort coordinated push, probably involving radical changes to the UI to create cognitive distance from how points are used on other sites.

Specifically, I think the way most people actually use points is as a prosthetic for nonverbal politics; they are the digital equivalent of shooting someone a smile or a glower.  Smiles/glowers, in turn, are a way of informing the speaker that they are gaining/losing social capital, and informing bystanders that they could potentially gain/lose social capital depending on which side of the issue they support.

My model says this is a low-level human instinctive social behavior, with the result that it is very easy to make people behave this way, but simultaneously very hard for most people to explain exactly what they are doing or why.

This self-opacity, combined with a common slightly-negative valence attached to the idea of using social disapproval as a way to sculpt the behavior of others, results in many people leaving out the "social capital" part of this explanation and describing upvotes/downvotes as meaning "I want to see more/fewer posts like this".  Which I think is still importantly different from "I want this exact comment to appear higher/lower on this page," in that it implies the primary purpose is about sculpting future content rather than organizing existing content.

(Note that all of the above is an attempt at description, not prescription.)

You've framed this issue as one of educating people about points.  I think a better framing would be that the Internet already has an established norm, and that norm is a natural attractor due to deep human instincts, and you are proposing a new, incompatible, and significantly less-stable norm to replace it.  I would be willing to entertain arguments that this is somehow a worthwhile switch, but my prior is against it.

Also, I find it mildly alarming that your personal strategy for reinforcing this norm involves explicitly refusing the benefits that it could have provided to you (by starting at the bottom of the page, reading comments in the inverse of the order you want the point system to recommend).  Norms that do not benefit their own defenders are less stable, and the fact that you are discarding at least some of the potential value makes it harder to argue that the whole thing is net-positive.

comment by Slider · 2021-11-08T23:20:13.546Z · LW(p) · GW(p)

Like... this is literally black and white thinking? 

This is written in a way that seems to imply that if it is black and white thinking that would be bad. It also doesn't read as a question despite having a question mark.

People whose neurotype makes them default to black and white thinking can get really good at telling when a concept does or doesn't apply. It has strengths and weaknesses. You are taking the attitude that it is widely known for its weaknesses, without demonstrating what is being glossed over or what kinds of things would be missed by it. I guess later in the post there are descriptions of how other stances have it better, but I think a failure analysis of what is worse for the strongly dualistic view is sorely needed.

One possible source of uncomfortableness is if the typical mind assumption doesn't hold because the interaction target mind is atypical.

Phrasings like "And why would a good and sane person ever [...]" seem to prepare to mark individuals for rejection. And again it has a question word but doesn't read like a question.

I am worried about a mechanic where people recognised as deviant are categorised as undesirable. 

I do share the worry that a distinction between "in" and "out" leads to a very large out-group, and to actions that are as large as possible instead of proportioned to necessity or level of guarantee.

Replies from: JenniferRM
comment by JenniferRM · 2021-11-10T09:42:47.002Z · LW(p) · GW(p)

"Black and white thinking" is another name for a reasonably well defined cognitive tendency that often occurs in proximity to reasonably common mental problems.

Part of the reason "the fallacy of gray" is a thing that happens is that advice like that can be a useful and healthy thing for people who are genuinely not thinking in a great way. 

Adding gray to the palette can be a helpful baby step in actual practice.

Then very very similar words to this helpful advice can also be used to "merely score debate points" on people who have a point about "X is good and Y is bad". This is where the "fallacy" occurs... but I don't think the fallacy would occur if it didn't have the "plausible cover" that arises from the helpful version. 

A typical fallacy of gray says something like "everything is gray, therefore lets take no action and stop worrying about this stuff entirely".

One possible difference that distinguishes "better gray" from "worse gray" is whether you're advocating for fewer than 2 or more than 2 categories.

Compare: "instead of two categories (black and white), how about more than two categories (black and white and gray), or maybe even five (pure black, dark gray, gray, light gray, pure white), or how about we calculate the actual value of the alternatives with actual axiological math which in some sense gives us infinite categories... oh! and even better the math might be consistent with various standards like VNM rationality and Kelly and so on... this is starting to sound hard... let's only do this for the really important questions maybe, otherwise we might get bogged down in calculations and never do anything... <does quick math> <acts!>"

My list of "reasons to vote up or down" was provided partly for this reason. 

I wanted to be clear that comments could be compared, and that if better comments had lower scores than worse comments, that implied that the quantitative process of summing up a bunch of votes might not be super well calibrated, and could be improved via saner aggregate behavior.

Also the raw score is likely less important than the relative score. 

Also, numerous factors are relevant and different factors can cut in opposite ways... it depends on framing, and different people bring different frames, and that's probably OK. 

I often have more than one frame in my head at the same time, and it is kinda annoying, but I think maybe it helps me make fewer mistakes? Sometimes? I hope?

Phrasings like "And why would a good and sane person ever [...]" seem to prepare to mark individuals for rejection. And again it has a question word but doesn't read like a question.

It was a purposefully pointed and slightly unfair question. I didn't predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).

If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.

I wasn't expecting him to just totally dodge it.

To answer my own question: cops are an example of people who can be good and sane even though they go around hurting people.

However, cops do this mostly only while wearing a certain uniform, while adhering to written standards, and while under the supervision of elected officials who are also following written standards. Also, all the written standards were written by still other people who were elected, and the relevant texts are available for anyone to read. Also, courts have examined many many real world examples, and made judgement calls, with copious commentary, illustrating how the written guidelines can be applied to various complex situations.

The people cops hurt, when they are doing "a good job imposing costs on bad behavior", are people who are committing relatively well defined crimes that judges and juries and so on would agree are bad, and which violate definitions written by people who were elected, etc.

My general theory here is that vigilantism (and many other ways of organizing herds of humans) is relatively bad, and "rights-respecting rule of law" (generated by the formal consent of the governed) is the best succinct formula I know of for virtuous people to engage in virtuous self-rule.

In general, I think governors should be very very very very careful about imposing costs and imposing sanctions for unclear reasons rather than providing public infrastructure and granting clear freedoms.

Replies from: Slider
comment by Slider · 2021-11-10T13:06:58.157Z · LW(p) · GW(p)

There is another phenomenon that also gets referred to as "black and white thinking" that has more to do with rigidity of thought. The mechanisms of that are different. I am a bit unsure whether it has a more standard name; I wanted to find factual information but only found an opinion piece where, at number 5, there is a differential between that and splitting.

I do recognise how the text fits the recognition criteria for splitting, and the worry seems reasonable, but to me it sounds more like splitting hairs. The kind of thing where I would argue that within probability zero there is a difference between "almost never" and "actually never", and for some things it would make or break things.

Replies from: JenniferRM
comment by JenniferRM · 2021-11-14T17:33:31.286Z · LW(p) · GW(p)

If you look at some of the neighboring text, I have some mathematical arguments about what the chances are for N people to all independently play "stag" such that no one plays rabbit and everyone gets the "stag reward".

If 3 people flip coins, all three coins come up "stag" quite often. If a "stag" is worth roughly 8 times as much as a rabbit, you could still sanely "play stag hunt" with 2 other people whose skill at stag was "50% of the time they are perfect".  

But if they are less skilled than that, or there are more of them, the stag had better be very very very valuable.

If 1000 people flip coins then "pure stag" comes up with probability about 9.33x10^-302 (roughly one time in 10^301). Thus, de facto, stag hunts fail at large N except for one of those "dumb and dumber" kind of things where you hear the one possible coin pattern that gives the stag reward and treat this as good news and say "so you're telling me there's a chance!"

I think stag hunts are one of these places where the exact same formal mathematical model gives wildly different pragmatic results depending on N, and the probability of success, and the value of the stag... and you have to actually do the math, not rely on emotions and hunches to get the right result via the wisdom of one's brainstem and subconscious and feelings and so on.
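For concreteness, here is a minimal sketch of the arithmetic described in this comment (Python is an assumption; no language is specified in the thread): N independent players who each play stag with probability p. Treat it as an illustration of this comment's model only, since the reply below rejects the independence assumption.

```python
def p_all_stag(n_players: int, p_stag: float = 0.5) -> float:
    """Chance that every one of n independent players plays stag,
    when each plays stag with probability p_stag (a coin flip by default)."""
    return p_stag ** n_players

def breakeven_stag_value(n_players: int, p_stag: float = 0.5) -> float:
    """How many rabbits a stag must be worth for the all-or-nothing hunt
    to match just taking the rabbit, under this independence model."""
    return 1.0 / p_all_stag(n_players, p_stag)

print(breakeven_stag_value(3))   # 8.0 -- the "worth roughly 8 rabbits" figure
print(p_all_stag(1000))          # ~9.33e-302 -- all 1000 coins coming up stag
```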

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-14T18:36:29.214Z · LW(p) · GW(p)

Coin flips are an absolutely inappropriate model for stag hunts; people choosing stag and rabbit are not independent in the way that coin flips are independent; that's the whole point.  Incentives drive everyone toward rabbit; agreements drive people toward stag.  All of the reasoning descending from the choice to model things as coin flips is therefore useless.

comment by Ruby · 2021-11-08T22:49:21.630Z · LW(p) · GW(p)

Please don't make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.

For the record, as "arch-moderator", I care about karma for more reasons than just that, in line with Oli's list here [LW(p) · GW(p)].

Replies from: Ruby
comment by Ruby · 2021-11-08T23:17:55.539Z · LW(p) · GW(p)

Although this isn't how I think about karma, on reflection, I think it's a good and healthy frame, and I'm glad you have it and brought it up with your detailed suggestion.

Replies from: JenniferRM
comment by JenniferRM · 2021-11-10T08:38:30.992Z · LW(p) · GW(p)

Yeah, my larger position is that karma (and upboats and so on) are brilliant gamifications of "a way to change the location of elements on a webpage". Reddit is a popular website, that many love, for a reason. I remember Digg. I remember K5. I remember Slashdot. There were actual innovations in this space, over time, and part of the brilliance in the improvements was in meeting the needs of a lot of people "where they are currently at" and making pro-social use of many tendencies that are understandably imperfect.

Social engineering is a thing, and it is a large part of why our murder rate is so low, and our material prosperity is so high. It is super important and, done well, is mostly good. (I basically just wish that more judges and lawyers and legislators in modern times could program computers, and brought that level of skill to the programming of society.)

However, I also think that gamification ultimately should be understood as a "mere" heuristic... as a hack that works on many humans who are full of passions and confusions in predictable ways... If everyone was a sage, I think gamification would be pointless or even counter-productive.

A contextually counter-productive heuristic is a bias. In a deep sense we have biases because we sometimes have heuristics that are being applied outside of their training distribution by accident.

The context where gamification might not work: Eventually you know you are both the rider and the elephant. Your rider has trained (and is still training) your elephant pretty well, and sometimes even begins to ruefully be thankful that the elephant had some good training, because sometimes the rider falls asleep and it was only the luck of a well-trained elephant that kept them from tragedy. 

For anyone who can get to this point (and I'm nowhere close to perfect here, but sometimes in some domains I think I'm getting close)... one barrier to progress that arises as one tries to get the rider and the elephant to play nicely is that other people are trying to make your elephant go where they think it should go, even when your rider is pretty sure that's bad. This can feel tedious or sad or... yeah. It feels like something.

Advertising, grades, tests, praise, criticism, and structured incentives in general can be a net positive under some circumstances, and so can gamification, but I don't think any are to be generically "trusted, full stop".

Right now, when I try to "make the voting be not as bad" I can dramatically change the order in which comments occur, and this is often an improvement. I run out of time before I run out of power. I don't read everything, and when reading casually I'm "not supposed to be voting" and if I find myself "reflexively upvoting" it causes a TAP to kick in to actually stop and think, and compare my reflex to my ideals, and maybe switch over to thoughtfully voting on things.

Maybe one day I'll find that, without any action on my part, the order of the comments matches my ideal, or actually even has hidden virtues where it is discrepant, because maybe my ideals are imperfect. When that day arrives maybe I will stop "feeling guilty about feeling good about getting upvoted"... if that makes sense :-)

comment by 1a3orn · 2021-11-06T14:45:05.842Z · LW(p) · GW(p)

LW is likely currently on something like a Pareto frontier of several values, where it is difficult to promote one value better without sacrificing others. I think that this is true, and also think that this is probably what OP believes.

The above post renders one axis of that frontier particularly emotionally salient, then expresses willingness to sacrifice other axes for it.

I appreciate that the post explicitly points out that it is willing to sacrifice these other axes. It nevertheless skims a little bit over what precisely might be sacrificed.

Let's name some things that might be sacrificed:

(1) LW is a place newcomers to rationality can come to ask questions, make posts, and participate in discussion, hopefully without enormous barriers to entry. Trivial inconveniences to this can have outsized effects.

(2) LW is a kind of bulletin board and coordination center for things of general interest to actual historical communities. Trivial inconveniences to sharing such information can once again have an outsized effect.

(3) LW is a place to just generally post things of interest, including fiction, showerthoughts, and so on, to the kind of person who is interested in rationality, AI, cryonics, and so on.

All of these are also actual values. They impact things in the world.

Some of these could also have essays written about them, that would render them particularly salient, just like the above essay.

But the actual question here is not one of sacred values -- communities with rationality are great! -- but one of tradeoffs. I don't think I understand those tradeoffs even the slightest bit better after reading the above.

Replies from: MondSemmel
comment by MondSemmel · 2021-11-06T15:26:05.294Z · LW(p) · GW(p)

I don't think you've made a convincing case that LW is on a Pareto frontier of these values, and I don't know what such a case would look like, either. I've personally made several suggestions here in the comments (for LW feature improvements) that would make some things better without necessarily making anybody worse off. Feature suggestions would take resources to implement, but as far as I can tell the LW team has sufficient resources [LW · GW] to act on whatever it considers its highest-EV actions.

As for the rest of your post: I appreciate that you mention other values to consider, and that you don't want them to be traded off for one another. In particular, I strongly agree that I do not want to increase barriers to entry for newcomers.

But I strongly disapprove of your imputing motives into the OP that aren't explicitly there, or that aren't there without ridiculous numbers of caveats (like the suggestions OP himself flagged as "terrible ideas").

OP even ends with a disclaimer that "this essay is not as good as I wished it would be". In contrast, this entire section of yours reads to me as remarkably uncharitable and in bad faith:

The above post renders one axis of that frontier particularly emotionally salient, then expresses willingness to sacrifice other axes for it.

I appreciate that the post explicitly points out that it is willing to sacrifice these other axes. It nevertheless (as is common for this genre of rhetoric, which wants you to care deeply about one axis) skims a little bit over what precisely might be sacrificed.

...

Some of these could also have essays written about them, that would render them particularly salient, just like the above essay. You could try to create a mood of desperate urgency where sacrificing other values to accomplish them seems necessary.

But the actual question here is not one of sacred values -- communities with rationality are great! --

If you want to suggest that OP is part of a "genre of rhetoric": make the case that it is, name it explicitly. Make your own words vulnerable, put your own neck out there.

Instead of making your own object-level arguments, you're imputing bad motives into the OP, insinuating things without pointing to specific quotes, and suggesting that arguments for your case could be made, but that you won't make the effort to make them.

You even end on an applause light [LW · GW] ffs.


Circling back to the object level of the essay, namely improving the culture here: As I mention in my comment on the Karma system [LW(p) · GW(p)], which I've explicitly singled out in my suggestions for improvement [LW(p) · GW(p)]: Your comment is half decent, half terrible, but the only way I have to interact with it is to assign it a single scalar (upvote or downvote). So I choose to strong-downvote it but leave this comment for clarity.

Replies from: 1a3orn
comment by 1a3orn · 2021-11-06T16:31:45.578Z · LW(p) · GW(p)

I meant a relative Pareto frontier, vis-a-vis the LW team's knowledge and resources. I think your posts on how to expand the frontier are absolutely great, and I think they (might) add to the available area within the frontier.

"If you want to suggest that OP is part of a "genre of rhetoric": make the case that it is, name it explicitly."

I mean, most of OP is about evoking emotion about community standards; deliberately evoking emotions is a standard part of rhetoric. (I don't know what genre -- ethos if you want to invoke Aristotle -- but I don't think it particularly matters.) OP explicitly says that he would like LW to be smaller -- i.e., sacrifice other values, for the value he's just evoked emotion about. I take this to just be a description of how the essay works, not a pejorative imputation of motives.

I could definitely have done better, and I too went through several drafts, and the one I posted was probably posted because I was tired of editing rather than because it was best. I have removed the sentences in the above that seem most pejorative.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T08:25:06.131Z · LW(p) · GW(p)

An important note, that I couldn't manage to fold into the essay proper:

I have direct knowledge of at least three people who would like to say positive things about their experience at Leverage Research, but feel they cannot.  People who are fully on board with the fact that their experiences do not erase Zoe's, but whose own very different stories are relevant to understanding what actually went wrong, so that it can be fixed for those who suffered, and prevented in the future.

And at least these three people are not speaking up, because the current state of affairs is such that they feel they can't do so without committing social suicide.

This is a fact about them, not a fact about LessWrong or what would actually happen.

But it sure is damning that they feel that way, and that I can't exactly tell them that they're wrong.

Personally, I think it's clear that there was some kind of disease here, and I'd like to properly diagnose it, and that means making it possible to collect all of the relevant info, not an impoverished subset.

We were missing Zoe's subset.  It's good that we have it now.

It's bad that we're still dealing with an incomplete dataset, though, and it's really bad that a lot of people seem to think that the dataset isn't meaningfully incomplete.

Replies from: Taran, Spiracular, Linch, Viliam
comment by Taran · 2021-11-07T08:26:48.686Z · LW(p) · GW(p)

But it sure is damning that they feel that way, and that I can't exactly tell them that they're wrong.

You could have, though.  You could have shown them the many [LW(p) · GW(p)] highly-upvoted personal accounts [LW(p) · GW(p)] from former Leverage staff [LW(p) · GW(p)] and other Leverage-adjacent people [LW(p) · GW(p)].   You could have pointed out that there aren't any positive personal Leverage accounts, any at all, that were downvoted on net.  0 and 1 are not probabilities, but the evidence here is extremely one-sided: the LW zeitgeist approves of positive personal accounts about Leverage.  It won't ostracize you for posting them.

But my guess is that this fear isn't about Less Wrong the forum at all, it's about their and your real-world social scene.  If that's true then it makes a lot more sense for them to be worried (or so I infer, I don't live in California).  But it makes a lot less sense to bring it up here, in a discussion about changing LW culture: getting rid of the posts and posters you disapprove of won't make them go away in real life.  Talking about it here, as though it were an argument in any direction at all about LW standards, is just a non sequitur.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T08:37:55.933Z · LW(p) · GW(p)

Thanks for gathering these.  They are genuinely helpful (several of them I missed).

But yes, as you inferred, the people I've talked to are scared about real-life consequences such as losing funding or having trouble finding employment, which are problems they don't currently have but suspect they will if they speak up.

I reiterate that this is a fact about them, as opposed to a fact about reality, but they're not crazy to have some weight on it.

Replies from: Taran
comment by Taran · 2021-11-07T09:45:54.400Z · LW(p) · GW(p)

When it comes to the real-life consequences I think we're on the same page: I think it's plausible that they'd face consequences for speaking up and I don't think they're crazy to weigh it in their decision-making (I do note, for example, that none of the people who put their names on their positive Leverage accounts seem to live in California, except for the ones who still work there).  I am not that attached to any of these beliefs since all my data is second- and third-hand, but within those limitations I agree.

But again, the things they're worried about are not happening on Less Wrong.  Bringing up their plight here, in the context of curating Less Wrong, is not Lawful [LW · GW]: it cannot help anybody think about Less Wrong, only hurt and distract.  If they need help, we can't help them by changing Less Wrong; we have to change the people who are giving out party invites and job interviews.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-11-07T14:41:22.547Z · LW(p) · GW(p)

I expect that many of the people who are giving out party invites and job interviews are strongly influenced by LW. If that's the case, then we can prevent some of the things Duncan mentions by changing LW in the direction of being more supportive of good epistemics (regardless of which "side" that comes down on), with the hope of flow-through effects.

Replies from: Taran
comment by Taran · 2021-11-08T11:58:56.742Z · LW(p) · GW(p)

I expect that many of the people who are giving out party invites and job interviews are strongly influenced by LW.

The influence can't be too strong, or they'd be influenced by the zeitgeist's willingness to welcome pro-Leverage perspectives, right?  Or maybe you disagree with that characterization of LW-the-site?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T17:53:37.064Z · LW(p) · GW(p)

Things get complicated in situations where e.g. 70% of the group is welcoming and 30% of the group is silently judging and will enact their disapproval later.  And the zeitgeist that is willing to welcome pro-Leverage perspectives might not be willing to actively pressure people to not discriminate against pro-Leverage folk.  Like, they might be fine with somebody being gay, but not motivated enough to step in if someone else is being homophobic in a grocery store parking lot, metaphorically speaking.

(This may not describe the actual situation here, of course.  But again it's a fear I feel like I can't dismiss or rule out.)

comment by Spiracular · 2021-11-07T02:21:15.467Z · LW(p) · GW(p)

"three people... would like to say positive things about their experience at Leverage Research, but feel they cannot":

Oof. I appreciate you mentioning that.

(And a special note of thanks, for being willing to put down a concrete number? It helps me try to weigh it appropriately, while not compromising anonymity.)


Navigating the fact that people seem to be scared of coming forward on every side of this, is hard. I would love advice on how to shape this thread better.

If you think of something I can do to make talking about all of {the good, the bad, the neutral, the ugly, and the complicated}, easier? I can't guarantee I'll agree to it, but I really do want to hear it.

Please feel free to reach out to me on LW, anytime in the next 2 months. Not just Duncan, anyone. Offer does expire at start of January, though.

I am especially interested in concrete suggestions that improve the Pareto Frontier of reporting, here. But I'm also pretty geared up to try to steelman any private rants that get sent my way, too.

(In this context? I have already been called all of "possessed, angry, jealous, and bad with secrets." I was willing to steelman the lot, because there is a hint of truth in each, although I really don't think any of them are the clearest lens available. If you can be kinder than that, then you're already doing better than the worst baseline that I have had to steelman here.)


P.S. I recognize it is easy to cast me as being on "the other side?" It's an oversimplification, and I'd love to have a more balanced sense of what the hell happened. But I also don't want my other comments to come as a late surprise, to anyone who is already a bit spooked.

So, my personal story is in here [LW(p) · GW(p)], along with some of my current sense-making.

Also, to people who really only want to talk about their story privately, with whoever it is that you trust? That's valid, and I hope you're doing okay.

Replies from: Spiracular
comment by Spiracular · 2021-11-07T03:21:50.028Z · LW(p) · GW(p)

Hm... I notice I'm maybe feeling some particular pressure to personally address this one?

Because I called out the deliberate concentration of force [LW · GW] in the other direction [LW(p) · GW(p)] that happened on an earlier copy of the BayAreaHuman thread [LW · GW].


I am not really recanting that? I still think something "off" happened there.

But I could stand up and give a more balanced deposition.

To be clear? I do think BAH's tone was a tad aggressive. And I think there were other people in the thread, who were more aggressive than that. I think Leverage Basic Facts EA [EA · GW] had an even more aggressive comment thread.

I also think each of the concrete factual claims BAH made, did appear to check out with at least one corner of Leverage, according to my own account-collecting (although not always at the same time).

(I also think a few of the LBFEA's wildest claims were probably true. Exclusion of the Leverage website from the Wayback Machine is definitely true*. The claim about Slack channels characterizing each Pareto attendee as a potential recruit seems... probably true?)

There were a lot of corners of Leverage, though. Several of them were walled off from the corners BAH talked about, or were not very near to it.

For what it's worth, I think the positive accounts in the BAH comment thread were also basically honest? I up-voted several of them.

Side-note: As much as I don't entirely trust Larissa? I do think some of her is at least trying to hold the fact that both good and bad things happened here. I trust her thoughts, more than Geoff's.

* Delisted from Wayback: The explanation I've heard is that Geoff was sick of people dragging old things up to make fun of the initial planning document, and critiquing the old Connection Theory posts.


I am also dead-certain that nobody was going into the full story, and some of that was systematic. "BAH + commentary" put together still doesn't sum to enough of the whole truth to really make sense of things.

Anna & Geoff's initial twitch-stream included commentary about how Leverage used to be pretty friendly with EA, and ran the first EAG. Several EA founders felt pretty close after that, and then there was some pretty intense drifting apart (partially over philosophical differences?). There was also some sort of kerfuffle where a lot of people ended up with the frame that "Leverage was poaching donors," which may have been unfair to Leverage. As time went on, Geoff and other Leveragers were largely blocked from collaborations, and felt pretty shunned. That all was an important missing piece of the puzzle.

((Meta: Noticing I should add this to Timeline and Threads somewhere? Doing that now-ish.))

(I also personally just really liked Anna's thoughts on "narrative addiction" being something to watch out for? Maybe that's just me.)

The dissolution & information agreement was another important part. Thank you, Matt Falshaw, for putting some of that in a form that could be viewed by people outside of the ecosystem.

I also haven't met anybody except Zoe (and now me, I guess?) who seems to have felt able to even breathe a word about the "objects & demons" memetics thing. I think that was another important missing piece.

Some people do report feeling incapable of speaking positively about Leverage in EA circles? I personally didn't experience a lot of this, but I saw enough surprise when I said good things about Reserve, that it doesn't surprise me particularly. Leverage's social network and some of its techniques were clearly quite meaningful to some people, so I can imagine how rough 'needing to write that out of your personal narrative' could have been.

comment by Linch · 2021-11-10T04:34:02.024Z · LW(p) · GW(p)

Have people considered just making a survey and sending it out to former Leverage staff? This really isn't my scene, but it seems like, while surveys have major issues, it's hard for me to imagine that surveys are worse at being statistically representative than qualitative accounts that went through many selection filters.

comment by Viliam · 2021-11-06T23:36:38.096Z · LW(p) · GW(p)

And at least these three people are not speaking up, because the current state of affairs is such that they feel they can't do so without committing social suicide.

Not even pseudonymously?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T00:00:25.575Z · LW(p) · GW(p)

Their stories are specific enough that pseudonymity isn't viable.

comment by Ruby · 2021-11-06T18:35:38.630Z · LW(p) · GW(p)

Strong upvote. Thank you for writing this, it articulates the problems better than I had them in my head and enhances my focus. This deserves a longer reply, but I'm not sure if I'll get to write it today, so I'll respond with my initial thoughts.

What I really want from LessWrong is to make my own thinking better, moment to moment. To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers. To be embedded in a context that anti-evokes all those things my brain keeps trying to do, the way being in a church anti-evokes coarse language.

I want this too.

In the big, important conversations, the ones with big stakes, the ones where emotions run high—
I don’t think LessWrong, as a community, does very well in those conversations at all.

Regarding the three threads you list: I, others involved in managing LessWrong, and leading community figures who've spoken to me are all dissatisfied with how those conversations went and believe it calls for changes in LessWrong.

Solutions I am planning or considering:

  • Technological solutions (i.e. UI changes). Currently, I think it's difficult to provide norm-enforcing feedback on comments (you are required to write another comment, which is actually quite costly). One is also torn between signalling agreement/disagreement with a statement and approval/disapproval of the reasoning. These issues could be addressed by factoring karma into two axes (approve/disapprove, agree/disagree) and also possibly something like "epistemic reacts" where you can easily tag a comment as exemplifying a virtue or vice (a toy sketch of this two-axis idea appears just after this list). I think that would give standard-upholding users (including moderators) a tool to uphold the standards.
    • There's a major challenge in all of this in that I see any norms you introduce as being additional tools that can be abused to win–just selectively call out your opponents for alleged violations to discredit them. This can maybe be worked around, say by giving react abilities only to trusted users or something, but it's non-trivial.
    • Another thing is that new users are currently on too even a footing with established users. You can make an account and your comments will look the same as those of a user who's proven themselves. This could be addressed by marking new users as such (Hacker News does this) or we can create spaces where new users cannot easily participate (more on this in a moment).
    • Not a solution, but a problem to be solved: when it comes to users, high karma can in part indicate highly valuable contributions, but it is also just a measure of engagement. Someone with hundreds of low-scoring comments can have a much higher score than someone of higher standards with only a few standout posts. This means that karma alone is inadequate for segmenting out the better users from the ones adhering to lower standards and quality.
  • I am interested in creating "Gardens within the Garden". As you say, counting up, LessWrong does well compared to the GenPop, but far from sufficiently well. I think it would be good to have a place where people can level up, and a further, higher-quality space to which people can strive to be admitted. Admission could be granted by moderators (likely) or by passing an adequate test (if we are able to create such a one); I imagine you (Duncan) would actually be quite helpful in designing it.
  • I think our new user system is woefully inadequate. The system needs to change so that admission to LessWrong as a commenter and poster is not something that is taken for granted, and so that new users are made aware that many (most?) new users will be turned away (once their initial contributions seem to be of low enough quality).
    • Standards need to be made clear to new users, and for that matter, they need to be clarified for everyone. This is hard because, to me at least, picking the right standards is not easy. Picking the wrong standards to enforce could kill LessWrong (which I think would be worse than living with the current standards).
    • I think that by starting with getting the standards for new users clear ("stopping the bleeding"), we can then begin to extend them to the existing user base. As a general approach, we (the moderators) have a much higher bar for banning long-term users than for new users [1].
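As a rough illustration of the two-axis idea in the first bullet above, here is a minimal sketch. The class names, the react tags, and the tallying scheme are illustrative assumptions, not the actual LessWrong implementation:

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Vote:
    approval: int = 0            # -1 disapprove, 0 none, +1 approve (quality of the comment)
    agreement: int = 0           # -1 disagree, 0 none, +1 agree (with the claim itself)
    react: Optional[str] = None  # e.g. "well-argued", "strawman" (hypothetical tags)

@dataclass
class CommentScores:
    votes: Dict[str, Vote] = field(default_factory=dict)  # user_id -> that user's single vote

    def cast(self, user_id: str, vote: Vote) -> None:
        self.votes[user_id] = vote  # re-voting overwrites, so each user counts once

    def totals(self):
        approval = sum(v.approval for v in self.votes.values())
        agreement = sum(v.agreement for v in self.votes.values())
        reacts = Counter(v.react for v in self.votes.values() if v.react)
        return approval, agreement, reacts

scores = CommentScores()
scores.cast("alice", Vote(approval=+1, agreement=-1))  # "well reasoned, but I disagree"
scores.cast("bob", Vote(approval=-1, react="strawman"))
print(scores.totals())  # (0, -1, Counter({'strawman': 1}))
```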

This is just the cached list that I'm able to retrieve on the spot. There are surely more good things that I'm forgetting or haven't thought of.

I think it isn’t. I think that a certain kind of person is becoming less prevalent on LessWrong, and a certain other kind of person is becoming more prevalent, and while I have nothing against the other kind, I really thought LessWrong was for the first group.

It is definitely the case that people who I want on LessWrong are not there because the discussion doesn't meet their standards. They have told me. I want to address this, although it's somewhat hard because the people I want tend to be opinionated about standards in ways that conflict, or at least whose intersection would be a norm-enforcement burden that neither moderators nor users could tolerate. That said, I think there are improvements in quality that would be universally regarded as good and would shift the culture and userbase in good directions.

In no small part, the duty of the moderation team is to ensure that no LessWronger who’s trying to adhere to the site’s principles is ever alone, when standing their ground against another user (or a mob of users) who isn’t

I would really like this to be true.

Hire a team of well-paid moderators for a three-month high-effort experiment of responding to every bad comment with a fixed version of what a good comment making the same point would have looked like. Flood the site with training data.

If you can find me people capable of being these moderators, I will hire them. I think the number of people who have mastered the standards you propose and are also available is...smaller than I have been able to locate so far.

Timelines for things happening from LW team
Progress is a little slow at the moment. Since the restructuring into Lightcone Infrastructure [LW · GW], I'm the only full-time member of the LessWrong team. I still get help with various tasks from other Lightcone members, and jimrandomh independently does dev work as an open source contributor; however, I'm the only one able to drive large initiatives (like rescuing the site's norms) forward. Right now the bulk of my focus is on hiring [2]. Additionally, I've begun doing some work on the new user process, and I hope to begin the experiments with karma factorization. Those are smaller steps than what's required, unfortunately.

If you or someone you know is a highly capable software engineer with Rationalist virtue, please contact me. While the community does have many software developers, the number who are skilled enough and willing to live in Berkeley and work on LessWrong is not so high that it's trivial to hire.

--

[1] In the terminology of Raemon, I believe we have some Integrity Debt in disclosing how many new users we ban (and their content that we remove).
[2] It's plausible I should drop hiring and just focus on everything in the OP and that I mention above, but I consider LessWrong "exposed" right now since I'm neither technically strong enough nor productive enough to maintain the site alone, which makes me reliant on people outside the team, which is a kind of brittle way for things to be.

Replies from: supposedlyfun, Shamash, Chris_Leong, MondSemmel, hg00, Yoav Ravid
comment by supposedlyfun · 2021-11-07T22:28:10.674Z · LW(p) · GW(p)

Regarding the three threads you list: I, others involved in managing LessWrong, and leading community figures who've spoken to me are all dissatisfied with how those conversations went and believe it calls for changes in LessWrong.

I'm deeply surprised by this. If there is a consensus among the LW managers and community figures, could one of them write a post about it laying out what was dissatisfactory and what changes they feel need to be made, or at least the result they want from the changes? I know you're a highly conscientious person with too much on zir hands already, so please don't take this upon yourself.

Replies from: habryka4, Duncan_Sabien
comment by habryka (habryka4) · 2021-11-08T01:20:00.605Z · LW(p) · GW(p)

I am also surprised by this! I think this sentence is kind of true, and am dissatisfied with the threads, but I don't feel like my take is particularly well-summarized with the above language, at least in the context of this post (like, I feel like this sentence implies a particular type of agreement with the OP that I don't think summarizes my current position very well, though I am also not totally confident I disagree with the OP). 

I am in favor of experimenting more with some karma stuff, and have been encouraging people to work on that within the Lightcone team. I think there is lots of stuff we could do better, and definitely comparing us to some ideal that I have in my head, I think things definitely aren't going remotely as well as I would like them to, but I do feel like the word "dissatisfied" seems kind of wrong. I think there are individual comments that seem bad, but overall I think the conversations have been quite good, and I am mildly positively surprised by how well they have been going. 

Replies from: Duncan_Sabien, Vladimir_Nesov
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T01:46:45.003Z · LW(p) · GW(p)

(As the author of the OP, I think my position is also consistent with "quite good, and mildly positively surprised."  I think the difference is counting up vs. counting down?  I'm curious whether you think quite good when counting down from your personal vision of the ideal LessWrong.)

Replies from: habryka4
comment by habryka (habryka4) · 2021-11-08T20:38:18.483Z · LW(p) · GW(p)

When counting down we are all savages dancing to the sun gods in a feeble attempt to change the course of history.

More seriously though, yeah, definitely when I count down, I see a ton of stuff that could be a lot better. A lot of important comments missing, not enough courage, not enough honesty, not enough vulnerability, not enough taking responsibility for the big picture.

Replies from: Ruby
comment by Ruby · 2021-11-08T23:52:58.914Z · LW(p) · GW(p)

I did indeed mean "dissatisfied" in a "counting down" sense.

comment by Vladimir_Nesov · 2021-11-08T05:09:01.145Z · LW(p) · GW(p)

The most obvious/annoying issue with karma is the false-disagreement, zero-equilibrium, controversy tug-of-war, which can't currently be split into more specific senses of voting to reveal that actually there is a consensus.

This can't be solved by pre-splitting; it has to act as needed, maybe co-opting the tagging system, with the default tag being "Boostworthy" (but not "Relevant" or anything specific like that), the ability to see the tags if you click something, and the ability to tag your vote with anything (one tag per voter, so to give a specific tag you have to untag "Boostworthy", and all tags sum up into the usual karma score, which is the only thing that shows by default until you click something). This has to be sufficiently inconvenient that it only gets used when necessary, but then somehow become convenient enough for everyone to use (for that specific comment).

On the other hand there is Steam that only has approve/disapprove votes and gives vastly more useful quality ratings than most rating aggregators that are even a little bit more nuanced. So any good idea is likely to make things worse. (Though Steam doesn't have a zero equilibrium problem because the rating is the percentage of approve votes.)

Replies from: Viliam, Yoav Ravid
comment by Viliam · 2021-11-08T12:08:25.260Z · LW(p) · GW(p)

Is it more important to see absolute or relative numbers of votes? To me it seems that if there are many votes, the relative numbers are more important: a comment with 45 upvotes and 55 downvotes is not too different from a comment with 55 upvotes and 45 downvotes; but one of them would be displayed as "-10 karma" and the other as "+10 karma", which seems like a big difference.

On the other hand, with few votes, I would prefer to see "+1 karma" rather than "100% consensus" if in fact only 1 person has voted. It would be misleading to make a comment with 1 upvote and 0 downvotes seem more representative of the community consensus than a comment with 99 upvotes and 1 downvote.

How I perceive the current voting system, is that comments are somewhere on the "good -- bad" scale, and the total karma is a result of "how many people think this is good vs bad" multiplied by "how many people saw this comment and bothered to vote". So, "+50 karma" is not necessarily better than "+10 karma", maybe just more visible; like a top-level comment made immediately after writing the article, versus an insightful comment made three days later as a reply to a reply to a reply to something.

But some people seem to have a strong opinion about the magnitude of the result, like "this comment is good, but not +20 good, only +5 good" or "this comment is stupid and deserves to have negative karma, but -15 is too low so I am going to upvote it to balance all those upvotes" -- which drives me crazy, because it means that some people's votes depend on whether they were among the early or late voters (the early voters expressing their honest opinion, the late voters mostly voting the opposite of their honest opinion just because they decided that too much agreement is a bad thing).

Here is my idea of a very simple visual representation that would reflect both the absolute and relative votes. Calculate three numbers: positive (the number of upvotes), neutral (the magical constant 7), and negative (the number of downvotes), then display a rectangle of fixed width and length, divided proportionally into green (positive), gray (neutral) and red (negative) parts.

So a comment with 1 upvote would have a mostly gray line with some green on the left, the comment with 2 upvotes would have almost 2× as much green... but the comments with 10 upvotes and 12 upvotes would seem quite similar to each other. A comment with 45 upvotes and 55 downvotes, or vice versa, would have a mostly half-green half-red line, so obviously controversial.
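Here is a minimal sketch of that rendering rule; the bar width of 30 and the characters standing in for green ('+'), gray ('.'), and red ('-') are illustrative assumptions:

```python
def vote_bar(upvotes: int, downvotes: int, neutral: int = 7, width: int = 30) -> str:
    """Split a fixed-width bar proportionally into upvote, neutral, and downvote segments.
    The constant 'neutral' block keeps low-vote comments from looking like strong consensus."""
    total = upvotes + neutral + downvotes
    up = round(width * upvotes / total)
    down = round(width * downvotes / total)
    return "+" * up + "." * (width - up - down) + "-" * down

print(vote_bar(1, 0))    # mostly gray, with a sliver of green on the left
print(vote_bar(10, 0))   # looks very similar to vote_bar(12, 0), as described above
print(vote_bar(45, 55))  # visibly controversial: roughly half green, half red
```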

Replies from: Yoav Ravid, Duncan_Sabien, Yoav Ravid
comment by Yoav Ravid · 2021-11-08T12:39:41.980Z · LW(p) · GW(p)

But some people seem to have a strong opinion about the magnitude of the result, like "this comment is good, but not +20 good, only +5 good" or "this comment is stupid and deserves to have negative karma, but -15 is too low so I am going to upvote it to balance all those upvotes" -- which drives me crazy, because it means that some people's votes depend on whether they were among the early or late voters (the early voters expressing their honest opinion, the late voters mostly voting the opposite of their honest opinion just because they decided that too much agreement is a bad thing).

I think this comes from a place of also seeing karma as reward/punishment, and thinking the reward/punishment is enough/too high, or from a place of seeing the score as representing where it should be relative to other comments, or just from trying to correct for underratedness/overratedness. 

I sometimes do this, and think it's alright with the current voting system, but I think it's a flaw of the voting system that it creates this dynamic.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T17:55:43.205Z · LW(p) · GW(p)

the early voters expressing their honest opinion, the late voters mostly voting the opposite of their honest opinion just because they decided that too much agreement is a bad thing

This is a misconstrual.  The late voters are also expressing their honest opinion; it's just that their honest opinion lies on a policy level rather than a raw stimulus-response level.

It's at least as valid (and, I suspect, somewhat more valid) to have preferences of the form "this should be seen as somewhat better than that" than to have preferences of the form "I like this and dislike that."

comment by Yoav Ravid · 2021-11-08T12:44:31.098Z · LW(p) · GW(p)

Here is my idea of a very simple visual representation that would reflect both the absolute and relative votes. Calculate three numbers: positive (the number of upvotes), neutral (the magical constant 7), and negative (the number of downvotes), then display a rectangle of fixed width and length, divided proportionally into green (positive), gray (neutral) and red (negative) parts.

So a comment with 1 upvote would have a mostly gray line with some green on the left, the comment with 2 upvotes would have almost 2× as much green... but the comments with 10 upvotes and 12 upvotes would seem quite similar to each other. A comment with 45 upvotes and 55 downvotes, or vice versa, would have a mostly half-green half-red line, so obviously controversial.

This is interesting and I would like to see a demo of it. The upside of this suggestion is that since it's only a visual change and doesn't actually change the way karma and voting work, it could be tested and reverted very easily; it just needs to be built.

Replies from: Viliam
comment by Viliam · 2021-11-08T16:17:47.580Z · LW(p) · GW(p)

It could even be displayed to the right of the current karma display, so we could temporarily have both, like this:

Aspiring Rationalist   5h   < 10 >    ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░▓

comment by Yoav Ravid · 2021-11-08T05:32:24.381Z · LW(p) · GW(p)

Interesting; this gave me an idea for something a bit different.

We'd have a list of good attributes a comment can have (Rigor, Effort, Correctness/Accuracy/Precision, Funny, etc.). By default a comment would have one attribute (perhaps 'Relevant'), and users would be able to add whichever attributes they want (perhaps even custom ones). These attributes would be votable by users (no limit on how many you can vote on), and would show at the top of the comment together with their score (sorted by absolute value). I'm not sure how it would be used to sort comments or give points to users, though.
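A rough sketch of how the data might be represented, with hypothetical names (`Comment`, `vote_attribute`) and an assumed simple sum-of-votes score per attribute:

```python
from collections import defaultdict

class Comment:
    def __init__(self, text: str):
        self.text = text
        # attribute name -> net score; every comment starts with 'Relevant'
        self.attributes: dict[str, int] = defaultdict(int, {"Relevant": 0})

    def add_attribute(self, name: str) -> None:
        # users can attach any attribute, including custom ones
        self.attributes.setdefault(name, 0)

    def vote_attribute(self, name: str, delta: int) -> None:
        # delta is +1 or -1; no limit on how many attributes one user votes on
        self.add_attribute(name)
        self.attributes[name] += delta

    def header(self) -> list[tuple[str, int]]:
        # shown at the top of the comment, sorted by absolute value of the score
        return sorted(self.attributes.items(), key=lambda kv: abs(kv[1]), reverse=True)

c = Comment("example")
c.vote_attribute("Rigor", +1)
c.vote_attribute("Funny", -1)
c.vote_attribute("Rigor", +1)
print(c.header())  # [('Rigor', 2), ('Funny', -1), ('Relevant', 0)]
```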

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T01:01:21.720Z · LW(p) · GW(p)

(I expect that having written this post + being friendly with much of the team will result in me being part of some conversations on this in the near future; if there are summaries I can share here that otherwise wouldn't get out for a long time, I'll try to do so.)

comment by Shamash · 2021-11-07T18:18:35.988Z · LW(p) · GW(p)

While I am not technically a "New User" in terms of the age of my account, I comment very infrequently, and I've never made a forum-level post. 

I would rate my own rationality skills and knowledge as slightly above those of the average person but below those of the average active LessWrong member. While I am aware that I possess many habits and biases that reduce the quality of my written content, I have the sincere goal of becoming a better rationalist. 

There are times when I am unsure whether an argument or claim that seems incorrect is actually flawed or whether it is my own reasoning that is flawed. In such cases, it seems intuitive to write a critical comment which explicitly states what I perceive to be faulty about that claim or argument and what thought processes have led to this perception. If these criticisms are valid, then the discussion of the subject is improved and those who read the comment will benefit. If the criticisms are not valid, then I may be corrected by a response that points out where my reasoning went wrong, helping me avoid making such errors in the future.

Amateur rationalists like myself are probably going to make mistakes when it comes to criticism of other people's written content, even when we strive to follow community guidelines. My concern with your suggestions is that these changes may discourage users like me from creating flawed posts and comments that help us grow as rationalists. 

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T20:11:45.467Z · LW(p) · GW(p)

I think there's a real danger of that, in practice.

But I've had lots of experience with "my style of moderation/my standards" being actively good for people taking their first steps toward this brand of rationalism; lots of people have explicitly reached out to me to say that e.g. my FB wall allowed them to take just those sorts of first, flawed steps.

A big part of this is "if the standards are more generally held, then there's more room for each individual bend-of-the-rules."  I personally can spend more spoons responding positively and cooperatively to [a well-intentioned newcomer who's still figuring out the norms of the garden] if I'm not also feeling like it's pretty important for me to go put out fires elsewhere.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T20:12:41.425Z · LW(p) · GW(p)

Or in other words, that's part of what I was clumsily gesturing at with "Cooperate past the first 'defect' from your interlocutor."  I should've written "first apparent defect."

comment by Chris_Leong · 2021-11-07T06:50:14.661Z · LW(p) · GW(p)

"If you can find me people capable of being these moderators, I will hire them. I think the number of people who have mastered the standards you propose and are also available is...smaller than I have been able to locate so far."


I think the best way to do this would be to ask people to identify a few such comments and explain how they would have rewritten them.

comment by MondSemmel · 2021-11-06T22:39:46.004Z · LW(p) · GW(p)

If I may add something, I wish users occasionally had to explain or defend their karma votes a bit. To give one example that really confuses me, currently the top three comments on this thread are:

  1. a clarification [LW(p) · GW(p)] by OP (Duncan) - makes sense
  2. a critical comment [LW(p) · GW(p)] which was edited after I criticized it [LW(p) · GW(p)]; now my criticism is at ~0 karma, without any comments indicating why. This would all be fine, except the comment generated no other responses, so now I don't even understand why I was the only one who found the original objectionable, or why others didn't like my response to it; and I don't remotely understand the combination of <highly upvoted OP> and <highly upvoted criticism which generates no follow-up discussion>. (Also, after a comment is edited, is there even a way to see the original? Or was my response just doomed to stop making sense once the original was edited?)
  3. another critical comment [LW(p) · GW(p)], which did generate the follow-up discussion I expected

(EDIT: Have fixed broken links.)

Replies from: supposedlyfun
comment by supposedlyfun · 2021-11-07T22:33:26.514Z · LW(p) · GW(p)

(You've done good work in this post's comment section, IMO.)

I wish users occasionally had to explain or defend their karma votes a bit

Maybe if a comment were required in order to strongly upvote or strongly downvote? As someone who does those things fairly often, I wouldn't hate this change. Sitting here imagining a comment I initially wanted to strongly upvote but didn't because of such a rule, I feel okay about the fact that I was deterred, given this site's standards.

Or maybe a 1 in 3 chance that a strong upvote will require a comment.

comment by hg00 · 2021-11-09T02:59:38.166Z · LW(p) · GW(p)

There's a major challenge in all of this, in that I see any norms you introduce as additional tools that can be abused to win: just selectively call out your opponents for alleged violations in order to discredit them.

I think this is usually done subconsciously -- people are more motivated to find issues with arguments they disagree with.

comment by Yoav Ravid · 2021-11-06T20:09:55.579Z · LW(p) · GW(p)

Hire a team of well-paid moderators for a three-month high-effort experiment of responding to every bad comment with a fixed version of what a good comment making the same point would have looked like. Flood the site with training data.

If you can find me people capable of being these moderators, I will hire them. I think the number of people who have mastered the standards you propose and are also available is...smaller than I have been able to locate so far.


How would you test whether someone fits the criteria? Can those people be trained?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T20:13:33.585Z · LW(p) · GW(p)

I do in fact claim that I could do some combination of identifying and training such people, and I claim that I am justified and credible in believing this about myself based on e.g. my experience helping CFAR train mentors and workshop instructors.  I mention this somewhat-arrogant belief because Ruby highlighted me in his comment as someone who might be able to help on related matters. 

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-11-06T20:23:22.917Z · LW(p) · GW(p)

Sorry, I think my comment came across as casting doubt on that, but my intention was to ask out of genuine curiosity and interest. I wonder what the process/test might look like.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T00:45:30.766Z · LW(p) · GW(p)

Double sorry!  I didn't read you as casting doubt, and didn't have defensive feelings while writing the above.

I do believe people can be trained, though I have a hard time boiling down either the "how" or the "why" of that belief.

As for how to test, I don't have a ready answer, but if I develop one I'll come back and note it here.  (Especially if e.g. Ruby asks me for help and I actually deliver.)

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-11-07T05:13:20.307Z · LW(p) · GW(p)

Alright, thanks :)

(If anyone else has ideas I'd be glad to hear them)

comment by agrippa · 2021-11-08T21:53:33.354Z · LW(p) · GW(p)

I think that smart people can hack LW norms and propagandize / pointscore / accumulate power with relative ease. I think this post is pretty much an example of that:
- a lot of time is spent gesturing / sermonizing about the importance of fighting biases etc. with no particularly informative or novel content (it is, after all, intended to "remind people of why they care"). I personally find it difficult to engage critically with this kind of high volume and low density. 
- ultimately the intent seems to be an effort to coordinate power against types of posters that Duncan doesn't like

I just don't see how most of this post is supposed to help me be more rational. The droning on makes it harder to engage with as an adversary than if the post were just "here are my terrible ideas", but it does so in an arational way.

I bring this up in part because Duncan seems to be advocating that his adherence to LW norms means he can't just propagandize etc.

If you read the OP and do not choose to let your brain project all over it, what you see is, straightforwardly, a mass of claims about how I feel, how I think, what I believe, and what I think should be the case.

I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you're going to dismiss all of the "I" statements as being mere window dressing or something (I'm not sure that's what you're doing, but it seems like something like that is necessary, to pretend that they weren't omnipresent in what I wrote), you need to do so explicitly.  You need to argue for them not-mattering; you can't just jump straight to ignoring them, and pretending that I was propagandizing.

If people here really think you can't propagandize or bad-faith accumulate points/power while adhering to LW norms, well, I think that's bad for rationality.

I am sure that Duncan will be dissatisfied with this response because it does not engage directly with his models or engage very thoroughly by providing examples from the text etc. I'm not doing this stuff because I just don't actually think it serves rationality to do so.

While I'm at it:

Duncan:

I'm not trying to cause appeals-to-emotion to disappear.  I'm not trying to cause strong feelings oriented on one's values to be outlawed.  I'm trying to cause people to run checks, and to not sacrifice their long-term goals for the sake of short-term point-scoring.

To me it seems really obvious that if I said to Duncan in response to something, "you are just sacrificing long-term goals for the sake of short-term point-scoring", he would (if he chose to respond) write about how I am making a bald assertion and blah blah blah. How I should retract it and instead say "it feels to me you are [...]" and blah blah blah. But look, in this quote there is a very clear and "uncited" / unevidenced claim that people are sacrificing their long-term goals for the sake of short-term point-scoring. I am not saying it's bad to make such assertions, just saying that Duncan can and does make such assertions baldly while adhering to norms. 

To zoom out, I feel that in the OP and in this thread [LW(p) · GW(p)] Duncan is enforcing norms that he is good at leveraging but that don't actually protect rationality. But these norms seem to have buy-in. Pooey!

I continuously add more to this stupid post in part because I feel the norms here require that a lot of ink be spilled and that I substantiate everything I say. It's not enough to just say "you know, it seems like you are doing [x thing I find obvious]". Duncan is really good at enforcing this norm and adhering to it. 

But the fact is that this post was a stupid usage of my time that I don't actually value having written, completely independent of how right I am about anything I am saying or how persuasive.

Again I submit:

I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you're going to dismiss all of the "I" statements as being mere window dressing or something (I'm not sure that's what you're doing, but it seems like something like that is necessary, to pretend that they weren't omnipresent in what I wrote), you need to do so explicitly.  You need to argue for them not-mattering; you can't just jump straight to ignoring them, and pretending that I was propagandizing.

Look, if I have to reply to every single attack on a certain premise before I am allowed to use that premise, then I am not going to be allowed to use the premise ever, because Duncan has more time allocated to this stuff than I do, and seemingly more than most people who criticize this OP. But that seems like a really stupid norm. 

I made this top level because, even though I think the norm is stupid, among other norms I have pointed out, I also think that Duncan is right that all of them are in fact the norm here.

Replies from: dxu, Duncan_Sabien
comment by dxu · 2021-11-08T22:18:13.900Z · LW(p) · GW(p)

If I'm reading you correctly, it sounds like there are actually multiple disagreements you have here--a disagreement with Duncan, but also a disagreement with the current norms of LW.

My impression is primarily informed by these bits here:

I think that smart people can hack LW norms and propagandize / pointscore / accumulate power with relative ease. [...]

If people here really think you can't propagandize or bad-faith accumulate points/power while adhering to LW norms, well, I think that's bad for rationality.

Could you say more about this? In particular, assuming my reading is accurate, I'm interested in knowing (1) ways in which you think the existing norms are inadequate to the task of preventing bad-faith point-scoring, (2) whether you think it's possible to patch those issues by introducing better norms. (Incidentally, if your answer to (2) is "yes", then it's actually possible your position and Duncan's are less incompatible than it might first appear, since you might just have different solutions in mind for the same problem.)

(Of course, it's also possible you think LW's norms are irreparable, which is a fact that--if true--would still be worth drawing attention to, even if it is somewhat sad. Possibly this is all you were trying to do in the parent comment, in which case I could be reading more into what you said than is there. If that's the case, though, I'd still like to see it confirmed.)

Replies from: agrippa
comment by agrippa · 2021-11-09T00:14:33.799Z · LW(p) · GW(p)

Maybe it is good to clarify: I'm not really convinced that LW norms are particularly conducive to bad faith or psychopathic behavior. Maybe there are some patches to apply. But mostly I am concerned about naivety. LW norms aren't enough to make truth win and bullies / predators lose. If people think they are, that alone is a problem independent of possible improvements. 
 

since you might just have different solutions in mind for the same problem.

I think that Duncan is concerned about prejudicial mobs being too effective, and I am concerned about information about abuse being systematically prevented from surfacing. To some extent I do just see this as a conflict based on interests -- Duncan is concerned about the threat of being mobbed and advocating tradeoffs accordingly, I'm concerned about being abused / my friends being abused and advocating tradeoffs accordingly. But to me it doesn't seem like LW is particularly afflicted by prejudicial mobs and is nonzero afflicted by abuse.

I don't think Duncan acknowledges the presence of tradeoffs here, but IMO there absolutely have to be tradeoffs. To me, the generally upvoted and accepted responses to jessicata's post are making a tradeoff: protecting MIRI against mudslinging, disinformation, and mobbing, while also making it scarier to try to speak up about abuse. Maybe the right tradeoff is being made and we have to really come down on jessicata for being too vague and equivocating too much, or being a fake victim of some kind. But I also think we should not take advocacy regarding these tradeoffs at face value, which, yeah, LW norms seem to really encourage.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T01:48:03.413Z · LW(p) · GW(p)

I like this highlighting of the tradeoffs, and have upvoted it. But:

But to me it doesn't seem like LW is particularly afflicted by prejudicial mobs and is nonzero afflicted by abuse.

... I think this is easier to say when one has never been the target of a prejudicial mob on LessWrong, and/or when one agrees with the mob and therefore doesn't think of it as prejudicial.

I've been the target of prejudicial mobbing on LessWrong.  Direct experience.  And yes, it impacted work and funding and life and friendships outside of the site.

Replies from: agrippa
comment by agrippa · 2021-11-09T02:19:32.196Z · LW(p) · GW(p)

I was not aware of any examples of anything anyone would refer to as prejudicial mobbing with consequences. I'd be curious to hear about your prejudicial mobbing experience.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T03:27:42.684Z · LW(p) · GW(p)

I think it's better (for the moment at least) to let Oliver speak to the most salient one, and I can say more later if need be.  I suspect Oliver would provide a more neutral POV.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T22:03:01.027Z · LW(p) · GW(p)

propagandize / pointscore / accumulate power with relative ease

There's a way in which this is correct denotatively, even though the connotation is something I disagree with.  Like, I am in fact arguing for increasing a status differential between some behaviors that I think are more appropriate for LW and others that I think are less appropriate.  I'm trying at least to be up front about what those behaviors are, so that people can disagree (e.g. if you think that it's actually not a big deal to distinguish between observation and inference, because people already do a good job of teasing those apart).

But yes: I wouldn't use the "power" frame, but there's a way in which, in a dance studio, there's "power" to be had in conforming to the activity of dance and dance instruction, and "less power" in the hands of people not doing those things.

an effort to coordinate power against types of posters that Duncan doesn't like

I don't think this is the case; I want to coordinate power against a class of actions.  I am agnostic as to who is taking those actions, and even specifically called out that if there are people who are above the local law we should be candid about that fact.

I am not saying it's bad to make such assertions, just saying that Duncan can and does make such assertions baldly while adhering to norms.

The example you cite seems pretty fair!  I think that's a place where I'm failing to live up, and it's good that you highlighted it.

Duncan is enforcing norms that he is good at leveraging but that don't actually protect rationality. But these norms seem to have buy in. Pooey!

If you do happen to feel like listing a couple of underappreciated norms that you think do protect rationality, I would like that.

Replies from: agrippa
comment by agrippa · 2021-11-08T23:35:49.499Z · LW(p) · GW(p)

If you do happen to feel like listing a couple of underappreciated norms that you think do protect rationality, I would like that.

 

Brevity

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T23:51:23.720Z · LW(p) · GW(p)

Strong upvote.

(I think the norms I'm pulling for increase brevity; more consistent standards mean less need to bend over backwards ruling out everything else in each individual case.)

Replies from: agrippa
comment by agrippa · 2021-11-09T00:19:22.620Z · LW(p) · GW(p)

Your OP is way too long (or not sufficiently indexed) for me to, without considerable strain, determine how much or how meaningfully I think this claim is true. Relatedly I don't know what you are referring to here.

comment by Elizabeth (pktechgirl) · 2021-11-09T03:12:35.433Z · LW(p) · GW(p)

All of which is to say that I spend a decent chunk of the time being the guy in the room who is most aware of the fuckery swirling around me, and therefore the guy who is most bothered by it.

I feel like this claim is being used as evidence to bolster a call to action, without being well supported.  Unsupported evidence is always worse than supported, but it's especially grating in this case because providing counter-evidence can reasonably be predicted to be interpreted as a social attack, which people have inhibitions against making (and then, when they do get made, they come out in really unproductive ways). I would prefer a norm where, if you're going to claim raw or relative intelligence as a reason to believe you, you need to provide at least a link to evidence.

Replies from: Duncan_Sabien, TekhneMakre
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T03:33:04.226Z · LW(p) · GW(p)

That's a fine norm.  What sort of evidence would you be interested in, in this case?

I can more easily provide copious evidence of me being aware of fuckery that others aren't (e.g. 100 FB posts over the last year, probably) than of high-enough-intelligence-to-justify "decent chunk."  But I could e.g. dig up my SAT scores or something.

I expect that people are sort of adversarially misconstruing "decent chunk" as a claim of something like "the vast majority of the time," or whatever, which I grant I left myself open to [LW · GW] but is not a claim I would make (since it isn't true).

Replies from: pktechgirl, pktechgirl
comment by Elizabeth (pktechgirl) · 2021-11-09T04:53:18.827Z · LW(p) · GW(p)

I should note that my interest in sharing the FB posts is that they render “Duncan’s claim to rare insight” discussable. I don’t expect them to move my opinion much because we’re FB friends, and I see many of them already.

Now that you’ve acknowledged that this is fair to assess in its own right, I’d like to share my current assessment of your insight levels:
 

  • You have at least once said things I found extremely novel and useful, although I still had major disagreements with them. I shared the first part with you privately, although I did not update you when I developed more concerns about the model.
  • You have at least once waged a major campaign alone at great social cost, that I appreciate a great deal and respected a lot. I already shared this one privately with you but it seems worth noting in public.
  • Your FB is a mix of things I agree with and disagree with. Some of the disagreements I might change my mind on if we could have a good discussion, and under other circumstances I would be really excited to have those discussions, because I care about this a lot too and people paying sufficient attention are rare. But I (almost?) never do, because just the thought makes me feel weary. I expect the conversations to be incredibly high friction with no chance of movement on your part. I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.
    • One counterargument is that there are lots of comments on those posts. That’s true, and maybe they’re covering all the bases, but I would be surprised.
    • A reasonable question here is whether losing those discussions represents lost value to you. I can point you to public twitter threads of mine I think you would find interesting, and to a pseudonymous blog if you agree to limited anonymity; we can discuss that out of band.
    • It is only in writing this up that I realized how much lost value this represents to me: I do think if friction was low enough we’d have really interesting discussions we both learned from.
  • Things that contribute to that feeling of friction
    • In one private interaction where we disagreed on a norm, your frame was “you [Elizabeth’s] frame is impossibly bad and meta-uncooperative”
    • You’ve repeatedly deleted posts on LW and FB, in what sure looked like anger.
    • General sense from reading your LW and FB posts/comments, although I haven’t done a quantified assessment here.
    • I remember one FB post claiming you were good at receiving feedback, which given my priors means you have so thoroughly discouraged feedback that you're not aware of the scope of the problem.

I guess my overall claim here is not that you don’t have useful insight (you do), but that your certainty in the superiority of that insight reduces your ability to iterate and correct, and nobody can get everything right on the first try.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T06:06:40.292Z · LW(p) · GW(p)

I do not have, and do not believe I have claimed to have, anything like "certainty in the superiority of my insight."  Happy to just state here explicitly: I don't have anything like certainty in the superiority of my insight.

What I have is confidence that, when I'm perceiving that something is going sideways, something is, in fact, going sideways.

That's a far cry from always knowing what it is, which is itself a far cry from having any idea how to fix it.

I'm confused as to how I'm perceived as claiming superiority of insight when e.g. all I could come up with in the above essay was a set of ideas that I myself identified as terrible and insufficient.

Replies from: GWS
comment by Stephen Bennett (GWS) · 2021-11-09T06:46:34.045Z · LW(p) · GW(p)

My comment is low context both because I don’t think I’ve seen you and Elizabeth talk before and also because I only skimmed the parent comments.

When you say

I'm confused as to how I'm perceived as claiming superiority of insight when e.g. all I could come up with in the above essay was a set of ideas that I myself identified as terrible and insufficient.

This doesn’t seem to me to be evidence against your claiming to have superior insight under Elizabeth’s usage of the term. My reading is that she uses the term relatively, i.e. that she believes that you believe your claims about the world are right while others’ claims are wrong (or more likely to be true than others’). Terrible, as you used it in the essay, I took to be in absolute terms, as in “will these interventions help? Idk”

I’m more confident in the quote not providing evidence against her usage of “superior insight” than I am in defining her intended meaning.

comment by Elizabeth (pktechgirl) · 2021-11-09T04:29:31.539Z · LW(p) · GW(p)

I would indeed be interested in the FB posts, with the caveat that those then become debatable, and it seems quite possible that that will overwhelm comments on the topic of this post. I also think it would help to declare how you intend to handle people bringing up statements and actions (public or private) not of your choosing. I think deluks handled his comment extremely poorly and I would have done it very differently,* but I do think your claim makes "every bad decision you ever made" relevant, and the overall track record of threads on every bad decision a person ever made is extremely bad.

I think the FB posts are better evidence of "being the one to speak up" than "the only one noticing". That's not a criticism: speaking up is really important, and doing it when it matters is a huge service. People who notice but don't act aren’t very helpful. But it is a different claim.

*I can get into how if you are curious but am worried deluks has poisoned the well on reasonable discussion of that problem. 

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T04:36:10.091Z · LW(p) · GW(p)

I assume by "very differently," you mean "would not have included outright falsehoods."

I don't see the connection between:

(approximately) "I'm pretty smart and I am pretty conscientious and also I've hyperfocused on this domain for a long time, so I notice stuff in this domain a lot more often than most people"

and

[whatever claim you and deluks think I made that is tantamount to an assertion of perfection, or something]

Like, it seems that you're asking me to defend some outlandish claim that I have not made.  That's the only situation in which "every bad decision I ever made" would be relevant—if I had claimed to always or overwhelmingly often be competent, or something.  It sounds like "this guy claimed no white ravens!  All we gotta do is find one!"

I clearly did not make that claim, as you can see by scrolling up and just reading.  I claimed that I twitch over this stuff, and frequently observe fuckery that other people do not even notice.  

Replies from: Duncan_Sabien, pktechgirl
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T04:49:54.845Z · LW(p) · GW(p)

But anyway, in addition to the-corpus-of-all-of-my-essays, such as the handful I've published this month and things like In Defense of Punch Bug and It's Not What It Looks Like and Invalidating Imaginary Injury and Common Knowledge and Miasma and any number of my Admonymous answers, I also spent five minutes scrolling back through my most recent FB posts and here are the first twenty or so that seemed to be about noticing or caring about fuckery (not organized by impressiveness):

https://www.facebook.com/duncan.sabien/posts/4946318165402861
https://www.facebook.com/duncan.sabien/posts/4924756570892354
https://www.facebook.com/duncan.sabien/posts/4920978771270134
https://www.facebook.com/duncan.sabien/posts/4918223831545628
https://www.facebook.com/duncan.sabien/posts/4918136751554336
https://www.facebook.com/duncan.sabien/posts/4911478975553447
https://www.facebook.com/duncan.sabien/posts/4907166399318038
https://www.facebook.com/duncan.sabien/posts/4907029615998383
https://www.facebook.com/duncan.sabien/posts/4889568061077872
https://www.facebook.com/duncan.sabien/posts/4888310204536991?comment_id=4888448571189821
https://www.facebook.com/duncan.sabien/posts/4877212198980125?comment_id=4877320285635983
https://www.facebook.com/duncan.sabien/posts/4874111979290147
https://www.facebook.com/duncan.sabien/posts/4859938947374117
https://www.facebook.com/duncan.sabien/posts/4855597157808296
https://www.facebook.com/duncan.sabien/posts/4840078599360152
https://www.facebook.com/duncan.sabien/posts/4834628186571860
https://www.facebook.com/duncan.sabien/posts/4832098550158157
https://www.facebook.com/duncan.sabien/posts/4825568070811205
https://www.facebook.com/duncan.sabien/posts/4822568564444489
https://www.facebook.com/duncan.sabien/posts/4821518481216164

... and I note that those twenty are all just within the past six weeks.  A six-week period in which I also wrote six LW essays about social dynamics.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-09T07:44:13.364Z · LW(p) · GW(p)

FYI, every single one of these posts (yes, I tested all the links) is inaccessible to me, because they require logging into Facebook.

(I’m posting this to note that this isn’t a problem specific to that one other post, but seems to be a general problem.)

Replies from: Yoav Ravid, Raemon
comment by Yoav Ravid · 2021-11-09T08:53:37.140Z · LW(p) · GW(p)

I was able to see 5-6 in Firefox without incognito, and then it asked me to log in (both on ones I had already seen and ones I hadn't). Seems like some sort of "You have 3 more articles this month" tactic, but without telling you.

comment by Raemon · 2021-11-09T07:53:26.527Z · LW(p) · GW(p)

Are you opening them in incognito browsers? They seem to work straightforwardly for me in non-logged-in browsers, and I don't know what might be different for you.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-09T07:55:48.815Z · LW(p) · GW(p)

This is Groundhog Day, Ray; we just found out that it doesn't work on Opera and Firefox [LW(p) · GW(p)].

(And apparently Chrome Incognito on Windows? I'm confused about the exact line there, because it works on my Chrome Incognito on Mac.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-09T14:48:58.644Z · LW(p) · GW(p)

So far, this problem has replicated on every browser on every platform I’ve tried it on, in both regular and private windows. Chrome, Firefox, Opera, on Mac, Windows, Linux… I have not been able to view any of the given posts in any way at all.

comment by Elizabeth (pktechgirl) · 2021-11-09T05:20:11.750Z · LW(p) · GW(p)

I assume by "very differently," you mean "would not have included outright falsehoods."

 

The first of many differences, yes. I also would have emphasized the part where the evidence that something was wrong seemed obvious to me and apparently not to you (at least in ways that were visible to me), not the part where you proactively coordinated evidence sharing when it was personally costly to you, which was a social good.

I did not think you were claiming perfection, but I think "It's like being a native French speaker and dropping in on a high school French class in a South Carolina public school" is a very strong claim of superiority, far beyond "I notice stuff in this domain a lot more often than most people". Native speakers can be wrong, but in a disagreement with a disengaged high schooler you will basically always take the word of the native speaker.  I additionally think the problems in iteration I outlined in a sister thread really put a ceiling on your insights, although admittedly that affects analysis and improvements much more than noticing.

Replies from: Duncan_Sabien, Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T06:24:59.930Z · LW(p) · GW(p)

Also, re: evidence that something was wrong was obvious

I dunno.  This sounds like an excuse, and an excuse is all that many people will hear, but:

My current model is that Brent, whether consciously or unconsciously/instinctively, did in fact do something resembling cultivating me as a shield, by never egregiously misbehaving in my sight.  And many of the other people around me, seeing egregious misbehavior somewhat often, assumed (reasonably) that I must be seeing it, too, and not minding.

But after it all started to come out, there were something like a dozen fully dealbreaking anecdotes handed to me by not-necessarily-specifically-but-people-in-the-reference-class-of Rob, Oli, Nate, Logan, Nick, Val, etc., any one of which would have caused me to spring into action, except they just never mentioned it and I was never in the room to see it.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2021-11-09T06:31:35.168Z · LW(p) · GW(p)

FWIW: I believe you that Brent cultivated you, and I think you talking about that has been really useful in educating people (including me) about how toxic people do that. I do think it had to be some damn strong cultivation to overcome the baseline expectations set by his FB posts, and I'd be interested in hearing you talk about what he did to overcome that baseline- not because I think you were especially susceptible, but because whatever he did worked on a lot of people, and that makes it useful to understand.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T06:43:05.827Z · LW(p) · GW(p)

Well, for starters, I had unfollowed him on FB by about 2016 as a result of being just continually frustrated by his relentless pessimism.  So I probably missed a whole lot of what others saw as red flags.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2021-11-09T06:59:37.697Z · LW(p) · GW(p)

This indeed changes my opinion a fair bit, and I should have had it as a more active hypothesis.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T06:10:25.099Z · LW(p) · GW(p)

I think "It's like being a native French speaker and dropping in on a high school French class in a South Carolina public school" is a very strong claim of superiority, far beyond "I notice stuff in this domain a lot more often than most people".

I find this helpful, and I think it's a fair and reasonable reading that I should have ruled out.

What I meant by choosing that example in particular was that French contains a lot of sounds which English speakers literally can't perceive at first, until they practice and build up some other background knowledge.  That's ... not entirely different from a claim of superiority, but I tried to defuse the sense of superiority by noting that a lot of it comes from just relentlessly attending to the domain—"it's not that I'm doing anything magic here, many of the people I'm hanging out with are smarter or conscientiouser, it's just that I happen to have put in more reps is all."

It didn't work.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2021-11-09T06:54:23.750Z · LW(p) · GW(p)

Ah, this makes sense and is helpful, and now that you've spelled it out I can see how it connects to other things in the post in ways I didn't before. It also makes cases of failure much less relevant, since no one has all the phonemes.

Worth noting that I noticed the kerning example seemed very different than the native speaker example, but the "native speaker in a room full of bored teenagers" claim felt so strong I resolved in that direction.

comment by TekhneMakre · 2021-11-09T03:36:52.632Z · LW(p) · GW(p)

(I agree that there's an issue here with bucketing counter-evidence with social attack, and that a norm of providing evidence is preferable to a silent bucket error; also, it seems likely that there are belief-like claims that (1) are useful to make, even though at least by default they're mixed in with social moves, and that (2) are difficult (costly, say) to provide evidence for. If such claims are common, it might be worth having an epistemic status that can contain that nuance, something like "this claim is acknowledged to be too costly for many pairs of people to reach Aumann agreement on, and shouldn't yet function as a common knowledge belief among groups containing many such pairs, but it's still what I think for what that's worth". Maybe there's a short phrase that already means this, like "from my perspective...", though sadly such things are always diluting.)

comment by arikagan · 2021-11-06T20:40:11.939Z · LW(p) · GW(p)

Summary: I found this post persuasive, and only noticed after the fact that I wasn't clear on exactly what it had persuaded me of. I think it may do some of the very things it is arguing against, although my epistemic status is that I have not put in enough time to analyze it for a truly charitable reading. 

Disclaimer: I don't have the time (or energy) to put in the amount of thought I'd want to put in before writing this comment. But nevertheless, my model of Duncan wants me to write this comment, so I'm posting it anyway. Feel free to ignore it if it's wrong, useless, or confusing, and I'm sorry if it's offensive or poorly thought out! 

Object-level: I quite liked your two most recent posts on Concentration of Force and Stag Hunts. I liked them enough that I almost sent them to someone else saying "here's a good thing you should read!" It wasn't until I read the comment below [LW(p) · GW(p)] by SupposedlyFun that I realized something slightly odd is going on and I hadn't noticed it. I really should have noticed some twinge in the back of my mind on my own, but it took someone else pointing it out for me to catch it. I think you might be guilty of the very thing you're complaining about in this second essay. I'm not entirely sure. But if I'm right about what this second post is about, you'd want me to write this comment. 

Of course, this is tricky because the problem is I'm not sure I can accurately summarize what this second post is about. The first post was very clear - I can give a one-sentence summary that I'd give >80% odds you'd agree with as accurate (you can win local battles by strategically outnumbering someone locally without outnumbering them in the war more generally, and since such victories can snowball in social realms, we should be careful to notice and leverage this where possible as it's a more effective use of force). Whereas I'd give <30% odds that you'd agree with any attempted summary of this post I could give. In your defense, I didn't click out to all of the comments in the other 3 posts that you give as examples of things going wrong. I also didn't read through the entire post a second time. Both of these should be done for a more charitable reading. On the other hand, I committed a decent amount of time to reading both of these essays all the way through, and I imagine anything more than that is a slightly unreasonable standard for the effort required to understand your core claim. 

I have something like the vague understanding that you think LW is doing something bad, that you want less of it, and that you want more of something better. Maybe you merely want more Rationality and I'm not missing anything, but I think you're trying to make a narrower point and I'm legitimately not sure what it is. I get that you think the recent Leverage drama is not a good example of Rationality. But without following a number of the linked comments, I can't say exactly what you think went wrong. I have my own views on this from having followed the Leverage drama, but I don't think that should be a prerequisite to understanding the claims in your post. 

Your comment below [LW(p) · GW(p)] provides some additional nuance by giving this example: "I have direct knowledge of at least three people who would like to say positive things about their experience at Leverage Research, but feel they cannot." Maybe the issue is you merely need to provide more examples? But that feels like a surface-level fix to a deeper problem, even though I'm not sure what the deeper problem is. All I can say is that I left the post with an emotion (of agreement), not a series of claims I feel like I can evaluate. Whereas your other posts feel more like a series of claims I can analyze and agree or disagree with. What's particularly interesting is that I read the essay through and I was like "Yeah Duncan woo this is great, +1" and I didn't even notice I didn't know precisely the narrow thing you're arguing for until I read SupposedlyFun's comment saying the same. This suggests you might be doing the very thing (I think) you're arguing against: using rhetoric and well-written prose to convince me of something without my even knowing exactly what you've convinced me of. That the outgroup is bad (boo!), that the warriors for rationality are getting outnumbered (yikes!), and that we should rally to fix it (huzzah!).

I'm not entirely sure. My thinking around this post isn't clear enough to know precisely what I'm objecting to, but I'm noticing a vague sense of confusion, and I'm hoping that pointing it out is helpful. I do think that putting out thinking on this topic is good in general, and meta-discussion about what went wrong with the Leverage conversation seems sorely needed, so I'm glad that you're starting a conversation about it (despite my comments above).

Replies from: anon03, agrippa, Duncan_Sabien
comment by anon03 · 2021-11-07T01:00:39.429Z · LW(p) · GW(p)

Hmm, my two-sentence summary attempt for this post would be: "In recent drama-related posts, the comment section discussion seems very soldier-mindset instead of scout-mindset, including things like up- and down-voting comments based on which "team" they support rather than soundness of reasoning, and not conceding / correcting errors when pointed out, etc. This is a failure of the LW community and we should brainstorm how to fix it."

If that's a bad summary, it might not be Duncan's fault, I kinda skimmed.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T01:04:43.392Z · LW(p) · GW(p)

Strong upvoted for the effort it takes to write a short, concise thing.  =P

I endorse this as a most-of-it summary, though I think the details matter.

comment by agrippa · 2021-11-09T17:03:32.126Z · LW(p) · GW(p)

I found this post persuasive, and only noticed after the fact that I wasn't clear on exactly what it had persuaded me of.

I want to affirm that this seems to me like it should be alarming to you. To me, a big part of rationality is being resilient to this phenomenon, and a big part of successful rationality norms is banning the tools for producing it.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T18:24:33.061Z · LW(p) · GW(p)

It is indeed a concern.

The alarm is a bit tempered by the fact that this doesn't seem to be a majority view, but "40% of readers" would be deeply problematic and "10% of readers" would still probably indicate some obvious low-hanging fruit for fixing a real issue.

Looking at the votes, I don't think it's as low as 4% of readers, which is near my threshold for "no matter what you do, there'll be a swath this large with some kind of problem."

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T20:46:58.094Z · LW(p) · GW(p)

I do think I'm doing something (in this post specifically) that might be accurately described as "dropping down below the actual level of rigor, to meet the people who are WAY below the level of rigor halfway, and trying to encourage them to come up."  I've made an edit to the author's note now that you've helped me notice this.

I think my overall model is something like "there are some folk who nominally support the norms but grant themselves exceptions too frequently in practice, and there are some folk who don't actually care all that much about the norms, or who subordinate the norms to something else."

(Where "the norms" is the stuff covered in the Sequences and SlateStarCodex and in the lists of things my brain does in the post above and so forth.)

And I think I'm arguing that we should encourage the former to better adhere to their own values, and support them in doing so, and maybe disincentivize or disinvite (some fraction or subset) of the latter.

re: "But without following a number of the linked comments, I can't say exactly what you think went wrong," I'm happy to detail an example or two, if anyone wants to say "hey, what's wrong with this??" though part of why I didn't detail a large number of them in the OP is that I don't have the spoons for it.

comment by localdeity · 2021-11-06T11:22:02.878Z · LW(p) · GW(p)

My brain notes, in passing, that Eternal September, and the Septembers that preceded it, can be described in terms of concentration of force.  If a forum has a certain culture, and a bunch of noobs come in without it (instead exhibiting some kind of "mainstream lowest-common-denominator" culture, incompatible with that of the forum)... If they come in one by one, then they'll face negative reinforcement (downvotes and/or critical comments), pushing them to either adopt the forum's culture or leave; if they arrive in a clump, then they'll be in a position to positively support each other, reducing the negative-reinforcement effect.  If enough of them arrive in a group, then the forum's "immune system" may fail to stop them, and they may end up changing the forum's culture.

Today, a sudden influx of users tends to come from some big event.  Like when a hugely popular site drops a link to your forum, or when there's a huge story on your forum that draws a lot of interest from outsiders.  (The Leverage story is one of the latter; a story like Zoe Curzi's is fascinating to humans.  I told a non-rationalist friend about it, and he said it was very juicy gossip.  It also came up in discussions with rationalists, some of whom are more active than others on LW; I wonder if the posts were also linked to on other sites.  I wonder if LW admins could confirm patterns like "The recent huge threads about Leverage and about MIRI had a higher proportion of non-users, of new users, and of less-regular users than most other threads." HTTP referrer headers can give the proximate source of inbound links, while interpersonal gossip is harder to track.  Actually, regarding links, habryka posted an image [LW(p) · GW(p)] recently [incidentally, the top post, "I don't know how to count that low", is an example of getting linked to by Hacker News] ... but although it's interesting, I don't think it directly addresses my above question.)

So, it seems like sudden large influxes of users, attributable to big stories or big inbound links, are a danger to a forum's culture.  That seems to be "common/received wisdom" from some portions of the internet I've occupied.

This brings to mind a funny episode from Hacker News's history, where its creator posted something like "We've gotten written about by a major news site and will probably be flooded by mainstream people who want to talk about politics and don't care about technical subjects, so please make an extra effort to post and upvote links about Erlang internals and things like that to discourage those people", and existing literal-minded users took this to heart and filled the front page with nothing but Erlang links.

Anyway, combining one of the proposals with "concentration of force", this generates the following idea: If you have something like "dedicated volunteers or paid posters", then by far the best time to deploy them is when you have what looks like one of these big influxes.  (It seems an obvious enough combination that I'm mildly surprised that this wasn't on Duncan's list of terrible-idea suggestions.  Perhaps because it goes against the focus on small things?  Heh, Gunnar_Zarncke has the same suggestion [LW(p) · GW(p)].)  My impression is that existing moderators did put a bunch of time into following the huge threads; but, like, if you had a squad of "reserve forces" of extra moderators kept on retainer, who only get called in occasionally, that's probably more efficient and effective.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T17:40:00.434Z · LW(p) · GW(p)

(Because it wasn't a terrible idea. =P)

comment by Raelifin · 2021-11-07T23:37:01.978Z · LW(p) · GW(p)

First of all, thank you, Duncan, for this post. I feel like it captures important perspectives that I've had and problems that I can see, and puts them together in a pretty good way. (I also share your perspective that the post Could Be Better in several ways, but I respect you not letting the perfect be the enemy of the good.)

I find myself irritated right now (bothered, not angry) that our community's primary method of highlighting quality writing is karma-voting. It's a similar kind of feeling to living in a democracy--yes, there are lots of systems that are worse, but really? Is this really the best we can do? (No particular shade on Ruby or the Lightcone team--making things is hard, and I'm certainly glad LW exists and is as good as it is.)

Like, I think I have an idea that might make things substantially better that's not terrible: make the standard signal for quality be a high price on a quality-arbitrated betting market. This is essentially applying the concept of Futarchy to internet forums (h/t ACX and Hanson). (If this is familiar to you, dear reader, feel free to skip to the responses to this comment, where I talk about features of this proposal and other ideas.) Here's how I could see it working:

When a user makes a post or comment or whatever, they also name a number between 0 and 100. This number is essentially a self-assessment of quality, where 0 means "I know this is flagrant trolling" and 100 means "This is obviously something that any interested party should read". As an example, let's say that I assign this comment an 80.

Now let's say that you are reading and you see my comment and think "An 80? Bah! More like a 60!" You can then "downvote" the comment, which nudges the number down, or enter your own (numeric) estimate, which dramatically shifts the value towards your estimate (similar to a "strong" vote). Behind the scenes, the site tracks the disagreement. Each user is essentially making a bet around the true value of the post's quality. (The downvote is a bet that it's "less than 80".) What are they betting? Reputation as judges! New users start with 0 judge-of-quality reputation, unless they get existing users to vouch for them and donate a bit of reputation. (We can call this "karma," but I think it is very important to distinguish good-judge karma from high-quality-writing karma!) When voting/betting on a post/comment, they stake some of that reputation (maybe 10%, up to a cap of 50? (Just making up numbers here for the sake of clarity; I'd suggest actually running experiments)).

Then, you have the site randomly sample pieces of writing, weighting the sampling towards those that are most controversial (i.e. have the most reputation on the line). Have the site assign these pieces of writing to moderators whose sole job is to study that piece of writing and the surrounding context and to score its quality. (Perhaps you want multiple moderators. Perhaps there should be appeals, in the form of people betting against the value set by the moderator. Etc. More implementation details are needed.) That judgment then resolves all the bets, and results in users gaining/losing reputation.

Users who run out of reputation can't actually bet, and so lose the ability to influence the quality-indicator. However, all people who place bets (or try to place bets when at zero/negative reputation) are subsidized a small amount of reputation just for participating. (This inflation is a feature, encouraging participation in the site.) Thus, even a new user without any vouch can build up ability to influence the signal by participating and consistently being right.
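A minimal sketch of the bet-resolution step under the assumptions above (names like `QualityBet`, `stake_for`, and `resolve` are illustrative, as is the 10%-up-to-50 stake rule; this is not a spec):

```python
from dataclasses import dataclass

@dataclass
class QualityBet:
    user: str
    direction: str   # "over" or "under" the posted quality number
    threshold: float # the posted number at the time of the bet, e.g. 80
    stake: float     # reputation staked on this bet

def stake_for(reputation: float) -> float:
    # made-up rule from the comment above: 10% of reputation, capped at 50
    return min(0.10 * reputation, 50.0)

def resolve(bets: list[QualityBet], moderator_score: float, reputation: dict[str, float]) -> None:
    """When a sampled comment is judged, pay out or confiscate each staked bet."""
    for bet in bets:
        won = (moderator_score > bet.threshold) if bet.direction == "over" \
              else (moderator_score < bet.threshold)
        reputation[bet.user] += bet.stake if won else -bet.stake
        # small participation subsidy so new or busted users can climb back in
        reputation[bet.user] += 1.0

reputation = {"alice": 200.0, "bob": 0.0}
bets = [QualityBet("alice", "under", 80, stake_for(200.0)),
        QualityBet("bob", "over", 80, stake_for(0.0))]
resolve(bets, moderator_score=60, reputation=reputation)
# alice wins her 20-point stake plus the subsidy; bob had nothing staked and only gets the subsidy
print(reputation)
```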

Replies from: Raelifin, Yoav Ravid, Raelifin, Raelifin
comment by Raelifin · 2021-11-07T23:37:30.543Z · LW(p) · GW(p)

To my mind the primary features of this system that bear on Duncan's top-level post are:

  • High-reputation judges can confidently set the quality signal for a piece of writing, even if they're in the minority. The truth is not a popularity contest, even when it comes to quality.
  • The emphasis on betting means that people who "upvote" low-quality posts or "downvote" high-quality ones are punished, making "this made me feel things, and so I'm going to bandwagon" a dangerous mental move. And people who make this sort of move would be efficiently sidelined.

In concert, I expect that it would be much easier to bring concentrated force down on low-quality bits of writing. Which would, in turn, I think, make the quality price/signal a much more meaningful piece of information, instead of the current karma score, which, as others noted, is overloaded as a measure.

Replies from: habryka4
comment by habryka (habryka4) · 2021-11-08T00:57:32.333Z · LW(p) · GW(p)

I like this idea. It has a lot of nice attributes. 

I wrote some in the past about what all the different things are that a voting/karma system on LW is trying to produce, with some thoughts on some proposals that feel a bit similar to this: https://www.lesswrong.com/posts/EQJfdqSaMcJyR5k73/habryka-s-shortform-feed?commentId=8meuqgifXhksp42sg [LW(p) · GW(p)] 

Replies from: Raelifin
comment by Raelifin · 2021-11-08T02:52:29.232Z · LW(p) · GW(p)

Nice. Thank you. How would you feel about me writing a top-level post reconsidering alternative systems and brainstorming/discussing solutions to the problems you raised?

Replies from: habryka4
comment by habryka (habryka4) · 2021-11-08T22:56:11.831Z · LW(p) · GW(p)

Seems great! It's a bit on ice this week, but we've been thinking very actively about changes to the voting system, so right now is the right time to strike while the iron is hot if you want to change the team's opinion on how we should change things and what we should experiment with.

comment by Yoav Ravid · 2021-11-08T04:54:17.545Z · LW(p) · GW(p)

I think this is too complex for a comment system, but upvoted for an interesting and original idea. 

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-08T05:03:19.686Z · LW(p) · GW(p)

My sense is that the basic UI interaction of "look at a price and judge it as wrong" has the potential to be surprisingly simple for a comment section. I often have intuitions that something is "overpriced" or "underpriced".

But I find the grounding-out process pretty hard to swallow. I'd be spending so much of my time thinking about who was grounding it out and how to model them socially, which is a far more costly operation than my current one that's just "do I think the karma number should go up or down".

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-08T05:03:43.196Z · LW(p) · GW(p)

But also strong upvoted for an exciting and original idea.

comment by Raelifin · 2021-11-07T23:37:50.840Z · LW(p) · GW(p)

One obvious flaw with this proposal is that the quality-indicator would only be a measure of expected rating by a moderator. But who says that our moderators are the best judges of quality? Like, the scheme is ripe for corruption, and for simply pushing the popularity contest one level up to a small group of elites.

One answer is that if you don't like the mods, you can go somewhere else. Vote with your feet, etc.

A more turtles-all-the-way-down answer is that the stakeholders of LW (the users, and possibly influential community members/investors?) agree on an aggregate set of metrics for how well the moderators are collectively capturing quality. Then, for each unit of time (e.g. a year) and each potential moderator, set up a conditional prediction market with real dollars on whether that person being a moderator causes the metrics to go up/down compared to the previous time unit. Hire the ones that people predict will be best for the site.
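
A rough sketch of how that hiring rule might look, assuming each candidate has a conditional market forecasting the agreed-upon metric if they are hired; the function name and the numbers below are hypothetical:

    def pick_moderators(conditional_forecast: dict[str, float],
                        baseline_metric: float, seats: int) -> list[str]:
        """conditional_forecast[name] = the market's expected value of the agreed
        site-quality metric, conditional on hiring `name`; baseline_metric is the
        previous period's value. Hire the top `seats` candidates whose conditional
        forecast beats the baseline."""
        improvers = {name: p for name, p in conditional_forecast.items()
                     if p > baseline_metric}
        ranked = sorted(improvers, key=improvers.get, reverse=True)
        return ranked[:seats]

    # Example with entirely made-up numbers:
    forecasts = {"alice": 0.72, "bob": 0.64, "carol": 0.69}
    print(pick_moderators(forecasts, baseline_metric=0.66, seats=2))  # ['alice', 'carol']

In a real decision market, the markets for candidates who end up not being hired would presumably be voided rather than resolved, so forecasters aren't penalized for counterfactuals that never play out.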

Replies from: Viliam
comment by Viliam · 2021-11-08T12:46:24.076Z · LW(p) · GW(p)

I guess the question is, what is the optimal amount of consensus. Where do we want to be, on the scale from Eternal September to Echo Chamber?

Seems to me that the answer depends on how correct we are, on average. To emphasise: how correct we actually are, not how correct we want to be, or imagine ourselves to be.

On a website where moderators are correct about almost everything, most disagreement is noise. (It may provide valuable feedback on "what other people believe", but not on how things actually are.) It is okay to punish disagreement, because in the rare situations where it is correct and you notice it, you can afford the karma hit for opposing the moderators. (And hopefully the moderators are smart enough to start paying attention when a member in good standing surprisingly decides to take a karma hit.)

On a website where moderators are quite often wrong, punishing disagreement means that the community will select for people who share the same biases, or who are good at reading the room.

I believe that people are likely to overestimate how much "other reasonable people" agree with them, which is why echo chambers can happen to people who genuinely see themselves as "open to other opinions, as long as those opinions are not obviously wrong (spoiler: most opinions you disagree with do seem obviously wrong)". As a safety precaution against going too far in a positive feedback loop (because even if you believe that the moderators already go too far in some direction, the prediction voting incentivizes you to downvote all comments that point it out), there should be a mechanism to express thoughts that go against the moderator consensus. Like, a regular thread to say "I believe the moderators are wrong about X" without being automatically punished for being right. That is, a thread with special rules where moderators would upvote comments for being well-articulated without necessarily being correct.

comment by Raelifin · 2021-11-07T23:38:05.906Z · LW(p) · GW(p)

I also want to note that this proposal isn't mutually exclusive with other ideas, including other karma systems. It seems fine to have there be an additional indicator of popularity that is distinct from quality. Or, more to my liking, a button that simply marks that you thought a post was interesting and/or expresses gratitude towards the writer, without making a statement about how bulletproof the reasoning was. (This might help capture the essence of Rule Thinkers In, Not Out [LW · GW] and reward newbies for posting.)

comment by lionhearted (Sebastian Marshall) (lionhearted) · 2021-11-09T05:54:36.559Z · LW(p) · GW(p)

First, I think promoting and encouraging higher standards is, if you'll pardon the idiom, doing God's work. 

Thank you. 

I'm so appreciative any time any member of a community looks to promote and encourage higher standards. It takes a lot of work and gets a lot of pushback and I'm always super appreciative when I see someone work at it.

Second, and on a much smaller note, if I might offer some......... stylistic feedback?

I'm only speaking here about my personal experience and heuristics. I'm not speaking for anyone else. One of my heuristics — which I darn well know isn't perfectly accurate, but it's nevertheless a heuristic I implicitly use all the time and which I know others use — is looking at language choices made when doing a quick skim of a piece as a first-pass filter of the writer's credibility.

It's often inaccurate. I know it. Still, I do it.

Your writing sometimes, when you care about an issue, seems to veer very slightly into resembling the writing of someone who is heated up about a topic in a way that leads to less productive and coherent thought.

This nudges my default reaction toward discounting the credibility of the message slightly.

I have to forcibly remind myself not to do that in your case, since you're actually taking pretty cohesive and intelligent positions. 

As a small example:

These are all terrible ideas.

These are all

terrible

ideas.

I'm going to say it a third time, because LessWrong is not yet a place where I can rely on my reputation for saying what I actually mean and then expect to be treated as if I meant the thing that I actually said: I recognize that these are terrible ideas.

I just — umm, in my personal... umm.... filters... it doesn't look good on a skim pass. I'm not saying emulate soul-less garbage at the expense of clarity. Certainly not. I like your ideas a lot. I loved Concentration of Force. 

I'm just saying that, on the margin, if you edited down some of the first-person language and strong expressions of affect a little bit in areas where you might be concerned about it being "not yet a place where I can rely on my reputation for saying what I actually mean"... it might help credibility.

I've written quite literally millions of words in my life, so I can say from firsthand experience that lines like that do successfully pre-empt stupid responses so you get fewer dumb comments.

That's true.

But I think it's likely you take anywhere from a 10% to 50% penalty to credibility with many casual skimmers of threads who do not ever bother to comment (which, incidentally, is both the majority of readers and me personally in 2021).

I see things like the excerpted part, and I have to consciously remind myself not to apply a credibility discount to what you're saying, because (in my experience and perhaps unfairly) I pattern match that style to less credible people and less credible writing.

Again, this is just a friendly stylistic note. I consider myself a fan. If I'm mistaken or it'd be expensive to implement an editing filter for toning that down, don't bother — it's not a huge deal in the grand scheme of things, and I'm really happy someone is working on this.

I suppose I'm just trying to improve the good guys' effectiveness for concentration of force reasons, you could say.

Salut and thanks again.

Replies from: Duncan_Sabien
comment by Raemon · 2021-11-07T17:28:29.133Z · LW(p) · GW(p)

Some thoughts on resource bottlenecks and strategy.

There's a lot I like about the set of goals Duncan is aiming for here, and IMO the primary question is one of prioritization.

I do think some high-level things have changed since 2018-or-so. Back when I wrote Meta-tations on Moderation [LW · GW], the default outcome was that LW withered and died, and it was really important people move from FB to LW. Nowadays, LW seems broadly healthy, the team has more buy-in, and I think it's easier to do highly opinionated moderation more frequently for various reasons.

On the other hand, we did just recently refactor the LW team into Lightcone Infrastructure. Most of the team is now working on a broader project of "figuring out the most important bottlenecks facing humanity's ability to coordinate on x-risk, and building things that fix those bottlenecks" (involving lots of pivoting). Ruby is hiring more people to build more capacity on the LW team, but hiring well is a slow process. And most of the plans that seem to accomplish (some version of) what Duncan is pointing to here seem really expensive.

The good news is that we're not money-constrained much these days. The biggest bottlenecked resource is team-attention. When I imagine the "hire a bunch of moderators to respond full-time to every single comment" plan (not an inherently crazy idea IMO), the bottleneck is vetting, hiring, training, and managing those moderators. 

I do think "which standards exactly, and what are they aiming for?" is a key question. A subproblem is that rigidly pushing for slightly misaligned standards is really infuriating and IMO drives people away from the site for reasons I don't think are good. Part of reason I think hiring moderators is high effort is that I think a bad (or, "merely pretty good") moderator can be really annoying and unhelpful.

I am pretty optimistic about technological solutions that don't require scaling human attention (and I do think there's a lot of low-hanging fruit there). 

Brainstorming some ideas:

  • Users get better automated messaging about site norms as they first start posting. Duncan and Ruby both mentioned variants of this.
    • One option is that in order to start growing in vote power, a user has to read some stuff and pass some kind of multiple choice test about the site norms. (I might even make it so everyone temporarily loses the ability to Strong Vote until they've taken the test)
  • Checking if a user has gotten a moderate number of net-downvotes recently (regardless of total karma), and some combo of flagging them for mod-attention, and giving them an automated "hey, it seems like something has been up with your commenting lately. You should reflect on that somehow [advice on how to do so, primary suggestion being to comment less frequently and more thoughtfully]. If it keeps up you may temporarily lose posting privileges." (A rough sketch of such a check appears after this list.)
  • FB-style reacts that let people more easily give feedback about what's wrong with something without replying to it.
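
A rough sketch of the downvote-pattern check from the second bullet, with made-up thresholds; the Vote record and its fields are assumptions rather than the actual LW schema:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Vote:
        comment_author: str
        value: int           # e.g. -1 weak downvote, +2 strong upvote
        cast_at: datetime

    def flag_struggling_users(votes: list[Vote], window_days: int = 14,
                              threshold: int = -15) -> set[str]:
        """Return authors whose comments collected at least `threshold` net
        downvotes within the last `window_days`, regardless of lifetime karma."""
        cutoff = datetime.now() - timedelta(days=window_days)
        recent_net: dict[str, int] = {}
        for v in votes:
            if v.cast_at >= cutoff:
                recent_net[v.comment_author] = recent_net.get(v.comment_author, 0) + v.value
        return {user for user, net in recent_net.items() if net <= threshold}

    # Flagged users would get the automated nudge and be queued for mod attention.
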
Replies from: Raemon, mingyuan, Chris_Leong
comment by Raemon · 2021-11-07T17:55:09.009Z · LW(p) · GW(p)

Something that previously seemed some-manner-of-cruxy between me and Duncan (but I'm not 100% sure about the flavor of the crux) is "LessWrong whose primary job is to be a rationality dojo" vs "LessWrong whose primary job is to output intellectual progress."

Where, certainly, there's good reason to think the Intellectual Progress machine might benefit from a rationality dojo embedded in it. But, that's just one of the ideas for how to improve rate-of-intellectual progress. And my other background models point more towards other things as being more important for that.

BUT there is a particular model-update I've had that is new, which I haven't gotten around to writing up yet. (This is less of a reply to Duncan and more to other people I've argued with over the years)

A key piece of my model is that a generative intellectual process looks very different from the finished output. It includes lots of leaps of intuition, inferential distance, etc. In order to get top-thinkers onto LW on a regular basis rather than in small private discords, it's really important for them to be able to think-out-loud without being legible at every step here. And the LW team got a lot of complaints from good authors about LW being punishing about this in 2018. 

But there's a different problem, which is that newcomers who haven't yet gotten a lot of practice thinking deliberately/rationally need to get that practice. If you show up at university, you basically write bad essays for 4 years and only your professor (who is paid) is obligated to read them.

And then, there is a blurry line between "metaphorical undergraduates who are still learning", "metaphorical grad students" (who write 'real' things, but not always at high quality or with good judgment), and "metaphorical professors".

In 2018, lots of people agreed LW was too nitpicky. But an update I made in late 2019 was that the solutions for metaphorical undergrads, grads, and professors might look pretty different. This probably has some relationship with the preformal/formal/postformal distinction that Vaniver points at elsethread. And I think this lends itself to a reasonable operationalization of "who are the cool kids who are above the law?" (if one tried implementing something like that suggestion in Duncan's OP)

So I now think it's more reasonable to have new users basically expect to have all of their stuff critiqued about basic things. 

(but – I still think it's important to have a good model of what intellectual generativity requires for the critique to be useful, a fair amount of the time)

A further complication is "what's up with the metaphorical 'grad students'", who sort of blur the line on how much leeway it makes sense to give them. I think many past LW arguments about moderation also had a component of "who exactly are the students, grad students, and professors here?"

None of this translates immediately into an obviously good process. But is part of the model of how I think such a process should get designed.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T20:06:28.351Z · LW(p) · GW(p)

Strong agreement with this, assuming I've understood it.  High confidence that it overlaps with what Vaniver laid out, and with my interpretation of what Ben was saying in the recent interaction I described under Vaniver's comment.

EDIT: One clarification that popped up under a Vanvier subthread: I think the pendulum should swing more in the direction laid out in the OP.  I do not think that the pendulum should swing all the way there, nor that "the interventions gestured at by the OP" are sufficient.  Just that they're something like necessary.

comment by mingyuan · 2021-11-07T20:21:06.609Z · LW(p) · GW(p)

Small addition: LW 1.0 made it so you had to have 10 karma before making a top-level post (maybe just on Main? I don't remember but probably you do). I think this probably matters a lot less now that new posts automatically have to be approved, and mods have to manually promote things to frontpage. But I don't know, theoretically you could gate fraught discussions like the recent ones to users above a certain karma threshold? Some of the lowest-quality comments on those posts wouldn't have happened in that case.

comment by Chris_Leong · 2021-11-11T17:07:00.772Z · LW(p) · GW(p)

I guess where I'd like to see more moderator intervention would largely be in directing the conversation. For example, by creating threads for the community to discuss topics that you think it would be important for us to talk about.

comment by Alexander (alexander-1) · 2021-11-08T09:02:39.733Z · LW(p) · GW(p)

I think you are getting at something here, Duncan. I've become interested in the following question lately: "How should rationalists conduct themselves if their goal is to promote rationality [? · GW]?" Now, I understand that promoting rationality is not every rationalist's top priority, hence I stated that condition explicitly.

I've been thoroughly impressed by how Toby Ord conducts himself in his writings and interviews. He is kind, respectful, reassuring and most importantly, he doesn't engage in fear-mongering despite working on x-risks. In his EA interview, he said, "Let us not get into criticising each other for working on the second most important thing." I found this stunningly thoughtful and virtuous. I find this to be an excellent example of someone going about achieving their goals effectively.

As much as I like Dawkins and love his books, I will admit that his attitude is sometimes unhelpful towards his own goals. I recall hearing (forgot where) that before a debate on spirituality, Dawkins' interlocutor asked him to read some documents ahead of the discussion. Dawkins showed up having not read the papers and said, "I did not read your documents because I know they are wrong." [citation needed] Now, this attitude might have been amusing to some in the audience, but, on the whole, this is irrational given the goal is to promote science.

Whenever I engage in motivated reasoning, motivated scepticism or subtle ad hominem in an argument, I can feel it. It feels like I am making a mistake. I feel a vague sense of guilt and confusion in the back of my head and a lump in my throat upon engaging in such conduct. I like the idea of leaning into confusion, which I recall coming across somewhere in the sequences and elsewhere on LessWrong. Still, I would like to become more proficient at actively avoiding these mistakes in the first place.

Since it was posted, I have been closely following the My experience at and around MIRI and CFAR [LW · GW] post, but I didn't know who or what to believe. Anecdotes were being thrown like hotcakes and every which way. Given this confusion, I became more interested in learning a lesson from the situation than picking sides, debunking claims or pointing fingers at culprits.

comment by MondSemmel · 2021-11-06T13:31:37.483Z · LW(p) · GW(p)

Suggestions for LW features that could shape its culture by (dis)incentivizing certain behavior (without thinking about how hard they would be to implement):

On How LW Appears to Outside Readers

There's a certain kind of controversial post that inevitably generates meta-discussion of whether people should be allowed to post it here (most recently in this book review [LW · GW]). Crucially, the arguments I see there are not "I don't like this" but usually "I'm afraid of what will happen when people who don't like this see it, and associate LW with it". (I found this really tedious, and would prefer a culture where we stick our own heads out and stick to "I don't like this" rather than appealing to third parties.) Also, I wish there were a way to preempt this objection, so as to not fight the same battle over and over.

Other posts are in dispute (as in, have an unusually high fraction of downvotes), like jessicata's post [LW · GW], but a casual reader might only see the post (and maybe its positive karma score), with all the controversy and nuance happening in the utterly impenetrable comments section.

So what might one do about that?

  • Some Reddit-style sites compute a controversy score (percentage of downvotes). Then one could sort by controversy, or by default filter controversial posts from the frontpage, or visibly flag controversial posts in a way people not familiar with LW would understand, or tag as "controversial", or something. (A rough sketch of one such score appears after this list.)
  • If controversial posts are rare in number, this problem can also be tackled by mod intervention. For instance, mods could have the power to put a disclaimer (/ disclaimer template) above a post, or these could be triggered automatically by some specific metrics. Some (bad) examples:
    • "This post has a notable fraction of downvotes, and a high number of nested comment threads. This indicates that it's in dispute. Check the comments for details."
    • Or: a manual or automatic disclaimer on high-traffic non-frontpaged private blog posts (like the book review [LW · GW]) to indicate to readers who come from elsewhere: This post is not frontpaged. LW has less strict moderation standards for private blogs. The karma score (upvotes) of a post reflects positive sentiment towards the contribution by the poster (e.g. taking the time to write a review); it's not automatically an endorsement of <the reviewed item>. And so on.
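
A rough sketch of a controversy score in the spirit of the first bullet; the formula and the 0.35 threshold are assumptions for illustration:

    def controversy_score(upvotes: int, downvotes: int) -> float:
        """Fraction of votes that are downvotes: 0.0 = uncontested, 0.5 = evenly split."""
        total = upvotes + downvotes
        return downvotes / total if total else 0.0

    def needs_disclaimer(upvotes: int, downvotes: int, min_votes: int = 20) -> bool:
        """Only flag posts with enough votes for the signal to mean anything."""
        return (upvotes + downvotes) >= min_votes and \
               controversy_score(upvotes, downvotes) >= 0.35

    print(needs_disclaimer(upvotes=80, downvotes=55))   # True: heavily disputed
    print(needs_disclaimer(upvotes=200, downvotes=10))  # False: broadly endorsed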

Handling High-Stakes Controversies

Duncan's post notes various ways in which recent controversies were not handled optimally. Some (probably bad) suggestions for site features to help handle such situations better:

  • The disclaimer thing, so new readers know that a post is controversial before they read it and take its claims at face value. This might also be warranted for some high-karma controversial comments.
  • Moderation
    • Mods could try to "turn down the heat" by setting stricter commenting guidelines, or temporarily prevent users below some karma threshold from posting, or something.
    • Flagging important clarifying comment threads so they appear higher in the comment order, or with extra highlighting.
    • Mark comments by moderators-acting-as-moderators with a flair or other highlighting.
  • On-site low-friction ways to post anonymously, with the option of leaking some non-identifying information like "I've been a LW user for >3 years with >1k karma", or to ping specific LW users with "user X can vouch for my identity", which user X could then confirm with a single click. Though I would not want this anonymity feature available on non-controversial posts.
  • A feature for a user to "request moderation for this post", or "request stricter commenting guidelines" or to indicate "this post seems controversial to me" or something.

Of course such features don't solve a problem by themselves, but they can help, and I'm more optimistic about attempts to improve a culture if the site infrastructure supports those attempts and incentivizes that improved culture.

Rewarding Exceptional Content

As noted in the LW book review bounty program [LW · GW], exceptional content on LW is potentially very valuable, so it makes sense to incentivize it. The karma system helps, but it's not enough - as can be seen when extra incentives come into play, e.g. the extra reviews [LW · GW] generated by the bounty program.

Some features beyond the karma system that could help here:

  • Reddit nowadays has a separate "awards" system which users can use to reward exceptional content. I don't like the specific implementation at all - it's full of one-upmanship of progressively more expensive awards, and posts with lots of awards just look cluttered - but one could imagine an implementation that would work here.
  • For instance, there are a number of active bounties [? · GW] on LW, but one could imagine a smaller-scale version of setting and rewarding bounties that would work better if built into the site, e.g. for Question posts (to reward the best answer), or just to gift someone money for writing a particularly important post or comment.
    • That said, this is the kind of thing that, if implemented suboptimally, could easily incentivize detrimental behavior, instead.
  • Or (like mentioned in the Controversies section), what about a system for users or mods to flag particularly high-quality or high-importance comments so they get some extra highlighting or something?

Related:

Because comments are much less discoverable than posts, lots of high-effort high-value comments, even if they're very-high-karma, get lost in the masses of LW comments and are hard to find or refer to later on.

What could be done about that?

  • For instance, if users or mods see a comment thread of exceptional and enduring value, they could flag it as such (I already occasionally see follow-up comments of the form "This is good enough for a top-level post!"), and then others (volunteers or paid contributors) could turn the best ones of those into top-level posts, with karma going to the original posters.

(To end on a meta comment: I spent >2.5h on three high-effort comments in this thread, and would be disappointed if they got lost in the shuffle. Conversely, I'm more likely to make the effort in the future if I have a sense that it paid off in some way.)

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-11-06T14:56:51.410Z · LW(p) · GW(p)

First of all, thanks for your three comments, I think they provide valuable analysis and suggestions.

Another thing I notice about controversial posts: Because of how the front page works, posts that get a lot of comments get more exposure, because they show up in recent discussion, while posts that are correct and valuable but uncontroversial are likely to get fewer comments (even if everyone who upvotes them left a simple "good post" comment) unless they somehow manage to generate discussion.

I'm not entirely sure if it's a "bug" or a "feature". On the one hand, posts that are agreed to be good and valuable get drowned out; on the other hand, perhaps it's exactly the controversial posts that deserve attention, to resolve the controversy?

One way to counteract that is to leave more simple, generic comments like "Thanks for writing this", "This was great", "I enjoyed reading this", etc. People (including me sometimes when I consider making them) worry about not adding anything substantial, but I think that's not a problem: I like seeing these comments, and the karma system should help get the more substantial comments near the top. 

That's a social suggestion though, rather than a feature that can be implemented in the website. I don't have an idea for a feature that could deal with that (the main ones currently are the 'magic' sorting on the frontpage, curation, and the frontpage recommendations, but I don't think the latter two have a big impact on this).

  • For instance, there are a number of active bounties [? · GW] on LW, but one could imagine a smaller-scale version of setting and rewarding bounties that would work better if built into the site, e.g. for Question posts (to reward the best answer), or just to gift someone money for writing a particularly important post or comment.
    • That said, this is the kind of thing that, if implemented suboptimally, could easily incentivize detrimental behavior, instead.

I like the idea (I had it myself as well) of having a feature that lets users directly give monetary rewards [LW · GW] to other users for comments and posts. One of the problems with a Reddit-style awards system is that it's still internet points at the end of the day, and there's a limit to how much internet points can be worth. 

On the other hand, money has obvious utility, and would definitely create an incentive to post very good comments and posts, and to give even more polish to very good comments and posts you would have written anyway. At the extreme, it could let particularly good and prolific users get some tangible revenue from their participation on LessWrong (a bit like having a patreon), which seems like a great thing to me.

I'm curious what you think are the suboptimal ways (and the optimal one) to implement this and what detrimental behavior they can incentivize?

Because comments are much less discoverable than posts, lots of high-effort high-value comments, even if they're very-high-karma, get lost in the masses of LW comments and are hard to find or refer to later on.

One thing that was talked about in the past that could help is an option to tag comments with tags, and have them show up on tag pages somehow. There are pros and cons and implementation details that were talked about, but generally speaking I think this is an interesting suggestion and would like to see it tried.

Replies from: MondSemmel
comment by MondSemmel · 2021-11-06T17:14:55.610Z · LW(p) · GW(p)

On the one hand, I like the gesture of commenting nice but relatively empty stuff like "this was great". On the other hand I dislike spam, and this feels kind of redundant with the karma system. Not sure what I think about this.

I'm curious what you think are the suboptimal ways (and the optimal one) to implement this and what detrimental behavior they can incentivize?

As one random example: I could pay $10 for any comment which agrees with me, and for any comment which criticizes one of my own critics. I don't even need to say this explicitly, and yet over time it would still absolutely warp discussions.

And what if a critic notices that and offers $20 each? Then we're suddenly in an arms race.

I'm not sure how to prevent failure modes like that.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-11-06T17:26:42.870Z · LW(p) · GW(p)

Oh, well, the way I imagined it the reward isn't visible and doesn't influence how comments are shown. So I guess it could hypothetically create an incentive to make comments that agree with you, but if it's not visible and has no direct influence that seems very unlikely. If it is visible then it's more likely, but it still doesn't seem like it would be a big problem. Maybe moderators could have automatic monitoring for that kind of thing, like they have for mass up/down voting?

comment by chanamessinger (cmessinger) · 2021-11-19T21:15:12.460Z · LW(p) · GW(p)

For what it's worth, I think some of those terrible ideas are great or close to great.

In particular:

  • Hire a team of well-paid moderators for a three-month high-effort experiment of responding to every bad comment with a fixed version of what a good comment making the same point would have looked like.  Flood the site with training data.
  • Make a fork of LessWrong run by me, or some other hopeless idealist that still thinks that there might be something actually good that we can get if we actually do the thing (but not if we don't).
  • Create an anonymous account with special powers called TheCultureCurators or something, and secretly give the login credentials to a small cadre of 3-12 people with good judgment and mutual faith in one another's good judgment.  Give TheCultureCurators the ability to make upvotes and downvotes of arbitrary strength, or to add notes to any comment or post à la Google Docs, or to put a number on any comment or post that indicates what karma TheCultureCurators believe that post should have.

Rob Bensinger wants me to note that he agrees.

 

The first one would be costly and annoying to lots of people but also time boxed and super interesting. Training data is really good, and very pedagogically valuable.

The second one just seems low cost to everyone except the idealist, so if they're willing, great!

The third would be controversial and complicated, but, for instance, putting a number on what karma they think a post should have wouldn't change the current voting system, would add information, and could be time boxed like the first one.

Mostly I appreciate just the generation of lots of ideas to give my brain more to chew on and a sense that bigger things are possible.

Also more generally I really resonate with "dear God, I need the other people around me to be good at this to be my best self."

 

I'm curious what you and others think of Raelifin's post about the karma system: https://www.lesswrong.com/posts/xN2sHnLupWe4Tn5we/improving-on-the-karma-system
 

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-19T21:53:12.907Z · LW(p) · GW(p)

I like it, but not as much as I like two-axis proposals, which I think can be done with a smooth enough UI that they don't impose a burden.

With something like the below, you can click to weak vote and hold to strong vote, just like we currently do, and can in one click express each of the four following positions:

  • I like/agree with this point, and furthermore think it's being expressed correctly/is in line with norms of reasoning and discourse I want to see more of on LW (dark blue)
  • I like/agree with this point, but I want to note objection with how it's being expressed/have reservations about whether it's good rationality or good discourse (pale orange)
  • I dislike/disagree with this point, and want to note objection with how it's being expressed (dark orange)
  • I dislike/disagree with this point, but want to endorse/support the way it was arrived at and expressed (pale blue)
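
A minimal sketch of how such a two-axis vote might be stored and tallied; the names and weights are assumptions, with the four colors above corresponding to the four sign combinations:

    from dataclasses import dataclass

    @dataclass
    class TwoAxisVote:
        agreement: int  # +/-1 for a click, +/-2 for a held ("strong") click
        discourse: int  # +1 endorses the reasoning/norms, -1 objects to them

    def tally(votes: list[TwoAxisVote]) -> dict[str, int]:
        return {
            "agreement": sum(v.agreement for v in votes),
            "discourse": sum(v.discourse for v in votes),
        }

    # "I disagree, but this is exactly how disagreement should be expressed",
    # plus a strong agree that also endorses the reasoning:
    votes = [TwoAxisVote(agreement=-1, discourse=+1), TwoAxisVote(agreement=+2, discourse=+1)]
    print(tally(votes))  # {'agreement': 1, 'discourse': 2}
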
Replies from: cmessinger
comment by chanamessinger (cmessinger) · 2021-11-19T21:56:10.451Z · LW(p) · GW(p)

Oh yeah, I've seen you post this before, I liked it!

Replies from: Duncan_Sabien
comment by Vladimir_Nesov · 2021-11-08T02:54:18.738Z · LW(p) · GW(p)

Nuance is the cost of precision and the bane of clarity. I think it's an error to feel positively about nuance (or something more specific like degrees of uncertainty), when it's a serious problem clogging up productive discourse, that should be burned with fire whenever it's not absolutely vital and impossible to avoid.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T03:06:16.261Z · LW(p) · GW(p)

Uh.  I want to make a nuanced response here, distinguishing between "feeling positively about nuance when it's net positive and negatively when its costs exceed its benefits, and trying to distinguish between the net positive case and the net negative case, and addressing the dynamics driving each" and so forth, but your comment above makes me hesitate.

(I also think this.)

EDIT: to clarify/sort-of-summarize, for those who don't want to click through: I think there's a compelling argument to be made that much or even the majority of intellectual progress lies in the cumulative ability to make ever-finer distinctions, i.e. increasing our capacity for nuance.  I think being opposed to nuance is startling, and in my current estimation it's approximately "being opposed to the project of LessWrong."  Since I don't believe that Vladimir is opposed to the project of LessWrong, I declare myself confused.

Replies from: Vladimir_Nesov, SaidAchmiz, SaidAchmiz
comment by Vladimir_Nesov · 2021-11-08T03:31:51.667Z · LW(p) · GW(p)

The benefits of nuance are not themselves nuance. Nuance is extremely useful, but not good in itself, and the bleed-through of its usefulness into positive affect is detrimental to clarity of thought and communication.

Capacity for nuance abstracts away this problem, so might be good in itself. (It's a capacity, something instrumentally convergent. Though things useful for agents can be dangerous for humans.)

comment by Said Achmiz (SaidAchmiz) · 2021-11-08T05:26:22.828Z · LW(p) · GW(p)

I agree with Vladimir, FWIW.

In fact, this is one of the major problems I have with—forgive me for saying so!—your own posts. They are very nuanced! But this makes them difficult, sometimes almost impossible, to understand (not to mention very long); “bane of clarity” seems exactly right to me. (Indeed, I have noticed this tendency in the writing of several members of the LW team as well, and a few others.)

You say:

I think there’s a compelling argument to be made that much or even the majority of intellectual progress lies in the cumulative ability to make ever-finer distinctions, i.e. increasing our capacity for nuance.

There is certainly something to this view. But the counterpoint is that as you make ever finer distinctions, two trends emerge:

  1. The distinctions come to matter less and less—and yet, they impose at least constant, and often increasing, cognitive costs. But this is surely perverse! Cognitive resource expenditures should be proportional to importance/impact, otherwise you end up wasting said resources—talking, and thinking, more and more, about things that matter less and less…

  2. The likelihood that the distinctions you are making, and the patterns you are seeing, are perceived inaccurately, or even are entirely imaginary, increases dramatically. We might analogize this to attempting to observe increasingly tiny (or more distant) physical objects—there comes a point where the noise inherent in our means of observation (our instruments, etc.) dominates our observations.

I think that both of these trends may be seen in discussions taking place on Less Wrong, and that they are responsible for a good share of the epistemic degradation we can see.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T06:58:39.184Z · LW(p) · GW(p)

I disagree with 1 entirely (both parts), and while 2 is sort of logically necessary, that doesn't mean the effect is as large as you imply with "increases dramatically," nor that it can't be overcome.  c.f. it's not what it looks like.

(Reply more curt than usual for brevity's sake.  =P)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-11-08T07:30:23.535Z · LW(p) · GW(p)

for brevity's sake

I think of robustness/redundancy as the opposite of nuance for the purposes of this thread. It's not the kind of redundancy where you set up a lot of context to gesture at an idea from different sides, specify the leg/trunk/tail to hopefully indicate the elephant. It's the kind of redundancy where saying this once in the first sentence should already be enough, the second sentence makes it inevitable, and the third sentence preempts an unreasonable misinterpretation that's probably logically impossible.

(But then maybe you add a second paragraph, and later write a fictional dialogue where characters discuss the same idea, and record a lecture where you present this yet again on a whiteboard. There's a lot of nuance, it adds depth by incising the grooves in the same pattern, and none of it is essential. Perhaps there are multiple levels of detail, but then there must be levels with little detail that make sense out of context, on their own, and the levels with a lot of detail must decompose into smaller self-contained points. I don't think I'm saying anything that's not tiresomely banal.)

comment by Said Achmiz (SaidAchmiz) · 2021-11-08T05:12:21.616Z · LW(p) · GW(p)

for those who don’t want to click through

Note that the linked content is inaccessible for those without a Facebook account.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T05:45:56.828Z · LW(p) · GW(p)

...false?  I just opened it in an incognito window and it worked fine.  All my posts are public.

But anyway here's the text:

Nate on Twitter, h/t Logan (transcribed for the non-avians):

Thread about a particular way in which jargon is great:

In my experience, conceptual clarity is often attained by a large number of minor viewpoint shifts.

(A compliment I once got from a research partner went something like "you just keep reframing the problem ever-so-slightly until the solution seems obvious". 

<3)

Sometimes a bunch of small shifts leave people talking a bit differently, b/c now they're thinking a bit differently. The old phrasings don't feel quite right -- maybe they conflate distinct concepts, or rely implicitly on some bad assumption, etc.

(Coarse examples: folks who think in probabilities might become awkward around definite statements of fact; people who get into NVC sometimes shift their language about thoughts and feelings. I claim more subtle linguistic shifts regularly come hand-in-hand w/ good thinking.)

I suspect this phenomenon is one cause of jargon. Eg, when a rationalist says "my model of Alice wouldn't like that" instead of "I don't think Alice would like that", the non-standard phraseology tracks a non-standard way they're thinking about Alice.

(Or, at least, I think this is true of me and of many of the folks I interact with daily. I suspect phraseology is contagious and that bystanders may pick up the alt manner of speaking w/out picking up the alt manner of thinking, etc.)

Of course, there are various other causes of jargon -- eg, it can arise from naturally-occurring shorthand in some specific context where that shorthand was useful, and then morph into a tribal signal, etc. etc.

As such, I'm ambivalent about jargon. On the one hand, I prefer my communities to be newcomer-friendly and inclusive. On the other hand, I often hear accusations of jargon as a kind of thought-policing.

"Stop using phrases that meticulously track uncommon distinctions you've made; we already have perfectly good phrases that ignore those distinctions, and your audience won't be able to tell the difference!"

No.

My internal language has a bunch of cool features that English lacks. I like these features, and speaking in a way that reflects them is part of the process of transmitting them.

Example: according to me, "my model of Alice wants chocolate" leaves Alice more space to disagree than "I think Alice wants chocolate", in part b/c the denial is "your model is wrong", rather than the more confrontational "you are wrong".

In fact, "you are wrong" is a type error in my internal tongue. My English-to-internal-tongue translator chokes when I try to run it on "you're wrong", and suggests (eg) "I disagree" or perhaps "you're wrong about whether I want chocolate".

"But everyone knows that "you're wrong" has a silent "(about X)" parenthetical!", my straw conversational partner protests. I disagree. English makes it all too easy to represent confused thoughts like "maybe I'm bad".

If I were designing a language, I would not render it easy to assign properties like "correct" to a whole person -- as opposed to, say, that person's map of some particular region of the territory.

The "my model of Alice"-style phrasing is part of a more general program of distinguishing people from their maps. I don't claim to do this perfectly, but I'm trying, and I appreciate others who are trying.

And, this is a cool program! If you've tweaked your thoughts so that it's harder to confuse someone's correctness about a specific fact with their overall goodness, that's rad, and I'd love you to leak some of your techniques to me via a niche phraseology.

There are lots of analogous language improvements to be made, and every so often a community has built some into their weird phraseology, and it's *wonderful*. I would love to encounter a lot more jargon, in this sense.

(I sometimes marvel at the growth in expressive power of languages over time, and I suspect that that growth is often spurred by jargon in this sense. Ex: the etymology of "category".)

Another part of why I flinch at jargon-policing is a suspicion that if someone regularly renders thoughts that track a distinction into words that don't, it erodes the distinction in their own head. Maintaining distinctions that your spoken language lacks is difficult!

(This is a worry that arises in me when I imagine, eg, dropping my rationalist dialect.)

In sum, my internal dialect has drifted away from American English, and that suits me just fine, tyvm. I'll do my best to be newcomer-friendly and inclusive, but I'm unwilling to drop distinctions from my words just to avoid an odd turn of phrase.

Thank you for coming to my TED talk. Maybe one day I'll learn to cram an idea into a tweet, but not today.


 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-08T08:16:16.570Z · LW(p) · GW(p)

...false? I just opened it in an incognito window and it worked fine. All my posts are public.

In a regular window (Firefox): https://dl.dropboxusercontent.com/s/jyxf86t5hah9lbc/Screen%20Shot%202021-11-08%20at%203.12.59%20AM.png?dl=0

In a private window (Firefox): https://dl.dropboxusercontent.com/s/33i7ben66877zaz/Screen%20Shot%202021-11-08%20at%203.13.46%20AM.png?dl=0

In a regular window (Opera): https://dl.dropboxusercontent.com/s/bd4uu7iu0rctizl/Screen%20Shot%202021-11-08%20at%203.14.32%20AM.png?dl=0

In a private window (Opera): https://dl.dropboxusercontent.com/s/5i3oi2przg85jbr/Screen%20Shot%202021-11-08%20at%203.15.14%20AM.png?dl=0

Firefox 78.5.0esr (Mac); Opera 80.0.4170.63 (Mac).

EDIT: Tested also with Firefox 94.0.1 (Windows) and Chrome 95.0.4638.69 (Windows), with identical results.

Your posts are not accessible without a Facebook account.

Replies from: Vaniver, Vladimir_Nesov
comment by Vaniver · 2021-11-08T23:11:01.895Z · LW(p) · GW(p)

Huh, I see the post plus a big "log in" bar at the bottom on Safari 15612.1.29.41.4 (Mac), and the same without the bar in an incognito tab Chrome 94.0.4606.71 (Mac). These don't overlap with any of the things you tried, but it's strange to me that our results are consistently different.

comment by Vladimir_Nesov · 2021-11-08T08:23:20.865Z · LW(p) · GW(p)

I can no longer see it when not logged in, even though I did before. Maybe we triggered a DDoS mitigation thingie?

Edit: Removed incorrect claim about how this worked (before seeing Said's response).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-08T08:29:14.750Z · LW(p) · GW(p)

No, this is not correct. All of my tests were conducted on a desktop (1080p) display, at maximum window width.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-11-08T08:37:16.375Z · LW(p) · GW(p)

Yes, sorry, I got too excited about the absurd hypothesis supported by two datapoints, posted too soon, then tried to reproduce, and it no longer worked at all. I had time to see the page in a Firefox incognito window on the same system where I'm logged in, and in a normal Firefox window from a different Linux username that never had Facebook logged in.

Edit: Just now it worked again twice, and after that it no longer did. Bottom line: public Facebook posts are not really public, at least today; they are only public intermittently.

comment by Viliam · 2021-11-06T23:03:52.033Z · LW(p) · GW(p)

There is a part of Sequences which I am too lazy to find now, which goes approximately like this: "If you make five maps of the same city, and you make those maps correctly, then the maps should be the same. So if you make five maps of the same city, and you find differences between them (for example, some streets A and B intersect on one map, but run parallel on another map), it means that you made a mistake somewhere, and the maps are not as good as you wish them to be. Nonetheless, you cannot fix this mistake by merely adjusting some of the maps to fit the other ones. The sameness of the maps is a desired outcome... but it must happen naturally, as a result of all maps correctly representing the same city... not artificially, as a result of adjusting the maps to fit each other."

I get a similar feeling from some of your posts (including this one, also the punch bug). It seems to me that you care a lot about being right; and that is a good thing, and it's kinda what this community is trying to be about. And you seem strongly frustrated by people coming to conclusions dramatically different from yours; which indeed means that someone is wrong about something. And I agree that this is frustrating, and it is something that wouldn't happen if we succeeded at our stated goal.

But... it happens. And I worry that if we push against this, we may be "Goodharting" our search for truth. The common agreement should happen as a result of everyone examining the facts carefully and rationally. Not as a result of peer pressure that coming to the common agreement is what rationalists should do. Like, they "should" in the sense that "this is what would automatically happen to perfectly rational agents, per Aumann's theorem", but not "should" in the sense that "they should be directly trying to achieve this".

So, disagreeing with a specific thing seems okay to me. ("You guys said X, but I am convinced that non-X, here is my evidence.") But this kind of meta commentary feels to me like pressure to do the wrong thing. ("You guys disagree with me on X, Y, Z. As aspiring rationalists, we should not be regularly disagreeing on so many things.") Even if I agree that disagreeing on too many things too often is evidence that something went wrong. Still, the only correct way to agree on X, Y, Z, is to separately discuss X and come to a conclusion, discuss Y and come to a conclusion, and discuss Z and come to a conclusion. Not some bulk update like: "shit, it's really bad to disagree on so many things, and Duncan seems to have a lot of support so I guess the wisdom of the crowd is on his side, so from now on I am going to automatically switch my opinion on everything to whatever Duncan says". Convince me by arguments, not by meta-arguments about how disagreement is wrong.

And this is not intended as a defense of any specific things I said that you might disagree with. I am just a stupid human, with limited time and attention, often posting past midnight when I should be sleeping instead. It is plausible that I am wrong at a horribly large amount of things. Sorry for that. But there is also a chance that you might be wrong about some things, or maybe we just misunderstand each other, so my updating to your (perceived) position might also be a mistake. I am also unhappy about us not being better synchronized in our perspectives, but I see it as an inevitable consequence of our imperfections, and I hope it gets better over time... but I am not going to force it.

If you convince me about specific mistakes, I hope I have the ability to change my mind. If you convince me about an embarrassing number of mistakes, I might even conclude that it is better for humanity if I simply stop writing, at least until my game improves significantly. But the meta-argument itself is not actionable for me (other than the general exhortation to be more careful, which I believe I am already trying to follow).

Replies from: Duncan_Sabien, SaidAchmiz
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T23:57:21.239Z · LW(p) · GW(p)

My sense is that I am disagreeing with (a set of) specific things.

The bulk update that I'm pushing for is not "switch my opinion to everything Duncan says," but "start looking for ways to make the smaller, each-nameable-in-its-own-right slips in rationality happen less often."

I don't think I'm making a meta-argument about disagreement being wrong, except insofar as I'm asserting a belief that LessWrong ought to be for a specific thing, and that, in the case where there is consensus about that thing, other things should be deprioritized.  I'm not even claiming that I'm definitely right about the thing LW ought to be for!  But if it's about that thing, or chooses to become so, then it needs to be less about the other thing.

Replies from: Viliam
comment by Viliam · 2021-11-07T21:45:42.288Z · LW(p) · GW(p)

If we had a consensus about "this comment is more rational, and that comment is less rational", then reminding people to upvote the rational comments and downvote the irrational comments might result in karma scores that everyone would agree with.

(Modulo the fact already mentioned somewhere in this discussion that some comments are seen by more people than other comments, which would still result in more karma for the same degree of rationality.)

(Plus some other issues, such as: what if someone writes a comment containing one rational and one irrational paragraph; should we penalize needlessly long or hard-to-read comments; what if the comment is not quite good but contains a rare and important idea; etc.)

Thing is, I don't believe we have this consensus. Some comments are obviously rational, some are obviously irrational, but there are many where different people have a different opinion.

Technically, this can be measured. Like, find a person you believe to be so rational that you are satisfied with their level of rationality, who comments and votes on LW. Then find a long thread where you both voted, and check how many comments you upvoted/downvoted/ignored the same, and how many times you disagreed (not just upvote vs downvote, but also e.g. upvote vs no vote). My guess is that you overestimate how much your votes would match.
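
A rough sketch of that measurement; the data shape (comment id mapped to a vote in {-1, 0, +1}, with 0 meaning no vote) is an assumption for illustration:

    def vote_agreement(my_votes: dict[str, int], their_votes: dict[str, int]) -> float:
        """Fraction of comments in a thread where two users' votes match exactly
        (counting 'no vote' as 0, so upvote vs. no vote counts as disagreement)."""
        comment_ids = set(my_votes) | set(their_votes)
        if not comment_ids:
            return 1.0
        matches = sum(my_votes.get(c, 0) == their_votes.get(c, 0) for c in comment_ids)
        return matches / len(comment_ids)

    # Example with made-up votes on five comments:
    mine = {"c1": 1, "c2": -1, "c3": 0, "c4": 1, "c5": 0}
    theirs = {"c1": 1, "c2": 0, "c3": 0, "c4": -1, "c5": 1}
    print(vote_agreement(mine, theirs))  # 0.4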

My understanding of your complaint is that people are often voting on comments regardless of their rationality. Which certainly happens. But in a parallel reality where all of us consistently tried our best to really only vote for good arguments... I think you would assume much greater consensus in votes than I would.

Replies from: Vladimir_Nesov, Duncan_Sabien
comment by Vladimir_Nesov · 2021-11-08T02:30:22.852Z · LW(p) · GW(p)

Rationality doesn't make sense as a property of comments. It's a quality of cognitive skills [LW · GW] that work well (and might generate comments). Any judgement of comments according to the rationality of the algorithms that generated them is an ad hominem equivocation; the comments screen off the algorithms that generated them.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T02:46:23.100Z · LW(p) · GW(p)

Mmm, I think this is a mistake.

I think that you're correct to point at a potential trap that people might slip into, of confusing the qualities of a comment with the properties of the algorithm that generated it.  I think this is a thing people do, in fact, do, and it's a projection, and it's an often-wrong projection.

But I also think that there's a straightforward thing that people mean by "this comment is more rational than that one," and I think it's a valid use of the word rational in the sense that 70+ out of 100 people [LW · GW] would interpret it as meaning what the speaker actually intended.

Something like:

  • This is more careful with its inferences than that
  • This is more justified in its conclusions than that
  • This is more self-aware about the ways in which it might be skewed or off than that
  • This is more transparent and legible than that
  • This causes me to have an easier time thinking and seeing clearly than that

... and I think "thinking about how to reliably distinguish between [this] and [that] is a worthwhile activity, and a line of inquiry that's likely to lead to promising ideas for improving the site and the community."

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-11-08T03:20:21.874Z · LW(p) · GW(p)

I'm specifically boosting the prescriptivist point about not using the word "rational" in an inflationary way that doesn't make literal sense. Comments can be valid, explicit on their own epistemic status, true, relevant to their intended context, not making well-known mistakes, and so on and so forth, but they can't be rational, for the reason I gave, in the sense of "rational" as a property of cognitive algorithms.

I think this is a mistake

Incidentally, I like the distinction between error and mistake from linguistics, where an error is systematic or deliberatively endorsed behavior, while a mistake is intermittent behavior that's not deliberatively endorsed. That would have my comment make an error, not a mistake.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T22:02:02.394Z · LW(p) · GW(p)

I agree that the consensus doesn't exist.

In part, that's why several of my suggestions depended on a small number of relatively concrete observables (like distinguishing inference from observation).

But also, I think that a substantial driver of the lack of consensus/spread of opinion lies in the fact that the population of LessWrong today, in my best estimation, contains a lot of people who "ought not to be here," not in the sense that they're bad or wrong or anything, but in the sense that a gym ought mostly only contain people interested in doing physical activity and a library ought mostly only contain people interested in looking at books.  There is some number of non-central or non-bought-in members that a given population can sustain, and right now I think LessWrong is holding more than it can handle.

I think a tighter population would still lack consensus in the way you highlight, but less so.

Replies from: Rana Dexsin
comment by Rana Dexsin · 2021-11-11T03:45:22.709Z · LW(p) · GW(p)

FWIW, I'm someone who believes myself to have the occasional useful contribution on LW, but I also have an intuitive sense of being “dangerously non-central” here, with the first word of that expanding to something like “likely to be welcomed anyway, but in a way which would do more collateral damage to community alignment (via dilution) than is broadly recognized in a way that people are willing to act on”. I apply a significant amount of secondary self-restraint on those grounds to what I post, possibly not enough (though my thoughts about what an actually appropriate strategy would be to apply here are too muddled to say that with confidence), and my emotional sense endorses my use of this restraint (in particular, it doesn't cause noticeable feelings of hostility or rejection in either direction).

I'm saying this out loud partly in case anyone else who's had similar first-person experiences would otherwise feel awkward about describing them here and therefore result in a cluster of evidence being missing; I don't know how large that group would be.

comment by Said Achmiz (SaidAchmiz) · 2021-11-06T23:11:08.043Z · LW(p) · GW(p)

There is a part of Sequences which I am too lazy to find now …

“My Kind of Reflection”.

Replies from: Yoav Ravid
comment by tailcalled · 2021-11-06T10:30:33.879Z · LW(p) · GW(p)

I agree that it is important to defend and concentrate the force of rationality.

However, I think the difficulty in the original context that this post is about might be that community drama is hard to address in a good way. One major constraint in understanding any topic is having a high flow of representative or unusually useful information from the topic. But for community drama, this is really hard, because it interferes with people's privacy, it involves events that happened in the past, it involves adding up many small interactions, it may be dependent on social relationships, etc.

As such, I think it's going to be really hard for anyone to prove or disprove the validity of complaints about community dynamics in online writing. The best approach I can think of right now is to just allow people to speak up about their impressions without too much burden of proof, to test if others have the same impressions. Maybe there are other, better approaches, but I don't think standards of argumentation can solve this without doing something about the information bottleneck.

comment by Gordon Seidoh Worley (gworley) · 2021-11-14T02:39:12.364Z · LW(p) · GW(p)

Reading this post I kinda feel like you are failing to take your own advice about gardening in a certain way.

Like it feels good to call people out on their BS and get a conversation going, but also I think there's some alternative version of these two posts that's not about building up a model and trying to convince people that something is wrong that must be done about it, and instead a version that just fights the fight at the object level against particular posts, comments, etc. as part of a long slog to change the culture through direct action that others will see and emulate through a shift in the culture.

My belief here is that you can't really cause much effective, lasting change by just telling people they're doing something that sucks and is causing a problem. Rarely will they get excited about it and take up the fight. Instead you just have to fight it out, one weed at a time, until some corner of the garden is plucked and you have a small team of folks helping you with the gardening, then expanding from there.

If LW is not that place and folks don't seem to be doing the work, then maybe LW is simply not the sort of thing that can be what you'd like it to be. Heck, maybe the thing you'd like to exist can't even exist for a bunch of reasons that aren't currently obvious (I'm not sure about this myself; haven't thought about it much).

My own experience was discovering several years ago that, yeah, rationalists, as a community, actually kinda suck at the basic skills of rationality in really obvious ways on a day to day basis. Like forget big stuff that matters like AI alignment or community norms. They just suck at like figuring out how not to constantly have untied shoelaces and other boring, minor drags on their ability to be effective agents. And that's because they're just human, with no real special powers, only they intellectually know a thing or two about how in theory they could be more effective if only they could do it on a day to day basis and not have to resort to a bunch of cheap hacks that work but also cause constant self harm by repeatedly giving oneself negative feedback in order to beat oneself into the desired shape.

Now of course not all rationalists, there's plenty of folks doing great stuff. But it's for this reason I basically just see LW as one small part that can play a role in helping a person become more fully realized in the world. And a part of that is arguably a place where folks can just kinda suck at being rationalists and others can notice this or not or not notice it for a long time and others go off in anger. Maybe this seems kinda weird, but I hold this sense that LW can only be the thing it's capable of being, because that's all any group can do. For example, I don't expect my Zen community to help me be better at making calibrated bets; it's just not designed for that and any attempt to shove it in probably wouldn't work. So maybe the thing is that the LW community just isn't designed to do the things you'd like it to do, and maybe it just can't do those things, and that's why it seems so impossible to get it to be otherwise.

Likely there is some community that could do those things, but it's hard to see how you could pull that out of the existing rationalist culture by a straight line. Seems more likely to me to require a project on the order of founding LW than on the order of, say, creating LW 2.0.

Replies from: SaidAchmiz, Duncan_Sabien
comment by Said Achmiz (SaidAchmiz) · 2021-11-14T19:20:59.357Z · LW(p) · GW(p)

FWIW, I agree with “direct action”, but that only works if the site moderators / admins are not opposed to that action (and, preferably, even support it). If they are, then “direct action” doesn’t work, and only persuasive posts have any chance of working.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-14T19:26:23.458Z · LW(p) · GW(p)

Can confirm from experience.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-14T04:11:20.576Z · LW(p) · GW(p)

I think there's some alternative version of these two posts that's not about building up a model and trying to convince people that something is wrong that must be done about it, and instead a version that just fights the fight at the object level against particular posts, comments, etc. as part of a long slog to change the culture through direct action that others will see and emulate through a shift in the culture.

Instead you just have to fight it out, one weed at a time, until some corner of the garden is plucked and you have a small team of folks helping you with the gardening, then expanding from there.

You say this as if I do not do it, a lot, and get downvoted, a lot.  

If LW is not that place and folks don't seem to be doing the work, then maybe LW is simply not the sort of thing that can be what you'd like it to be. Heck, maybe the thing you'd like to exist can't even exist for a bunch of reasons that aren't currently obvious (I'm not sure about this myself; haven't thought about it much).

This is a totally valid hypothesis imo, and one I keep very close to the forefront.

Likely there is some community that could do those things, but it's hard to see how you could pull that out of the existing rationalist culture by a straight line. Seems more likely to me to require a project on the order of founding LW than on the order of, say, creating LW 2.0.

I agree. I am for that reason not putting all my eggs in the this-working-out basket.  <3

(Appreciated and upvoted, in case my tone is not clear.)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2021-11-16T03:09:39.716Z · LW(p) · GW(p)

You say this as if I do not do it, a lot, and get downvoted, a lot.  

Haha, true, I think you and I have occasionally gotten into it directly with regards to this.

I guess to that point I'm not sure the norms you want are actually the norms the community wants. I know for myself the norms I want don't always seem to be the ones the community wants, but I guess I accept this as different people want different things, and I'm just gonna push for the world to be more how I'd like it. I guess that's what you're doing, too, but in a way that feels more forceful, especially in that you sometimes advocate for stuff not belonging rather than being allowed but argued against.

This is likely some deep difference of opinion, but I see LW like a dojo, and you have to let people mess up, and it's more effective to let people see the correction in action rather than for it to go away because you push so hard that everyone is afraid to make mistakes. I get the vibe that you want LW to make corrections so hard that we'd see engagement drop below critical mass, like what happened with LW 1.0 by other means, but that might be misinterpreting what you want to see happen (although you're pretty clear about some of your ideas, so I'm not that uncertain about this).

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-16T04:15:23.218Z · LW(p) · GW(p)

TBC, I don't [want] us to drop below critical mass.  The point of disagreement is whether a cleaner standard would result in that or not, and I have a strong suspicion that it would not, in no small part because it wouldn't change the behavior of the high-quality contributors we already have at all, and it would bring in some high-quality contributors who aren't here because the comments are Too Much Headache.

comment by Raymond D · 2021-11-06T15:03:56.333Z · LW(p) · GW(p)

I'd like to throw out some more bad ideas, with fewer disclaimers about how terrible they are because I have less reputation to hedge against.

Inline Commenting

I very strongly endorse the point that it seems bad that someone can make bad claims in a post, which are then refuted in comments which only get read by people who get all the way to the bottom and read comments. To me the obvious (wrong) solution is to let people make inline comments. If nothing else, having a good way within comments to point to what part of the post you want to address feels like a strict win, and given that we already have pingbacks I think letting sufficiently good comments exist alongside the post would also be good. This could also be the kind of thing that a poster can enable or disable, and that a reader can toggle visibility on.

Personal Reputation

I don't have great models for how reputation should or does work on LessWrong. The second of these is testable though - I'd be curious to see what happened if prominent accounts, before commenting, flipped a coin, and in half of all cases posted through a random alt. Of course it may not be a bad thing if respected community figures get more consideration, but it would just be interesting to know how much of an effect it had. There are loads of obvious ways to hedge against this, all themed around anonymisation at different levels, but I think here it's less of a 'can' and more of a 'should', so I'd be curious to hear anyone else's thoughts on that.

Curated Comments

I agree that there are comments that are epistemically not so great. There's some underlying, very complicated question about 'who gets to decide what comments people should read', and I have some democratic instinct which resists any centralisation. But it does feel like some comments are notably higher-effort, or particularly at risk of brigading. I reckon a full prediction market-style moderation system would be a mess, but it seems like it wouldn't be that hard if, when someone made a comment, they could submit it for curation as 'a particularly carefully considered, relevant, and epistemically hygienic response' which, if approved, would be bumped above non-curated comments, with some suitable minor penalty for failed attempts or notes of feedback.

Debate as a model

In formal debate (or at least the kind I did) you distinguish between a point of information and a point of order. When you try to lodge a point of information, the opposing speaker can choose whether they'd like to be interrupted, and you're just interjecting some relevant facts. A point of order, though, is made to the chair, when there's a procedural violation, and it can very much interrupt you. I'm not sure how you'd extend this to lesswrong but it feels like a useful distinction in a similar context.

Replies from: TAG
comment by TAG · 2021-11-08T17:58:02.473Z · LW(p) · GW(p)

I very strongly endorse the point that it seems bad that someone can make bad claims in a post, which are then refuted in comments which only get read by people who get all the way to the bottom and read comments. To me the obvious (wrong) solution is to let people make inline comments. If nothing else, having a good way within comments to point to what part of the post you want to address feels like a strict win, and given that we already have pingbacks I think letting sufficiently good comments exist alongside the post would also be good. This could also be the kind of thing that a poster can enable or disable, and that a reader can toggle visibility on.

You can link to comments, so that is an easy technical solution. As ever, it's mainly a cultural problem: if good quality criticism were upvoted, it would appear at the top of the comments anyway, and not be buried.

comment by MondSemmel · 2021-11-06T12:08:11.195Z · LW(p) · GW(p)

On existing LW systems, and on how they shape and incentivize discussions here:

  • Moderation-by-moderator is expensive and doesn't scale. So sites like Reddit or Less Wrong use a karma system. Some impacts of this system:
  • Whether a post or comment is good or bad is determined by a single scalar, i.e. karma.
    • So how do you vote on a comment you consider important and valuable for the most part, but which contains one sentence you consider very wrong? Maybe upvote it while explaining your disagreement?
    • Or what do you do with comments that seem high-effort but very wrongheaded? I want to incentivize effort, even if it occasionally produces wrongheaded results; but upvoting would suggest agreement.
    • What if a comment looks correct and receives lots of upvotes, but over time new info indicates that it's substantially incorrect? Past readers might no longer endorse their upvote, but you can't exactly ask them to rescind their upvotes, when they might have long since moved on from the discussion.
    • How much karma a comment gets depends to a significant extent on how many views it gets, i.e. on a) how early it's posted, b) how much traffic the post will get overall, and c) whether it's a top-level comment or a nested comment. So you can't 100% distinguish a comment of high quality vs. one which was just seen a lot. Conversely, if a comment has little karma, that may be because people haven't seen it (e.g. because it's new), or because people have seen it and not considered it valuable. How could one tell the difference? (One rough way is sketched after this list.)
  • There are various tensions between posting one big comprehensive post or comment (e.g. so all discussion is in the same place) vs. several small ones (e.g. so people can vote on or respond to separate parts separately). This is related to both the karma system and the threaded-comments system.
  • Posts vs. comments: Posts are much easier to discover and reference than comments, and conversely some valuable site meta discussions get lost in sub-sub-sub comment threads.
  • Effortful comments just take a ton of time. For example, my comments in this thread have so far easily taken >2h to write.
  • Everywhere on the Internet, New is considered Better (e.g. LW posts get traffic from Google and Hacker News, both of which prefer new content). This has various consequences, like most discussion on a post happening in the first few hours or days.
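
(Illustrative aside on the "How could one tell the difference?" question above; this is a hypothetical sketch, not a description of any existing LessWrong feature, and all numbers below are made up. One rough approach is to compare upvotes per view against a site-wide prior, with some smoothing so barely-seen comments aren't judged on a handful of datapoints.)

def smoothed_rate(upvotes: int, views: int,
                  prior_rate: float = 0.02, prior_weight: int = 200) -> float:
    """Upvotes per view, pulled toward a site-wide prior rate so that
    a comment with few views isn't judged on a handful of datapoints."""
    return (upvotes + prior_rate * prior_weight) / (views + prior_weight)

# Same karma, very different exposure (hypothetical numbers):
print(smoothed_rate(10, 200))   # ~0.035 -> above the prior: seen little, liked a lot
print(smoothed_rate(10, 5000))  # ~0.003 -> below the prior: seen a lot, rarely endorsed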

And so on. Will post suggestions in a separate thread.

comment by hath · 2021-11-07T01:17:29.382Z · LW(p) · GW(p)

A few related thoughts: if we do push further along the epistemic hygiene axis, it may be worth writing a Sequence on "Living Epistemically Clean" or something along those lines, so that the standards for discussion are clear and we have a guide to upholding those standards. Specification and implementation, if you will. Such a sequence could potentially also cover, say, implementing TAPs for epistemic hygiene in the outside world. It would probably be useful to have more info on how to address your list of "things your brain tries to do" as a community.

I notice that I value some way of allowing new users to grow accustomed to writing on LW. I imagine that my first post may not be epistemically clean enough to pass the standards you're pointing towards (which isn't a point against your standards) and that others new to the site may be discouraged by, say, a ban. The help reviewing drafts that LW currently offers helps, and that's probably a useful place to focus efforts towards helping people uphold the standards you want to put up. Potentially have multiple levels of users--specific karma amounts or moderator approval would be required to comment in some areas?

Slightly meta: I notice that this post seems to come off as somewhat finger-pointing-towards-moon-like. It works for me, because I see what you're pointing towards, but it looks like some [LW(p) · GW(p)] others don't, and therefore the post comes off as filled with applause lights and no substance. Or I might just be failing their ITT. I'm not entirely sure how to resolve the missed communication, but I figured it might be worth mentioning.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T01:24:54.207Z · LW(p) · GW(p)

I'm working on such a sequence for genpop; I had not considered tweaking it to serve as a bridge to LW particularly.  A good idea.

comment by MondSemmel · 2021-11-06T11:12:08.018Z · LW(p) · GW(p)

Misc. thoughts on the object level:

  • I understand that the community should (and hopefully could) do better, to optimize for a certain culture, that the status quo is Not Good Enough. However, I'm not clear on how widespread you consider this problem to be. My personal impression is that there are <5 controversial posts per month, posts which demand a ton from LW's culture. These are high-stakes events where we noticeably fall short of what is asked of us, and so such occasions could hence benefit a lot from e.g. better culture, or more explicit moderation. The rest of the site, it seems to me, is handled decently well by the karma system. What's your take on that?
    • E.g. regarding several of your explicitly-acknowledged-as-terrible suggestions: I'm not clear on what kind of comment you have in mind that would warrant a temporary ban but which would not get downvoted into oblivion. And similarly, the "team of well-paid moderators" part suggests to me you see a site-wide problem, not just a "we fall short in the high-stakes cases" problem.
    • In any case, if the problem is specifically the high-stakes controversial posts, that opens up a different class of solutions: e.g. a mod could flag a post as controversial, which would also trigger a stricter commenting policy or something.
  • To expand on this impression of mine: I was surprised by this [LW(p) · GW(p)] illustration of how big LW is nowadays.
    • To put this into perspective for a non-representative but popular post: this [LW · GW] post was briefly on Hacker News, got >17k unique pageviews, and now has 93 karma with 55 votes, and 10 comments by 8 unique commenters. So there are orders of magnitude between the number of passive readers, the number of votes, and the number of comments.
    • Conversely, jessicata's post on MIRI [LW · GW] has >8k unique pageviews, 61 karma with 171 votes, and 925 comments by I-will-*not*-count-those unique commenters.
    • I file these not-randomly-chosen examples in the to-me-natural-seeming categories "controversial posts in need of better culture", versus "uncontroversial posts with little active engagement".
  • Regarding your fully-acknowledged-as-terrible suggestions: There is a tradeoff between LW being inviting for newcomers, versus being the best it could be for veterans. Any community desperately needs new blood to survive. And... I don't really think we can demand much more from newcomers (who after all haven't yet had a chance to absorb the culture here)? I occasionally see comments of the form "I'm too intimidated by the high-quality discussions on this site, so I never post myself". I'm a veteran myself, so-to-speak, having registered on the site 8 years ago, and yet I only wrote my first effortful post this year (partly incentivized by the new feedback system [LW · GW]). I am in favor of attempts that make participating here easier, not harder, and would not wish to trade that off for a better culture. (So I'd e.g. much prefer incentives which reward good behavior, rather than punish poor behavior, so as not to discourage newcomers.)

I have more thoughts, including suggestions of my own, but will post those separately.

Replies from: SaidAchmiz, Yoav Ravid
comment by Said Achmiz (SaidAchmiz) · 2021-11-06T18:39:07.394Z · LW(p) · GW(p)

Conversely, jessicata’s post on MIRI has >8k unique pageviews, 61 karma with 171 votes, and 925 comments by I-will-not-count-those unique commenters.

~~121~~ 147 unique commenters, as of this writing.

EDIT: Method for count is as follows: on GreaterWrong, turn on the anti-kibitzer feature [LW · GW]. Find the lexically-last commenter identifier; in this case, it is ‘EQ’, which is the 121st identifier (26 * 4 + 17 = 121).

EDIT 2: Whoops, I can’t count. Obviously it should be 26 * 5 + 17 = 147.
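
(Aside, for anyone reproducing the arithmetic: a minimal Python sketch of the identifier-to-position conversion, assuming the anti-kibitzer identifiers run A, B, ..., Z, AA, AB, ..., i.e. bijective base-26. The function name is mine, not anything in GreaterWrong.)

def identifier_index(identifier: str) -> int:
    """Convert an identifier like 'A', 'Z', 'AA', 'EQ' to its 1-based position."""
    index = 0
    for ch in identifier.upper():
        index = index * 26 + (ord(ch) - ord("A") + 1)
    return index

assert identifier_index("Z") == 26
assert identifier_index("EQ") == 147  # 26 * 5 + 17, matching the corrected count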

Replies from: localdeity
comment by localdeity · 2021-11-06T20:32:43.346Z · LW(p) · GW(p)

Hmm, I got 146.  My method was: load the page, use command-F to expand the comments, search the page for "[+]" and click on all of them; then select all, copy, and run:

$ pbpaste | egrep '\[-\]' | sed -E 's/[0-9]+[a-z]+$//' | sort | uniq -c | sort -nr
  62 [-]jessicata
  46 [-]Benquo
  41 [-]ChristianKl
  36 [-]Unreal
  35 [-]Duncan_Sabien
  32 [-]Viliam
  30 [-]habryka
  22 [-]TekhneMakre
  22 [-]AnnaSalamon
  20 [-]Rob Bensinger
  ...

Then pipe the whole thing into wc, yielding 146.  (Also, this method captured 918 comments out of the 924 the page currently reports; there are 8 deleted comments, so I'm not sure exactly what explains the difference; oh well, it seems close enough.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-06T22:30:37.294Z · LW(p) · GW(p)

Well, for one thing, I made a dumb arithmetic mistake, so my result should’ve been 147, not 121.

That’s actually still off by 2 from yours (because GreaterWrong labels the OP as such, without a letter identifier, resulting in a total of 148). I do not know why this is.

comment by Yoav Ravid · 2021-11-06T12:14:36.885Z · LW(p) · GW(p)

How do you check how many unique pageviews a post got?

Replies from: MondSemmel
comment by MondSemmel · 2021-11-06T13:51:52.336Z · LW(p) · GW(p)

As I tried to link to in the original post, Oliver Habryka posted a screenshot of LW's Google Analytics [LW(p) · GW(p)] page for roughly October 2021. I'm referencing the two topmost linked URLs (rows 2 and 3), plus the "unique pageviews" column.

comment by Gunnar_Zarncke · 2021-11-06T10:50:00.675Z · LW(p) · GW(p)

Recap:  Concentration of force is

At each relevant moment, you want to project locally superior or overwhelming force: having the most [resources] actually present, or perhaps this just means having the right [resources] pointed in the right directions.

Taking this analogy to forum moderation, this seems to mean the ability 

  • to call in a sufficient number of participants
  • of suitable qualification (e.g., moderator or specific skill)
  • on short notice
  • to police a particular post.

(Policing in the civilian sense of, e.g., the original Metropolitan Police.)

Additionally, information is needed: The ability to detect posts requiring policing, but I understand that the LW moderation team already has this ability. 

comment by MondSemmel · 2021-11-06T10:21:51.811Z · LW(p) · GW(p)

Typo?

In fact, we're honestly well past the 80/20—LessWrong is at least an 85/37 by this point

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2021-11-06T15:27:08.075Z · LW(p) · GW(p)

I assume that meant "instead of 80% of the value for 20% of the effort, we're now at least at 85% of the value for 37% of the effort", which parses fine to me

comment by moridinamael · 2021-11-06T16:09:27.567Z · LW(p) · GW(p)

One thing we are working on in the Guild of the ROSE is a sort of accreditation or ranking system, which we informally call the "belt system" because it has many but not all of the right connotations. It is possible to have expertise in how to think better and it's desirable to have a way of recognizing people who demonstrate their expertise, for a variety of reasons. Currently the ranking system is planned to be partly based on performance within the courses we are providing, and partly based on objective tests of skill ("belt tests"). But we are still experimenting with various ideas and haven't rolled it out.

comment by samshap · 2021-11-07T02:01:28.857Z · LW(p) · GW(p)

I'm confused.

In the counterfactual where lesswrong had the epistemic and moderation standards you desire, what would have been the result of the three posts in question, say three days after they were first posted? Can you explain why, using the standards you elucidated here?

(If you've answered this elsewhere, I apologize).

Full disclosure: I read all three of those posts, and downvoted the third post (and only that one), influenced in part by some of the comments to that post.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T02:47:31.207Z · LW(p) · GW(p)

The three posts would all exist.

The first one would be near zero, karmawise, and possibly slightly negative.  It would include a substantial disclaimer, up front, noting and likely apologizing for the ways in which the first draft was misleading and underjustified.  This would be a result of the first ten comments containing at least three highly upvoted ones pointing that out, and calling for it.

The second post would be highly upvoted; Zoe's actual writing was well in line with what I think a LWer should upvote.  The comments would contain much less piling on and being real confident and rampant extrapolation; they would be focused mainly on "okay, how do we integrate this new data (which we largely take at face value and assume to be true)?  What are the multiple worlds with which it is compatible?  Which worlds that we previously thought possible have been ruled out by this new information?"

They would be doing a lot of split and commit, in other words.  Most people in the comments there seemed to have at max a single hypothesis, and to be collecting confirmation rather than seeking falsification.

(The support for Zoe would be unchanged; that part is orthogonal to epistemics.)

The third post would be in more moderate vote territory.  It would include a banner update in response to Scott's top comment (I note that Jessica has some substantive disagreement with Scott's top comment; I'm not claiming she should just capitulate!  But it would prominently feature a reintegration of the information about Vassar).  And the comments attempting to distill claims and separate out fact from inference (such as this [LW(p) · GW(p)] and this [LW(p) · GW(p)]) would have come much sooner, and would be among the top three comments.  In general, there would have been an air of "ah, hm, it seems like there's both important information here and also lots of fog and confusion; can we collaborate with Jessica at distilling out some things we can be sure of?"

Replies from: arunto
comment by arunto · 2021-11-07T16:08:51.402Z · LW(p) · GW(p)

"What are the multiple worlds with which it is compatible? Which worlds that we previously thought possible have been ruled out by this new information?"

Thanks for spelling it out like this, that is quite helpful for me. Even though the idea behind it was clear to me before, I intend to implement those two specific questions more into my thinking routines.

comment by spkoc · 2021-11-06T12:21:33.154Z · LW(p) · GW(p)

I guess long-time lurkers/new posters like me are part of the problem (though obviously I assume most online-only LW members didn't engage with a California drama post). I still think LW is a great place for discussion and just being exposed to new ideas and good feedback, but I'm probably dragging down the sanity level.

Re fear: I think the SSC situation made it clear that LW and rationalist-adjacent spaces are more public than users might think; maybe people are hesitant because they don't want to get Twitter blasted or show up as a snap in an NYTimes article two years down the line.

Re concentration of force: I would imagine raw censorship would be really hard and contentious to enforce. Probably attempting to aristocratize/oligarchize the site might work better. Maybe increase the visibility of old-time users and posters, tweak the karma needed to post/vote, highlight high karma account comments over low karma comments. 

There must be some blog post somewhere documenting every attempted antidote to Eternal September syndrome to pick and choose from. Disclaimer: norm and law changes have chaotically unpredictable effects on communities, so who knows what the outcome would be.

A democratic version of this is people being more meta in comments and replies, addressing structural concerns with what people are commenting, as you mention in the post, rewriting the original comment in a more rigorous/ironman form. Upside: this is a way to acculturate people into the community in an emotionally positive manner, rather than just by punishment. It's also much more legible and learnable than a comment deletion, which might have all sorts of reasons. Downside: this can make actual discussion really difficult and encourages pedantry, which can also be taken too far. It also requires some degree of critical mass of users willing to engage in it.

The utopian version of this to me would be people looking at a post or comment they disagree with, suspending their own opinion on it, and attempting to help the commenter improve their argument in the direction the OP was going.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-06T22:57:10.031Z · LW(p) · GW(p)

Re concentration of force: I would imagine raw censorship would be really hard and contentious to enforce. Probably attempting to aristocratize/oligarchize the site might work better. Maybe increase the visibility of old-time users and posters, tweak the karma needed to post/vote, highlight high karma account comments over low karma comments.

I think that “attempting to aristocratize/oligarchize the site” might (or might not—I am undecided) have desirable consequences… but I think it would be a mistake to base this on karma scores. (See this old comment of mine [LW(p) · GW(p)] for reasoning.)

comment by localdeity · 2021-11-06T11:55:48.643Z · LW(p) · GW(p)

It's a Trojan horse.  It's just such good-thoughts-wrapped-in-bad-forms that people give a pass to, which has the net effect of normalizing bad forms.

Should this be "bad thoughts wrapped in good forms ... normalizing bad thoughts"?

Replies from: jason-gross
comment by Jason Gross (jason-gross) · 2021-11-06T15:17:45.350Z · LW(p) · GW(p)

No. The content of the comment is good. The bad is that it was made in response to a comment that was not requesting a response or further elaboration or discussion (or at least not doing so explicitly; the quoted comment does not explicitly point at any part of the comment it's replying to as being such a request). My read of the situation is that person A shared their experience in a long comment, and person B attempted to shut them down / socially-punish them / defend against the comment by replying with a good statement about unhealthy dynamics, implying that person A was playing into that dynamic, without specifying how person A played into that dynamic, when it seems to me that in fact person A was not part of that dynamic and person B was defending themselves without actually saying what they're protecting nor how it's being threatened. This occurs to me as bad form, and I believe it's what Duncan is pointing at.

Replies from: romeostevensit
comment by romeostevensit · 2021-11-06T20:44:13.045Z · LW(p) · GW(p)

I see the vision described as something like a community of people who want to do argument mapping together, which involves lots of exposing of tacit linked premises. I think a reason no such community exists (in any appreciable size) is that that mode of discourse is more like discovery than creation, as if all of the structure of arguments is already latent within the people arguing and the structure of the argument itself. The intuition then becomes reliable structure->reliable output. Creation, generativity is much messier and involves people surfacing their reactions to things without fully accounting for the reactions others might have (incl. negative), because non-predicted reactions are, like, the whole point. There is a large class of persons I would have hedged my comment out more substantially with, but on the basis of past interactions and writing, I consider Aella an adult (in the high-bar 99th% emotional reflective ability sense). I didn't really think about how not having that context would affect how it was perceived.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T20:56:16.808Z · LW(p) · GW(p)

This is an interesting and relevant point.

In particular, it sparks in me a thought like:

Okay, if you're creating your argument and exposing it sort of at the same time, in a high-bandwidth back-and-forth, then you have to have some degree of trust in your conversational partner and audience.  Like, you have to have faith that they won't "gotcha" super hard, that you will be able to revisit and rephrase, that you will be able to change your mind, that if you skip a step they'll ask about it rather than attacking it, that they won't pour a bunch of projections and assumptions onto what you said and then be loath to let them go, etc.

Which I think actually becomes a (weak) argument for higher standards in general?  Because if the discourse is overall cleaner, then it becomes easier to do things like "interpret Romeo's comment charitably, and if you have a problem with it, just cooperatively fill in the cracks."

Whereas if things are slipping all over the place, there's a kind of race-to-the-bottom that makes it harder to extend that charity and good faith in each individual case, which makes people less willing to share their thoughts in the first place, since avoiding the gotchas takes so much effort.

Replies from: romeostevensit
comment by romeostevensit · 2021-11-06T21:57:40.867Z · LW(p) · GW(p)

Right and in my mind canon people are free to respond strongly like 'I think this is a central example of something that is really bad but am having a hard time articulating, can you say more about what you mean?' Because obviously in most communities people would be like wait what? Why would you give the person more ammo after they just said you are bad? To which you'd then have to attempt to point them to the idea that if you are actually confused as badly as they think you might be that would be super valuable to know.

comment by NcyRocks · 2021-11-07T13:18:02.350Z · LW(p) · GW(p)

I commend your vision of LessWrong.

I expect that if something like it is someday achieved, it'll mostly be done the hard way through moderation, example-setting and simply trying as hard as possible to do the right thing until most people do the right thing most of the time.

But I also expect that the design of LessWrong on a software level will go a long way towards enabling, enforcing and encouraging the kinds of cultural norms you describe. There are plenty of examples of a website's culture being heavily influenced by its design choices - Twitter's 280-character limit and resulting punishment of nuance comes to mind. It seems probable that LessWrong's design could be improved in ways that improve its culture.

So here are some of my own Terrible Ideas to improve LessWrong that I wouldn't implement as they are but might be worth tweaking or prototyping in some form.
(Having scanned the comments section, it seems that most of the changes I thought of have already been suggested, but I've decided to outline them alongside my reasoning anyway.)

  • Inline commenting. If a comment responds to a specific part of a post, it may be worth having it presented alongside that part of the post, so that vital context isn't missed by the kinds of people who don't read comments, or if a comment is buried deep under many others.  These could be automatic, chosen by the author, vetoed by the author, chosen by voting, etc. Possibly allow different types of responses, such as verification, relevant evidence/counterevidence, missed context, counterclaims, etc.
  • Multiple voting axes. Having a 1D positive-negative scale obviously loses some information - some upvote to say "this post/comment is well-written", "I agree with this post/comment", "this post/comment contains valuable information", "this post/comment should be ranked higher relative to others", and pretty much any other form of positive feedback. Downvotes might be given for different reasons again - few would upvote a comment merely for being civil, but downvoting comments for being uncivil is about as common as uncivil comments.
    Aggregating these into a total score isn't terrible, but it does lead to behaviour like "upvoting then commenting to point out specific problems with the comment so as to avoid a social motte-and-bailey" like you describe. Commenting will always be necessary to point out specific flaws, but more general feedback like "this comment makes a valuable point but is poorly written and somewhat misleading" could be expressed more easily if 'value', 'writing quality' and 'clarity' were voted on separately.
  • Add a new metadata field to posts and comments for expressing epistemic status. Ideally, require it to be filled. Have a dropdown menu with a few preset options (the ones in the CFAR manual are probably a good start), but let people fill in what they like.
  • Allow people to designate particular sentences in posts/comments as being a "claim", "hypothesis", "conjecture", "conclusion" (possibly linking to supporting claims), "crux", "meta", "assumption", etc., integrating epistemic status into a post's formatting. In my mind's eye, this looks something like Medium's 'highlight' feature, where a part of a post is shown in yellow if enough readers highlight it, except that different kinds of statements have different formatting/signposting. Pressing the "mark as assumption" button would be easier to do and to remember than typing "this is an assumption, not a statement of fact", and I also expect it'd be easier to read.
    These could have a probability or probability distribution attached, if appropriate. (A rough data sketch of this and the multiple-voting-axes idea follows this list.)
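
(As referenced above, a rough data sketch of the multiple-voting-axes and tagged-span ideas; every field, axis, and name here is invented for illustration and is not an actual LessWrong schema.)

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class SpanStatus(Enum):
    CLAIM = "claim"
    HYPOTHESIS = "hypothesis"
    CONJECTURE = "conjecture"
    CONCLUSION = "conclusion"
    CRUX = "crux"
    META = "meta"
    ASSUMPTION = "assumption"

@dataclass
class Vote:
    voter_id: str
    value: int    # -1, 0, +1: "contains valuable information"
    clarity: int  # -1, 0, +1: "well-written / easy to follow"
    norms: int    # -1, 0, +1: "upholds good discourse norms"

@dataclass
class TaggedSpan:
    start: int                           # character offsets into the post/comment body
    end: int
    status: SpanStatus
    probability: Optional[float] = None  # optional credence, e.g. 0.7

Aggregation could then stay per-axis (and per-status) instead of collapsing everything into a single karma number.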

Most of these would make interacting with the site require extra effort, but (if done right) that's a feature, not a bug. Sticking to solid cultural norms takes effort, while writing destructive posts and comments is easy if due process isn't enforced.

Still, making these kinds of changes right is very difficult and would require extensive testing to ensure that the costs and incentives encourage cultural norms that are worth encouraging. 

comment by Said Achmiz (SaidAchmiz) · 2021-11-06T18:48:47.076Z · LW(p) · GW(p)

But it’s not actually doing the thing, and as far as I can tell it’s not really trying to do the thing, either—not in the way that blue-tribe Americans are actually trying to do something about racism, from pushing for institutional change all the way down to individuals taking personal responsibility for aiding strangers-in-need.

Is this really what you want? The way that blue-tribe Americans are “actually trying to do something about racism” seems to have almost no chance of actually doing anything about racism, nor even to be designed or intended to have any chance of actually doing anything about racism. Mapping this dynamic to your project seems to invalidate the entirety of your post.

Yet the fact that (in the “racism” case) the stated goal, the intended goal, and the actual goal do not coincide, appears not to be an incidental or contingent fact, but rather seems to be integral to the effectiveness and persistence of the actual efforts…

Please note that this is not a nitpick. It seems to me that this point substantively undermines your thesis, and your hopes for solving the problem you describe. (Which is unfortunate, because I largely agree with your diagnosis of the problem.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T18:55:25.637Z · LW(p) · GW(p)

It's possible I should change that section.  The most important piece of it that I do intend is that we've tipped past

"Someone's being racist in the parking lot!  Oh, well, not my problem."

to

"Someone's being racist in the parking lot!  This threatens an aspect of the fabric that I care about, that makes it my problem."

I agree with you that a lot (most?) of what the blue tribe is doing re: racism specifically is counterproductive.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-06T19:00:42.645Z · LW(p) · GW(p)

I really don’t think that this is an accurate description of what happened (and is happening), in the “racism” case. What’s more, it elides those critical aspects of the situation to which I was trying to point.

However, possibly this is a distracting avenue of discussion at this time, so I am content to let this thread end here. I will only say that I strongly recommend thinking about the analogy between these two situations in greater detail.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T19:40:56.739Z · LW(p) · GW(p)

Personally, I found your highlighting of that paragraph in particular useful, and believe the piece is stronger now that I've removed it, and would not mind if you went into more detail.  This is a tangent/digression, but it's a relevant one, so as long as it's headlined as such I don't think it's distracting.

(This is actually one of my private "rules of engagement" and something I've got an essay in the works about: changes-of-topic are exhausting when they're not self-aware about being changes-of-topic, but if they're clearly identified then that cost goes way down.)

comment by MondSemmel · 2021-11-06T14:23:16.279Z · LW(p) · GW(p)

Meta comments on this post:

I appreciate your commitment to precision, specificity, and disclaimers, to rule out what you don't mean [LW · GW] and to pinpoint exactly what you do mean. I aspire to clear communication, too, and it's obvious to me that you've put orders of magnitude more effort and skillpoints into it than I could ever imagine doing in my own attempts to communicate. (The best I can personally do is write in a longwinded manner, since my attempts at concise language are too easy to misunderstand.)

That said, this has downsides, too. For instance, I'm not a native speaker and my reading speed is slow. By the time I reached the object-level points in this post (which was already split up! originally it was 3/4 meta or setup stuff!) I'd forgotten some of the meta points or didn't understand how they related to the object level.

Also, LW is high context, and sometimes we expect people to know things that they should know but don't, or did know but forgot. For example, only by the last sentence of this post did I vaguely understand what the stag hunt stuff was about. So I'd recommend adding one sentence at the start to explain what the title and Stag Hunt refer to.

(Another minor problem here is that without the concept handle "stag hunt", I could not necessarily find this post in the future, even if I remembered you wrote it and browsed just your essays.)

comment by Vladimir_Nesov · 2021-11-06T19:10:56.345Z · LW(p) · GW(p)

point at small things as if they are important

Taking unimportant things seriously is important. It's often unknown that something is important, or known that it isn't, and that doesn't matter for the way in which it's appropriate to work on details of what's going on with it. General principles of reasoning should work well for all examples, important or not. Ignoring details is a matter of curiosity, of allocating attention; it shouldn't impact how the attention that happens to fall on a topic treats it.

general enthusiasm for even rather dull and tedious and unsexy work

This is the distinction between "outer enthusiasm", considering a topic important, and "inner enthusiasm", integrity in working on a topic for however long you decide to do so, even if you don't consider the topic important. Inner enthusiasm is always worthwhile, and equivocation with outer enthusiasm makes it harder to notice that. Or that there should be less outer enthusiasm.

comment by Jason Gross (jason-gross) · 2021-11-06T15:01:44.170Z · LW(p) · GW(p)

Where bad commentary is not highly upvoted just because our monkey brains are cheering, and good commentary is not downvoted or ignored just because our monkey brains boo or are bored.

Suggestion: give our monkey brains a thing to do that lets them follow incentives while supporting (or at least not interfering with) the goal. Some ideas:

  • split upvotes into "this comment has the Right effect on tribal incentives" and "after separating out its impact on what side the reader updates towards, this comment is still worth reading"
  • split upvotes into flair (a la Basecamp), letting people indicate whether the upvote is "go team!" or "this made me think" or "good point" or "good point but bad technique", etc.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2021-11-11T17:40:31.584Z · LW(p) · GW(p)

I think the second bullet is called the "Slashdot" model where I've heard it, after a site that famously implemented it, but I am pretty amused by the first point too. Something like a few layers of vote would be kind of fun because of how frequently I have to split them, like

  • This is correct / some amount incorrect
  • This was a good attempt at being correct / This was an imperfect attempt at being correct
  • This demonstrates good norms / This demonstrates unwanted norms

I'm not advocating this because I haven't thought it out well, but I may return to this in the future.

comment by Richard_Ngo (ricraz) · 2021-11-06T11:48:15.262Z · LW(p) · GW(p)

Strong upvote. Also, I first noticed these types of dynamics at a large scale in the comments on Duncan's Dragon Army proposal (I'm linking to the Medium version since the LW version seems to be gone).

Replies from: habryka4
comment by habryka (habryka4) · 2021-11-06T23:24:29.329Z · LW(p) · GW(p)

That thread (the subset of it that was happening on LW 1.0) was one of the things that convinced me to build LW 2.0 (I was already working on it, but wasn't sure how much I would commit to it). Because that thread was really quite bad, and a lot of it had to do with deep site architecture things that were hard to change.

comment by Slider · 2021-11-08T18:36:14.631Z · LW(p) · GW(p)

The post has a lot of things going on. I am chipping away at it somewhat, one spoonful at a time.

I didn't engage with the main drama directly as I am not around there.

It feels like a story about a stag hunting party that encountered a bear. A bit meatier, but a different beast, and dangerous. Then some hunters go "nope, I didn't sign up for this" or just flee in abject horror. Those that stay get mauled because a small group can't deal with a full bear.

With a regular stag hunt, failure elicits frustration. Repeated failures might make starvation creep closer, but they usually still dominate doing nothing. Getting mauled by a bear is actively harmful. And it is more harmful the fewer people are there to share the hurt. With a group you can carry the wounded, or have a diluted chance of getting hit by the next swing. Each additional deserter means the ones that stay get actively hurt more.

Maybe the hunting group didn't prepare to actually defend against the prey. Maybe they could have won if everybody suppressed their horror and just followed standard pack hunting guides. Somebody in serious mode gets super annoyed when they thought they went in with a group and end up dueling a bear. And there can be a sense of "hunting is exactly what hunting packs do", "we train up on smaller prey so that we can go after larger ones". Then there is the issue where, instead of the village going out to look for the bear, the bear visits the village.

comment by Benjamin Spiegel (benjamin-spiegel) · 2021-11-07T00:43:34.182Z · LW(p) · GW(p)

I spend a lot of time around people who are not as smart as me, and I also spend a lot of time around people who are as smart as me (or smarter), but who are not as conscientious, and I also spend a lot of time around people who are as smart or smarter and as conscientious or conscientiouser, but who do not have my particular pseudo-autistic special interest and have therefore not spent the better part of the past two decades enthusiastically gathering observations and spinning up models of what happens...
...
All of which is to say that I spend a decent chunk of the time being the guy in the room who is most aware of the fuckery swirling around me, and therefore the guy who is most bothered by it... I spend a lot of time wincing, and I spend a lot of time not being able to fix The Thing That's Happening because the inferential gaps are so large that I'd have to lay down an hour's worth of context just to give the other people the capacity to notice that something is going sideways.

This thought came to me recently and I wanted to commend you for an excellent job at articulating it. Having the "wincing" experience too many times has damaged my optimistic expectations of others, the institutions they belong to, and society as a whole. It has also conjured feelings of intellectual loneliness. Having this experience and the thoughts that follow from it constitute what might be the greatest emotional challenge that I struggle with today.

comment by Adam Zerner (adamzerner) · 2021-11-11T06:09:41.545Z · LW(p) · GW(p)

Make wildly overconfident assertions that it doesn't even believe (that it will e.g. abandon immediately if forced to make a bet).

This seems like an important failure mode that I think a ton of people, including myself, fall victim to. I can't actually think of a reference post for it though. Does it exist? Really, I think it deserves its own tag [? · GW]. I recall hearing it discussed in the rationality community, but more so in passing, not as a focal point of something.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-11T08:11:23.549Z · LW(p) · GW(p)

I guess there may not be a direct, central post, but I did write this [LW · GW] not long ago.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-11-06T19:35:08.521Z · LW(p) · GW(p)
Hire a team of well-paid moderators for a three-month high-effort experiment of responding to every bad comment with a fixed version of what a good comment making the same point would have looked like.  Flood the site with training data.

What's so terrible about this idea? I imagine the main way it could go wrong is not being able to find enough people willing to do it / accidentally having too low a bar and being overwhelmed by moderators who don't know what they are doing and promote the wrong norms. But I feel like there are probably enough people on LW that if you put out a call for applications for a very lucrative position (maybe it would be a part-time position for three months, so people don't have to quit their jobs) and you had a handful of people you trusted (e.g. Lightcone?) running the show, it would probably work.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T19:47:53.965Z · LW(p) · GW(p)

On reflection, it's of a slightly different character than other items on the list.

(Each item on the list is "terrible" for somewhat different reasons/has a somewhat different failure mode.)

For that one, the main reason I felt I should disclaim it is "here's the part where I try to spend tens of thousands of dollars of someone else's money," and it feels like that should be something of a yellow flag.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-11-06T19:56:29.809Z · LW(p) · GW(p)

It's only a yellow flag if you are spending the money. If you are uninvolved and e.g. the Lightcone team is running the show, then it's fine.

(But I have no problem with you doing it either)

comment by Said Achmiz (SaidAchmiz) · 2021-11-06T18:56:31.489Z · LW(p) · GW(p)

Standards are not really popular. Most people don’t like them. Half the people here, I think, don’t even see the problem that I’m trying to point at. Or they see it, but they don’t see it as a problem.

I think a good chunk of LW’s current membership would leave or go quiet if we actually succeeded at ratcheting the standards up.

I don’t think that’s a bad thing. I’d like to be surrounded by people who are actually trying. And if LW isn’t going to be that place, and it knows that it isn’t, I’d like to know that, so I can go off and found it (or just give up).

I have expressed more or less this view several times in the past, and have, each time, been told that the policy of Less Wrong, and goals of its moderation team, are directly and explicitly opposed to it.

Do you have some reason to believe that this has changed?

(This is not a rhetorical question. It’s entirely possible that you’ve got information about shifting views of the Less Wrong team, which I am not privy to; but if so, then I think it is necessary to “lay the cards on the table”, as it were. Otherwise, it seems to me that this exercise is pointless, yes?)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T20:48:40.301Z · LW(p) · GW(p)

I do not have specific insider knowledge, no.

I do know that you and I have clashed before on what constitutes adhering to the standards, though I believe (as I believe you believe) that in such situations it should be possible to simply take the conversation up a meta level.

Replies from: Benito, SaidAchmiz
comment by Ben Pace (Benito) · 2021-11-10T05:31:17.010Z · LW(p) · GW(p)

On the topic of standards

On the current margin, I am interested in users taking more risks with individual comments and posts, not less. People take more risk when the successes are rewarded more, and when the failures are punished less. I generally encourage very low standards for individual comments, similar to how I have very low standards for individual word-choice or sentence structure. I want to reward or punish users for their body of contributions rather than pick each one apart and make sure it's "up to standard". (As an example, see how this [LW(p) · GW(p)] moderation notice is framed as a holistic evaluation of a user's contributions, not about a single comment.)

So yes, Said, I am broadly opposed to substantially increasing the standards applied to each individual comment or paragraph. I am much more in favor of raising the amount of reward you can get for putting in remarkable amounts of effort and contributing great insights and knowledge. I think your support of Eliezer by making readthesequences.com and your support of Gwern with the site re-design are examples of the kind of things I think will make people like Eliezer and Gwern feel like their best writing is rewarded, rather than facing increased punishment for their least good comments and posts.

I really don't care if someone 'misses the mark' most of the time, if they succeed the few times required on the path to greatness. Users like John Wentworth and Alex Flint produce lots of posts that receive relatively low karma, and also a (smaller) number of 'hits' that are some of my favorite site-content in recent years. I think if they attempted to write in a way that meant the bottom 80% of posts (by karma) didn't get published, then they would not feel comfortable risking the top 20% of posts, as it wouldn't be clear to them that they would meet the new standard, and it wouldn't be worth the risk.

I think one of the successes of LW 2.0 has been in reducing the pain associated with having content not be well received, while also increasing the visibility and reward for contributions that are excellent (via frontpage karma-date-weighted sorting and by curation and published books and more).

"...back in my day physics classes gave lots of hard problems that most students couldn’t do. So there was a lot of noise in particular grades, and students cared as much or more about possibly doing unusually well as doing unusually badly. One stellar performance might make your reputation, and make up for lots of other mediocre work. But today, schools give lots of assignments where most get high percent scores, and even many where most get 100% scores. In this sort of world, students know it is mostly about not making mistakes, and avoiding black marks. There is otherwise little they can do to stand out...the new focus is on the low, not the high, end of the distribution of outcomes for each event or activity."

—Robin Hanson, What is ‘Elite Overproduction’?, August 2021

On the topic of LessWrong as a place to practice rationality

Duncan, it seems to me that you want LessWrong to be (in substantial part) a dojo, where we are together putting in effort to help each other be stronger in our epistemic and instrumental processes.

I love this idea. One time where something like this happened was Jacob's babble challenges last year, where he challenged users to babble 100 ways to solve a given problem: 1 [LW · GW], 2 [LW · GW], 3 [LW · GW], 4 [LW · GW], 5 [LW · GW], 6 [LW · GW], 7 [LW · GW]. Loads of people got involved, and I loved it. There have been a few other instances where lots of people tried something difficult, such as Alkjash's Hammertime Final Exam [LW · GW] and Scott Garrabrant's Fixed Point Exercises [? · GW]. Also lsusr's The Darwin Game [LW · GW] had this energy, as did the winners of my Rationality Exercises Prize [LW(p) · GW(p)].

The above posts are strong evidence to me that people want something like a rationality dojo on LessWrong, a place to practice and become stronger. I think there is a space for this to grow on LessWrong.

To state the obvious, using the risk/reward frame above, I think just punishing people more for not doing their practice would result in far fewer great contributions to the site. But I think it's very promising to reward people more for putting in very high levels of effort into practice, by celebrating them and making their achievements legible and giving them prizes. I suspect that this could change the site culture substantially.

As Duncan knows better than almost any other person I've met, you don't teach just by explaining, and I think there's real potential for people on LW to practice as well. I'd be quite excited about a world where Duncan builds on the sorts of threads that Jacob and others have made, making rationality exercises and tests for people to practice together on LW, and building up a school of people over the years who gain great skill and produce ambitiously successful projects.

Duncan, it seems on the table to me that you think there's some promise to doing this too, here on LessWrong. Do you think you'd be up for trying something like the threads above (but with your own flavor)? I'm happy to babble ideas together with you this Friday :)

Replies from: SaidAchmiz, Duncan_Sabien, Benito, Slider
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T16:14:57.624Z · LW(p) · GW(p)

Re: standards:

What you say makes sense if, and only if, the presence of “bad” content is costless.

And that condition has (at least) these prerequisites:

  1. Everyone (or near enough) clearly sees which content is bad; everyone agrees that the content is bad, and also on what makes it bad; and thus…

  2. … the bad content is clearly and publicly judged as such, and firmly discarded, so that…

  3. … nobody adopts or integrates the bad ideas from the bad content, and nobody’s reasoning, models, practices, behavior, etc. is affected (negatively) by the bad content; and relatedly…

  4. … the bad content does not “crowd out” the good content, bad ideas from it do not outcompete opposing good ideas on corresponding topics, the bad ideas in the bad content never become the consensus views on any relevant subjects, and the bad reasoning in the bad content never affects the norms for discussion (of good content, or of anything) on the site (e.g., is never viewed by newcomers, taken to be representative, and understood to be acceptable).

If, indeed, these conditions obtain, then your perspective is eminently reasonable, and your chosen policy almost certainly the right one.

But it seems very clear to me that these conditions absolutely do not obtain. Every single thing I listed above is, in fact, entirely false, on Less Wrong.

And that means that “bad” content is far from costless. It means that such content imposes terrible costs, in fact; it means that tolerating such content means that we tolerate the corrosion of our ability to produce good content—which is to say, our ability to find what is true, and to do useful things. (And when I say “our”, I mean both “Less Wrong’s, collectively” and “the participants’, individually”.)

(Unlike your comment, which is, commendably, rife with examples, you’ll note that my reply provides no examples at all. This is intentional; I have little desire to start a fight, as it were, by “calling out” any posters or commenters. I will provide examples on request… but I suspect that anyone participating in this conversation will have little trouble coming up with more than a few examples, even without my help.)

Replies from: TurnTrout, dxu, Benito
comment by TurnTrout · 2021-11-10T16:52:32.115Z · LW(p) · GW(p)

What you say makes sense if, and only if, the presence of “bad” content is costless.

"Iff" is far too strong. I agree that the "if" claim holds. However, I think that what Ben says also makes sense if the bad/high-variance content has costs which are less than its benefits. Demanding costlessness imposes an unnecessarily high standard on positions disagreeing with your own, I think.

Contrasting your position with Ben's, I sense a potential false dichotomy. Must it be true that either we open the floodgates and allow who-knows-what on the site in order to encourage higher-variance moves, or we sternly allow only the most well-supported reasoning? I think not. What other solutions might be available? 

The first—but surely not best—to come to mind is the curation < LW review < ??? pipeline, where posts are subjected to increasing levels of scrutiny and rewarded with increasing levels of visibility. Perhaps there might be some way for people to modulate "how much they update on a post" by "the amount of scrutiny the post has received." I don't think this quite fights the corrosion you point at. But it seems like something is possible here, and in any case it seems to me too early to conclude there is only one axis of variation in responses to the situation (free-wheeling vs strict).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T17:41:53.586Z · LW(p) · GW(p)

Re: other solutions:

I have repeatedly suggested/advocated the (to me, fairly obvious) solution where (to summarize / crystallize my previous commentary on this):

  1. People post things on their personal LW blogs. Post authors have moderation powers on their personal-blog posts.

  2. Things are posted to the front page only if (but not necessarily “if”!) they are intended to be subject to the sort of scrutiny wherein we insist that posts live up to non-trivial epistemic/etc. standards (with attendant criticism, picking-apart, analysis, etc.; and also with attendant downvotes for posts judged to be bad). Importantly, post authors do not have moderation powers in this case, nor the ability to decide on moderation standards for comments on their posts. (In this case a post might be front-paged by the author, or, with the author’s consent, by the mods.)

  3. Posts that go to the front page, are evaluated by the above-described process, and judged to be unusually good, may be “curated” or what have you.

In this case, it would be proper for the community to judge personal-blog posts, that have not been subjected to “frontpage-level” scrutiny, as essentially ignorable. This would go a long way toward ensuring that posts of the “jam-packed with bullshit” type (which would either be posted to personal blogs only, or would go to the front page and be mercilessly torn apart, and clearly and publicly judged to be poor) would be largely costless.

I agree with you that this sort of setup would not quite solve the problem, and also that it would nonetheless improve the situation markedly.

But the LW team has consistently been opposed to this sort of proposal.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-10T18:26:26.697Z · LW(p) · GW(p)

It sounds to me like posting on your High-Standards-Frontpage is a very high effort endeavor, a level of effort that currently only around 3-30 posts each year receive. I've thought of this idea before with the name "LW Journal" or "LW Peer Review", which also had a part where it wasn't only commenters critiquing your post, but where we would pay a few people full-time to review the posts in this pipeline, and where there would be a clear pass/fail for each submission. (Scott Garrabrant has also suggested this idea to me in the past, as a publishing place for his papers.)

I think the main requirement I see is a correspondingly larger incentive to write something that passes this bar. Else I mostly expect the same fate to befall us as with LW 1.0, where Main became increasingly effortful and unpleasant for authors to post to, such that writers like Scott Alexander moved away to writing on their personal blogs.

(I'm generally interested to hear ideas for what would be a big reward for writers to do this sort of thing. The first ones that come to my mind are "money" and "being published in physical books".)

I do think that something like this would really help the site in certain ways; I think a lot of people have a hard time figuring out what standard to hold their posts to, and having a clearly "high standard" and "lower standard" place would help authors feel more comfortable knowing what they're aiming for in their writing. ("Shortform" was an experiment with a kind of lower-standards place.) But I don't currently see a simple way to cause a lot of people to produce high-effort high-standards content for that part of the site, beyond the amount of effort we currently receive on the highest effort posts each year.

Replies from: Vaniver, Duncan_Sabien
comment by Vaniver · 2021-11-10T18:58:14.061Z · LW(p) · GW(p)

The first ones that come to my mind are "money" and "being published in physical books".

So I think the Review is pretty good at surfacing good old content, but I think the thing Said is talking about should happen more quickly, and should be more like the Royal Society Letters or w/e.

Actually, I wonder about Rohin's newsletters as a model/seed. They attract more scrutiny to things, but they come with the reward of Rohin's summary (and, presumably, more eyeballs than it would have gotten on its own). But also people were going to be writing those things for their own reasons anyway.

I think if we had the Eliezer-curated weekly newsletter of "here are the LW posts that caught my interest plus commentary on them", we would probably think the reward and scrutiny were balanced. Of course, as with any suggestion that proposes spending Eliezer-time on something, I think this is pretty dang expensive--but the Royal Society Letters were also colossally expensive to produce.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T19:31:25.026Z · LW(p) · GW(p)

I would likely do this from my own motivation (i.e. not necessarily need money) if I were given at least one of:

a) guaranteed protection from the badgunk comments by e.g. three moderators willing to be dependably high-effort down in the comments

b) the power to hide badgunk comments pending their author rewriting them to eliminate the badgunk

c) the power to leave inline commentary on people's badgunk comments

The only thing holding me back from doing something much more like what Said proposes is "LW comment sections regularly abuse and exhaust me."  Literally that's the only barrier, and it's a substantial one.  If LW comment sections did not regularly abuse and exhaust me, such that every post feels like I need to set aside fifty hours of life and spoons just in case, then I could and would be much more prolific.

(To be clear: some people whose pushback on this post was emphatically not abusive or exhausting include supposedlyfun, Said, Elizabeth, johnswentworth, and agrippa.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T20:30:10.907Z · LW(p) · GW(p)

a) guaranteed protection from the badgunk comments by e.g. three moderators willing to be dependably high-effort down in the comments

Would you accept this substitute:

“A site/community culture where other commenters will reliably ‘call out’ (and downvote) undesirable comments, and will not be punished for doing so (and attempts to punish them for such ‘vigilante-style’ a.k.a. ‘grassroots’ ‘comment policing’ will themselves be punished—by other commenters, recursively, with support from moderators if required).”

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T20:41:05.760Z · LW(p) · GW(p)

Yes, absolutely.  Thanks for noting it.  That substitute is much more what the OP is pushing for.

EDIT: With the further claim that, once such activity is reliable and credible, its rate will also decrease.  Standards that are clearly held and reliably enforced tend to beget fewer violations in the first place; in other words, I don't think this would be a permanent uptick in policing.

comment by dxu · 2021-11-10T16:53:12.791Z · LW(p) · GW(p)

Strong-upvote. I want to register that I have had disagreements with Said in the past about this, and while I am still not completely sure whether I agree with his frame, recent developments have in fact caused me to update significantly towards his view.

I suspect this is true of others as well, such that I think Said's view (as well as associated views that may differ in specifics but agree in thrust) can no longer be treated as the minority viewpoint. (It may still be the minority view, but if so I don't expect it to be a small minority anymore, where "small" might be operationalized as "less than 1 in 5 people on this site".)

There are, at the very least, three prominent examples that spring to mind of people advocating something like "higher epistemic standards on LW": Duncan, Said, and (if I might be so bold) myself. There are, moreover, a smattering of comments from less prolific commenters, most of whom seem to express agreement with Duncan's OP. I do not think this is something that should be ignored, and I think the site may benefit from some kind of poll of its userbase, just to see exactly how much consensus there is on this.

(I recognize that the LW/Lightcone team may nonetheless choose to ignore the result of any such poll, and for the sake of clarity I wish to add that I do not view this as problematic. I do not, on the whole, think that the LW team should be reduced to a proxy that implements whatever the userbase thinks they want; my expectation is that this would produce worse long-term outcomes than if the team regularly exercised their own judgement, even if that judgement sometimes results in policy decisions that conflict with a substantial fraction of the userbase's desires. Even so, however, I claim that information of the form "X% of LW users believe Y" is useful information to have, and will at the very least play a role in any kind of healthy decision-making process.)

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-10T17:47:41.220Z · LW(p) · GW(p)

I am generally in favor of people running polls and surveys about information they're interested in. 

(Here's a very random one I did [LW · GW], and looking through search I see people have done them on general demographics, nootropics, existential risk, akrasia, and more.)

comment by Ben Pace (Benito) · 2021-11-10T18:10:20.159Z · LW(p) · GW(p)

I'm pretty confused by your numbered list, because its conditions seem directly at odds with how scientific journals have worked historically. Here's a quote from an earlier post [LW · GW] of mine:

I looked through a volume of the Proceedings of the London Mathematical Society, in particular, the volume where Turing published his groundbreaking paper proving that not all mathematical propositions are decidable (thanks to sci-hub for making it possible for me to read the papers!). My eyes looked at about 60% of the pages in the journal (about 12 papers), and not one of them disagreed with any prior work. There was:

  • A footnote that thanked an advisor for finding a flaw in a proof
  • An addendum page (to the whole volume) that consisted of a single sentence thanking someone for showing one of their theorems was a special case of someone else's theorem
  • One person who was skeptical of another person's theorem. But that theorem was by Ramanujan (who was famous for stating theorems without proofs), and the whole paper primarily found proofs of his other theorems.

There were lots of discussions of people's work but always building, or extending, or finding a neater way of achieving the same results. Never disagreement, correction, or the finding of errors.

I think that many-to-most papers published in scientific journals are basically on unhelpful questions and add little to the field, and I'd bet some of the proofs are false. And yet it seems to be very rarely published that they're unhelpful or wrong or even criticized in the journals. People build on the good content, and forget about the rest. And journals provide a publishing house for the best ideas at a given time. (Not too dissimilar to the annual LW Review.)

It seems to me that low-quality content is indeed pretty low cost if you have a good filtering mechanism for the best content, and an incentive for people to produce great content. I think on the margin I am interested in creating more of both — better filtering and stronger incentives. That is where my mind currently goes when I think of ways to improve LessWrong.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T18:28:34.497Z · LW(p) · GW(p)

This seems like a very odd response given the fact of the replication crisis, the many cases of scientific knowledge being forgotten for decades after being discovered, the rise of false or inaccurate (and sometimes quite harmful [LW · GW]!) models, etc.

I think that (in many, or most, scientific fields) people often don’t build on the good content, and don’t forget about the bad content; often, the reverse happens. It’s true that it’s “very rarely published that [bad papers/proofs are] unhelpful or wrong or even criticized in the journals”! But this is, actually, very bad, and is a huge reason why more and more scientific fields are being revealed to be full of un-replicable nonsense, egregious mistakes, and even outright fraud! The filtering mechanisms we have are actually quite poor.

The “papers in scientific journals” example / case study seems to me to yield clear, strong support for my view.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-11T02:04:58.973Z · LW(p) · GW(p)

It's very worthwhile to understand the ways in which academia has died over the last 60 years or so, and part of it definitely involves failures in the journal system. But the axis of public criticism in journals doesn't seem at all to have been what changed in the last 60 years? Insofar as you think that's a primary reason, you seem to be explaining a change by pointing to a variable that has not changed.

In replying to your proposed norms, it's not odd to point out that the very mechanism of labeling everything that's bad as bad and ensuring we have common knowledge of it, was not remotely present when science was at its most productive — when Turing was inventing his machines, or when Crick & Watson were discovering the structure of DNA. In fact it seems to have been actively opposed in the journal system, because you do not get zero criticism without active optimization for it. That is why it seems to me to be strong evidence against the system you propose.

There may be a system that works via criticizing everything bad in public, but when science was at its most successful it did not work that way; instead, it seems to me to have been based around the system I describe (a lot of submissions and high reward for success, little punishment for failure).

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T05:52:09.383Z · LW(p) · GW(p)

As a note on balance, I think you addressed the (predicted) costs of raising standards but not the costs of the existing standards.

I know of at least three people that I believe Ben Pace would consider high-quality potential contributors who are not writing and commenting much on LessWrong because the comments are exhausting and costly in approximately the ways I'm gesturing at.

(We have some disagreement but more overlap than disagreement.)

And I myself am quite likely to keep adding essays but am extremely unlikely to be anything other than a thought-dropper, at the current level of you-can-get-away-with-strawmanning-and-projecting-and-bullying-and-various-other-violations-of-existing-nominal-norms-in-the-comments-and-it'll-be-highly-upvoted.

I think that should be part of the weighing, too.  Like, the cost of people feeling cringier is real, but so too is the cost of people who don't even bother to show up, because they don't feel safe doing so.

I also note that I strongly claim that a generally cleaner epistemic pool straightforwardly allows for more experimentation and exploration.  If there's more good faith in the atmosphere (because there's less gunk causing people to correctly have their shields up) then it's actually easier rather than harder to e.g. gesture vaguely or take shortcuts or make jokes or oblique connections.

This is in fact a major driving assumption—that LessWrong could potentially be more like e.g. conversations where Ben and Duncan are just talking, and don't have to fear e.g. a mob of people adversarially interpreting our clumsy first-pass at expressing a thought and never letting it go.

Replies from: Vaniver
comment by Vaniver · 2021-11-10T17:27:51.624Z · LW(p) · GW(p)

because the comments are exhausting and costly in approximately the ways I'm gesturing at.

(We have some disagreement but more overlap than disagreement.)

As I understand Ben Pace, he's saying something like "I want people to take more risks so that we find more gold", and you're replying with something like "I think people will take more risks if we make the space more safe, by policing things like strawmanning."

It seems central to me to somehow get precise and connected to reality, like what specific rules you're suggesting policing (strawmanning? projecting? Everything in the Sequences?), and maybe look at some historic posts and comments and figure out which bits you would police and which you wouldn't. (I'm really not sure if this is in the 'overlap' space or the 'disagreement' space.)

Replies from: dxu, Duncan_Sabien
comment by dxu · 2021-11-10T17:38:28.244Z · LW(p) · GW(p)

Strong-upvote as well for the specificity request; the place where I most strongly expect attempts at "increasing standards" to fail is the point where people realize that broad agreement about direction does not necessarily translate to finer agreement about implementation, and I expect this is best avoided by sharing gears-level models as quickly and as early during the initial discussion as possible. As I wrote in another comment [LW(p) · GW(p)]:

Finally, note that at no point have I made an attempt to define what, exactly, constitutes "epistemic violations", "epistemic standards", or "epistemic hygiene". This is because this is the point where I am least confident in my model of Duncan, and separately where I also think his argument is at its weakest. It seems plausible to me that, even if [something like] Duncan's vision for LW were to be realized, there would still be substantial remaining disagreement about how to evaluate certain edge cases, and that that lack of consensus could undermine the whole enterprise.

(Though my model of Duncan does interject in response to this, "It's okay if the edge cases remain slightly blurry; those edge cases are not what matters in the vast majority of cases where I would identify a comment as being epistemically unvirtuous. What matters is that the central territory is firmed up, and right now LW is doing extremely poorly at picking even that low-hanging fruit.")

((At which point I would step aside and ask the real Duncan what he thinks of that, and whether he thinks the examples he picked out from the Leverage and CFAR/MIRI threads constitute representative samples of what he would consider "central territory".))

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T19:24:24.179Z · LW(p) · GW(p)

I note that I've already put something like ten full hours into creating exactly these types of examples, and that fact sort of keeps getting ignored/people largely never engage with them.

Perhaps you are suggesting a post that does that-and-nothing-but-that?

Replies from: Vaniver
comment by Vaniver · 2021-11-10T21:04:32.026Z · LW(p) · GW(p)

Perhaps you are suggesting a post that does that-and-nothing-but-that?

I think I am suggesting "link to things when you mention them." Like, if I want to argue with DanielFilan about whether or not a particular garment "is proper", it's really not obvious what I mean, whereas if I say "hey I don't think that complies with the US Flag Code", most of the work is done (and then we figure out whether or not section j actually applies to the garment in question, ultimately concluding that it does not).

Like, elsewhere you write:

The standard is: don't violate the straightforward list of rationality 101 principles and practices that we have a giant canon of knowledge and agreement upon.

I currently don't think there exists a 'straightforward list of rationality 101 principles and practices' that I could link someone to (in the same way that I can link them to the Flag Code, or to literal Canon Law). Like, where's the boundary between rationality 101 and rationality 102? (What fraction of rationality 101 do the current 'default comment guidelines' contain?)

Given the absence of that, I think you're imagining much more agreement than exists. Some like the "Double crux" style, but Said disliked it back in 2018 [1] [LW(p) · GW(p)] [2] [LW(p) · GW(p)] and presumably feels the same way now. Does that mean it's in the canon, like you suggest in this comment [LW(p) · GW(p)], or not?

[Edit: I recall that at some point, you had something that I think was called Sabien's Rules? I can't find it with a quick search now, but I think having something like that which you can easily link to and people can either agree with or disagree with will clarify things compared to your current gesturing at a large body of things.]

Replies from: Duncan_Sabien, SaidAchmiz
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T22:10:34.919Z · LW(p) · GW(p)

Sabien's Sins is linked in the OP (near the end, in the list of terrible ideas).

I will probably make a master linkpost somewhere in my next four LW essays.  Thanks.

Replies from: Vaniver
comment by Vaniver · 2021-11-11T17:42:07.440Z · LW(p) · GW(p)

Where? Is it the quoted lines?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-11T18:05:02.668Z · LW(p) · GW(p)

Create a pledge similar to (but better and more comprehensive than) this pledge

Replies from: Vaniver
comment by Vaniver · 2021-11-11T18:55:56.979Z · LW(p) · GW(p)

Huh, not sure how I missed that; thanks for pointing it out.

comment by Said Achmiz (SaidAchmiz) · 2021-11-10T21:26:28.036Z · LW(p) · GW(p)

Some like the “Double crux” style, but Said disliked it back in 2018 [1] [2] and presumably feels the same way now.

Indeed, my opinion of “double crux” has not improved since the linked comments were written.

comment by Ben Pace (Benito) · 2021-11-10T05:37:01.753Z · LW(p) · GW(p)

So yes, Said, I am broadly opposed to substantially increasing the standards applied to each individual comment or paragraph. I am much more in favor of raising the amount of reward you can get for putting in remarkable amounts of effort and contributing great insights and knowledge.

After finishing writing, I did have a further note to add on where I actually think I am more open to raising standards.

As well as rewarding people more for their entire body of contributions to LessWrong, I am also more open to negatively judging people more for their entire body of contributions to LessWrong. Compare two users: one who writes a couple of 100+ karma posts per year but who also has occasional very snarky and rude comments, versus one who never writes snarky comments but always kind of doesn't understand the dialogue and muddies the waters, and produces 100 comments each year. I think the latter has far more potential to be costly for the site, and the former has the potential to be far more valuable for the site, even though the worst comments of the former are much worse than the worst comments of the latter.

comment by Slider · 2021-11-10T14:50:41.646Z · LW(p) · GW(p)

To state the obvious, using the risk/reward frame above, I think just punishing people more for not doing their practice would result in far fewer great contributions to the site. But I think it's very promising to reward people more for putting in very high levels of effort into practice, by celebrating them and making their achievements legible and giving them prizes. I suspect that this could change the site culture substantially.

There was the issue with the babble challenges where I felt like effort was not being seen [LW(p) · GW(p)]. "Not knowing which norms are materially important feels capricious." There is a difference between giving a prize to a valued act and giving valued acts prizes. While it was not a total unmitigated catastrophe, I became wary and suspicious of claims like "hey, if you do X I will do Y".

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-10T17:51:49.920Z · LW(p) · GW(p)

Yeah that seems fair. I gave feedback to Jacob at the time that his interpretation of the rules didn't seem like the obvious one to me, and I think the 'streak' framing also meant that missing one week took you down to zero, which is super costly if it's the primary success metric.

Replies from: Slider
comment by Slider · 2021-11-10T18:21:03.335Z · LW(p) · GW(p)

7/7 attendance and 6/7 success resulted in 5 stars. I think the idea was that the high cost of missing out would utilise sunk cost to keep the activity going. I am not sure whether bending the rules made it closer to ideal, or whether sticking by the lines and making a fail a full reset would have done better. Or even whether the call between pass and fail was compromised by allowing "fail with reduced consequences".

comment by Said Achmiz (SaidAchmiz) · 2021-11-06T22:12:02.873Z · LW(p) · GW(p)

Ironically, I considered posting a comment to the effect that I disagree with some parts of your description of “what constitutes adhering to the standards”, but reasoned that it’s rather a moot point until and unless the meta-level issues are resolved…

(I will note that while I do agree that it tends to be possible to go up a meta level, I also think there’s a limit to how much real progress can be made that way. But this is itself already too meta, so let’s table it for now.)

Back to the point: if the Less Wrong team is still opposed to the general approach of “let’s substantially raise our standards, and if that makes many current members leave or go quiet (as surely will happen), well… so be it [or maybe even: ‘good!’]”, then isn’t the answer to “what is to be done?” necessarily “nothing, because nothing is permitted to be done”? In other words, don’t we actually already know that “LW isn’t going to be that place”?

Or is it just that this post is mostly aimed at the LW team, and intended to shift their views (so that they change their as-practiced policy w.r.t. intellectual standards)?

Replies from: Raemon
comment by Raemon · 2021-11-06T22:23:44.055Z · LW(p) · GW(p)

I assumed this post was mostly aimed at the LW team (maybe with some opportunity for other people to weigh in). I think periodically posting posts dedicated to arguing that the moderation policy should change is fine and good.

Worth noting that I've had different disagreements with you and Duncan. In both cases I think the discussion is much subtler than "increase standards: yay/nay?". It matters a lot which standards, and what they're supposed to be doing, and how different things trade off against each other.

Replies from: SaidAchmiz, Duncan_Sabien
comment by Said Achmiz (SaidAchmiz) · 2021-11-06T22:25:53.004Z · LW(p) · GW(p)

Worth noting that I’ve had different disagreements with you and Duncan. In both cases I think the discussion is much subtler than “increase standards: yay/nay?”. It matters a lot which standards, and what they’re supposed to be doing, and how different things trade off against each other.

Yes, yes, that’s all fine, but the critical point of contention here is (and previously has been) the fact that increasing standards (in one way or another) would result in many current participants leaving. To me, this is fine and even desirable. Whereas the LW mod team has consistently expressed their opposition to this outcome…

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T23:59:17.378Z · LW(p) · GW(p)

Would you mind quoting, anonymized if necessary?  Mostly because I'm curious whether the summary will seem to me to actually match the words.

Replies from: Vaniver, SaidAchmiz
comment by Vaniver · 2021-11-07T06:57:47.357Z · LW(p) · GW(p)

I think this is a fair summary of what we said years ago. I'm not sure how much people's minds have changed on the issue. I think Ben Pace's warning of Said [LW(p) · GW(p)] (you have to scroll to the bottom of the page and then expand that thread) and the related comments are probably the place to look, including habryka's comment [LW(p) · GW(p)] here.

Before checking the moderation list, the posts that come to mind (as a place to start looking for this sort of conversation) were Kensho [LW · GW] (where I think a lot of the mods viewed Said as asking the right questions) and Meta-Discussion from Circling as a Cousin to Rationality [LW · GW] (where I viewed Said as asking the right questions, and I think others didn't).

comment by Said Achmiz (SaidAchmiz) · 2021-11-07T00:35:20.533Z · LW(p) · GW(p)

Sure, I will see if I can locate some of the threads I have in mind. It may take me a day or several before I’ve got the time to do it, though.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T00:38:31.617Z · LW(p) · GW(p)

Seems low-urgency to me.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T00:39:38.515Z · LW(p) · GW(p)

I feel a little shy about the word "aimed" ... I think that I have aimed posts at the LW team before (e.g. my moderating LessWrong post) but while I was happy and excited about the idea of you guys seeing and engaging with this one, it wasn't a stealth message to the team.  It really was meant for the broader LW audience to see and have opinions on.

comment by M. Y. Zuo · 2021-11-06T15:09:02.935Z · LW(p) · GW(p)

Would the StackExchange model work? Granting ranks on the basis of productive contribution, along with privileges, ultimately recruiting moderators from the highest ranks.
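
(For concreteness, here is a minimal sketch of what a StackExchange-style privilege ladder could look like in LessWrong terms. The karma thresholds and privilege names below are hypothetical illustrations for discussion, not existing LW features or an actual proposal.)

```python
# Hypothetical sketch of a StackExchange-style privilege ladder keyed off
# accumulated karma. Thresholds and privilege names are illustrative
# assumptions, not actual LessWrong policy.

PRIVILEGE_THRESHOLDS = [
    (0, "post and comment"),
    (100, "downvote"),
    (1000, "flag comments for moderator review"),
    (5000, "suggest frontpage promotions / edit tags"),
    (20000, "eligible to be recruited as a moderator"),
]


def privileges_for(karma):
    """Return every privilege unlocked at or below the given karma total."""
    return [name for threshold, name in PRIVILEGE_THRESHOLDS if karma >= threshold]


if __name__ == "__main__":
    for karma in (50, 1500, 25000):
        print(f"{karma} karma -> {privileges_for(karma)}")
```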

comment by Wei Dai (Wei_Dai) · 2021-12-06T22:37:16.248Z · LW(p) · GW(p)

Hire a team of well-paid moderators for a three-month high-effort experiment of responding to every bad comment with a fixed version of what a good comment making the same point would have looked like. Flood the site with training data.

Maybe we can start with a smaller experiment, like a group of (paid or volunteer) moderators do this for just one post? I sometimes wish that someone would point out all the flaws in my comments so I can tell what I can improve on, but I'm not sure if that won't be so unpleasant that I'd stop wanting to participate (or there would be some other negative consequence). Doing a small experiment seems like a good first step to finding out.

Assuming such experiments go well, however, I'm still worried about possible longer-term unintended consequences of having a "high standards" culture. One that I think is fairly likely is that the standards will be selectively/unevenly enforced, against comments/posts that the "standards enforcers" disagree with, making it even more costly to make posts/comments that go against the consensus beliefs around here than it already is. I frequently see such selective enforcement/moderation in other "high standards" spaces, and am worried about the same thing happening here.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-12-07T19:57:34.421Z · LW(p) · GW(p)

I posit that "selective enforcement" is a way that unfairness expresses itself with high standards, but that the overall level of unfairness is approximately constant, i.e. raising standards is just good.  Reduces some unfairnesses, increases others, but meanwhile you actually have clear communication and good discourse.

I can't think of an operationalized experiment yet, but if someone comes up with one, I expect I would bet in the ballpark of 5:1 odds (my $5 to their $1) that an actual increase in standards does not cause an increase in unfairness.  I'd bet at 2:1 odds that it results in a detectable decrease of it.

comment by dspeyer · 2021-11-14T01:17:04.908Z · LW(p) · GW(p)

It is not clear to me what point you're making with your examples.  Have you written an object-level analysis of a failed LW conversation?  I realize that doing that in the straightforward way would antagonize a lot of people, and I recognize that might not be worth it, but maybe there's some clever workaround?  Perhaps you could create a role account for your dark side, post the sort of things you think are welcomed here but shouldn't be, confirm empirically that they are, then write a condemnation of those?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-14T01:25:46.919Z · LW(p) · GW(p)

Have you written an object-level analysis of a failed LW conversation?

I've written at least five or six such, over the years, including a couple down in the comments on this post; it doesn't seem to have much effect.  I suppose I could try making The Ultimate Analysis and just always linking to it, but I'm not sure.

comment by sapphire (deluks917) · 2021-11-08T20:35:20.761Z · LW(p) · GW(p)

All of which is to say that I spend a decent chunk of the time being the guy in the room who is most aware of the fuckery swirling around me, and therefore the guy who is most bothered by it. It's like being a native French speaker and dropping in on a high school French class in a South Carolina public school, or like being someone who just learned how to tell good kerning from bad keming.  I spend a lot of time wincing, and I spend a lot of time not being able to fix The Thing That's Happening because the inferential gaps are so large that I'd have to lay down an hour's worth of context just to give the other people the capacity to notice that something is going sideways.

 

If Duncan is going to make claims like these, it is important that people are allowed to cite his actual track record. His track record is quite poor. In my understanding, he was a close associate of a serial sexual abuser in the rationalist-adjacent community (Brent); for example, Duncan was involved in running Brent's Burning Man group. It is public record he was among the last defenders of said abuser. Duncan will dispute this characterization but I will include the full text he posted on facebook long after everyone else figured Brent out.

The context for Brent and his relationship with Duncan:

Brent Dill was a long time rationalist community member. For example, he led the structure building project at Berkeley’s Summer Solstice. 

He was also close personal friends with multiple community leaders. No one is going to release records documenting how close they were to Brent, but Duncan was quite close to Brent. For example, Duncan was involved in running Brent's Burning Man camp (Black Lotus).

Brent was involved in (at least) two relationships that were considered abusive. One of the people who had been involved with Brent, “Persephone”, made a post about abusive behavior on Facebook that did not name Brent explicitly but nevertheless was clear to insiders. At first, Brent apologized for his behavior and this apology seemed to be broadly accepted by the Bay Area rationalists.

Later multiple very serious accusations were made publicly.

Accusation 1

Accusation 2

Accusation 3

Here are some representative quotes from the accusations:

Brent had a habit of responding to me saying I wanted to break up, or that I didn’t want to do a scene, with something like “If you deprive me of this thing I want, you’re doing violence to me; please just punch me in the face so it can be universally recognised as violence.” Refusing to punch him and sticking to my guns re. my preferences would go nowhere, and often he’d self-injure to get me to agree to do what he wanted.

“My ability to be the engine for this group depends on my confidence. I am confident when I know I have money, sexual access to youthful and attractive women, and true power in my demesne. What can you do?” — Brent

Brent pushed back against that idea. He told me that previous partners had been manipulated, by their therapists, into thinking that he was abusive and that he interpreted me seeking therapy as me losing trust in him. [about Brent]

“There were fewer times, but probably still dozens, that he didn’t ensure I had a safeword when going into a really heavy scene, or disrespected my safeword when I gave it. Safewording was never safe. It routinely led to him complaining, afterwards, about the fact that I’d ended the scene, and was occasionally completely disregarded.” [about Brent]

Here is another summary by Ozy.

Duncan's Response:

It seems clear that Brent made little effort to hide his deranged ideology. Duncan was close to Brent. Despite this, Duncan was one of the last people left defending Brent. I encourage you to read his own words. Here is what he posted on Facebook long after the situation was clear:

"There's a Thing going on in my social circles. Someone I know (who wishes to remain anonymous) recommended that a version of the following statement be posted by the person at the center of the Thing. I don't think that person is likely to follow that advice. I imagine that they're pretty overwhelmed, whether they're guilty or innocent or something in between. I'm posting it here myself, instead, putting words in their mouth, to see how people respond. For instance, I can imagine people saying that it makes sense, or that it's not enough, or that it's manipulative, or that it's good but sets up bad incentives, etc. I wonder if a statement like this would be seen as meaningful, in this whole situation, or if it would simply be confirming evidence to both sides. I'm curious to hear your reactions. I am unlikely to respond to any of them. Again, this is me, a third party who knows everyone involved reasonably well, IMAGINING words that they might say, in response to prompting from another anonymous third party. None of this is secretly a sock puppet campaign, for instance. The people involved can't see this post or your replies to it. (I'm trying to figure out subtle social stuff, and NONE of them need the stress of watching us throwing a bunch of hypotheticals back and forth when their lived experience is real and present-to-them and traumatic. But at the same time, I think the rest of us HAVE to be able to discuss these things, and not to let our knee-jerk reactions run the show.) You're allowed to be emotional in your response, if you have one. You don't have to try to adhere to my usual standard of rationality. You can say things that you don't fully endorse or can't fully defend (and I will defend you from others attacking those things, though they're welcome to disagree with them). But avoid escalation/accusations/flame wars on this hypothetical thread; if things get too tribal or too fight-or-flight I'll just delete them."

-------------------------------------------------------- A statement from an imaginary version of Brent:

"Two of the women I have dated believe I have abused them. Others might feel the same. From my point of view, I think the story is more complex, and there's a lot of difficult-to-predict and difficult-to-understand stuff going on with consent and power dynamics and people asking you to do things in unusual contexts and people processing trauma. However, I agree that I hurt them, and I agree that their present pain is at least half on my shoulders. I have tried repeatedly to atone and apologize, and been unable, in part because our history understandably makes it difficult for them to let me get close enough to do so. I'm not adding a public apology here, because that just sets up a weird dynamic. But I regret what happened, did not want them to be where they are now, and would do things differently given a time machine. Here are their statements [link]. Here is mine [link]. If you are thinking of dating me, this is information you deserve to have. I don't think all of what's written there is true, but it's all believed by those who wrote it, and that counts for something even if facts are uncertain. I don't think these stories disqualify me from being a good romantic partner, or an upstanding member of society. I do think they provide evidence about my ability to tell where the line is, or to distinguish between what my partners seem to me to want in the moment versus what they will endorse having wanted in the future. If you're uncertain about your ability to stand your own ground, or susceptible to pressure and confusion, you shouldn't date me. If you think I'm an abuser, you absolutely shouldn't date me. But I don't think that all people fall into those buckets, and I don't think the answer to my past is to preemptively make everyone else's decisions for them in the future."

-------------------------------------------------------- Two things to add (since, again, I don't plan on responding much to comments):

1) I (Duncan) do think there remains genuine uncertainty about matters of fact and blame. I think that the statements of the women are entirely accurate insofar as they honestly represent the pain and trauma experienced, and what was going on for them both in the past and now. I don't think they're exaggerating what it felt like to go through what they went through. I think they deserve trust, care, support, and protection, and that they are acting in honest defense of future women who they want to protect from similar experiences. AND YET it still seems to me, given my present state of knowledge (which includes private conversations with all involved parties at various points in time), that all of the data admit of multiple explanations, not all of which require malice, and that it's my moral obligation to not throw away those explanations in which the cause is [tragedy and confusion and it's-hard-to-communicate-around-sex-and-power and people-often-mispredict-how-they-will-respond-to-things] as opposed to [overt intent-to-harm or sociopathic disregard for others]. I agree 100% that unintended harm is STILL HARM, and that risky behavior is STILL RISKY even when people consent, and that it's reasonable to take concrete action to prevent the future from resembling the past when the past caused damage. This is not a call for "no action." But we can take preventive action without incorrectly vilifying people, and I don't yet have sufficient reason to believe that vilification is the right direction to move in.

2) If you ever find yourself in a position like the one described by the women involved in this situation, and you reach out to me, I will come for you, I will get you out, and I will 100% respect your autonomy and sovereignty as I do so. I have done this in the past and I will continue to do it in the future. I don't have to know who's right and who's wrong and whose fault it is to simply help create space for people who desperately need it.

You can make up your own mind on whether Duncan should be making these grandiose claims about his ability to model social situations. It is harder to cite, but Duncan is also famously combative even on relatively unimportant topics. I am no saint either. I have made a lot of mistakes too. But I don't go around saying "All of which is to say that I spend a decent chunk of the time being the guy in the room who is most aware of the fuckery swirling around me".  I am really sorry for the ways I fucked up. People can do better; I am trying to do better. But doing better is going to require some level of humility. The track record of Duncan's ideology is not good. Duncan needs to be taking a very different approach.


 

Replies from: Benito, SaidAchmiz, agrippa
comment by Ben Pace (Benito) · 2021-11-08T21:12:46.209Z · LW(p) · GW(p)

Duncan hosted 2+ long FB threads at the time where a lot of people shared their experiences with Brent; I think those threads were some of the main ways that Berkeley rationalists oriented to the situation and shared information, and overall I think it was far better than the counterfactual of Duncan not having hosted them. I recall, but cannot easily find, Kelsey Piper also saying she was surprised by how well the threads went, and I think Duncan's contributions there and his framing and moderation were a substantial part of what went well.

It is public record he was among the last defenders of said abuser. Duncan will dispute this characterization but I will include the full text he posted on facebook long after everyone else figured Brent out.

Just to share my impression, I think it's false to say "long after everyone else figured Brent out". I think it's more accurate to say that it was "shortly after it became very socially risky to defend Brent", but I think a lot of people in the threads confessed to still being quite disoriented, and I think providing a defense of Brent was a positive move to have in the dialogue, even while I think it was not the true position. I don't think Duncan punished anyone for disagreeing with him, which is especially important; I think he did a pretty credible job of bringing it up as a perspective to engage with while not escalating into a fight.

Replies from: Benito, Duncan_Sabien
comment by Ben Pace (Benito) · 2021-11-08T21:16:08.440Z · LW(p) · GW(p)

For a bit more primary source, here are some Duncan quotes from his post atop the second FB thread:

I focused yesterday's conversation on what I suspected would be a minority viewpoint which might be lost in a rush to judgment. I wanted to preserve doubt, where doubt is often underpreserved, and remind people to gather as much information as they could before setting their personal opinions. I was afraid people would think that helping those who were suffering necessarily meant hurting Brent—that it was inextricably a zero-sum game.

and

I want to reorient now to something that I underweighted yesterday, and which is itself desperately important, and which I hope people will participate in with as much energy:

Are there concrete things I can do to help Persephone?

Are there concrete things I can do to help T?

Are there concrete things I can do to help other people in a similar boat?

Are there concrete things I can do to stop others from ending up in that same boat?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T21:53:07.793Z · LW(p) · GW(p)

I also had nothing to do with the Burning Man group (have never been to Burning Man, came out to a beach in San Francisco once along with [a dozen other people also otherwise not involved with that group] to see a geodesic dome get partially assembled?) and the confidence with which that user asserts this falsehood seems relevant.  

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T22:18:57.513Z · LW(p) · GW(p)

I also note that I was lodging a defense of "not leaping straight to judgment" far more than I was lodging a defense of Brent, i.e. I think the text above is much more consistent with "Duncan wants us to track multiple possible worlds consistent with the current set of observations" than with "Duncan has a strong and enduring preference for believing one of those worlds."

This is exactly the sort of nuance that is precious, and difficult to maintain, and I am still glad I tried to maintain it even though it ultimately turned out that the null hypothesis (Brent is an abuser) proved correct.

Replies from: Benito, tomcatfish
comment by Ben Pace (Benito) · 2021-11-08T22:22:18.938Z · LW(p) · GW(p)

Yeah, that's why I added the primary source; I went and read it and then realized that was what you were doing.

comment by Alex Vermillion (tomcatfish) · 2021-11-11T17:16:47.977Z · LW(p) · GW(p)

To add the voice of someone who is not "well known" or a community landmark, the reading I got was one of pedantry, not defense. I read it in the same way as you might say "Wow, it would be great if we had 10,000 apples, but I am not sure that our 4+x apples sum to that many. Let's keep open the possibility that x is 4 or 30".

Hopefully tacking this on makes it

  1. Easy for you to see how people might read this
  2. Easy for other people to share that same support (or to disagree, I just think the former is more likely here)
Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-11T17:26:29.053Z · LW(p) · GW(p)

I'll take it.  =)

Like, the person adversarially quoting me above wants me to be seen as a rape apologist, and I'll take "pedant" over that.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2021-11-11T17:34:28.553Z · LW(p) · GW(p)

Haha, I was being pedantic when I said that. Replace it with "You were being lawful, which I respect the hell out of" for a better compliment.

comment by Said Achmiz (SaidAchmiz) · 2021-11-08T21:59:13.685Z · LW(p) · GW(p)

This quoted material has increased my respect for Duncan. Thank you for posting it.

comment by agrippa · 2021-11-09T00:47:18.496Z · LW(p) · GW(p)

Maybe there is some norm everyone agrees with that you should not have to distance yourself from your friends if they turn out to be abusers, or not have to be open about the fact that you were their friend, or something. Maybe people are worried about the chilling effects of that.

If this norm is the case, then imo it is better enforced explicitly. 

But to put it really simply, it does seem like I should care about whether it is true that Duncan and Brent were close friends if I am gonna be taking advice from him about how to interpret and discuss accusations made in the community. So if we are not enforcing a norm that such relationships should not enter discussion, then I am unclear about the basis of downvoting here.

Replies from: Duncan_Sabien, Benito
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T02:06:43.665Z · LW(p) · GW(p)

Some facts relevant to the question of whether we were close friends:

  • We spent a grand total of well under 200 hours in each other's company over the years 2014 - 2018 (the estimate is deliberately generous), with the bulk of that estimated time coming from a month of me mostly-by-myself using tools in his garage, with him occasionally coming out to work on his geodesic dome.
  • We did not at any point embark on any large projects together.
  • We did not at any point go on trips together, or have sleepovers, or schedule "let's go grab dinner together."  We played Magic: the Gathering together once just the two of us (maybe five or six times with multiple others).
  • We did not at any point owe each other money.
  • We did spend a decent chunk of those 200 hours engaged in online discussion, often about norms and models of how-communities-work, typified by the Affordance Widths [LW · GW] post.
  • We did not describe each other as friends, either to each other or to third parties.
  • To the extent that we would occasionally discuss heavy or sensitive or emotional topics, I spent well over half of that time aggressively challenging and disagreeing with his models and perspectives, and a large number of third parties can verify this.

The word "friend" is super motte-and-bailey vulnerable; people have an extremely wide range of what they mean by it.  It's certainly reasonable under that very wide umbrella for e.g. somebody at CFAR to have said something like "Oh, yeah, Brent and Duncan are friends" based on seeing us chat at CFAR workshop afterparties, or something?  I invited him to an early Dragon Army experiment weekend along with 30 other people, for instance, though I did not invite him to join the experiment proper (and he very explicitly wanted to join it).

But I both was and continue to be objectively much more of a friend to one of the published victims, who has not believed that my interactions with Brent should cause them to trust me less or think that others should, either.  I won't summon that person here but if somebody absolutely must check I would ask that person to reach out to you, which they would likely do as a favor to me.

And even though I'm 2-5x more that-person's-friend than I was Brent's, I still wouldn't describe my relationship to that person with a word as strong as "friend."  We are friendly acquaintances, occasional allies.  We have some baseline trust.  I doubt they would let me know if they were in the hospital.  I doubt I would let them know if I was in the hospital.

So even though "friend" is defensible, "close friend" is objectively false.

As to questions of distancing yourself from people if they turn out to be abusers, I last spoke with Brent in person maybe a week and a half after the Medium posts went up, and last spoke with him online a few months after that (after substantially changing my relationship in ways that were publicly discussed; that post is from October but it was originally shared in a group of some ~50 rationalists not long after the situation blew up).  I haven't spoken to or heard from Brent since some time in 2018  EDIT: COVID messed with my sense of time, FB tells me I blocked Brent late in 2019; we weren't chatting much in the lead-up, though.

People often like to say "X is relevant" when they expect that it will support their prior belief, but then X is strangely not relevant once it turns out to be contra their expectations.

Replies from: agrippa
comment by agrippa · 2021-11-09T02:29:31.310Z · LW(p) · GW(p)

Great, thanks.

comment by Ben Pace (Benito) · 2021-11-09T02:17:30.951Z · LW(p) · GW(p)

Yeah, I don't act by that norm, and I did update negatively on the judgment of people I knew who supported Brent in the community. (I don't think of Duncan centrally in that category.)

comment by Ruby · 2021-11-06T20:04:50.955Z · LW(p) · GW(p)

Ah, yep. Fixed!

Replies from: MondSemmel
comment by MondSemmel · 2021-11-07T09:58:48.857Z · LW(p) · GW(p)

This is a weird orphaned comment, with some weird technical details: it has a "Show previous comment" button, and when I open that previous comment and click its link [LW(p) · GW(p)], its "See in context" button doesn't work. Something maybe went wrong with a mod action?

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2021-11-11T17:30:10.755Z · LW(p) · GW(p)

If you mean "[Ruby's comment] is a weird orphaned comment", then I can tell you that there is a deleted top-level comment which Ruby's comment is a reply to. I can see this through the greaterwrong viewer, so you can verify it there if you would like.

comment by Slider · 2021-11-07T18:28:39.282Z · LW(p) · GW(p)

You are knowingly rude in calling people stupid.

You might be less knowingly rude when you use "autistic" as a synonym for stupid in "do not have my particular pseudo-autistic special interest", and in roughly half of the connotation of "in a way that could be made clear to an autistic ten-year-old."

This is not cool, just as it was not cool on this occasion [LW(p) · GW(p)].

I do hope that you do not think that arguing for a higher standard makes for an acceptable break with basic hospitality norms, and I don't think you are claiming to be a cool person above the law.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T20:19:59.134Z · LW(p) · GW(p)

I was not using autistic as a synonym for stupid, and literally never have.  I do not think that autistic is a synonym for stupid.

I (extremely) agree with you that doing so is and would be rude, bad, unwelcoming, and a violation of basic hospitality norms.

Your reply indicates that I should do a little more work to show that what I was gesturing at with those comments was that "understandable to an autistic ten-year-old" seems, to me, to be a positive standard.  Your reply is evidence that I'm failing to rule out [LW · GW] the interpretation "stupid."

I have been a teacher for many decades, and have taught about half a dozen literal actual autistic ten-year-olds.  My fiancé is also autistic, and writes a blog for autists trying to navigate modern society.  I wrote a 650,000 word fanfic prominently featuring a twelve-year-old autistic OC as one of the protagonists.  I believe that I am on the spectrum, and use the term pseudo-autistic special interest literally and denotatively.

In many ways, "autistic ten-year-old" is my central archetype model of what it is to be human.  I believe that a culture which presents a literal actual autistic ten-year-old with all sorts of confusions and unpleasant surprises is failing in an extremely basic way.  I furthermore think that there are all sorts of terrible things that happen when people leave things implicit and speak falsehoods under the justification that everybody knows what we really mean.

I'm pointing at the category as "an example of people who are important, and whose needs are valid, and who are something like a canary in the coal mine for whether or not we're succeeding at all, and noting that the processes by which we would meet those needs are in fact good for everyone anyway" in the same way that you might say "this building needs to be accessible to a quadriplegic."

I apologize for the fact that that isn't coming through.  I appreciate your comment above, and the things it's standing for.  I think I just took for granted that obviously people wouldn't think it was a synonym for stupid, what with the overrepresentation of that demographic right here on the site (well, not ten-year-olds, but).

(Would appreciate but do not insist on a further reply from you, if you're willing.)

Replies from: Yoav Ravid, Slider, Slider
comment by Yoav Ravid · 2021-11-08T05:07:44.109Z · LW(p) · GW(p)

FWIW, your intention was clear to me when I read the post. And when I read Slider's comment before you commented, it was clear to me that he got it wrong (I decided to let you reply, though).

comment by Slider · 2022-01-04T18:21:26.989Z · LW(p) · GW(p)

I found an ugly mechanic manifested in me here, where I applied all kinds of inversions of my normal modes of writing. Now that I have got the bad out of my system and there has been a cooling-off period, it is time to try to repair the damage that I have caused.

The post is talking about organising against mindkilly activity, and a thing that stuck out for me was that it was a bit accusatory. The analysis of that accusatoriness didn't exactly go well, and I am going to try to retrace that process from where it seemed to first go wrong.

My "spew out inkia nd words fast" responce was triggered as I thought a process similar to somebody trying to organise a witchtrial foremost needs to quickly be put down. In Power Buys You Distance From The Crime [LW(p) · GW(p)] I was worried about a situation where somebody wants urgent action based on simplistic models and that urgency has a tendency to keep those models simple and suggest simple remedies. I fear that peopes sentiments of "post convinced me of something and I don't what" is similar to a sentiment "yeah, I want to burn whitches. I don't know what they are but I want to burn them". It possible to get people behind a plan of "burnt witch leads to good harvest next year" but the urgency to take some action is not substitute for the plan to actually work.

The discussion by now includes some more dialogue. I was supposed to be out of it, but it starts to become relevant through sheer repetition of the topic. The year-old recommendation was that "How did this happen?" is a far more justifiable question than "Whose life should we make difficult based on this?". And I am reading a sentiment of "These people shall not make our life difficult with their accusations", and this post as being more about rallying people to defend a form of activity. In the airplane-crash analogy, a pilot might be of the opinion that, just because there was a crash, they don't want to make their piloting procedures any more complex, and might fan the flames of scrutiny of air traffic control so that focus stays away from pilot procedures.

Letting airplanes fly with unexplained casualties would be a travesty, so it is warranted to launch detailed investigations when people are mysteriously lost. We also don't just ban all air traffic when there is a casualty. Implementing a fix before the investigation produces a causal story of what happened is not likely to help.

Now to the local twist.

In the original comment I thought that I had presented a clear enough situation where there is an A and a B, and B gets called A, and that is disrespectful. This seems not to have been the case. A further example of this could be calling cricket croquet. Cricket is a perfectly respectable sport. Croquet is a perfectly respectable sport. Mixing the two up can be disrespectful to them both, or shows a degree of ignorance.

However, it seems that it was taken as there being a "bad" or "contaminated" A and a neutral B, with calling B an A being disrespectful because A is an insulting thing. Referring to excess material as "trash" or "nuclear waste". Calling borrowers thieves.

Elsewhere there was an issue of whether it is okay to mention psychopathy and autism in the same breath, or as somehow mutually involved. I could also take the example of mixing up blindness and deafness: when a person says they are blind, somebody starts to raise their voice. These kinds of situations could be read as exhibiting both the "cricket" and the "nuclear waste" kind of insult, depending on attitudes towards the various conditions.

I was not using autistic as a synonym for stupid, and literally never have.  I do not think that autistic is a synonym for stupid.

I (extremely) agree with you that doing so is and would be rude, bad, unwelcoming, and a violation of basic hospitality norms.

This seems to read as though "stupid" is contaminated and can't be used in a neutral sense.

This is not super relevant unless the concept of "stupid" or anything like it is actually invoked. I would guess this could be categorised as moot until then.

I do think that the mentions of "autistic" as a positive standard reveal that these are not mere flavour differences being dealt with. If player C is good at cricket and player D is good at croquet, and I therefore conclude that player C is the better sportsman, that does place cricket and croquet on an uneven evaluation ground: cricket is harder, or excelling in it is more valuable than excelling in the other sport. And maybe somebody could argue that this evaluation can be made in a way that is not unduly discriminatory, since there is no magical guarantee that all sports are equally virtuous. But it could also be argued that there is no need to compare apples to oranges.

So if "autistic" is used in the bar as an intensifier then to that extent the "stupid" or something problematic for same kinds of reasons does enter the picture.

comment by Slider · 2021-11-07T21:22:30.950Z · LW(p) · GW(p)

I do not think you intend malice.

I do think that a purely technical or literal-minded meaning would have just used "autistic special interest". Given that you identify as being on the spectrum, that might in fact be the case. Differentiating between pseudo-autistic and actually autistic could be done for the motive of avoiding negative connotations. I hold it in high probability that your mind is doing a mini-dodge of negative connotation and you are suffering from a very mild case of internalized ableism.

I do not think that "pseudo-autistic" is a very technically expressive term. You could have people that are on the lighter end of the spectrum. However using the word "pseudo" would have tones and implications that those persons are not "really" autistic (as in false rather than shallow). I would be interesting to hear the case about using "autistic" vs "pseudo-autistic" and I would find it surprising if the case that there would be a constructive literal use for "pseudo-autistic" could be made.

I do think there is a mechanism by which "literal actual autistic ten-year-old" is a good proof of a high quality of understanding, if the autism aspect represents additional challenges to communication. Having an autistic audience can have the advantages that literal statements sink in more easily and the impact of details doesn't get glossed over. These could make the "literal actual autistic ten-year-old" bar easier to pass than the "ten-year-old" one. If the problem or standard doesn't have to do with literalness or any other associated autistic trait, then invoking it is improper.

For example, a statement like "have rules that a 10-year-old foreigner would understand" would be insulting if the rules are not expected to be heavily culture-dependent, and could be proper and non-insulting if the danger is cultural misunderstanding.

My reading of this parent comment reinforces and confirms the impression that you are using "autistic" here as an improper weakness indicator, which in the mental realm places it in the constellation of synonyms for stupid. In this kind of context "stupid" itself might not be an irrelevant characteristic to draw on, and even if it were, it would only dip to the rudeness already introduced in "I spend a lot of time around people who are not as smart as me."

Working with autistic people is not a very stellar statistical guarantee of respectful attitudes towards autistic people. Yes, such people do better than average, and yes, it directly benefits them in their activities. However, there are structural reasons why they are sometimes worse. And for the reasons I don't care about your black friends when it comes to racism, I don't care about your autistic friends when dealing with harmful impact and enforcement of neurotypical values.

Replies from: dxu, Duncan_Sabien
comment by dxu · 2021-11-07T22:06:41.267Z · LW(p) · GW(p)

I am not the author of the original post, and as such I am rather freer in my ability to express pushback to criticism of said post, without invoking social friction-costs like "But you might only be saying that to defend yourself", etc.

So, to wit: I think you are mostly-to-entirely mistaken in your construal of the sentences in question. I do not believe the sentences in question carry even the slightest implication that the word "autistic" is synonymous with, evocative of, or otherwise associated with the word "stupid". Moreover, since I am not the post's author, I don't have to hedge my judgement for fear of double-counting evidence; as such I will state outright that I consider your interpretation of the quoted sentences, not just mistaken, but unreasonable.

Let's take a look at the sentences in question. The first:

I spend a lot of time around people who are not as smart as me, and I also spend a lot of time around people who are as smart as me (or smarter), but who are not as conscientious, and I also spend a lot of time around people who are as smart or smarter and as conscientious or conscientiouser, but who do not have my particular pseudo-autistic special interest and have therefore not spent the better part of the past two decades enthusiastically gathering observations and spinning up models of what happens when you collide a bunch of monkey brains under various conditions.

There is no way to replace the word "autistic" here (with or without the "pseudo-" prefix) in a way that makes sense; doing so irrevocably modifies the sentence's meaning in a way that simply does not at all parse in the context of everything else the article is arguing for. "[People] who do not share my [stupid/pseudo-stupid] special interest" is nonsensical; there is little-to-no wiggle room for interpretation here, and certainly not for interpretations like "I believe you are using 'autistic' as a synonym for 'stupid'".

The remaining two sentences use the word in question as part of the same noun phrase, so I will group them and their treatment together:

If there are Special Cool People™ who are above the law, be explicit about that fact in a way that could be made clear to an autistic ten-year-old.

Give up, and admit that we're kinda sorta nominally about clear thinking and good discourse, but not actually/only to the extent that it's convenient and easy, because either "the community" or "some model of effectiveness" takes priority, and put that admission somewhere that an autistic ten-year-old would see it before getting the wrong idea.

And again, in neither of these two sentences can the phrase "autistic ten-year-old" be substituted with the phrase "stupid ten-year-old" without entirely and irrevocably modifying the meaning of the sentences in question. The qualifier "autistic" is performing actual semantic work in those sentences; in particular it is requesting that certain pieces of social information be conveyed in a way that meets a particular standard of legibility, using the hypothetical autistic ten-year-old as a measuring stick for that standard. Conversely, there is no corresponding standard for the phrase "stupid ten-year-old", and given this I again consider it unreasonable to suppose that the word "autistic"—which, again, is doing real semantic work in those sentences—is being used as a synonym for a word that cannot do the same work even in principle.


You may note that this comment is rather strong in its pushback. This is intended. I believe that your comment, separately and in addition to the object-level misunderstandings it presents, is an instance of a more general trend that I strongly dislike, and would prefer to see less of. The trend in question is imputing intention to others where there is none; the moment you claim to have the ability to read the mind of your interlocutor (and moreover to do so with enough precision to identify subconscious, harmful intentions) is the moment you introduce a dimension to the conversation that, in my view, should not be there. I believe both of your initial comments fall afoul of this, but especially your second one, which contains allegations such as

Differentiating between pseudo-autistic and actually autistic could be done for the motive of avoiding negative connotations.

and

I hold it in high probability that your mind is doing a mini-dodge of negative connotation and you are suffering from a very mild case of internalized ableism.

I consider both of these to be examples of epistemically corrosive behavior—and in this particular case that impression is amplified by the fact that you then proceed to inject at least mildly political undertones ("And for the reasons I don't care about your black friends when it comes to racism") based on your imagined interlocutor's implicit intentions. I am very strongly against the notion that people should take offense at the meaning they project onto someone else's sentences, and on that front I think your comment scores terribly. Strong downvote.

Replies from: Slider
comment by Slider · 2021-11-07T23:44:50.914Z · LW(p) · GW(p)

Thank you for explaining your vote, I have upvoted it.

Your analysis of trying to reword "pseudo-autistic" to "stupid" is indeed correct: it can't really be done.

I have a different bone to pick about it, and the fact that I need to separately say what the bone is speaks to my communication being inadequate. Fine expressions to me would be "autistic special interest" or "intense special interest". The danger there is to not dilute the meaning to anything near a "hobby": special interests are a separate thing, and that is what is being aimed at.

The bone is more with the use of "pseudo". If you are a bit homosexual, you are not "pseudohomosexual" or "pseudoheterosexual". You can be bisexual or homocurious; those are nearer to being actual things. But "pseudohomosexuality" is not a thing, just a weird ad-hoc construction attempt. You don't play weird "it's not gay if the balls are not touching" games if gay is something non-negative. You can be neurotypical, you can be autistic, but rather than being "pseudo-autistic" you have Asperger's, or you are a high-functioning autist, or you are a person on the autism spectrum.

Anyway, in the other thread it has surfaced that the main motivation was to avoid self-identification.

I do take note that I incorrectly failed to read that the rules contain social information, rather than that they are especially clear. Flavourings like "10-year-old with ADHD would bother to read" or "10-year-old with dyslexia would not misconstrue" would be fine and relevant. "10-year-old girl would get" would be improper. And the sensible rephrasing would be "even those that do not intuitively get social situations would heed and employ".

For the parts that went wrong, I tried to figure out what was happening, and I found that the phrase landing at a point of emphasis, with the topic being about exclusion and shooting down excuses, made the categories cross over. Hate words are said with a particular kick, and the text was going for that kind of "sink in" tempo. Now that I can locate and particularise it, this provides an account of the accident even if it does not provide excuses. I also noticed a kind of sense that "I should speak up" even if it is hard and even if it is a bit inaccurate; there was a looming danger that the impulse was too diffuse and had too much inferential distance, such that if I explained it at full length I would get lost assembling too big an entity. Thus it felt like "say something or hold your peace", and holding my peace felt really bad. I notice this triggers a kind of excuse to skip over steps, which is probably dangerous in the same sense as "going for the win", but "getting the hint out there" seems a less terrible beast.

I would like my carrots for sticking my neck out there and acknowledging that, since it is only a probability, I might not actually be telepathic. I thought also that being open about what my "game" is would make it less hidden, more manageable, and not left in the shadow of implications and connotations.

If the fallacy I am invoking to express why I reject the thing can be pointed at without summoning the spectre of politics, I am all ears. Like, does it have a name?

Replies from: Duncan_Sabien, dxu, Slider
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-08T00:05:20.228Z · LW(p) · GW(p)

Carrot awarded; I strong upvoted dxu's defense of the norms but I also have strong upvoted your post here (and reiterate once more that I like and agree with the underlying thing that was motivating you).

comment by dxu · 2021-11-08T03:01:20.263Z · LW(p) · GW(p)

Thank you for your response. After reading it, I'm much more sympathetic to your cause, and in particular it has caused me to strongly update against the hypothesis that you were speaking in bad faith. I have upvoted accordingly.

(To be clear, this was not my dominant hypothesis at the time I wrote my initial reply to you, but it did possess non-trivial probability mass, and I think it's safe to say it no longer does.)

Replies from: Slider
comment by Slider · 2021-11-08T16:26:26.270Z · LW(p) · GW(p)

Do you mean that you rolled back the strong downvote? Or do you mean you upvoted the revelation of the background?

Do you think "inject at least mildly political undertones" took place and do you still think it is epistemically corrosive? Does "inject at least mildly political undertones" impute intention to others?n (I am trying to understand under what theory I did wrong by finding instances of the theory trying to understand what parts of the rule phrase "imputing intention to others where there is none" mean (I am "revealing my game" because I want to emphasise dealing with confusion over demands for consistency))

comment by Slider · 2021-11-09T11:20:46.856Z · LW(p) · GW(p)

I thought about it a little more and I am going to be partially unrepentant [LW · GW].

There is intensifier cross-talk confusion going on, but that was not the whole reason or the main reason I was acting. I made an error in reflecting on it: I latched onto the first error that came to mind and thought "oh, that's what happened and that is why it went wrong". I am still wrong in those parts, but there were contributions from things I was also actually seeing (aka things I still believe in, even if I didn't have awareness of them before).

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-09T18:22:20.674Z · LW(p) · GW(p)

I reaffirm, as I have tried to a couple of times, that I think the thing you're pulling for is good.  And as I note in the OP, if I or other readers are unable to see the importance of the distinctions you're making: that might mean that there's nothing there, but it's also a real possibility that you see things we don't.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-07T21:58:33.155Z · LW(p) · GW(p)

I feel like you're missing me with "And for the reasons I don't care about your black friends when it comes to racism I don't care about your autistic friends when dealing with harmful impact and enforcement of neurotypical values."

I think you perceived me mentioning my autistic students and partner, and my autistic character, as an attempt to persuade you of something?  In particular, it seems like you read me as saying "this impact can't be harmful, because my intentions are good" or "this impact can't be harmful, because these people would have noticed."

Which is not what I was attempting to convey.  Obviously those things are orthogonal to questions of harmful impact and enforcement of neurotypical values.

I do note, though, that just because someone has declared something to have a certain harmful impact, or just because someone has claimed that something is tantamount to enforcement of neurotypical values, doesn't mean it is.  People are highly trustworthy when speaking to their own direct experiences but not particularly trustworthy when extrapolating out to "therefore, this impacts the population thusly."

You're raising valid hypotheses, and as I noted above, the very existence of your reaction is evidence that something could be improved.

But I still don't buy that the [harmful impact] (to the extent that it exists) has its roots in my actions versus having its roots in other people's preconceptions and projections.  And I simply disagree that I'm enforcing neurotypical values, except I guess insofar as I'm validating that there is a difference between autists and non-autists (a fuzzy, population-level statistical one, not one that allows particularly accurate predictions on the level of individuals).

i.e. this raises my sense that I ought to change something, to see more of the impact I'd like to see in the world.  It doesn't nearly as much raise my sense that I have done something wrong, and need to change my attitudes or fundamental policies.

I'm open to arguments on that, though.  And I reiterate appreciation for the thing you're standing in defense of—it is good and worth defending.

As for things I'll be changing immediately—I buy your argument that "pseudo" is not the right term to use, here, but I don't yet have a replacement that avoids signaling greater confidence in my own being-on-the-spectrum nature than I actually have.  There's something in the vein of appropriation that I was trying to dodge by not claiming that my [thing] is in the set of autistic special interests.

So I guess at the moment I'll have to use a longer sentence rather than a short phrase.

Replies from: Slider, Slider
comment by Slider · 2021-11-09T11:12:01.328Z · LW(p) · GW(p)

I do know that I asked about it, and since I asked I should wait for an answer to that. I thought about it, and elsewhere the balance of who has to do the cognitive work gets lopsided, so in the interest of getting things done, sharing what brain cycles have already been sacrificed on pushes things forward more easily.

Hypothesis A: You think that I am seeing things that are not there and are therefore semi-randomly disclosing random facts. "See, nothing under the jacket, nothing up the sleeves." I am annoyed because my specific worry doesn't get addressed, as I have trouble expressing/pinpointing it.

Making long-reaching speculations on the info that is available: the reason the bar was expressed by smashing "10-year-old" and "autistic" together isn't an abstract conceptual one; it is abstracted from many particular students. Getting an autism diagnosis can be tricky, and while autism doesn't have an onset or offset, identifiability or diagnosability varies. So a 10-year-old who is known to be an autist at the time they are a 10-year-old is likely to be obviously and strikingly autistic, and is likely to have high support needs. (For reference, one can think of taking all the 25- or 80-year-olds with a diagnosis (or whatever boundary one wants to use for "actually being" autistic) and asking where they were and what they were doing when they were 10.) Working for a long time with such people might make the challenges very concrete. This leads to this being a very stark image and memory.

So when communicating, that stark image can seem very simple, and likely the word delineates a very delicate pattern. But not everybody is a support provider, or knows much about neurotypes. Would LW be an environment where autism concepts could be expected to be generally known?

I know that a communication option I am about to use is nowhere near the one actually used. One way of expressing a very demanding accessibility requirement would be to say "So that every goddamn retard gets it". To not be needlessly hostile, we can drop the purely intensifying curse words and replace a technical synonym to get "So that everyone, even someone with a learning disability, gets it". (There is some cross-agitation in my brain going on with italics and being at a punchline place favouring the message's intention to be intense.) We can think that people with learning disabilities are valuable and respectable and all. And that is the condition you get labeled with if you are clinically stupid.

If all we wanted to communicate was a demanding standard of understandability, the previous paragraph would point in that direction. So either there are additional aspects, or there is an incorrect borrowing of meaning. There is the possibility that the information is social in nature and the bar is meant to be set especially on the social front. Examples of pure legibility would be phrases like "in no uncertain terms" or "in black and white in big letters". However, there is the shadow side if we mean "struggling in life" or "struggling in social circles" to mean "less socially able", the way "less wrong" is supposed to point to being correct. (So does that make Less Wrong a peer-support group for people who are incorrect?)

So when I am reading the article, I see that there is otherization in very near proximity to the use of autism-circle concepts ("pseudo-autistic special interest"). I have a memory that this [LW(p) · GW(p)] happened, and of things like that occasionally happening. At the time I also felt that the issue at hand doesn't have to do with autism per se, but comes as an off-topic dangler at a punchline moment that feels like it could be an intensifier. Those are things that others besides me could also see.

This guessing game would suggest that a very specific memory was being referenced. That memory reference is not very legible to the audience. And it is being presented to an audience that sometimes gets harassed for being overtly pedantic.

I do think part of this phenomenon might be that hyperfocus changes the bearability of social situations. That is, sensory and social overload in a situation one perceives to be important, and which therefore ratchets up attention, could be especially draining. Just as those with a special interest in Star Trek might find it too bothersome to discuss things with fans who can't even quote every episode verbatim, there could be a conception and an argument that LW is the forum for people with an intense interest in rationality to deepen and indulge that special interest. Then there could be a worrying current of neurotype discrimination, where "you can't try hard enough to become sufficiently interested. As neurotypicals you are not wired to have these conversations, so you need not apply". I personally think this doesn't sound promising for LW. I do think that such an arrangement can pull some things off that more inclusive arrangements could not. And I think the frustration with "screwy people" might have a shade of "reverse ableism", disablism? (And I am playing with fire here, canvassing out emotional possibilities.) Edit: thinking of the "brokenly disorganised" as exhibiting a pathological neurotype while holding that the autistic neurotype is healthy would just be an instance of ableism.

comment by Slider · 2021-11-08T17:57:47.968Z · LW(p) · GW(p)

I do note that it also works in mirror image: just because someone has declared that they are not doing something doesn't mean that they are not doing the thing.

So if not "this impact can't be harmful, because my intentions are good" or "this impact can't be harmful, because these people would have noticed." was meant, what was meant? Outside of the frowned upon telepathy I am at a loss for relevance. So I am asking rather than assuming.

You seem to be convinced that "pseudo" is not communicating what it is supposed to communicate, and that dropping the link to autism is not worth the damage it does to establishing the point. And you seem to think that, should you find yourself in a situation where you have an impulse to use the short phrase, you should just bother to write a longer sentence.

I also note you are not reworking the existing article to use a longer sentence instead.

Trying to read the article and take it seriously/literally, it does raise a question: do you realise how extreme/harsh a line it is taking?

This was in the list of terrible ideas:

Publish a set of absolute user guidelines (not suggestions) and enforce them without exception instead of enforcing them like speed limits.  e.g. any violation from a new user, or any violation from an established user not retracted immediately upon pushback = automatic three-day ban.  If there are Special Cool People™ who are above the law, be explicit about that fact in a way that could be made clear to an autistic ten-year-old.

It doesn't say "retracted after discussion" or "retracted fast". And it also doesn't say "upon establishing it happened" or "after credible claim" or "on balance of evidence". Now this is a terrible idea and I guess part of it is to not rely on "suggestion guidelines". It says "immediately upon pushback". At the beginnig of this comment-thread that as of this writing stands at -10 karma I linked to an instance where a hospitality norm was non-centrally violated in a oneline comment and it was addressed by spelling out an assumtion of unfamliarity with norms, that it is unwanted and why it is unwanted. That user edited out the most eggrecious of the unhospitality maybe on the balance that it was not needed to convey the message. One would think that ideally atleast that line would extend for those that write longer messages and which are more central to the core userbase.

I do realise that I am sounding like some bad mechanics over on the mindkilly side. Maybe I actually am, but my goal is not to control productions/writings authored by others. I see that somebody wants to attain a high standard and wants help even in the small details. I would think that ideally, when people make mistakes, they would actively hunt out tips that they are in the wrong, "say oops", and correct of their own accord. Not because they anticipate a social backlash. Not because there is a threat of a ban. "But sometimes there's a mountain there, and it's kind of wild that you can't see it." Do you want help in climbing this particular mountain?

 

And I simply disagree that I'm enforcing neurotypical values, except I guess insofar as I'm validating that there is a difference between autists and non-autists (a fuzzy, population-level statistical one, not one that allows particularly accurate predictions on the level of individuals).

Is this relevantly different from calling autists stupid? I want to tease out and explicate that, taken very literally, "that there is a difference between autists and non-autists" could be value-neutral, the kind of "social information vs general information" distinction that is going on here [LW(p) · GW(p)]. But it feels like taking it in the sense that there is a single linear axis on which the two groups don't have the same median and other statistical properties would also be justified (from able to not able, from competent to non-competent). Does this kind of distinction fall outside the scope of

I (extremely) agree with you that doing so is and would be rude, bad, unwelcoming, and a violation of basic hospitality norms.

Replies from: Slider
comment by Slider · 2021-11-08T18:16:01.131Z · LW(p) · GW(p)

I ended up still wanting to dump a scenario, even if it is a bit mind-read-y. I am doing the dirty trick of making a separate comment to tank the negative karma.

In the movie Idiocracy, the protagonist is at one point faced with the challenge of fixing farming. The population is using energy drinks to water the crops. The protagonist thinks they would be better served by using water for irrigation.

-"you are killing the plants by poisoning them"

-"No but this has got electrolytes which is what plants crave" [points at massive billboard]

-"No, but if you would just try it..."

-"But we have always done it this way. Its common knowledge everybody knows that electrolytes are good for crops"

Being the head of the agricultural sector in some sense makes them the most competent farmer around. But that is distinct from being right. I guess the current lingo fashion would be to say that instead of indirect arguments about reliability, we should talk about gears-level models of what the impact is of exposing crops to water vs exposing them to electrolytes. And in order to do this cleanly one needs to suppress the "knowledge" that electrolytes help plants.

Just as working all your life around plants doesn't guarantee good cropping, working all your life with and for autists doesn't guarantee a good attitude. The spinning of the gears doesn't care where you have been lurking.