Posts

Overconfidence is Deceit 2021-02-17T10:45:04.276Z
CFAR Participant Handbook now available to all 2020-01-03T15:43:44.618Z
Willing to share some words that changed your beliefs/behavior? 2019-03-23T02:08:37.437Z
In My Culture 2019-03-07T07:22:42.982Z
Double Crux — A Strategy for Resolving Disagreement 2017-01-02T04:37:25.683Z

Comments

Comment by Duncan_Sabien on Overconfidence is Deceit · 2021-02-22T02:12:37.271Z · LW · GW

I'm feeling demoralized by Ben and Scott's comments (and Christian's), which I interpret as being primarily framed as "in opposition to the OP and the worldview that generated it," and which seem to me to be not at all in opposition to the OP, but rather to something like preexisting schemas that had the misfortune to be triggered by it.

Both Scott's and Ben's thoughts ring to me as almost entirely true, and also separately valuable, and I have far, far more agreement with them than disagreement, and they are the sort of thoughts I would usually love to sit down and wrestle with and try to collaborate on.  I am strong upvoting them both.

But I feel caught in this unpleasant bind where I am telling myself that I first have to go back and separate out the three conversations—where I have to prove that they're three separate conversations, rather than it being clear that I said "X" and Ben said "By the way, I have a lot of thoughts about W and Y, which are (obviously) quite close to X" and Scott said "And I have a lot of thoughts about X' and X''."

Like, from my perspective it seems that there are a bunch of valid concerns being raised that are not downstream of my assertions and my proposals, and I don't want to have to defend against them, but feel like if I don't, they will in fact go down as points against those assertions and proposals.  People will take them as unanswered rebuttals, without noticing that approximately everything they're specifically arguing against, I also agree is bad. Those bad things might very well be downstream of e.g. what would happen, pragmatically speaking, if you tried to adopt the policies suggested, but there's a difference between "what I assert Policy X will degenerate to, given [a, b, c] about the human condition" and "Policy X."

(Jim made this distinction, and I appreciated it, and strong upvoted that, too.)

And for some reason, I have a very hard time mustering any enthusiasm at all for both Ben's and Scott's proposed conversations while they seem to me to be masquerading as my conversation.  Like, as long as they are registering as direct responses, when they seem to me to be riffs.

I think I would deeply enjoy engaging with them, if it were common knowledge that they are riffs.  I reiterate that they seem, to me, to contain large amounts of useful insight.

I think that I would even deeply enjoy engaging with them right here.  They're certainly on topic in a not-even-particularly-broad sense.

But I am extremely tired of what-feels-to-me like riffs being put on [my idea's tab], and of the effort involved in separating out the threads.  And I do not think it is a result of e.g. a personal failure to be clear in my own claims, such that if I wrote better or differently this would stop happening to me.  I keep looking for a context where, if I say A and it makes people think of B and C, we can talk about A and B and C, and not immediately lose track of the distinctions between them.

EDIT: I should be more fair to Scott, who did indeed start his post out with a frame pretty close to the one I'm requesting.  I think I would take that more meaningfully if I were less tired to start with.  But also it being "a response to Scott's model of Duncan's beliefs about how epistemic communities work, and a couple of Duncan's recent Facebook posts" just kind of bumps the question back one level; I feel fairly confident that the same sort of slippery rounding-off is going on there, too (since, again, I almost entirely agree with his commentary, and yet still wrote this very essay).  Our disagreement is not where (I think) Ben and Scott think that it lies.

I don't know what to do about any of that, so I wrote this comment here.  Epistemic status: exhausted.

Comment by Duncan_Sabien on Overconfidence is Deceit · 2021-02-21T08:12:16.824Z · LW · GW

Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm?

Almost everyone's response to COVID, including institutions, to the tune of many preventable deaths.

Almost everything produced by the red tribe in 2020, to the tune of significant damage to the social fabric.

Comment by Duncan_Sabien on Overconfidence is Deceit · 2021-02-19T17:37:27.554Z · LW · GW

Your claims about the ramifications of my policy are straightforwardly false, because you have misunderstood / mischaracterized / strawmanned the policy I am advocating.

You are failing to pass the ITT of the post, and to take seriously its thesis, and thus your responses are aimed at tangents rather than cruxes.  The objections you are raising are roughly analogous to "but if you outlaw dueling, then people will get killed in duels when they refuse to shoot back!"

I explicitly request that you actually try to pass the ITT of the post, so that we can be in a place where our disagreement is actually useful.  Or, if you'd rather have this other, different conversation (which would be fine), at least acknowledge that you are changing the subject, and riffing rather than directly responding.

(The riff being something like, "instead of discussing the policy Duncan's actually proposing, I'd like to discuss the ramifications of a likely degeneration of it, because I suspect his proposal would degenerate in practice and what we would see as a result is X.")

Comment by Duncan_Sabien on Overconfidence is Deceit · 2021-02-19T07:23:32.479Z · LW · GW

You're still missing the thesis.  Apologies for not having the spoons to try restating it in different words, but I figured I could at least politely let you know.

Edit: a good first place to look might be "what do I think is different for me, Christian, than for people with substantially less discernment and savviness?"

Comment by Duncan_Sabien on Overconfidence is Deceit · 2021-02-18T22:37:22.276Z · LW · GW

I think you're underweighting a crucial part of the thesis, which is that it doesn't matter what the candidate secretly knows or would admit if asked.  A substantial portion of the listeners just ... get swayed by the strong claim.  The existence of savvy listeners who "get it" and "know better" and know where to put the hedges and know which parts are hyperbole doesn't change that fact.  And there is approximately never a reckoning.

Comment by Duncan_Sabien on Overconfidence is Deceit · 2021-02-18T07:38:52.232Z · LW · GW

locally seem fairly costly

This seems highly variable person-to-person; Nate Soares and Anna Salamon each seem to pay fairly low costs/no costs for many kinds of disgust, and are also notably each doing very different things than each other.  I also find that a majority of my experiences of disgust are not costly for me, and instead convert themselves by default into various fuels or resolutions or reinforcement-rewards.  There may be discoverable and exportable mental tech re: relating productively to disgust-that-isn't-particularly-actionable.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-16T02:13:35.105Z · LW · GW

One last point for Zack to consider:

I just ... don't see how obfuscating my thoughts through a gentleness filter actually helps anyone?

You could start by thinking "okay, I don't understand this, but a person I explicitly claim to like and probably have at least a little respect for is telling me to my face that not-doing it makes me uniquely costly, compared to a lot of other people he engages with, so maybe I have a blind spot here?  Maybe there's something real where he's pointing, even if I don't see the lines of cause and effect?"

Plus, it's disingenuous and sneaky to act like what's being requested here is that you "obfuscate your thoughts through a gentleness filter."  That strawmanning of the actual issue is a rhetorical trick that tries to win the argument preemptively through framing, which is the sort of thing you claim to find offensive, and to fight against.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-16T01:27:47.546Z · LW · GW

Hm.  For the record, I find this thought to be worth chewing on, so thank you.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-16T01:23:12.481Z · LW · GW

maybe the one giving offense should be nicer, but maybe the one taking offense shouldn't have taken it personally?

So, by framing things as "taking offense" and "tone policing," I sense an attempt to invalidate and delegitimize any possible criticism on the meta level, such that the hypothesis "Actually, Zack's doing a straightforwardly bad thing on the regular with the adversarial slant of their pushback" starts out already halfway to being dismissed.

I'm not "taking offense."  I'm not pointing at "your comment made me sad and therefore it was bad," or "gosh, why did you use these words instead of these slightly different words which I'm arbitrarily declaring are better."

I'm pointing at "your comment was exhausting, and could extremely easily have contained 100% of its value and been zero exhausting, and this has been true for many of the times I've engaged with you."  You have a habit of choosing an unnecessarily exhaustingly combative method of engagement when you could just as easily make the exact same points and convey the exact same information cooperatively/collaboratively; no substantial emotional or interpretive labor required.

This is not about "tone policing."  This is about the fundamental thrust of the engagement. "You're wrong, and I'mm'a prove it!" vs. "I don't think that's right, can we talk about why?"

Eric Rogstad (who's my mental exemplar of the virtue I'm pointing to here, though other people like Julia Galef and Benya Fallenstein also regularly exhibit it) could have pushed back every bit as effectively, and on every single detail, without being a dick.  Eric Rogstad and Julia Galef and Benya Fallenstein are just as good as you at noticing wrongness that needs to be attacked, and they're better than you at not alienating the person who produced the mostly-right thought in the first place, and disincentivizing them from bothering to share their thoughts in the future.

(I do not for one second buy your implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you're being adversarial because you genuinely believe that's the best way forward.  I think that's what you tell yourself to justify it, but you C L E A R L Y engage in this way with emotional zeal and joie de vivre.  I posit that you want to be punchy-attacky, and I hypothesize that you tell yourself that it's virtuous so that you don't have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas.)

clapback that pointedly takes issue with the words that were actually typed, in a context that leaves open the opportunity for the speaker to use more words/effort to write something more precise, but without the critic being obligated to proactively do that work for them

... conveniently ignoring, as if I didn't say it and it doesn't matter, my point about context being a real thing that exists.  Your behavior is indistinguishable from that of someone who really wanted to be performatively incredulous, saw that if they included the obvious context they wouldn't get to be, and decided to pretend they didn't see it so they could still have their fun.

Exploring that line of discussion is potentially interesting!

I defy you to say, with a straight face, "a supermajority of rationalists polled would agree that the hypothesis which best explains my first response is that I was curiously and intrinsically motivated to collaborate with you in a conversation about whether we have different priors on human variation."

I'm more motivated, etc.

It is precisely this mentality which lies behind 20% of why I find LessWrong a toxic and unsafe place, where e.g. literal calls for my suicide go unresponded to, but my objection to the person calling for my suicide results in multiple paragraphs of angry tirades about how I'm immoral and irrational.  EDIT: This is unfair as stated; the incidents I am referring to are years in the past and I should not by default assume that present-day LessWrong shares these properties.

The fact that I have high sensitivity on this axis is no fault of yours, but I invite you to consider the ultimate results of a policy which punishes your imperfect allies, while doing nothing at all against the most outrageous offenders.  If all someone knows is that one voted for Trump, one's private dismay and internal reservations do nothing to stop the norm shift.  You can't rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.

And with that, you have fully exhausted the hope-for-finding-LessWrong-better-than-it-used-to-be that I managed to scrape together over the past three months.  I guess I'll try again in the summer.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-15T22:47:44.768Z · LW · GW

Agreement with all of the above.  I just don't want to mistake [truth that can be extracted from thinking about a statement] for [what the statement was intended to mean by its author].

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-15T22:06:25.050Z · LW · GW

If you're going to apply that much charity to everyone without fail, then I feel that there should be more than sufficient charity to not-object-to my comment, as well.

I do not see how you could be applying charity neutrally/symmetrically, given the above comment.

I'm applying the standard "treat each statement as meaning what it plainly says, in context."  In context, the top comment seems to me to be claiming that everyone without fail sacrifices honor for PR, which is plainly false.  In context, my comment says that if you're about to assert that something is true of everyone without fail, you're something like 1000x more likely to be wrong than to be right (given a pretty natural training set of such assertions uttered by humans in natural conversation, and not adversarially selected for).

Of the actual times that actual humans have made assertions about what's universally true of all people, I strongly wager that they've been wrong 1000x more frequently than they've been right.  Zack literally tried to produce examples to demonstrate how silly my claim was, and every single example that he produced (to be fair, he probably put all of ten seconds into generating the list, but still) is in support of my assertion, and fails to be a counterexample.

I actually can't produce an assertion about all human actions that I'm confident is true.  Like, I'm confident that I can assert that everything we'd classify as human "has a brain," and that everything we'd classify as human "breathes air," but when it comes to stuff people do out of whatever-it-is-that-we-label choice or willpower, I haven't yet been able to think of something that everyone, without fail, definitely does.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-15T21:37:20.773Z · LW · GW

Note that near-universals are ruled out by "everyone without fail."  I am in fact pointing, with my "helpful tip," at statements beginning with everyone without fail.  It is in fact not the case that any of the examples Zack started with are true of everyone without fail—there are humans who do not laugh, humans who do not tell stories, humans who do not shiver when cold, etc.

This point is not the main thrust of my counterobjection to Zack's comment, which was more about the incentives created by various styles of engagement, but it's worth noting.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-15T21:35:03.566Z · LW · GW

My downvote here is not for TAG holding the hypothesis that the rationalist/LW bubble might be bad in various ways (this is an inoffensive hypothesis to hold, in my culture) but rather for its method of sly insinuation that tries to score a point without sticking its neck out and making a clear and falsifiable claim.

If I can be shown that I've misread TAG, I'll remove the downvote.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-15T21:21:25.205Z · LW · GW

I mean the willful misunderstanding of the actual point I was making, which I still maintain is correct, including the bit about many orders of magnitude (once you include the should-be-obvious hidden assumption that has now been made explicit).  

The adversarial pretending-that-I-was-saying-something-other-than-what-I-was-clearly-saying (if you assign any weight whatsoever to obvious context) so as to make it more attackable and let you thereby express the performative incredulity you seemed to want to express, and needed more license for than a mainline reading of my words provided you.

I also object to "would be very bad" in the subjunctive ... I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback.  Smacks of "I apologize IF I offended anybody," when one clearly did offend.  This interaction has certainly taken my barely-sufficient-to-get-me-here motivation to "try LessWrong again" and quartered it.  This thread has not fostered a sense of "LessWrong will help you nurture and midwife your thoughts, such that they end up growing better than they would otherwise."

I would probably feel more willing to believe that your nitpicking was principled if you'd spared any of it for the top commenter, who made an even more ambitious statement than I (it being absolute/infinite).

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-15T20:07:55.905Z · LW · GW

You're neglecting the unstated precondition that it's the type of sentence that would be generated in the first place, by a discussion such as this one.  You've leapt immediately to an explicitly adversarial interpretation and ruled out meaning that would have come from a cooperative one, rather than taking a prosocial and collaborative approach to contribute the exact same information.

(e.g. by chiming in to say "By the way, it seems to me that Duncan is taking for granted that readers will understand him to be referring to the set of such sentences that people would naturally produce when talking about culture and psychology.  I think that assumption should be spelled out rather than left implicit, so that people don't mistake him for making a (wrong) claim about genuine near-universals like 'humans shiver when cold' that are only false when there are e.g. extremely rare outlier medical conditions."  Or by asking something like "hey, when you say 'a sign' do you mean to imply that this is ironclad evidence, or did you more mean to claim that it's a strong hint?  Because your wording is compatible with both, but I think one of those is wrong.")

The adversarial approach you chose, which was not necessary to convey the information you had to offer, tends to make discourse and accurate thinking and communication more difficult, rather than less, because what you're doing is introducing an extremely high burden on saying anything at all.  "If you do not explicitly state every constraining assumption in advance, you will be called out/nitpicked/met with performative incredulity; there is zero assumption of charity and you cannot e.g. trust people to interpret your sentences as having been produced under Grice's maxims (for instance)."

The result is an overwhelming increase in the cost of discourse, and a substantial reduction in its allure/juiciness/expected reward, which has the predictable chilling effect.  I absolutely would not have bothered to make my comment if I'd known your comment was coming, in the style you chose to use, and indeed now somewhat regret trying to take part in the project of having good conversations on LessWrong today.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-15T18:29:02.239Z · LW · GW

If [everyone without fail, even the wonderful cream-of-the-crop rationalists, sacrifices honor for PR at the expense of A], can you [blame A for championing PR]?

Nope, given that condition.  But also the "if" does not hold.  You're incorrect that [everyone without fail, even the wonderful cream-of-the-crop rationalists, sacrifices honor for PR at the expense of A], and I note as a helpful tip that if you find yourself typing a sentence about some behavioral trait being universal among humans with that degree of absolute confidence, you can take this as a sign that you are many orders of magnitude more likely to be wrong than right.

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-14T17:58:33.351Z · LW · GW

This seems true to me but also sort of a Moloch-style dynamic?  Like "yep, I agree those are the incentives, and it's too bad that that's the case."

Comment by Duncan_Sabien on “PR” is corrosive; “reputation” is not. · 2021-02-14T03:56:23.069Z · LW · GW

I think another way to gesture at the distinction here is whether your success criterion is process-based or outcome-based.

If you're "trying to do PR," then you're sort of hanging your hopes on a specific outcome—that people will hold you in high regard, say good things about you, etc.  This opens you up to Goodharting, and various muggings and extortions, and sort of leaves you at the mercy of the most capricious or unreasonable member of the audience.

Whereas if you're "trying to be honorable" (or some other similar thing), you're attempting to engage in methods and processes that are likely to lead to good outcomes, according to your advance predictions, and which tend to produce social standing as a positive side effect.  But you're not optimizing for the social standing, except insofar as you're contributing to a good and healthy society existing in the first place (and then slotting into it).

I see this (the thing I'm describing, which may or may not be as closely related to the thing Anna's describing as I think it is) as sort of analogous to whether you do something like follow diplomatic procedures or use NVC (process-based), or do whatever-it-takes to make sure you don't offend anybody (outcome-based).  One of these is sort of capped and finite in a way I think is important, and the other is sort of infinitely vulnerable.

Comment by Duncan_Sabien on In My Culture · 2021-01-12T00:08:42.216Z · LW · GW

FWIW, I would be willing to cut it, if it makes the cut overall, such that the essay is shorter and primarily about the core concept and includes only enough Duncan-specific stuff to get that core concept across.

Comment by Duncan_Sabien on In My Culture · 2020-12-21T19:59:31.153Z · LW · GW

(I strong upvoted this comment because it is wise.)

Comment by Duncan_Sabien on Words and Implications · 2020-10-02T07:58:08.527Z · LW · GW

This largely rings true to me but is missing one (in my opinion) absolutely crucial caveat/complication:

Most people (including, as experience has repeatedly confirmed, the vast majority of rationalists/LWers) will do "Ask what physical process generated the words. Where did they come from? Why these particular words at this particular time?" wrong, by virtue of being far too confident in the first answer that their stereotype centers generate, and not accounting for other-people's-minds-and-culture-being-quite-different-from-their-own.

Comment by Duncan_Sabien on CFAR Participant Handbook now available to all · 2020-01-14T23:00:56.729Z · LW · GW

As for the Understanding Shoulds section, that's another example of the document being tailor-made for a specific target audience; most people are indeed "taking far too seriously" their "utterly useless shoulds," but the CFAR workshop audience was largely one pendulum swing ahead of that state, and needing the next round of iterated advice.

Comment by Duncan_Sabien on CFAR Participant Handbook now available to all · 2020-01-14T22:59:37.834Z · LW · GW

Emailing CFAR is the best way to find out; previously the question wasn't considered in depth because "well, we're not selling it, and we're also not sharing it." Now, the state is "well, they're not selling it, but they are sharing it," so it's unclear.

(Things like the XKCD comic being uncited came about because in context, something like 95% of participants recognized XKCD immediately and the other 5% were told in person when lecturers said stuff like "If you'll look at the XKCD comic on page whatever..." In other words, it was treated much more like an internal handout shared among a narrowly selected, high-context group than as a product that needed to dot all of the i's and cross all of the t's. I agree that Randall Munroe deserves credit for his work, and that future edits would likely correct things like that.)

Comment by Duncan_Sabien on CFAR Participant Handbook now available to all · 2020-01-14T19:19:41.197Z · LW · GW

Emailing people at CFAR directly is the best way to find out, I think (I dunno how many of them are checking this thread).

Comment by Duncan_Sabien on CFAR Participant Handbook now available to all · 2020-01-08T19:27:07.698Z · LW · GW

Note that this handbook covers maybe only about 2/3 of the progress made in that private beta branch, with the remaining third divided into "happened while I was there but hasn't been written up (hopefully 'yet')" and "happened since my departure, and unclear whether anyone will have the time and priority to export it."

Comment by Duncan_Sabien on CFAR Participant Handbook now available to all · 2020-01-04T05:29:03.125Z · LW · GW

I don't know the answer; the team made their decision and then checked to see if I was okay with it; I wasn't a part of any deliberations or discussions.

Comment by Duncan_Sabien on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T01:44:19.783Z · LW · GW

What I meant by the word "our" was "the broader context culture-at-large," not Less Wrong or my own personal home culture or anything like that. Apologies, that could've been clearer.

I think there's another point on the spectrum (plane?) that's neither "overt anti-intellectualism" nor "It seems to me that engaging with you will be unproductive and I should disengage." That point being something like, "It's reasonable and justified to conclude that this questioning isn't going to be productive to the overall goal of the discussion, and is either motivated-by or will-result-in some other effect entirely."

Something stronger than "I'm disengaging according to my own boundaries" and more like "this is subtly but significantly transgressive, by abusing structures that are in place for epistemic inquiry."

If the term "sealioning" is too tainted by connotation to serve, then it's clearly the wrong word to use; TIL. But I disagree that we don't need or shouldn't have any short, simple handle in this concept space; it still seems useful to me to be able to label the hypothesis without (as Oliver did) having to write words and words and words and words. The analogy to the usefulness of the term "witchhunt" was carefully chosen; it's the sort of thing that's hard to see at first, and once you've put forth the effort to see it, it's worth ... idk, caching or something?

Comment by Duncan_Sabien on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T18:12:30.830Z · LW · GW

I agree that you've said this multiple times, in multiple places; I wanted you to be able to say it shortly and simply. To be able to do something analogous to saying "from where I'm currently standing, this looks to me like a witchhunt" rather than having to spell out, in many different sentences, what a witchhunt is and why it's bad and how this situation resembles that one.

My caveats and hedges were mainly not wanting to be seen as putting words in your mouth, or presupposing your endorsement of the particular short sentence I proposed.

Comment by Duncan_Sabien on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T16:39:55.792Z · LW · GW

I note that we, as a culture, have reified a term for this, which is "sealioning."

Naming the problem is not solving the problem; sticking a label on something is not the same as winning an argument; the tricky part is in determining which commentary is reasonably described by the term and which isn't (and which is controversial, or costly-but-useful, and so forth).

But as I read through this whole comment chain, I noticed that I kept wanting Oliver to be able to say the short, simple sentence:

"My past experience has led me to have a prior that threads from you beginning like this turn out to be sealioning way more often than similar threads from other people."

Note that that's my model of Oliver; the real Oliver has not actually expressed that [edit: exact] sentiment [edit: in those exact words] and may have critical disagreements with my model of him, or critical caveats regarding the use of the term.

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-22T05:31:28.080Z · LW · GW

(I expect the answer to 2 will still be the same from your perspective, after reading this comment, but I just wanted to point out that not all influences of a CFAR staff member cash out in things-visible-in-the-workshop; the part of my FB post that you describe as 2 was about strategy and research and internal culture as much as workshop content and execution. I'm sort of sad that multiple answers have had a slant that implies "Duncan only mattered at workshops/Duncan leaving only threatened to negatively impact workshops.")

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-22T04:12:03.743Z · LW · GW

On reading Anna's above answer (which seems true to me, and also satisfies a lot of the curiosity I was experiencing, in a good way), I noted a feeling of something like "reading this, the median LWer will conclude that my contribution was primarily just ops-y and logistical, and the main thing that was at threat when I left was that the machine surrounding the intellectual work would get rusty."

It seems worth noting that my model of CFAR (subject to disagreement from actual CFAR) is viewing that stuff as a domain of study, in and of itself—how groups cooperate and function, what makes up things like legibility and integrity, what sorts of worldview clashes are behind e.g. people who think it's valuable to be on time and people who think punctuality is no big deal, etc.

But this is not necessarily something super salient in the median LWer's model of CFAR, and so I imagine the median LWer thinking that Anna's comment means my contributions weren't intellectual or philosophical or relevant to ongoing rationality development, even though I think Anna-and-CFAR did indeed view me as contributing there, too (and thus the above is also saying something like "it turned out Duncan's disappearance didn't scuttle those threads of investigation").

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-22T03:32:44.879Z · LW · GW

In general, if you don't understand what someone is saying, it's better to ask "what do you mean?" than to say "are you saying [unrelated thing that does not at all emerge from what they said]??" with double punctuation.

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T19:39:58.346Z · LW · GW

They do. The distinction seems to me to be something like endorsement of a "counting up" strategy/perspective versus endorsement of a "counting down" one, or reasonable disagreement about which parts of the dog food are actually beneficial to eat at what times versus which ones are Goodharting or theater or low payoff or what have you.

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T19:23:05.926Z · LW · GW

I'm not saying that, either.

I request that you stop jumping to wild conclusions and putting words in people's mouths, and focus on what they are actually saying.

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T10:43:19.666Z · LW · GW

That's not the question that was asked, so ... no.

Edit: more helpfully, I found it valuable for thinking about rationality and thinking about CFAR from a strategic perspective—what it was, what it should be, what problems it was up against, how it interfaced with the rest of society.

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T10:33:42.484Z · LW · GW

I'm reading the replies of current CFAR staff with great interest (I'm a former staff member who ended work in October 2018), as my own experience within the org was "not really; to some extent yes, in a fluid and informal way, but I rarely see us sitting down with pen and paper to do explicit goal factoring or formal double crux, and there's reasonable disagreement about whether that's good, bad, or neutral."

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T10:10:16.311Z · LW · GW

Historically, CFAR had the following concerns (I haven't worked there since Oct 2018, so their thinking may have changed since then; if a current staff member gets around to answering this question you should consider their answer to trump this one):

  • The handbook material doesn't actually "work," in the sense of changing lives on its own; the workshop experience is crucial to what limited success CFAR *is* able to have, and there's concern about falsely offering hope
  • There is such a thing as idea inoculation; the handbook isn't perfect and certainly can't adjust itself to every individual person's experience and cognitive style. If someone gets a weaker, broken, or uncanny-valley version of a rationality technique out of a book, not only may it fail to help them in any way, but it will also make subsequently learning [a real and useful skill that's nearby in concept space] correspondingly more difficult, both via conscious dismissiveness and unconscious rounding-off.
  • To the extent that certain ideas or techniques only work in concert or as a gestalt, putting the document out on the broader internet where it will be chopped up and rearranged and quoted in chunks and riffed off of and likely misinterpreted, etc., might be worse than not putting it out at all.

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T10:04:20.470Z · LW · GW

[Disclaimer: have not been at CFAR since October 2018; if someone currently from the org contradicts this, their statement will be more accurate about present-day CFAR]

No (CFAR's mission has always been narrower/more targeted) and no (not in any systematic, competent fashion).

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T10:01:26.878Z · LW · GW

In case no one who currently works at CFAR gets around to answering this (I was there from Oct 2015 to Oct 2018 in a pretty influential role but that means I haven't been around for about fourteen months):

  • Meditations on Moloch is top of the list by a factor of perhaps four
  • Different Worlds as a runner-up

Lots of social dynamic stuff/how groups work/how individuals move within groups:

  • Social Justice and Words, Words, Words
  • I Can Tolerate Anything Except The Outgroup
  • Guided By The Beauty Of Our Weapons
  • Yes, We Have Noticed The Skulls
  • Book Review: Surfing Uncertainty

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T09:50:51.277Z · LW · GW

I'd be curious for an answer to this one too, actually.

Comment by Duncan_Sabien on We run the Center for Applied Rationality, AMA · 2019-12-21T09:45:07.272Z · LW · GW

that CFAR's natural antibodies weren't kicking against it hard.

Some of them were. This was a point of contention in internal culture discussions for quite a while.

(I am not currently a CFAR staff member, and cannot speak to any of the org's goals or development since roughly October 2018, but I can speak with authority about things that took place from October 2015 up until my departure at that time.)

Comment by Duncan_Sabien on CFAR: Progress Report & Future Plans · 2019-12-19T07:22:31.755Z · LW · GW

This is a good guess on priors, but in my experience (Oct 2015 - Oct 2018, including taking over the role of a previous burnout, and also leaving fairly burnt), it has little to do with ops capacity or ops overload.

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-13T05:47:53.251Z · LW · GW

Yeah, if I had the comment to rewrite (I prefer not to edit it at this point) I would say "My whole objection is that Gordon wasn't bothering to (and at this point in the exchange I have a hypothesis that it's reflective of not being able to, though that hypothesis comes from gut-level systems and is wrong-until-proven-right as opposed to, like, a confident prior)."

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-13T05:44:19.954Z · LW · GW

I'm not sure I'm exactly responding to what you want me to respond to, but:

It seems to me that a declaration like "I think this is true of other people in spite of their claims to the contrary; I'm not even sure if I could justify why? But for right now, that's just the state of what's in my head"

is not objectionable/doesn't trigger the alarm I was trying to raise. Because even though it fails to offer cruxes or detail, it at least signals that it's not A STATEMENT ABOUT THE TRUE STATE OF THE UNIVERSE, or something? Like, it's self-aware about being a belief that may or may not match reality?

Which makes me re-evaluate my response to Gordon's OP and admit that I could have probably offered the word "think" something like 20% more charity, on the same grounds, though on net I still am glad that I spelled out the objection in public (like, the objection now seems to me to apply a little less, but not all the way down to "oops, the objection was fundamentally inappropriate").

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-12T08:34:55.065Z · LW · GW

I find it morally abhorrent because, when not justified and made-cruxy (i.e. when done the only way I've ever seen Gordon do it), it's tantamount to trying to erase another person/another person's experience, and (as noted in my first objection) it often leads, in practice, to socially manipulative dismissiveness and marginalization that's not backed by reality.

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-12T08:32:46.114Z · LW · GW

when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).

Yes, as clearly noted in my original objection, there is absolutely a time and a place for this, and a way to do it right; I too share this tool when able and willing to justify it. It's only suspicious when people throw it out solely on the strength of their own dubious authority. My whole objection is that Gordon wasn't bothering to (I believe as a cover for not being able to).

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-12T00:42:13.684Z · LW · GW

Oh, one last footnote: at no point did I consider the other conversation private, at no point did I request that it be kept private, and at no point did Gordon ask if he could reference it (to which I would have said "of course you can"). i.e. it's not out of respect for my preferences that that information is not being brought in this thread.

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-12T00:21:46.179Z · LW · GW

I note for the record that the above is strong evidence that Gordon was not just throwing an offhand turn of phrase in his original post; he does and will regularly decide that he knows better than other people what's going on in those other people's heads. The thing I was worried about, and attempting to shine a light on, was not in my imagination; it's a move that Gordon endorses, on reflection, and it's the sort of thing that, historically, made the broader culture take forever to recognize e.g. the existence of people without visual imagery, or the existence of episodics, or the existence of bisexuals, or any number of other human experiences that are marginalized by confident projection.

I'm comfortable with just leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move." Personally, I find it unjustifiable and morally abhorrent. Gordon clearly does not. Maybe that's the crux.

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-11T22:05:19.224Z · LW · GW

I explicitly reject Gordon's assertions about my intentions as false, and ask (ASK, not demand) that he justify (i.e. offer cruxes) or withdraw them.

Comment by Duncan_Sabien on G Gordon Worley III's Shortform · 2019-09-11T21:07:21.585Z · LW · GW

I can have different agendas and follow different norms on different platforms. Just saying. If I were trying to do the exact same thing in this thread as I am in the FB thread, they would have the same words, instead of different words.

(The original objection *does* contain the same words, but Gordon took the conversation in meaningfully different directions on the two different platforms.)

I note that above, Gordon is engaging in *exactly* the same behavior that I was trying to shine a spotlight on (claiming to understand my intent better than I do myself/holding to his model that I intend X despite my direct claims to the contrary).