Musings on Double Crux (and "Productive Disagreement")

post by Raemon · 2017-09-28T05:26:01.246Z · score: 23 (8 votes) · LW · GW · 72 comments


    Observations So Far
    Filtering Effects
    Insufficient Shared Trust
  Possible Pre-Requisites for Progress
  Building Towards Shared Norms

Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully discussion-prompt.

Double Crux has been making the rounds lately (mostly on Facebook, though I hope for this to change). It seems like the technique hasn't taken root as well as it should. What's up with that?

(If you aren't yet familiar with Double Crux I recommend checking out Duncan's post on it in full. There's a lot of nuance that might be missed with a simple description.)

Observations So Far

Double Crux seems hard to practice, for a few reasons.

Filtering Effects

Insufficient Shared Trust

That last point is one of the biggest motivators of this post. If the people I most respect can't productively disagree in a way that leads to clear progress, recognizable from both sides, then what is the rationality community even doing? (Whether you consider the primary goal to be "raise the sanity waterline" or "build a small intellectual community that can solve particular hard problems", this bodes poorly.)

Possible Pre-Requisites for Progress

There's a large number of sub-skills you need to productively disagree. To have public norms surrounding disagreement, you not only need individuals to have those skills - they need to trust that the others have those skills as well.

Here's a rough list of those skills. (Note: this is long, and it's less important that you read the whole list than that you notice the list is long - which is why Double Cruxing is hard.)

(The "Step Out" part can be pretty hard and would be a long series of blogposts, but hopefully this at least gets across the ideas to shoot for)

Building Towards Shared Norms

When smart, insightful people disagree, at least one of them is doing something wrong, and it seems like we should be trying harder to notice and resolve it.

A rough sketch of a norm I'd like to see.

Trigger: You've gotten into a heated dispute where at least one person feels the other is arguing in bad faith (especially in public/online settings)

Action: Before arguing further:


Comments sorted by top scores.

comment by deluks917 · 2017-09-28T13:06:22.838Z · score: 28 (9 votes) · LW(p) · GW(p)

I am genuinely confused by the discourse around double crux. Several people I respect seem to think of DC as a key intellectual method. Duncan (curriculum director at CFAR) explicitly considers DC a cornerstone CFAR technique. However, I have tried to use the technique and gotten nowhere.

Ray deserves credit for identifying and explicitly discussing some of the failure modes I ran into. In particular, DC-style discussion frequently seems to recurse down to very fundamental issues in philosophy and epistemology. Twice I have tried to discuss a concrete practical issue via DC and wound up discussing utility aggregation; in both cases we were both utilitarians, and we still couldn't get the method to work.

I have to second Said Achmiz's request for public examples of double crux going well. I once asked Ray for an example via email and received a link to Sarah Constantin's blogpost. That post is quite good and caused me to update towards the view that DC can be productive. But it doesn't contain the actual DC conversation, just a summary of the events and the lessons learned. I want to see an actual, for-real, fully detailed example of DC being used productively. I don't understand why no such examples are publicly available.

comment by hamnox · 2017-09-28T17:49:58.386Z · score: 21 (9 votes) · LW(p) · GW(p)

whpearson's comment touches on why examples are rarely publicized.

I watched Constantin's Double-Crux, and noticed that, no matter how much I identified with one participant or another, they were not representing me. They explored reciprocally and got to address concerns as they came up, while the audience gained information about them unilaterally. They could have changed each other's minds without ever coming near points I considered relevant. Double-crux mostly accrues benefits to individuals in subtle shifts, rather than to the public in discrete actionable updates.

A good double-crux can get intensely personal. Double-crux has an empirical advantage over scientific debate because it focuses on integrating real, existing perspectives instead of attempting to simultaneously construct and deconstruct a solid position. On the flip side, you have to deal with real perspectives, not coherent platforms. Double-crux only integrates those two perspectives, cracked and flawed as they are. It's not debate 2.0 and won't solve the same problems that arguments do.

comment by Zvi · 2017-09-29T18:45:17.308Z · score: 16 (5 votes) · LW(p) · GW(p)

I also watched Constantin's Double-Crux, and feel that most of my understanding of how the process works comes from that observation rather than from any posts, including Duncan's. I also agree that her post of results, while excellent, does not explain the process the way watching it live did. I wonder to what extent having an audience made the process unfold in a way that was easier to follow. On the surface both of them were ignoring us, and as hamnox says they were not trying to respond to our possible concerns, but I still got the instinctive sense that having people watching was making the process better, or at least easier to parse.

The topic of that Crux was especially good for a demonstration, in that it involved a lot of disagreements over models, facts and probabilities. The underlying disagreements did not boil down to questions of philosophy.

I do think that finding out that the difference does boil down to philosophy or epistemology is a success mode rather than a failure mode - you've successfully identified important disagreements you can talk about now or another time, and ruled out other causes, so you don't waste further time arguing over things that won't change minds. It's an unhint: you now think you're worse off than you thought you were before, but you're actually better off than you were.

It also points to the suggestion that if you're frequently having important disagreements that boil down to philosophy, perhaps you should do more philosophy!

comment by Conor Moreton · 2017-09-29T19:51:41.687Z · score: 7 (2 votes) · LW(p) · GW(p)

Strong agreement that identifying important root disagreements is success rather than failure. If people on opposite sides of the abortion debate boiled their disagreement all the way down to virtue ethics vs. utilitarianism or some other similar thing, this would be miles better than the current demonization and misunderstanding.

comment by spiralingintocontrol · 2017-09-29T21:05:43.592Z · score: 14 (7 votes) · LW(p) · GW(p)

For me, the world is divided into roughly two groups:

1. People who I do not trust enough to engage in this kind of honest intellectual debate with, because our interests and values are divergent and all human communication is political.

2. Close friends, who, when we disagree, I engage in something like "double crux" naturally and automatically, because it's the obvious general shape of how to figure out what we should do.

The latter set currently contains about two (2) people.

This is why I don't do explicit double crux.

comment by ozymandias · 2017-09-28T14:45:16.062Z · score: 14 (5 votes) · LW(p) · GW(p)

I feel like, as a contrarian, it is my duty to offer to double-crux with people so they get some practice. :P When I've moved up to the East Bay interested people should feel free to message me.

comment by Zvi · 2017-09-29T18:14:13.473Z · score: 6 (2 votes) · LW(p) · GW(p)

I too volunteer to double-crux with people to let them and myself get practice, either in-person in NYC or online, and encourage others to also reply and add their names to such a list.

comment by Qiaochu_Yuan · 2018-01-09T07:34:25.726Z · score: 12 (3 votes) · LW(p) · GW(p)

I find that I never double crux because it feels too much like a Big Serious Activity I have to Decide to engage in or something. The closest I've gotten is having a TAP where during disagreements I try to periodically ask myself what my cruxes are and then state them.

comment by whpearson · 2017-09-28T10:15:45.542Z · score: 12 (4 votes) · LW(p) · GW(p)

I think there are disincentives to do it on the internet: even if you expect good faith from your partner, you don't expect good faith from all the other viewers.

If you change your mind for all the world to see, people acting in bad faith can use it as evidence that you can be wrong, and so are likely to be wrong about other things you say as well. A real-world example is politicians being accused of flip-flopping on issues.

You touch on this with

instead of continuing to argue in public where there's a lot more pressure to not lose face, or steer social norms, they continue the discussion privately, in whatever the most human-centric way is practical.

How will this norm spread?

We need public examples for people to have an idea of what good looks like.

Unless we can hide it away in a culture where it is okay to be wrong about things, or somehow anonymise it, so you can't tell who is being wrong, it doesn't seem like it would scale.

comment by Zvi · 2017-09-29T19:02:08.414Z · score: 12 (4 votes) · LW(p) · GW(p)

We need public examples, agreed. But I think this undersells the difficulty here.

In an argument or discourse worth having, a lot of the beliefs feeding in are going to be things that are:

A) Hard to state with precision, or that require the sum of a lot of different claims.

B) Involve beliefs or implications that risk getting a very negative reaction on the internet. There are a lot of important facts about the world you do not want to be seen endorsing in public, as much as we wish it were not so.

C) Involve claims that you do not have a social right to make.

D) Involve claims you can't provide well-articulated evidence for, or can't without running into some of A-C.

In my experience, advanced actually-changing-minds discussions are very hard to follow and very easy to misconstrue. They involve saying things that make sense in context to the particular person you're talking to, but that often on the surface make absurd, immoral or taboo claims.

I still think trying to do this is Worth It. I would start by trying to think harder about what topics we can do this on in public, that dodge these problems while still being non-trivial enough to be worthwhile.

comment by Raemon · 2017-09-28T18:12:48.958Z · score: 6 (2 votes) · LW(p) · GW(p)

There'd likely be a multi-step plan, which depends on whether your goals are more "raise the sanity waterline" or "build an intellectual hub that makes rapid progress on important issues."

Step 1: Practice it in the rationality community. Generally get people on board with the notion that if there's an actually-important disagreement, people should try to resolve it. This would require a few public examples of productive disagreement and double crux (I agree that lack of those is a major issue).

Then, when people have a private dispute, they come back saying "Hey, this is what we talked about, this is what we agreed on, and these are any meta-issues we stumbled on that we think others should know about re: productive disagreement."

Step 2: Do that in semi-public places (Facebook, other communities we're part of, etc.), in a way that lets nearby intellectual communities get a sense of it. (Maybe if we can come up with clear examples and better introduction articles, it'd be good to share those.) The next time you get into a political argument with your uncle, rather than angrily yelling at each other, try to meet privately, talk it over, and share the result with your family. (Note: I have some uncles for whom I think this would work and some for whom it definitely wouldn't.)

(This will require effort and emotional labor that may be uncomfortable)

Step 3: After getting some practice doing productive disagreement and/or Double Crux in particular with random people, do it in a somewhat higher-stakes environment. Try it when a dispute comes up at your company. (This may only work if you have the sort of company that already at least nominally values truthseeking/transparency/etc., so that it feels like a natural extension of the company culture rather than a totally weird thing you're shoving into it.)

Step 4: A lot of things could go wrong in between steps 1-3, but afterwards, make deliberate efforts to expand it into wider circles. I would not leap to "try to get politicians to do it" and the like; instead, try to invoke it in places where there isn't so much social penalty for changing minds. (In the world where this works, I think it works by raising the sanity waterline so high that politicians fall underneath it, not by trying to get politicians to jump on board.)

comment by magfrump · 2017-09-28T17:39:42.248Z · score: 8 (3 votes) · LW(p) · GW(p)

My first thought on reading the post on double crux was that it's not clear to me how much value it adds beyond previous ideas about productive disagreement. If I'm already thinking about the inferential distance, trying to find a place where I agree with my conversational partner to start from, and building from there, I'm not sure what extra value the idea of cruxes has. Nor am I sure in what circumstances I could use double crux where the naive "find a shared starting point and go from there" doesn't work.

Obviously a large component of not doing anything that even looks like double crux is captured by what you write about not having a trusted conversational partner, but again, I feel like that prevents any kind of real conversation, and I'm not sure how it's specific to double crux.

comment by Raemon · 2017-09-28T18:17:14.425Z · score: 7 (2 votes) · LW(p) · GW(p)

One important thing is that Double Crux is not about finding a "shared starting point" (or at least, that depends a lot on what you mean by shared starting point, and I expect a lot of people to get confused). You're looking for a shared concrete disagreement. A related-but-different pattern is more like "look for what things we agree on so we can remember we're on the same side," which doesn't necessarily build the skill of productively, thoroughly resolving disagreements.

I do think most of the time, if things are going well, you'll have constructed your belief systems such that you've already clearly identified cruxes, or when debating you proactively share "this is probably my crux" in a way that makes the Double Crux a natural extension of the productive-disagreement environment. (i.e. when I'm arguing with CFAR-adjacent rationalists, we rarely say "let's have a double crux to resolve this," but we often construct the dialogue in a way that has DC thoroughly embedded in its DNA, to the point where it's not necessary to do it explicitly.)

comment by magfrump · 2017-09-28T18:50:42.496Z · score: 7 (3 votes) · LW(p) · GW(p)

I'm imagining a hierarchy of beliefs like:

  • school uniforms are good (disagreement)

    • because: school uniforms reduce embarrassment (empirical disagreement, i.e. the crux)

      • which is good because: I care about the welfare of students (agreement)

If I find the point of agreement and try to work toward the point of disagreement, I expect to come across the crux.

If my beliefs don't live in this hierarchy, I'm not sure how searching for a crux is supposed to help (aside from telling me to build the hierarchy, which you could tell me directly). If my beliefs already live in this hierarchy, I'm not sure how searching for a crux does more than exploring the hierarchy.

So I feel like "double crux" is sitting on top of another skill, like "build an inferential bridge," which is actually doing all the work. Especially if you are just using the "DNA" of the technique, it feels like everything being written about double crux is obscuring the fact that you're actually talking about building inferential bridges. Maybe my takeaway should be something like "the double crux is the way building an inferential bridge leads to resolving disagreements," and then things like the background of "genuinely care about your conversational partner's model of the world" filter through a chain like:

  • double crux is useful

    • because: double crux is about a disagreement I care about

    • its use comes from letting me: connect the disagreement to explicit belief hierarchies

      • because: explicit belief hierarchies are good for establishing mutual understanding

So I'm starting to see double crux as a motivational tool, or a concept living within hierarchies of belief, rather than a standalone conceptual tool. But I'm not sure how this relates to the presentation of it I'm seeing here.
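The hierarchy-of-beliefs picture magfrump describes can be made concrete as a toy tree search. This is purely an illustrative sketch (not anything from CFAR's actual materials): each belief is supported by sub-beliefs, and a crux is modeled as a disputed belief whose own supports are no longer in dispute - the deepest concrete point where the two parties actually diverge. All names here are hypothetical.

```python
# Toy model of a belief hierarchy: each belief is supported by sub-beliefs,
# and a crux is a disputed belief whose own supports are no longer in
# dispute -- i.e. the deepest concrete point where two parties diverge.
# Illustrative only; not an implementation of CFAR's actual technique.

from dataclasses import dataclass, field

@dataclass
class Belief:
    claim: str
    disputed: bool                      # do the two parties disagree on this?
    supports: list["Belief"] = field(default_factory=list)

def find_cruxes(belief: Belief) -> list[str]:
    """Return disputed beliefs in the hierarchy with no disputed supports."""
    if not belief.disputed:
        return []                       # agreement: nothing to resolve below here
    deeper = [c for s in belief.supports for c in find_cruxes(s)]
    # If no supporting belief is disputed, this belief itself is the crux.
    return deeper if deeper else [belief.claim]

hierarchy = Belief(
    "school uniforms are good", disputed=True,
    supports=[
        Belief("school uniforms reduce embarrassment", disputed=True,
               supports=[Belief("I care about the welfare of students",
                                disputed=False)]),
    ],
)

print(find_cruxes(hierarchy))  # ['school uniforms reduce embarrassment']
```

On magfrump's example this surfaces the empirical middle claim rather than the top-level disagreement or the shared value underneath it, which matches the intuition that "searching for a crux" is mostly a walk through an already-built inferential bridge.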

comment by Raemon · 2017-09-28T19:01:44.857Z · score: 4 (1 votes) · LW(p) · GW(p)

Part of my point with the post is that I think Double Crux is just one step in a long list of steps (i.e. the giant list of background skills necessary for it to be useful). I think it's the next step in a chain where every step is necessary.

My belief that Double Crux is getting overloaded to mean both "literally finding the double crux" and "the entire process of productive disagreement" may be a bit of a departure from its usual presentation.

I think your current take on it, and mine, may be fairly similar, and that these are in fact different from how it's usually described.

comment by Raemon · 2017-09-28T09:34:42.427Z · score: 7 (3 votes) · LW(p) · GW(p)

Some Meta-Data:

This took me about 5 hours to write.

My primary goal was to get as many thoughts down as I could so I could see them all at once, so that I could then think more clearly about how they fit together and where to go from there.

A second goal was to do that mindfully, in a way that helped me better think about how to think. What was my brain actually doing as it wrote this post? What could I have done instead? I'll be writing another post soonish exploring that concept in more detail.

A third goal was to prompt a conversation to help flesh out the ideas here and see what I was missing. What I realized after I had finished was that I hadn't made the post very "user-friendly" - it's basically a brain dump, cleaned up slightly. I didn't make much effort to turn it into the sort of examples that introduce ideas clearly, nor even think about who my actual audience was. But I'd already spent 5 hours on it, and it seemed like making useful headway on that might take another 5 hours.

comment by whpearson · 2017-09-28T10:17:13.115Z · score: 9 (5 votes) · LW(p) · GW(p)

Datapoint: I'm okay with brain dumps.

comment by gjm · 2017-09-28T11:56:38.755Z · score: 3 (1 votes) · LW(p) · GW(p)

Me too, especially when (1) their authors acknowledge them as such and (2) there isn't any sign of a general tendency for everyone to post brain dumps all the time when a modest expenditure of effort would let them get their thoughts better organized.

comment by Raemon · 2017-09-28T19:10:12.300Z · score: 4 (1 votes) · LW(p) · GW(p)

Later on I'll be wanting to post brain dumps all the time, but I think the rate at which this will come to pass will roughly coincide with "people move their off-the-cuff posts to personal pages and then opt into the personal pages of people whose off-the-cuff posts they like"

comment by SilentCal · 2017-09-28T17:34:54.979Z · score: 6 (2 votes) · LW(p) · GW(p)

This makes me want to try it :)

Would anyone else be interested in a (probably recurring if successful) "Productive disagreement practice thread"? Having a wider audience than one meetup's attendance should make it easier to find good disagreements, while being within LW would hopefully secure good faith.

I imagine a format where participants make top-level comments listing beliefs they think likely to generate productive disagreement, then others can pick a belief to debate one-on-one.

comment by Chris_Leong · 2017-09-28T21:35:55.865Z · score: 5 (2 votes) · LW(p) · GW(p)

I see the technique of double crux as being useful, although there will not always be a double crux. Sometimes people will have a whole host of reasons for favoring something, and merely convincing them to change their view on any one of them won't be enough to shift their overall view, even if they are a perfectly rational agent. Similarly, I don't see any reason why two people's cruxes have to overlap. Yet in practice, the technique seems to work reasonably well. I haven't thought enough about this to understand it very well yet.

comment by Raemon · 2017-09-28T23:00:53.147Z · score: 7 (2 votes) · LW(p) · GW(p)

Yeah - in the lengthy Double Crux article it's acknowledged that there can be multiple cruxes. But it's important to find whatever the most important cruxes are, instead of getting distracted by lots of things that sound-like-good-arguments but aren't actually the core issue.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T06:39:05.775Z · score: -4 (6 votes) · LW(p) · GW(p)

My take on “why isn’t Double Crux getting more uptake”:

This ‘Double Crux’ thing seems like a complicated technique/process/something, with:

  • benefits that are nothing close to manifestly clear from the description

  • no clear, public examples of anyone using it (much less, successfully)

  • no endorsements from anyone whose opinion I respect (like Scott Alexander or Eliezer—or perhaps Eliezer did endorse it? but then I guess I wouldn’t ever know about it; such is the downside of using Facebook…)

There does not seem to be any reason why I should pay attention to it. That it's not getting uptake seems to require little explanation; it's the default outcome that I would expect.

(Also, it comes from CFAR, which is an anti-endorsement. This probably wouldn’t matter if all, or even any, of the above three things were different; but as is, for me, it’s the only thing influencing my inclination to really look deeply into the whole matter, and that influence is in the downward direction…)

comment by Elizabeth (pktechgirl) · 2017-09-29T01:46:54.580Z · score: 17 (7 votes) · LW(p) · GW(p)

[Note from the Sunshine Regiment] A lot has happened in this thread; I'm going to comment at second-to-top level so this gets seen as much as possible while keeping its context.

In a nutshell

Yes, there is an obligation to be prosocial here.

There's a lot of room for debate on what prosocial means and what trade-offs are worth it. This Guide To Comments is a start but insufficient. We welcome input from people as we figure this out.

I'm really torn on the particular comment "Also, it comes from CFAR, which is an anti-endorsement". I want it to be as cheap as possible to criticize the in-group on Less Wrong, because so many other forces are making it expensive. So let's be very clear that

sharing a negative opinion is not in and of itself anti-social.

But as several people have pointed out, this opinion was shared in a way that generated a lot of unnecessary friction. A simple "I think that..." or "...for me" would have done a great deal to resolve this problem.

The mod team is in private contact with Said over this issue.

comment by Raemon · 2017-09-28T09:59:07.537Z · score: 11 (3 votes) · LW(p) · GW(p)

Yeah - I actually think by far the biggest reason Double Crux hasn't caught on is that no one has written a post optimized for getting it to catch on. (Duncan's post is instead optimized for making sure that the people who get it actually get the whole thing, and I think it requires you to trust that it's worth the effort.)

Up until last week, I actually thought Double Crux was a pretty straightforward concept (or at least, one that builds directly from ideas that are already common among educated people).

You could summarize Double Crux like this:

I. Ray Attempts to Explain Double Crux

Oftentimes, smart people end up talking past each other, or trying to score social points, or otherwise arguing in a way that doesn't accomplish anything. This results in people wasting years arguing pointlessly, and moreover, at least half of those people spend years being wrong about stuff they could have talked through and figured out.

Double Crux is a technique to help short-circuit those pointless arguments and instead figure out useful things together. Specifically, it is the first step of having a useful disagreement: figuring out what concrete thing you disagree about that you can potentially just go and check to see if it's true.

The steps are:

1. Shift into a mindset where you're in a collaborative truthseeking endeavor, rather than a debate where you're trying to score points
2. While in that mode, figure out what would actually, seriously make you change your belief (while your partner does the same for themselves). This is your Crux
3. Together, try to find a concrete thing you both disagree on, that would change both your minds depending on whether it was true. (i.e. if you could run an experiment and the world turned out one way, I'd change my mind, and if it turned out the other way, you'd change your mind). This is the Double Crux. (I actually think the phrase "Shared Crux" is a bit clearer)
4. If the Double Crux isn't something you can easily check in the real world, see if you can find a related feature of the world that's at least evidence about whether the Crux is true.

This isn't really especially original. "Make sure you're not talking past each other, figure out what you're actually disagreeing about, figure out a way to test it empirically" is something people have been doing since way before CFAR.

In my opinion, the value-add is mostly giving it a name, operationalizing it, and specifically claiming that people should be doing this all the time, whenever a major disagreement happens that's important to resolve, instead of arguing in circles.

II. But, maybe this is harder?

Last week, a couple of people argued with me that this is in fact fairly hard: you can't really learn how to do it except by watching skilled people do it, and reading a couple-paragraph description isn't nearly enough. It's more like an artform than an easily learned technique. I'm unsure about that. (I have vague plans with those people to talk it over in more detail later.)

Right now I'm writing this mostly to provide better background for people who haven't been following all the discussions of Double Crux lately (most of which have, yes, been on Facebook. This post is my attempt to change that)

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T10:58:55.417Z · score: 9 (4 votes) · LW(p) · GW(p)

Right now I'm writing this mostly to provide better background for people who haven't been following all the discussions of Double Crux lately (most of which have, yes, been on Facebook. This post is my attempt to change that)

I certainly appreciate that.

Let me offer a couple of suggestions, that would, at least, help you explain it to me (and perhaps to others? but that’s as may be):

1. Extensions, not intensions.

I’d really like an actual, live (by which, of course, I mean “online”) example of people using this Double Crux business. Like, actually for-real (and not, say, as a demonstration example, arranged for the purpose of showing off the technique).

Is it even doable online? In a forum / blog context? Perhaps at least in chat? Or is it only something that can be done in person? (If so, that makes it of limited use, at least, to the LW audience—useful though it may be to your local, meatspace, community of rationalists!)

2. Applicability.

Someone recently said to me, of Double Crux (I am quoting from memory): “it seems like a decent attempt to solve a problem that almost never happens”. He meant, I think, something like—most of the time, when people (even rationalists) disagree or argue or otherwise fail to see entirely eye to eye on a matter, it is not in a way that would be solved by identifying some key fact about which they differ.

How would you characterize the class of situations in which Double Crux is applicable? How often do you think such situations come up (in comparison to, say, the category of “all disagreements that occur between people”, or even “all disagreements that occur between rationalists”)? Could you, again, point to several (at least three) real, live examples—publicly perusable by your readers here—of disagreements which Double Crux would cut through?

This second point seems to me to be of the highest importance, especially because you say:

In my opinion, the value-add is mostly giving it a name, operationalizing it, and specifically claiming that people should be doing this all the time, whenever a major disagreement happens that's important to resolve, instead of arguing in circles.

But in fact, Double Crux’s applicability is very limited in scope; or else I really understand nothing about it. So—explain! :)

comment by Conor Moreton · 2017-09-28T16:09:18.151Z · score: 12 (3 votes) · LW(p) · GW(p)

I'd be willing to do an asynchronous attempt to double crux about whether the problems that motivated the creation of double crux ever happen. We could then post the results as a public example. My understanding is that the person who said that to you misunderstands the problems double crux is trying to solve, because they definitely happen all the time in my experience.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T16:17:01.869Z · score: 7 (2 votes) · LW(p) · GW(p)

Well, I’m not willing to take (and have never taken) the position that such problems never happen. As for your offer, it is appreciated, but I was hoping first to look at an existing example (or three), before trying it myself; else I would surely do it wrong, and the attempt would prove nothing…

But maybe, as a sort of prelude, we could start with you giving some examples of real-life situations that would be solved by the Double Crux?

comment by Conor Moreton · 2017-09-28T16:46:26.042Z · score: 7 (3 votes) · LW(p) · GW(p)

Yeah. (Also thanks for being willing to spend time on this—when I imagine myself thinking a thing is Useless, then I imagine it feeling costly to give it extra chances to prove itself.)

The counting up vs counting down post that I wrote yesterday to near zero acclaim is one of them—often people are sort of talking past each other and both people seem to be fighting for good and coherent goals, and double crux motions (why do you believe what you believe, what would cause me to change my own mind) helps uncover those faster than default motions. "Ohhhhh, wait, hang on—I think I would agree with what you're saying if I thought that we couldn't expect to do this perfectly, and should be happy with any results above zero, and happy proportional to how far above zero we get."

Another is the issue of burden of proof, which I think I've read cited in double crux explanations specifically somewhere, maybe on Facebook. The thing I'm remembering is something like, if both sides disagree about where the burden of proof lies, then both sides will end up "declaring victory" prematurely and saying that the other side has failed to justify itself. So if Bob thinks corporal punishment is how it's always been done, and it's on the bleeding hearts to prove that one should never spank kids, and Joe thinks nonviolence and sovereignty are the obvious priors, and it's on the backwards troglodytes to prove that spanking is net beneficial, the debate won't ever really move forwards productively. Double Crux solves this in theory because each person, if constantly scanning their own belief structure and asking what would cause them to change their own mind, will notice what burden of proof they're already expecting of their own beliefs, and can make that known to the other person.

Some other situations, off the top of my head:

  • You and I are in a car in traffic, and I honk the horn at someone and wave a middle finger at them, and you're really uncomfortable and criticize my road rage, and we're trying to converge on whether it was actually right that I did what I did. Double Crux seems like a good tool for each of us to get to the bottom of our implicit models and make them available to the other person.

  • You and I are living together in a house, and we have some sort of agreement about the cleanliness of the common spaces, and we keep clashing over it such that I feel judged and you feel defected on, and to some extent (given that each of us has our own frame) we're both right. Double Crux (or at least the generators that caused Double Crux to be invented) seems like a useful tool for helping us keep the argument on track à la "under what circumstances would you agree my mess was permissible/under what circumstances would I agree I'd been too cavalier" (such that we can feel confident things will be different in the future because our models now converge), versus having it spiral off into "you're a dick/you're a slob," which isn't crucial to our disagreement in the same way.

  • You and I are trying to decide how to divide a chunk of value (e.g. $10000 we were given in a grant, or our work hours over the next month) and we strongly disagree to the point that there's sort of a zero-sum game (e.g. I need all of my hours and some of yours to accomplish my plan, and the same is true in reverse for your plan). We could resolve this through rank, or we could resolve it in a social pressure game, or we could just fight and sink everything, but through Double Crux or something like it it seems likely that we can come closer to understanding why the other person is so confident that their use of resources is better, and once we both have identical overlapping models of both sides it seems likely that we can act strategically in a coordinated fashion to choose the best tradeoff.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T17:44:23.442Z · score: 14 (7 votes) · LW(p) · GW(p)

Hmm… I appreciate the effort that went into your reply, but I think I may’ve been unclear about what I asked: I was hoping to see actual examples—not hypothetical examples, nor categories (into which some unspecified examples are alleged to fall)!

That said, your hypothetical examples are relatively informative, so, thank you! They do much to increase the certainty of my previously-somewhat-tentative view that Double Crux is not a terribly useful technique in most circumstances (such as most of the ones you listed).

This, clearly, is the opposite reaction to the one you were (presumably) hoping for; perhaps I still have some fundamental misunderstanding. Real-life examples would, I think, really be quite helpful here.

comment by Conor Moreton · 2017-09-28T20:13:28.435Z · score: 4 (0 votes) · LW(p) · GW(p)

Hmmm. Maybe there's something in here about the difference between "Double-Crux-like" and "formal Double Crux"? On reflection, after you said you're more certain Double Crux is low-utility, I was maybe imagining that this was because you saw the formal Double Crux framework as brittle or overly constraining, whereas you might agree that somebody adhering to the "spirit" of Double Crux (which could also be fairly labeled the spirit of inquiry or the spirit of cooperative disagreement or the spirit of impartial investigation and truth-seeking, because it's the thing that generated Double Crux and not something that's owned by the named technique) would be more likely to make progress than someone not adhering to said spirit.

comment by clone of saturn · 2017-09-29T08:16:12.168Z · score: 6 (5 votes) · LW(p) · GW(p)

Hello, I'm the person who said Double Crux seems like an attempt to solve a problem that almost never happens. More specifically, the disagreements I see happening between reasonable people are almost always either too easy or too hard for Double Crux to be useful.

On questions like "what is the longitude of Tokyo" or "who starred in the original Star Wars," two people could agree that looking up the answer on Wikipedia would convince both of them, which would technically fulfill the formal rules of Double Crux, but that hardly seems like a special "rationality technique" or something CFAR can take credit for inventing.

On the other hand, on a question that hinges on value differences like your examples, I can see one of three things happening: either the disputants compromise their honesty by agreeing on a crux which appears relevant but isn't actually connected to the real motivations behind their disagreement ("if spanking is statistically correlated with a decrease in lifetime earnings, p<0.05, then it is bad, otherwise it is good"), or they maintain their honesty but commit themselves to solving longstanding open problems in metaethics and/or changing genetically mediated personality differences through verbal argument, or they end up using other negotiation techniques and falsely calling it Double Crux.

Double Crux does seem applicable to questions where the answer can't simply be looked up, where the disagreement is strictly confined to the empirical level and doesn't touch on value differences or epistemological questions in any way, yet also where the evidence is ambiguous enough to allow for reasonable disagreement. But those are rare in my experience.

comment by Conor Moreton · 2017-09-29T16:34:14.167Z · score: 4 (1 votes) · LW(p) · GW(p)

I note there's something in here that I'm reading as a pseudofallacy—it's the same reason why MythBusters is terrible, and it goes like "I can only think of these three outcomes, and therefore those are the most likely outcomes."

This thread and the original Double Crux thread on LessWrong (plus the ~1000 or so CFAR alumni) are full of people saying that Double Crux does indeed work to solve discourse problems that crop up a lot.

That absolutely does not erase your personal experience of a) not seeing those problems and b) not seeing Double Crux solve them. Your personal experience is valid and real and definitely counts as data.

But there's a particular sort of ... audacity? ... in taking one's own, personal experience, and using it to trump the experiences of others, and concluding with fairly strong confidence "this thing that a lot of smart people say is useful just isn't."

In your shoes, I'd say something like what I said in my Focusing post, which is "this thing that is useful for a lot of people isn't useful for me or the people around me." That seems more solidly justified and epistemically sound, and enriches an onlooker's understanding of the situation rather than creating crosswise narratives.

In particular, as I tried to do with Focusing, I'd make a genuine attempt to learn Double Crux (from the people who know what they're talking about and can point out your mistakes and scaffold your understanding) before writing it off. I weakly predict that you haven't done A + B + C where A is attend a CFAR workshop or one of their Double Crux instruction sessions at e.g. EA Global, B is talk directly to somebody who's skilled in Double Crux and ask them to help you overcome the standard failure modes, and C is go out and really actually try to follow the real actual steps for five very different sorts of disagreements with real actual humans.

(By the way, it's completely fine to have not done A + B + C. People have higher priorities. But I personally think that in a rationalist community like Less Wrong, we have a responsibility to not claim things are false or useless or stupid until we've actually attempted to falsify them, not just scanned through our own experiences for confirming evidence. If I were in your shoes and I didn't think Double Crux was useful and I also didn't intend to do A + B + C, I'd caveat my suspicions of its relative uselessness heavily by pointing out that I was using Stereotypes rather than Rigor, and I want people on Less Wrong to call for and socially reinforce that sort of standard.)

Will probably add that to my list of posts to write this month.

Also, am willing to do the thing that's been suggested over and over in this thread, and do a Double Crux with you on the usefulness/uselessness of Double Crux, including doing the motions unilaterally while you do whatever you feel like. I could use more practice with Double Cruxing in a not-fully-cooperative environment, since it seems like a plurality of the important debates happen with people who aren't willing to enter the Double Crux frame anyway.

comment by clone of saturn · 2017-09-29T21:13:29.277Z · score: 6 (7 votes) · LW(p) · GW(p)

You accuse me of using Stereotypes rather than Rigor, but I in turn accuse you of using Social Proof rather than Rigor, which I consider far more dangerous, because it leads to self-reinforcing information cascades. By reflexively characterizing all skepticism as hostile, you further reinforce this dynamic by creating a with-us-or-against-us atmosphere.

Yes, I don't actually believe that ~1000 or so CFAR alumni self-reports represent enough evidence to overturn my initial opinion. There are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy, but I wonder if you would as forcefully reject a similar Stereotype-based dismissal of that. I'd be very happy to see some real rigor, but I'm not aware of any such from CFAR that I would actually trust to bring back a negative result if the same procedure were used on homeopathy enthusiasts. (And by the way, in 2014 Anna Salamon said CFAR was "supposed to be doing better science later," meaning better than self-reports and personal impressions. How much later is later?)

I never gave any indication that my comment represented anything but my own personal impression, or that it somehow trumps the experiences of others. But I'm going to keep pointing out that I see the emperor wearing fewer clothes than he claims for as long as I continue to see it that way, and I consider this to be an explicitly prosocial act. I don't gain anything personally by this, and these contentious posts are actually fairly stressful for me to write, but I consider it worth it to try to push back against your open advocacy of credulousness and protect a rationalist community like Less Wrong from evaporative cooling.

I have not in fact attended a CFAR workshop and don't intend to, for reasons that might get me in trouble with the "Sunshine Regiment" if I were to explain, but I have read the posts explaining Double Crux and have even found it useful once or twice. I'm happy to try it with you if you'd like.

comment by Conor Moreton · 2017-09-30T18:33:37.311Z · score: 4 (1 votes) · LW(p) · GW(p)

I disagree with your claim that I "reflexively characterized all skepticism as hostile." I have reread my own comment and I do not think that's a fair or accurate synopsis.

I believe you are overstating your claim that "there are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy" and disagree with the attempt to draw an equivalency there (I both do not think the situations are analogous and don't think you could actually find thousands of people in the intersection of "smart" and "endorses homeopathy").

My main point is that it looks to me like you are skeptical of everything but your own impressions, and that Less Wrong should be the sort of place where people actually take heuristics and biases literature seriously, and take the Sequences seriously, and are aware of how fallible their own thinking and impression-making mechanisms are, and how likely it is that they're being influenced by metacognitive blindspots, and take deliberate and visible steps to compensate for all of that by practicing calibration, using reference class forecasting, taking the outside view, making concrete predictions, seeking falsification rather than confirmation, etc. etc. etc.

In short, I wasn't asking you to be less skeptical, I was asking you to add one more person to your list of people you're skeptical of—yourself.

I'm attempting to point out that your claim "Double Crux seems like an attempt to solve a problem that almost never happens" seems to have been outright falsified—even if your homeopathy analogy holds, homeopaths aren't necessarily hypochondriacs, and I would trust the reports of homeopaths who are saying "I am experiencing this-or-that physiological distress which requires some form of treatment" or "I am having this-or-that medical problem which is lowering my quality of life" without reference to their thoughts on what would fix it. It does not seem that you are updating away from "the problems that Double Crux purports to solve are rare" and toward "those problems are rare in my experience but reliably common for large numbers of people."

I'm attempting to point out that your statement "I can see one of three things happening" was made in such a way as to imply that there are no other likely things that might happen, and that you're considering your ability to generate hypotheses or scenarios or predictions to be likely sufficient and near-complete. It's like when MythBusters say "Well, we failed to recreate claim X, and therefore claim X is impossible!" That whole paragraph was setting up strawmen and false dichotomies and ignoring giant swaths of possibility.

I didn't feel like you really addressed any of the thrust of my previous reply, which was something like "If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?"

It does not look, based on your comments thus far, like you're sincerely asking that question. Again, that's fine—it could simply be that it's not worth your time. Or it could be that you're asking that question and I just haven't noticed yet, and that's fine because it's in no way your job to appease some rando on the internet, and my endorsement is not your goal.

But the issue I have, at least, has nothing to do with your opinion on Double Crux. It has to do with the public impression you're leaving, of how you're forming and informing it. You're laying claim to explicitly prosocial behavior on the basis of continuing skepticism, and I simply don't believe you're living up to the ideals you think you are. I think Less Wrong has (or ought have) a higher standard than the one you're visibly meeting. The difference between solving the Emperor's Clothes problem and just being a contrarian is evidence and sound argument.

comment by zulupineapple · 2019-08-26T10:20:43.651Z · score: 1 (1 votes) · LW(p) · GW(p)

Is this ad hominem? Reasonable people could say that clone of saturn values ~1000 self-reports way too little. However it is not reasonable to claim that he is not at all skeptical of himself, and not aware of his biases and blind spots, and is just a contrarian.

"If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?"

Personally, I would go to a post about Double Crux, and ask for examples of it actually working (as Said Achmiz did). Alternatively, I would list the specific concerns I have about Double Crux, and hope for constructive counterarguments (as clone of saturn did). Seeing that neither of these approaches generated any evidence, I would deduce that my impressions were right.

comment by Elizabeth (pktechgirl) · 2017-09-30T04:01:14.427Z · score: 3 (1 votes) · LW(p) · GW(p)

What makes you think describing why you personally won't go to a workshop would get you in trouble?

comment by clone of saturn · 2017-09-30T08:49:22.302Z · score: 5 (2 votes) · LW(p) · GW(p)

I suspect I'm already being more confrontational than you'd prefer, and I don't want to further wear out my welcome, or take the risk of causing unnecessary friction, by bringing up any other potentially negative points not directly related to CFAR's rationality content or Double Crux. Should I take it that I was being unnecessarily cautious?

comment by lahwran · 2017-09-28T08:03:19.217Z · score: 4 (5 votes) · LW(p) · GW(p)

Also, it comes from CFAR, which is an anti-endorsement.

this seems like intentionally rude wording to me.

(edited - this is all I ever meant.)

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T08:47:35.130Z · score: 8 (6 votes) · LW(p) · GW(p)

I admit that I’m puzzled by your comment. What is it that you think I might be hiding, or that I might wish to (plausibly) deny…? I thought I’d made myself reasonably clear, but if some part of my comment’s meaning seems obscure to you, I’d be glad to clarify…

(As a side note, and more generally, I’d like to note my very strong distaste for any community / site discourse norms that required commenters to hold to “prosocial wording” at all times. There is a difference between respectfulness and common decency, on the one hand, and on the other, this sort of stifling tone policing.)

comment by gjm · 2017-09-28T11:43:29.962Z · score: 11 (4 votes) · LW(p) · GW(p)

I agree: it doesn't read at all like an attack hidden behind plausible deniability, it reads like an attack that isn't hidden at all.

But what's it for?

Unless you think there are a lot of LW-adjacent people who regard "X comes from CFAR" as evidence against X being useful (my guess is that there are not, though there are probably a fair few who think "X comes from CFAR" is no evidence to speak of that it actually is useful), it's not doing anything to resolve Raemon's curiosity about why the technique hasn't become popular. (I think the rest of what you wrote, however, does an admirable job of that, and I agree that it seems like a sufficient explanation.)

And, if in fact doublecruxing's CFAR origins are a problem for any reason, it's not like there's much anyone can actually do about them.

The immediate impression I get from your remark about CFAR is this: "Said Achmiz really doesn't like CFAR, and he wants everyone to know it, so much so that he puts anti-CFAR jabs into comments where they add nothing and probably serve only to antagonize people who might otherwise listen more willingly to what he's saying". It's the same feeling I get from the similar jabs some people like to make at one another's political or (ir)religious positions. I think they (and I am very much including yours here) tend to push discussions in the direction of tribal warfare (are you on Team CFAR-is-Good or Team CFAR-is-Bad?) and make them less productive.

There absolutely should not be any sort of obligation to be "prosocial" here. And if you wrote a post about why you think CFAR does more harm than good, I would read it with interest and probably upvote it. (My main reservation would be that communities like this tend to spend too much time discussing themselves and not enough time discussing actual issues, and this might be heading in the same direction.) But, while I'm not sure I can endorse the specific complaint lahwran made, I very much do endorse a slightly different one: your comment about CFAR was gratuitously rude and largely irrelevant, and what you wrote would have been better without it.

comment by ozymandias · 2017-09-28T14:54:08.086Z · score: 12 (7 votes) · LW(p) · GW(p)

I am concerned about a fairly mild anti-CFAR comment getting this much criticism. I do think "part of the reason I haven't adopted double crux is that I don't trust CFAR" is a relevant comment. Even if it wasn't, I worry that motivated reasoning will cause people to be far more upset about criticism of respected rationalist organizations than they are of other institutions, and for this to lead to a dynamic where people are quiet about their feelings about CFAR for fear of being dogpiled. This seems harmful both as a community norm and to CFAR itself.

comment by gjm · 2017-09-28T15:19:57.134Z · score: 13 (6 votes) · LW(p) · GW(p)

To be clear, I am not complaining about SA's comment because it's anti-CFAR. I'm pretty skeptical about CFAR myself; I wouldn't go as far as SA does, but the fact that CFAR recommends something doesn't seem to me very good evidence for it.

I'm complaining about SA's comment because it seems to me irrelevant, un-called-for, and likely to annoy or upset some readers (of whom I am not one) with no offsetting benefit to make it worth while.

But I very much hope that no one feels unable to criticize CFAR or MIRI or any other entity for fear of being dogpiled, and (as one alleged dog in the alleged pile) promise that if I see such dogpiling happening to someone for relevant criticism then I will be right there on the barricades defending them.

comment by lahwran · 2017-09-28T15:22:22.208Z · score: 9 (3 votes) · LW(p) · GW(p)

I'm actually confused that you think my comment was bad - I was thinking the same thing you ended up saying.

comment by gjm · 2017-09-28T15:30:53.553Z · score: 3 (1 votes) · LW(p) · GW(p)

I'm confused too. I don't think your comment was bad, though as I wrote I'm not sure I could quite endorse the exact complaint it originally made.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T12:55:04.646Z · score: 3 (3 votes) · LW(p) · GW(p)

Unless you think there are a lot of LW-adjacent people who regard "X comes from CFAR" as evidence against X being useful

I do think that, in fact. (Caveat: I don’t know about “a lot”; I couldn’t speak to percentages of the user base or anything. Certainly not just me, though.)

If you took my comment as merely a political jab, feel free to ignore it. I am certainly not interested in discussing CFAR-in-general in this thread (though would be happy to discuss it elsewhere). But that part of my comment was fully intended to be as substantive and on-point as the rest of it.

There absolutely should not be any sort of obligation to be "prosocial" here.

I think that it might be productive for the moderation team to comment on this point in particular. It seems like this might be a genuine difference in expectations between segments of the user base, and between the moderators and some of said segments.

(I think the rest of what you wrote, however, does an admirable job of that, and I agree that it seems like a sufficient explanation.)

Thank you.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T13:03:00.988Z · score: 2 (1 votes) · LW(p) · GW(p)

Here’s a more general comment re: the relevance of my aside—not about this issue in particular, but this general class of things.

I have, quite a few times in the past, had the experience of bringing up something like this, and having the responses of other participants or potential-participants in the discussion be split along lines as follows:

Some people: That was unnecessary! And irrelevant. No one else feels this way, why bring your grudge into this unrelated matter?

Other people: Thank you for saying that. I, too, feel this way, and agree that this is highly relevant, but didn’t want to say anything.

Those in the first category are usually oblivious to the existence and the prevalence of those in the second.

So yes, I think that it is not only absolutely permissible, but indeed mandatory, to insert just such asides into just such discussions. If there’s no uptake—well, then I simply drop the matter. Saying it once, or at least once in a long while, is sufficient; I have no problem changing the subject. But pervasive silence in such cases is how echo chambers form.

comment by gjm · 2017-09-28T15:28:44.272Z · score: 5 (3 votes) · LW(p) · GW(p)

I can very well believe that remarks like this get exactly those sorts of comments, but I don't think the existence of the Other People is good evidence that the remarks are a good idea. All it need show is that there are people who are cross about X (in this case X=CFAR) and feel that their views are underrepresented, which is not sufficient to make anti-X jabs helpful contributions to any given discussion.

If your opinion is that CFAR is a fraud or a scam or just inept and want to reassure others who hold similar views, then make a post actually about that explaining why you think that. It'll be far more effective in showing those people that they have allies, it'll provide a venue for others who agree to explain why (and for those who disagree to explain why, which should also be important if we're trying to arrive at the truth), and it'll have some chance of persuading others (which at-most-marginally-relevant jabs will not).

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T15:43:06.520Z · score: 12 (6 votes) · LW(p) · GW(p)

If going to the effort of writing a whole post about a concern is a prerequisite to ever mentioning the concern at all, then I think that’s an entirely unreasonable barrier, and certain to create a chilling effect on discussions of that concern. I oppose such a policy unreservedly.

All it need show is that there are people who are cross about X (in this case X=CFAR) and feel that their views are underrepresented, which is not sufficient to make anti-X jabs helpful contributions to any given discussion.

I thought that “and the concern in question is relevant to the current discussion” was implied. But consider it now stated outright! Append that, mentally, to what I said in the grandparent. (Certainly, as I made clear in the parallel thread, I think that the CFAR issue is relevant to this discussion.)

comment by gjm · 2017-09-28T17:05:54.346Z · score: 14 (3 votes) · LW(p) · GW(p)

Perhaps I wasn't clear: I don't think you are, or should be, forbidden to mention your opinions of / attitude to CFAR if you aren't willing to make a whole post explaining them. That would be crazy.

What I do think (which seems to me much less crazy) is this: 1. If, as you say three comments upthread from here, you feel that you have an obligation to say bad things about CFAR in public so that LW2 doesn't become a pro-CFAR echo chamber, then what you've done here is not a very effective way of doing it, and writing something more substantial would be much more effective. And: 2. Dropping boo-to-CFAR asides into discussions of something else is likely to do more harm than good (even conditional on CFAR being bad in whatever ways you consider it bad; in fact, probably more so if it is) because its most likely effect is to make fans of CFAR defensive, people who dislike CFAR gloaty, and people who frankly don't care much about CFAR annoyed at having what seem like political rivalries injected into otherwise-interesting discussions.

Of course, what's ended up happening is that there's been a ton of discussion and you may end up expending as much effort as if you'd written a whole post about why you are unimpressed by CFAR, but without the actual benefits of having done so. For the avoidance of doubt, that wasn't my intention, and I doubt it was anyone else's either, but it's not exactly a surprising outcome either; gratuitously inflammatory asides tend to have such consequences...

comment by lahwran · 2017-09-28T16:26:17.757Z · score: 9 (2 votes) · LW(p) · GW(p)

Very enthusiastic +1 to this. I also don't want to have a policy (that, empirically, I currently have, I guess?) of making people who say things like what you said, end up having to defend their views for hours in replies.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T15:50:58.151Z · score: 7 (4 votes) · LW(p) · GW(p)

Replying to your edit:

  1. General request to all commenters: when editing a post to change wording or content, please retain the original wording / content, if existing replies to your comment reference it or depend on it in any way. Doing otherwise destroys the coherence of comment threads, and makes them less useful to later readers.

  2. Re: the edited comment: it baffles me that you perceive that sentence as not only rude, but so rude that it could only be intentional—given that I chose my words carefully, to avoid explicit abuse or impoliteness! How could I have phrased my comment instead, in your opinion, that would’ve upgraded it at least to the level of “unintentionally rude” (“actually polite” is probably too much to hope for), without losing the meaning?

I am dismayed by the discourse norms that such comments imply. :(

comment by lahwran · 2017-09-28T16:19:41.275Z · score: 10 (2 votes) · LW(p) · GW(p)

I am surprised at 2, and want to retract my comment and make this whole subthread not able to hurt me any more. I'm feeling a lot of social disapproval at my having posted the comment, and my update from it is to just not make comments like that, which I think is a good outcome for your preferences about discourse norms. I can't stand social disapproval like this, and I feel an urgent need to change however will make it go away the fastest - on most sites, that's "delete my comment, never post another one like it".

Though actually, I have 4 points now. But I still acutely feel your disapproval of my having expressed disapproval at you, and want to just take it back and let you talk how you want.

(meta: it's quite scary for me to try to be honest about this. I feel urge to reply with my actual feelings in the interest of truth seeking, but normally would just be silent.)

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T17:26:48.698Z · score: 6 (2 votes) · LW(p) · GW(p)

Upvoted. I regret that my comments had this effect on you (though do not regret making them). I hope that you will continue to comment no less earnestly than you’ve done so far, and encourage you to do so.

which I think is a good outcome for your preferences about discourse norms

My discourse norms are honesty, integrity, and truth.

comment by lahwran · 2017-09-28T18:50:57.206Z · score: 9 (2 votes) · LW(p) · GW(p)

I regret that my comments had this effect on you (though do not regret making them).

I like this. My approval drives would lead to a chilling effect on truth-seeking if everyone tried to white-box optimize them when having conversations, and I don't endorse that; I'd rather people hurt me a bit than fail at truth-seeking. I wish I had a better way to defend myself from the hurt of social disapproval, though; eg, disowning a comment.

My discourse norms are honesty, integrity, and truth.

I endorse those.

comment by gjm · 2017-09-28T17:09:27.930Z · score: 3 (0 votes) · LW(p) · GW(p)

Strongly agree on #1 (with obvious exceptions if the original wording reveals trade secrets, libels people likely to bring legal action, etc; but in those cases you should still describe what used to be there even if you can't preserve it).

On #2, I can't share SA's bafflement. What isn't rude about saying that a particular organization is so useless that when, attempting to do its job, it recommends doing a thing, that's evidence against the value of doing it?

I guess it's not rude if you know there's no one around who belongs to, or identifies strongly with, that institution. But that's not very likely in these parts. Otherwise: what baffles me is how anyone would expect that not to be rude.

(To be clear: "Rude" is not the same thing as "bad" or "wrong". Sometimes being rude is a good thing. Sometimes it is a necessary evil. I am not claiming that no one should ever be rude.)

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T17:23:23.552Z · score: 3 (1 votes) · LW(p) · GW(p)

You seem to be using “rude” in such a way that the property of rudeness can attach to claims on the basis of their propositional content only. That, to me, is a very strange usage.

It seems to me that either you must think that there’s nothing necessarily wrong with being rude; or, you must think that certain claims simply cannot be made, certain propositions simply cannot be expressed—regardless of their truth value (if they are not trade secrets or so on).

I disagree with the latter, and prefer a word usage that makes the former false (else the word “rude” becomes largely useless).

comment by Raemon · 2017-09-28T17:51:24.826Z · score: 8 (2 votes) · LW(p) · GW(p)

It's too late to accomplish this by this point, but the response I had planned for your CFAR comment (I actually had it planned before lahwren responded), which I didn't have time to write before going to bed, was something like:

"I had an initial negative reaction and urge to downvote when I saw the CFAR comment, but I quickly noticed that most of that was coming from a place of tribal emotions (i.e. 'must defend my people!') which I didn't endorse. I briefly considered trying to respond in a more careful way that got to the heart of the issue, but it seemed like the 'yay CFAR? / boo CFAR?' question was basically a distraction. There may be a time/place for it but this isn't it.

I'd prefer if people didn't end up having a giant discussion about "is CFAR good/bad?" and instead stuck to discussion of Double Crux as a technique."

comment by Raemon · 2017-09-28T17:55:33.095Z · score: 8 (3 votes) · LW(p) · GW(p)

Having said that, in light of your other comment about wanting to see a public Double Crux, "should CFAR be positive or negative evidence of a technique's validity" is precisely the sort of question that Double Crux is for, and I'd be interested in doing a public DC on it with you if you're up for it. (Normally I'd suggest Skype, but since part of the point is to produce something easy for others to consume, a chatlog could be fine.)

(that said, I'm fairly busy in the next 30 hours or so. I might be up for it Friday night or over the weekend though)

(Edit: it looks like some other people also offered something like this, I don't think it's especially important I be involved, but think it'd probably be valuable in any case)

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T18:25:21.555Z · score: 3 (1 votes) · LW(p) · GW(p)

I agree with you re: the grandparent, and I appreciate the offer re: the Double Crux.

I am, sadly, unlikely to be able to take you up on it; my “commenting on or about an internet forum” time budget is already taken up by this flurry of activity here on LW 2.0.

Instead, I’d just like to reiterate my request / suggestion that you folks find some way to be able to point readers to pre-existing, publicly viewable examples of the technique being used. I think much hinges on that, at this point. Offering, when questioned, to demonstrate Double Crux, by way of trying to debate whether Double Crux is any good, is all very well, but—it simply doesn’t scale!

comment by Raemon · 2017-09-28T18:30:47.622Z · score: 7 (1 votes) · LW(p) · GW(p)

Doesn't scale, but seems like it should happen at least once. (tongue sort of but not entirely in cheek). Then you can just link to it the second time.

The problem is that Double Crux is best conducted in ways that aren't very amenable to publicizing (i.e. a private walk where people feel freer), so there needs to be some attempts to do a public one at a time when:

- it's high enough stakes that it matters so you can see people using the technique for real
- it's low enough stakes that it's okay to publicly share it without you having to worry about "looking good" during the discussion
- it's convenient to record in some way

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T18:51:31.177Z · score: 6 (2 votes) · LW(p) · GW(p)

The problem is that Double Crux is best conducted in ways that aren't very amenable to publicizing (i.e. a private walk where people feel freer)

Well, as I say elsewhere in these comments—that does make it of rather limited utility to much of the LW readership!

comment by Raemon · 2017-09-28T19:24:36.415Z · score: 6 (1 votes) · LW(p) · GW(p)

I agree, which is why I think noticing that there's an opportunity to do a public one (i.e. now) is something that should be treated as a valuable opportunity that's worth treating differently than arguing-on-the-internet-qua-arguing-on-the-internet.

(I also think arguing "should 'created by CFAR' be positive or negative evidence" is at least slightly less meta-sturbatory than "let's double crux about double crux")

comment by Conor Moreton · 2017-09-28T23:09:43.661Z · score: 7 (2 votes) · LW(p) · GW(p)

Strong agree that it's both true that "the lack of an example to point to produces justified skepticism" and that "that's partly unfair because that skepticism and other 'too busys' keep feeding into no one taking the time to create said example."

comment by gjm · 2017-09-28T19:06:00.329Z · score: 3 (0 votes) · LW(p) · GW(p)

Yes, I think things can be rude on the basis of their propositional content. (But not only their propositional content.) If I state that you are very unintelligent, and I say it in the presence of you or of your friends, then I am being rude. I can do it in extra-rude ways ("Said is a total fucking moron") or in less-rude ways ("I have reason to think that Said's IQ is probably below 90") but however you slice it it'll be rude.

(For the avoidance of doubt, of course I do not in fact think any such thing.)

I do, indeed, think there is nothing necessarily wrong with being rude. As I said: Sometimes being rude is a good thing, and sometimes it's a necessary evil. All else being equal, being rude is usually worse than not being rude, but many other things may outweigh the rudeness.

I don't see that this makes the word "rude" largely useless, and I'm not sure why it should. If you mean it makes it meaningless then I strongly disagree (I take it to mean something like "predictably likely to make people upset", though for various reasons that isn't exactly right). If you mean it makes it unactionable then again I disagree; it just means that acting on the knowledge that something is rude is more complicated than just Not Doing It. (If you want to upset someone, which there may be good reasons for though usually there aren't, then rudeness is beneficial. If you don't but other things are higher-priority for you than not upsetting people, then you weigh up the benefits and harms, as always.) If you mean something other than those and the above hasn't convinced you that my way of using "rude" isn't useless, then you might want to explain further.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T22:33:09.973Z · score: 1 (2 votes) · LW(p) · GW(p)

Indeed I meant “meaningless”, or perhaps “encompassing many disparate meanings under the umbrella of one word; attempting to refer to unrelated concepts as if they are the same or closely clustered; failing to cleave reality at the joints”.

I find it quite unnatural to apply the word “rude” as you do, and, to be extra clear, will certainly never mean anything like this when I use the word.

My takeaway here is that if you tell me that something is “rude”, I have not really gained any information about what you think of the thing, nor will I take you to have made any kind of definite claim about the thing, nor even do I know whether you’re attempting to ascribe positive valence to it or negative. (This is, to my mind, an unfortunate consequence of using words in strange ways, though of course you are free to use words as you please.)

I suppose I will have to remember, should you ever describe my comments as “rude” henceforth, to reply with something like—“Ok, now, what actually do you mean by this? ‘Rude’, yes, which means what…?”.

comment by gjm · 2017-10-01T00:12:35.533Z · score: 3 (0 votes) · LW(p) · GW(p)

I am confused. (And also, apparently, confusing, which I regret.)

If I say something is rude then you learn that in my opinion it is likely to upset or offend a nontrivial fraction of people who read it. (Context will usually indicate roughly which people I think are likely to be upset or offended.)

How is that no information? How have I made no definite claim?

(It is true that merely from the fact that I call something rude you cannot with certainty tell whether I am being positive about it or negative. The same is true if I call something large, ingenious, conservative, wooden, complex, etc., etc., etc. I don't see how this is a problem. For the avoidance of doubt, though, most of the time when I call something rude I am being negative about it, even if I think that the rudeness was a necessary evil.)

My use of the word "rude" doesn't seem to me particularly nonstandard or strange. It's more or less the same as definition 5a in the OED, which is "Unmannerly, uncivil, impolite; offensively or deliberately discourteous". (The OED has lots of definitions, because "rude" does in fact have lots of meanings. It can e.g. sometimes mean "unrefined" or "vigorous".)

Clearly you are dissatisfied with my usage of the word "rude". Perhaps you might tell me yours; it is still not clear to me either what it is or why it might be better than mine. From what you say above, it seems that you want it used in such a way that "X is rude" strictly implies "X is morally wrong", but if that's really so then I'm unable to think of any meaning that does this while coming anywhere near the specificity that "rude" usually has. (At least for those who have moral systems not entirely based around not giving offence, which I am pretty sure includes both of us.)

comment by Duncan_Germain · 2017-09-28T16:27:25.779Z · score: 0 (3 votes) · LW(p) · GW(p)

Re: "it comes from CFAR, which is an anti-endorsement."

I find that a large majority of people who have a moderate-to-strong negative opinion of CFAR have either a) never subjected that opinion to falsification or b) not checked in since forming the opinion a long time ago.

Generally speaking, when I engage with such people, they come away much less hesitant or skeptical or critical, and I believe this is because of justified updates rather than because of e.g. me having a persuasive reality distortion field.

Most of the updates come in one of the following forms:

  • Ah, okay, CFAR's made significant improvements along this axis that I was right to criticize it on.

  • Ah, okay, CFAR is aware that this attribute that it has isn't ideal; I thought they were proceeding in ignorance but in fact they're making a cost-benefit decision and while I might disagree with their weighting I am less concerned that they're blind or stupid.

  • Ah, okay, this criticism I had was based on assumptions that are simply false, or on information that is simply inaccurate, and while CFAR maybe deserves some blame for imperfect image management and creating-or-allowing-others-to-create those impressions, the problem I thought existed literally doesn't exist.

Said, if you would like to engage publicly with me regarding your own hesitations/criticisms/suspicions, I'm happy to make double crux motions unilaterally from my end as we do so, and then you'd have at least half of a public instance of double crux. (I won't insist that you use the frame yourself until you're at least convinced that it has potential.)

I do note that my mainline prediction for "this doesn't work or doesn't happen" is something like "Said claims that it's not worth his time and attention to repair his impression of CFAR, given opportunity costs and prioritization and expected outcomes according to his models." That seems fair and plausibly correct, but if that's the case, I do request that in future criticisms you flag that your negative model of my org is resistant to falsification.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T17:38:57.664Z · score: 10 (3 votes) · LW(p) · GW(p)

Said, if you would like to engage publicly with me regarding your own hesitations/criticisms/suspicions

I am not opposed to this, per se, but—

I do note that my mainline prediction for "this doesn't work or doesn't happen" is something like "Said claims that it's not worth his time and attention to repair his impression of CFAR, given opportunity costs and prioritization and expected outcomes according to his models." That seems fair and plausibly correct, but if that's the case, I do request that in future criticisms you flag that your negative model of my org is resistant to falsification.

I’m afraid I have to object to this. The following aren’t equivalent:

It’s not worth my time and attention to engage with you, right now, in this context and fashion


It’s not worth my time to re-examine [“repair”? why “repair”? this seems to assume the outcome!] my impression of CFAR

Nor are these equivalent:

I am unwilling to engage with you about this [whether “here and now” or “anywhere, ever”]


I subject my views on this topic to no falsification of any kind [or do you hold that discussing the matter with you is the only possible way to gain accurate information or insight into your organization’s nature / activities / whatever?]

That said, I am willing to devote some effort to this.

Generally speaking, when I engage with such people, they come away much less hesitant or skeptical or critical, and I believe this is because of justified updates rather than because of e.g. me having a persuasive reality distortion field.

I believe you.

(I do not, however, think that this is as informative as it may seem, for various reasons which may perhaps come up in our discussion.)

Before we get deeper into this topic, may I ask—these interactions wherein you’ve convinced people to “come over to your side”, so to speak—have they taken place in person, or online? If the latter, are any public records of this available? (To be clear, I do not ask this because I doubt what you say about having persuaded people—I really do not.)

comment by Duncan_Germain · 2017-09-28T18:00:03.746Z · score: 4 (1 votes) · LW(p) · GW(p)

I appreciate pretty much everything about your reply up above.

Agreement that there was a false equivalency re: right now vs. ever.

Agreement that my phrasing presupposed an outcome (though that makes sense when you take the context of "the guy talking is the curriculum director at CFAR"). I predict that outcome, optimistically, but in fact the actual target should be and is "investigate" not "repair."

Unfortunately for the goal of record-keeping and evidence-creation, most of those interactions have taken place in person. I could generate stories about what they're like, but a better option seems to be "start taking notes now when they happen, and ask permission to make said notes public with reasonable anonymity."

Thanks for responding 100% positively/exactly as I would hope a LWer would respond. I'd love it if you let me know if I myself am not living up to that standard, as you gently did above.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T18:18:27.601Z · score: 7 (2 votes) · LW(p) · GW(p)

Thank you for the kind words.

Re: the previous interactions: that no notes from them are available is not the problem, nor would having notes help in any meaningful way. (Plus—and I really hate to be so blunt about this, but—notes can say whatever the note-taker, or even the note-poster-to-a-public-website, wants them to say! I’m not seriously suggesting falsification of anecdotal evidence, and as I say below, this is not really my primary concern here, but from the appearance-of-propriety perspective, having notes is not a great situation.)

No, the reason I asked about whether the cited interactions took place in person is certainly not disbelief or lack of evidence; and it is only in lesser part the desire to examine the interactions and see what I can conclude from them. The real reason is that an interaction in person is tremendously different from an interaction via a web forum (like this one)!!

These differences are so profound and far-reaching—and so especially relevant for people with “our sort” of minds—that I hesitate to even begin enumerating them (though I’ll attempt to, upon request; but they should be obvious, I think!). The point, in any case, is that viewed in light of these differences, your track record of convincing nay-sayers, while undoubtedly real, should be much less persuasive, even to yourself, than you imply it to be.

It would be very different if you could point us to an online exchange, where you, and a serious and thoughtful interlocutor, took the time to compose comments and replies back and forth—the paradigmatic example of such, around here, being the Yudkowsky–Hanson “AI Foom” debate. (Ah, but how did that one turn out, eh?)

comment by SilentCal · 2017-09-28T18:32:07.289Z · score: 2 (0 votes) · LW(p) · GW(p)

These differences are so profound and far-reaching—and so especially relevant for people with “our sort” of minds—that I hesitate to even begin enumerating them (though I’ll attempt to, upon request; but they should be obvious, I think!)

I request this enumeration, if your offer extends to interlopers and not just Duncan.

(The differences I can think of are instant vs asynchronous communication, nonverbal+verbal vs. verbal only, and speaking only to one another vs. having an audience. But I don't see why these are *inevitably* so profound and far-reaching.)