Doublecrux is for Building Products
post by Raemon · 2019-07-17
2 years ago, CFAR's doublecrux technique [LW · GW] seemed "probably good" to me, but I hadn't really stress tested it. And it was particularly hard to learn in isolation without a "real" disagreement to work on [LW · GW].
Meanwhile, some people seemed skeptical about it, and I wasn't sure what to say to them other than "I dunno man this just seems obviously good? Of *course* you want to treat disagreements like an opportunity to find truth together, share models, and look for empirical tests you can run?"
But for people who didn't share that "of course", that wasn't very helpful.
For the past two years I've worked on a team where big disagreements come up pretty frequently, and where doublecrux has been more demonstrably helpful. I have a clearer sense of where and when the technique is important.
Some intractable disagreements are fine
If you disagree with someone on the internet, or a random coworker or something, often the disagreement doesn't matter. You'll each go about your lives, one way or another. If you and your friends are fighting over "Who would win, Batman or Superman?", coming to a clear resolution just isn't the point.
It might also be that you and your colleague are doing some sort of coalition-politics fight over the overton window, and most of the debate might be for the purpose of influencing the public. Or, you might be arguing about the Blue Tribe vs Red Tribe as a way of signaling group affiliation, and earnestly understanding people isn't the point.
This makes me sad, but I think it's understandable and sometimes it's even actually important.
Such conversations don't need to be doublecrux shaped, unless both participants want them to be.
Some disagreements are not fine
I mean "product" here pretty broadly – anything that somebody is actually going to use. It could be a literal app or widget, or an event, or a set of community norms, or a philosophical idea. You might literally sell it or just use it yourself. But I think there is something helpful about the "what if we were coworkers, how would we resolve this?" frame.
The important thing is "there is a group of people collaborating on it" and "there is a stakeholder who cares about it getting built."
If you're building a website, and one person thinks it should present all information very densely, and another person thinks it should be sleek and minimalist... somehow you need to actually decide what design philosophy to pursue. Options include (but aren't limited to):
- One person is in charge
- Two or more people come to consensus
- People have domain specializations in which one person is in charge (or gets veto power).
To start with, what's wrong with the "everyone just builds what seems right to them and you hope it works out" option? Sometimes you're building a bazaar, not a cathedral, and this is actually fine. But it often results in different teams building different tools at cross purposes, wasting motion.
One person in charge?
In a hierarchical company, maybe there's a boss. If the decision is about whether to paint a bikeshed red or blue, the boss can just say "red", and things move on.
This is less straightforward in the case of "minimalism" vs "high information density."
First, is the boss even doing any design work? What if the boss and the lead designer disagree about aesthetics? If the lead designer hates minimalism they're gonna have a bad time.
Maybe the boss trusts the lead designer enough to defer to them on aesthetics. Now the lead designer is the decision maker. This is an improvement, but just punts the problem down one level. If the lead designer is Just In Charge, a few things can still go wrong:
Other workers don't actually understand minimalism
"Minimalist websites" and "information dense websites" are designed very differently. This filters into lots of small design decisions. Sometimes you can solve this with a comprehensive style guide. But those are a lot of work to create. And if you're a small startup (or a small team within a larger company), you may not have the resources for that. It'd be nice if your employees just actually understood minimalism so they could build good minimalist components.
The lead designer is wrong
Sometimes the boss's aesthetic isn't locally optimal, and this actually needs to be pointed out. If lead-designer Alice says "we're building a minimalist website" it might be important for another engineer or designer to say "Alice, you're making weird tradeoffs for minimalism that are harming the user experience."
Alice might think "Nah, you're wrong about those tradeoffs. Minimalism is great and history will bear me out on this." But Alice might also respect Bob's opinion enough to want to come to some kind of principled resolution. If Bob's been right about similar things before, what should Alice and Bob do, given that Alice wants to find out she's wrong if and only if she's actually wrong, and her minimalist aesthetic really is harming the user experience?
The lead designer is right, but other major stakeholders think she's wrong
Alternately, maybe Bob thinks Alice is making bad design calls, but Alice is actually just making the right calls. Bob has rare preferences that don't overlap much with the average user's, and that shouldn't necessitate a major design overhaul.
Initially, this will look the same to both parties as the previous option.
If Alice has listened to Bob's complaints a bunch, and Alice generally respects Bob but thinks he's wrong here, at some point she needs to say "Look Bob, we just need to actually build the damn product now, we can't rehash the minimalism argument every time we build a new widget."
I think it's useful for Bob to gain the skill of saying "Okay, fine," letting go of his frustration, and embracing the design paradigm.
But that's a tough skill. And meanwhile, Bob is probably going to spend a fair amount of time and energy being annoyed about having to build a product he's less excited about. And sometimes, Bob's work is less efficient because he doesn't understand minimalism and keeps building site-components subtly incompatible with it.
What if there was a process by which either Alice would update or Bob would update, that both Alice and Bob considered fair?
You might just call that process "regular debate." But the problem is that regular debate often just doesn't work. Alice says "We need X, because Y". Bob says "No, we need A, because B", and they somehow both repeat those points over and over without ever changing each other's mind.
This wastes loads of time that could have been better spent building new site features.
Even if Alice is in charge and gets final say, it's still suboptimal for Bob to have lower morale and keep making subtly wrong widgets.
And even if Bob understands that Alice is in charge, it might still be suboptimal for Bob to feel like Alice never really understood exactly what Bob's concerns were.
What if there's no boss?
Maybe your "company" is just two friends in a basement doing a project together, and there isn't really a boss. In this case, the problem is much sharper – somehow you need to actually make a call.
You might solve this by deciding to appoint a decision-maker – change the situation from a "no boss" to "boss" problem. But if you were just two friends making a game together in their spare time, for fun, this might kinda suck. (If the whole point was to make it together as friends, a hierarchical system may be fundamentally un-fun and defeat the point)
You might be doing a more serious project, where you agree that it's important to have clear coordination protocols and hierarchy. But it nonetheless feels premature to commit to "Alice is always in charge of design decisions." Especially if Bob and Alice both have reasonable design skills. And especially if it's early on in the project and they haven't yet decided what their product's design philosophy should be.
In that case, you can start with straightforward debate, or making a pros/cons list, or exploring the space a bit and hoping you come to agreement. But if you're not coming to agreement... well, you need to do something.
If "regular debate" is working for you, cool.
If "just talking about the problem" is working, obviously you don't have an issue. Sometimes the boss actually just says "we're doing it this way" and it doesn't require any extensive model sharing.
If you've never run into the problem of intractable-disagreement while collaborating on something important, this blogpost is not for you. (But, maybe keep it in the back of your mind in case you do run into such an issue)
But working on the LessWrong team for about 1.5 years, I've run into numerous deep disagreements, and my impression is that such disagreements are common – especially in domains where you're solving a novel problem. We've literally argued a bunch about minimalism, which isn't an especially unique design decision. We've also had much weirder disagreements about integrity and intellectual progress and AI timelines and more.
We've resolved many (although not all) of those disagreements. In many cases, doublecrux has been helpful as a framework.
What's Doublecrux again?
If you've made it this far, presumably it seems useful to have some kind of process-for-consensus that works better than whatever you and your colleagues were doing by default.
Desiderata that I personally have for such a process:
- Both parties can agree that it's worth doing
- It should save more time than it costs (or produce value commensurate with the time you put in)
- It works even when both parties have different frames or values
- If necessary, it untangles confused questions, and replaces them with better ones
- If necessary, it untangles confused goals, and replaces them with better ones
- If people are disagreeing because of aesthetic differences like "what is beautiful/good/obviously-right", it provides a framework wherein people can actually change their mind about "what is beautiful and good and right."
- Ultimately, it lets you "get back to work", and actually build the damn product, confident that you are going about it the right way.
[Many of these goals were not assumptions I started with. They're listed here because I kept running into failures relating to each one. Over the past 2 years I've had some success with each of those points]
Importantly, such a process doesn't necessarily need to answer the original question you asked. In the context of building a product, what's important is that you figure out a model of the world which you both agree on, which informs which actions to take.
Doublecrux is a framework that I've found helpful for the above concerns. But I think I'd consider it a win for this essay if I've at least clarified why it's desirable to have some such system. I share Duncan's belief that it's more promising to repair or improve doublecrux than to start from scratch [LW · GW]. But if you'd rather start from scratch, that's cool.
Components of Doublecrux – Cognitive Motions vs Attitudes
There are two core concepts behind the doublecrux framework:
- A set of cognitive motions:
  - Looking for the cruxes of your beliefs, and asking what empirical observations would change your mind about them. (Recursing until you find a crux you and your partner both share, the "doublecrux")
- A set of attitudes:
  - Epistemic humility – "maybe I'm the wrong one"
  - Good faith – "I trust my partner to be cooperating with me"
  - Belief that objective reality is real – "there's an actual right answer here, and it's better for each of us if we've both found it"
  - Earnest curiosity
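Purely as a toy illustration (this is my own sketch, not part of the doublecrux technique itself, and the example cruxes are hypothetical), the "find a crux you both share" motion can be modeled like this:

```python
# Toy model only: real doublecrux is a conversation, not an algorithm.
# A "crux" is a belief such that, if you changed your mind about it,
# you'd change your mind about the top-level disagreement.

def find_double_crux(my_cruxes, their_cruxes):
    """Return the first belief that is a crux for both parties, or None.

    In practice, each party would keep generating deeper cruxes
    (recursing) until a shared one turns up.
    """
    shared = [c for c in my_cruxes if c in their_cruxes]
    return shared[0] if shared else None

# Hypothetical example: Alice and Bob disagree about minimalism.
alice = ["users prefer fewer elements", "density causes overwhelm"]
bob = ["power users need density", "density causes overwhelm"]

print(find_double_crux(alice, bob))  # -> density causes overwhelm
```

Once a shared crux like "density causes overwhelm" is found, the pair can go look for an empirical test of that belief instead of re-litigating the top-level aesthetic question.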
Of those, I think the set of attitudes is more important than the cognitive motions. If the "search for cruxes and empirical tests" thing isn't working, but you have the four attitudes, you can probably find other ways to make progress. Meanwhile, if you don't each have those four attitudes, you don't have the foundations necessary to doublecrux.
Using language for truthseeking, not politics
But I think the cognitive motions are helpful, for this reason: much of human language is by default politics rather than truthseeking. "Regular debate" often reinforces the use of language-as-politics, which activates brain modules that are optimizing to win, which involves strategic blindness. (I mean something a bit nuanced by "politics" here, beyond the scope of this post. But basically, optimizing beliefs and words for how you fit into the social landscape, rather than optimizing for what corresponds to objective reality).
The "search for empirical tests and cruxes-of-beliefs" motion is designed to keep each participant's brain in a "language-as-truthseeking" mode. If you're asking yourself "why would I change my mind?", it's more natural to be honest to yourself and your partner than if you're asking "how can I change their mind?"
Meanwhile, the focus on mutual, opposing cruxes keeps things fruitful. Disagreement is more interesting and useful than agreement – it provides an opportunity to actually learn. If people are doing language-as-politics, then disagreement is a red flag that you are on opposing sides and might be threatening each other (which might either prompt you to fight, or prompt you to "agree to disagree", preserving the social fabric by sweeping the problem under the rug).
But if you can both trust that everyone's truthseeking, then you can drill directly into disagreements without worrying about that, optimizing for learning, and then for building a shared model that lets you actually make progress on your product.
Trigger Action Plans
Knowing this is all well and good, but what might this translate into in terms of actions?
If you happen to have a live disagreement right now, maybe you can try doublecrux. But if not, what circumstances should prompt you to try it?
I've found the "Trigger Action Plan [LW · GW]" framework useful for this sort of thing, as a basic rationality building-block skill. If you notice an unhelpful conversational pattern, you can build an association where you take some particular action that seems useful in that circumstance. (Sometimes, the generic trigger-action of "notice something unhelpful is happening -> stop and think" is good enough)
In this case, a trigger-action I've found useful is:
TRIGGER: Notice that we've been arguing awhile, and someone has just repeated the same argument they said a little while ago (for the second, or especially third time)
ACTION: Say something like: "Hey, I notice that we've been repeating ourselves a bit. I feel like conversation is kinda going in circles...." followed by either "Would you be up for trying to formally doublecrux about this?" or following Duncan's vaguer suggestions about how to unilaterally improve a conversation [LW · GW] (depending on how much shared context you and your partner have).
Summary
- Intractable disagreements don't always matter. But if you're trying to build something together, and disagreeing substantially about how to go about it, you will need some way to resolve that disagreement.
- Hierarchy can obviate the need for resolution if the disagreement is simple, and if everyone agrees to respect the boss's decision.
- If the disagreement has persisted awhile and it's still wasting motion, at the very least it's probably useful to do something differently. In particular, if you've been repeating the same arguments, that's a sign to stop and try a different approach.
- Doublecrux is a particular framework I've found helpful for resolving intractable disagreements (when they are important enough to invest serious energy and time into). It focuses the conversation into "truthseeking" mode, and in particular strives to avoid "political mode".