Can we build a better Public Doublecrux?

post by Raemon · 2024-05-11T19:21:53.326Z · LW · GW · 7 comments


  Some comments from other discussions
  Ideas or Volunteers?

Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format. I'm not sure if I'll end up focusing on it, but here are some ideas.

Public Doublecrux is a more truthseeking oriented version of Public Debate. The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice versa."

Reasons to want to do public doublecrux include:

Sidebar: Public Debate is also good although not what I'm gonna focus on here.

I know several people who have argued that "debate-qua-debate" is also an important part of a truthseeking culture. It's fine if the individuals are trying to "present the best case for their position", so long as the collective process steers towards truth. Adversarial Collaboration is good. Public disagreement is good. 

I do generally buy this, although I have some disagreements with the people who argue most strongly for Debate. I think I prefer it to happen in written longform rather than in person, where charisma puts a heavier thumb on the scale. And while I think it can produce social good, many variants of it seem... kinda bad for the epistemic souls of the people participating? By becoming a champion for a particular idea, people seem to get more tunnel-vision-y about it. Sometimes that's worth it, but I've felt some kind of missing mood here when arguing with people in the past.

I'm happy to chat about this in the comments more but mostly won't be focusing on it here.

Historically I think public doublecruxes have had some problems:

  1. First, having the live audience there makes it a bit more awkward and performative. It's harder to "earnestly truthseek" when there's a crowd you'd still kinda like to persuade of your idea, or at least not sound stupid in front of.
  2. Historically, the people who ended up doing "public doublecrux" often hadn't fully understood or bought into the process. They tend to veer toward either classical debate, or "just kinda talking."
  3. When two people are actually changing their minds, they tend to get into idiosyncratic frames [LW · GW] that are hard for observers to understand. Hell, it's hard even for the two people in the discussion to understand. They're chasing their cruxes rather than presenting "generally compelling arguments," which tends to require getting into the weeds and going down rabbit holes that don't feel relevant to most people.

With that in mind, here are some ideas:

For the facilitators:

One facilitator is in the room with the doublecruxers, focused on helping them steer toward useful questions. They'd probably start by guiding the participants toward communicating their basic positions, then ironing out their differences in ontology. They ask questions like "can you paraphrase what you think the other person's position is?".

The second (and maybe third) facilitator hangs out with the audience outside, focused on tracking "what is the audience confused about?". The audience participates in a live Google Doc, organizing the conversational threads and asking questions.

The first facilitator surreptitiously checks the Google Doc or chat from time to time, and periodically summarizes their guess at the state-of-the-debate for the audience's benefit.

Those were just some starting ideas, but my most important point here is to approach this as an unsolved "product development" problem. Invest in trial runs with different participants and audiences, with a specific eye towards identifying the problems and ironing out kinks.

Some comments from other discussions

I'd previously talked about this on facebook and twitter. Two comments that seemed particularly good to crosspost as potential ideas:

Duncan Sabien suggested:

My first off-the-top idea is actually more like Circling double crux. Two people are double cruxing (or similar) while a third party is right there with them, and periodically (after no less than 1min and no more than like 6min of back-and-forth) interrupts them to draw out "okay, what was going on there? What were you doing in your head? What was the goal of that probe?" etc.

So the two main participants are spending half their time progressing on the object level, and half their time expositing about what's going on in their heads.

Duncan didn't specify his goals here, but my interpretation (which seems worth exploring to me) is that this is meant to both:

Divia noted:

I’ve done some public double crux attempts! I’d say I had varying results.

I found it super important for me to do a lot of crux mapping and repeated summarizing and checking

Some of them turned into basically what I would call trying to understand one of the people’s positions and mostly ignoring the other one

Here's a twitter thread about a double crux of Eli's that I liked: 

Meanwhile, on twitter Anna Salamon suggested:

I think it’s wise to also map out anti cruxes: statements that both parties already agree about and expect to continue agreeing about regardless of how the discussion goes (that are as near as possible to the disagreement). Useful in private, more useful in public.

I replied:

ah yeah, that sounds right. (though I'm not really a fan of the "anti-crux" name for it, I'd naively just think that means "thing that doesn't matter")

(I had always thought it'd make sense for 'the double crux' to be called 'the common crux', since it was more clear that it was shared between the people. And, if you had that, you might naturally call 'the things we both believe' the 'common ground')

(I thought about trying to call it "Common Crux" in this post to facilitate my agenda of renaming it, but that seemed more likely to be confusing than helpful. If I end up pursuing this project in more detail I might push for it more tho)

Ideas or Volunteers?

Those are some takes for now. I'm not sure if I'm going to pursue this immediately, but I wanted to leave these thoughts here.

I'm interested in both:


Comments sorted by top scores.

comment by cousin_it · 2024-05-11T21:16:12.709Z · LW(p) · GW(p)

Last year I had an idea for a debate protocol [LW(p) · GW(p)] which got pretty highly upvoted.

Replies from: Raemon
comment by Raemon · 2024-05-11T21:34:09.889Z · LW(p) · GW(p)

Ah yeah, that actually seems like maybe a good format, given that the event I'm preparing for is a blogging festival. There's some tension with one of my goals, which is "make something that makes for an interesting in-person event" (we sorta made our jobs hard by framing an in-person event around blogging). Although something like "get two attendees to do this sort of debate framework beforehand, and then have an interviewer/facilitator run a takeaways discussion panel" might be good.

Copying the text here for convenience:

Here's a debate protocol that I'd like to try. Both participants independently write statements of up to 10K words and send them to each other at the same time. (This can be done through an intermediary, to make sure both statements are sent before either is received.) Then they take a day to revise their statements, fixing the uncovered weak points and preemptively attacking the other's weak points, and send them to each other again. This continues for multiple rounds, until both participants feel they have expressed their position well and don't need to revise more, reaching a kind of Nash equilibrium. Then the final revisions of both statements are released to the public, side by side.

Note that in this kind of debate the participants don't try to change each other's mind. They just try to write something that will eventually sway the public. But they know that if they write wrong stuff that the other side can easily disprove, they won't sway the public. So only the best arguments remain, within the size limit.
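For illustration, the exchange loop in this protocol could be sketched as a toy harness (this is my own sketch under stated assumptions, not anything cousin_it specified; the revision functions stand in for human participants):

```python
# Toy sketch of the simultaneous-exchange protocol: each round, both
# sides revise after seeing the other's previous statement, and the
# process stops once neither side wants to revise (a fixed point).

def run_debate(a_revise, b_revise, a0, b0, max_rounds=10):
    a, b = a0, b0
    for _ in range(max_rounds):
        # Simultaneity: both revisions are computed from the *previous*
        # pair of statements, as the intermediary would guarantee.
        a_next, b_next = a_revise(a, b), b_revise(b, a)
        if a_next == a and b_next == b:
            break  # equilibrium reached: no further revisions
        a, b = a_next, b_next
    return a, b

# Stand-in participants that stop revising after three passes:
grow = lambda own, other: own if len(own) >= 3 else own + "x"
final_a, final_b = run_debate(grow, grow, "x", "x")
```

The interesting design property is the termination condition: the public only ever sees the equilibrium statements, never the intermediate drafts.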

Replies from: cousin_it
comment by cousin_it · 2024-05-11T21:41:18.420Z · LW(p) · GW(p)

Cool. If you go with it, I'd be super interested to know how it went, and lmk if you need any help or elaboration on the idea.

comment by whestler · 2024-05-13T14:59:42.437Z · LW(p) · GW(p)

I think it might be a good idea to classify a "successful" double crux as one where both participants agree on the truth of the matter at the end, or at least have shifted their worldviews to be significantly more coherent.

It seems like the main obstacles to successful double crux are emotional (pride, embarrassment), and associations with debates, which threaten to turn the format into a dominance contest.

It might help to start with a joint public announcement by both participants that they intend to work together to discover the truth: recognising that their currently differing models mean at least one of them has the opportunity to grow in their understanding of the world and become a stronger rationalist, and that they are committed to helping each other become stronger in the art.

Alternatively you could have the participants do the double crux in their own time, and in private (though recorded). If the double crux succeeds, then post it, and major kudos to the participants. If it fails, then simply post the fact that the crux failed but don't post the content. If this format is used regularly, eventually it may become clear which participants consistently succeed in their double crux attempts, and which don't, and they can build reputation that way, rather than trying to "win" a debate.

comment by DPiepgrass · 2024-05-13T20:44:17.893Z · LW(p) · GW(p)

Doublecrux sounds like a better thing than debate, but why should such an event be live? (Apart from "it saves money/time not to postprocess".)

comment by scarcegreengrass · 2024-05-13T16:58:23.154Z · LW(p) · GW(p)

I quite like the Arguman format of flowcharts to depict topics. In a live performance, participants might sometimes add nodes to the flowchart, or sometimes ask for revision to another participant's existing node. For example, asking for rewording for clarity.

Perhaps the better term would be tree, not flowchart. Each node is a response to its parent. This could perhaps be implemented with bulleted lists in a Google Doc.

It's nice for the event to output a useful document.
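The tree-of-responses structure described above is easy to prototype. Here's a minimal Python sketch (names and example content are mine, purely illustrative) where each node is a reply to its parent, and the whole tree renders as an indented bulleted list, much like a Google Doc outline:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One claim or response in the discussion tree."""
    text: str
    children: list["Node"] = field(default_factory=list)

    def reply(self, text: str) -> "Node":
        """Add a child node responding to this one and return it."""
        child = Node(text)
        self.children.append(child)
        return child

    def render(self, depth: int = 0) -> str:
        """Render the subtree as an indented bulleted list."""
        lines = ["  " * depth + "- " + self.text]
        for child in self.children:
            lines.append(child.render(depth + 1))
        return "\n".join(lines)

root = Node("Public doublecrux is worth iterating on")
objection = root.reply("Live audiences make it performative")
objection.reply("Facilitators can buffer the audience")
print(root.render())
```

A live facilitator could maintain something like this during the event, with "ask for revision to a node" mapping to editing a bullet in place.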

comment by Seed (Donqueror) · 2024-05-12T17:52:27.809Z · LW(p) · GW(p)

my most important point here is to approach this as an unsolved "product development" problem

This is a good take. I'd take it one step further and suggest that an even more material product is the ideal target.

Some of this may be obvious, but for the sake of clarity: for any potential disagreement, there is a process of attempting to identify the assumptions comprising the model that projects onto it. Where supporting assumptions are unstated or unnoticed, this may involve identifying significantly differing probabilities placed on some assertion, and working backwards to identify its supports. Tracking this process explicitly via one or more belief networks (on a whiteboard, a sheet of paper, or better yet, purpose-tailored software) allows reasoners to identify where they may have a crux, and to focus directly on building out or otherwise contending with its supports. Reasoners may prefer to produce their own separate belief networks and attempt to merge them.

Mapping out belief networks yields probability distributions for assertions, following from the combinatorics of their supports. When an assertion is under-supported relative to the reasoner's credence, this is made highly visible. The structure makes it apparent when an argument is hand-waved rather than contested via the supports it should have, or via conflicting supports, and helps fix attention on the unfinished work implied by each assertion raised. Many rhetorical techniques would rightly fail under these conditions. It also makes for a high-utility game of solitaire, whether or not a double crux with another reasoner is upcoming.
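As a toy illustration of credences "following from the combinatorics of the supports" (this is my simplification, not the commenter's actual scheme: it assumes an assertion requires all of its independent supports to hold), an under-support check might look like:

```python
# Minimal sketch: derive an assertion's implied credence from
# independent supports via a product rule, and flag assertions whose
# stated credence exceeds what the supports imply.

def implied_credence(support_probs):
    """P(assertion) if it requires all independent supports to hold."""
    p = 1.0
    for s in support_probs:
        p *= s
    return p

def flag_under_supported(stated, support_probs, tol=0.05):
    """Return (flagged, implied): flagged when stated credence
    exceeds the support-implied credence by more than tol."""
    implied = implied_credence(support_probs)
    return stated > implied + tol, implied

flagged, implied = flag_under_supported(0.9, [0.8, 0.7])
# Supports of 0.8 and 0.7 imply at most 0.56, so a stated credence
# of 0.9 gets flagged as under-supported.
```

A real tool would need dependence between supports, disjunctive support, and so on; the point is just that the gap between stated and implied credence becomes a visible, checkable quantity.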

A drawback is that some reasoners may prefer not to reveal all of their supports, as in the case of those which may contain infohazardous or exfohazardous content, or ones which may cause those things to be easier to derive or notice. In some cases, reasoners may prefer to engage in this sort of protocol in private, with the option to multilaterally make the results available after-the-fact.