Books/Literature on resolving technical disagreements?
post by Hazard · 2019-11-14T17:30:16.482Z · LW · GW · 3 comments
This is a question post.
I've seen many books and schools of thought that seem to be about conflict resolution, like Crucial Conversations and Non-Violent Communication. There are multiple parties that want different things, there are strong emotional undertones/overtones, and these books advise you on how to navigate those conflicts, find some sort of common ground, and get people what they want.
So far the Double Crux framework is the only thing I've seen that's had the explicit goal of resolving disagreements, especially disagreements about technical topics. Can anyone recommend any other books or bodies of work that have this explicit goal?
Answers
I've looked into this question a little, but not very far. The following are some trailheads that I have on the list to investigate when I get around to it. My current estimation is that all of these are, at best, tangential to the problem that I (and it sounds like you) are interested in: getting to the truth of epistemic disagreements. My impression is that there are lots of things in the world that are about resolving disputes, but not many people are interested in resolving disputes in order to get to the correct answer. But I haven't looked very hard.
Nevertheless...
- The philosopher Robert Stalnaker has a theory of conversations that involves building up a series of premises that both parties agree with. If either party makes a claim that the other doesn't buy, you back up and substantiate that claim. Or something like that. I can't currently find a link to the essay in which he outlines this method (anyone have it?), but this seems the most interesting to me of all the things on this list.
- H/T to Nick Beckstead, who shared this with Anna, who shared it with me.
- There's a book called How to Have Impossible Conversations. I haven't read it yet, but it seems to be mostly about having reasonable conversations about heated political / culture war style topics.
- Erisology is the study of disagreement, a term coined by John Nerst.
- Argument mapping is a technique that some people claim is useful for disagreement resolution. I'm not very impressed, though.
- Bay NVC teaches something called "convergent facilitation", which is about making decisions accommodating everyone's needs, and executing meetings rapidly.
- There's circling, which a number of rationalists have gotten value from, including for resolving disagreement.
Most of the things that I know about, and that seem like they're in the vein of what you want, have come from our community. As you say, there's CFAR's Double Crux. Paul wrote this piece as a precursor to an AI alignment idea. Anna Salamon has been thinking about some things in this space lately. I use a variety of homegrown methods. Arbital was a large-scale attempt to solve this problem. I think the basic idea of AI safety via debate is relevant, if only for theoretical reasons (Double Crux makes use of the same principle of isolating the single most relevant branch in a huge tree of possible conversations, but Double Crux and AI safety via debate use different functions for evaluating which branch is "most relevant").
I happen to have written about another framework for disagreement resolution today; this one in particular is very much in the same family as Double Crux.
↑ comment by romeostevensit · 2019-11-16T14:25:46.709Z · LW(p) · GW(p)
Have you come across anything that gives concrete methods for articulating unstated premises?
One of the things certain people with superpowers seem to do, in the Feynman-esque tradition of keeping a list of unusual methods and unusual problems, is run a core loop built around a pretty flexible representation that they try to port everything into. The operations they have for that representation then act as a checklist: they can look for missing or overdetermined edges between vertices, or what have you (in this case a graph; I don't know how people think without graphs. Maybe graphs are a memetic virus).
edit: found these
https://www.webpages.uidaho.edu/eng207-td/Formal%20Argument%20Analysis.htm
https://slideplayer.com/slide/15828804/
A list of common bad premises
https://conceptspace.fandom.com/wiki/List_of_General_Semantics_Concepts
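The graph-checklist loop described above can be sketched minimally. This is an illustrative assumption about what such a representation might look like, not anything from the linked resources: an argument as a directed graph where an edge A → B means "A is offered as support for B", with a check that surfaces claims being used as support but never themselves supported (one mechanical way to spot unstated premises).

```python
from collections import defaultdict

# Illustrative claims; edge (premise, claim) means the premise
# is offered as support for the claim.
edges = [
    ("measurements show p95 latency under 100ms", "latency is acceptable"),
    ("latency is acceptable", "we should ship"),
    ("the feature is complete", "we should ship"),
]

# Map each claim to the premises offered in its support.
supported = defaultdict(list)
for premise, claim in edges:
    supported[claim].append(premise)

all_claims = {c for edge in edges for c in edge}

# Claims with no incoming support edges: these are the load-bearing
# assumptions the argument takes for granted.
leaves = sorted(c for c in all_claims if c not in supported)
print(leaves)
# -> ['measurements show p95 latency under 100ms', 'the feature is complete']
```

The same structure supports the "overdetermined edges" check in the other direction: a claim with many incoming support edges may be doing suspiciously little work in the disagreement.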
↑ comment by Matt Goldenberg (mr-hire) · 2019-11-16T14:31:25.835Z · LW(p) · GW(p)
Thinking at the Edge gives an excellent process for this.
↑ comment by romeostevensit · 2019-11-16T14:38:30.843Z · LW(p) · GW(p)
+1. Thinking at the Edge is underrated relative to Focusing.
↑ comment by Matt Goldenberg (mr-hire) · 2019-11-15T18:15:50.445Z · LW(p) · GW(p)
That last link seems to go to the wikipedia article on argument mapping, and not whatever you wrote about today.
↑ comment by Eli Tyre (elityre) · 2019-11-22T05:56:29.066Z · LW(p) · GW(p)
Whoops. Mixed up my links. Fixed now.
3 comments
Comments sorted by top scores.
comment by romeostevensit · 2019-11-14T20:17:11.476Z · LW(p) · GW(p)
Instead of searching for things that are about disagreements, I'd look for things that are about creating technical diagrams or other large-scale representations of problems, and then figure out which aspects are good for doing with 2 people.
↑ comment by Hazard · 2019-11-14T20:59:36.851Z · LW(p) · GW(p)
Thanks. Any particular key words or fields you'd suggest?
↑ comment by romeostevensit · 2019-11-15T15:22:31.875Z · LW(p) · GW(p)
This and other pubs within intelligence analysis were useful for me. There might be some stuff written up somewhere on the different approaches that were tried for having superforecaster teams aggregate their methods (what I found here was pretty vague; it seemed like they tried a lot of things and nothing was a grand slam versus the others). Also, judgmental bootstrapping and deliberate practice have overlap.