Basics of Handling Disagreements with People
post by Camille Berger (Camille Berger) · 2024-11-12T17:55:08.143Z · LW · GW · 4 comments
Epistemic Status: This is a collection of useful heuristics I’ve gathered from a wide range of books and workshops, all rather evidence-based (robustness varies). These techniques are designed to supplement the basics of rationalist discourse [LW · GW], helping facilitate interactions—mostly with those unfamiliar with rationalist thought, especially on entry-level arguments. They may also be useful in conversations between rationalists on occasion. This is also a minimum viable product for an upcoming sequence that will dive into the analysis of well-managed disagreements. Details are intentionally left out.
Tl;dr: Rephrase, ask questions, do not presume your conversation partner shares your epistemology (i.e., their general way of coming to conclusions), ask them for real-world counter-examples, share personal experiences both ways as a means of getting clearer, check what kinds of blind spots their own motivation presumes, dovetail interests through brainstorming, and also, all of what I’ve just said is merely pointing to a specific state of mind. You can get to this state of mind only with some form of introspection.
Arguing is sometimes wonderful. Yet sometimes it derails, or flat-out fails. Circumstances in which arguing fails tend to involve people who are not actively displaying rationality. LessWrong has done a lot to teach how to make mutual progress on such disagreements. Yet this is only a very small community: the Rest of the World, aka People, still hasn't read the Sequences.
Unproductive disagreement with people can lead to a poor impression, pig-headedness, stress, anger, and sometimes worse. There have already been numerous discussions, on this forum, of ways to avoid getting there: there is already a book review [LW · GW] of How Minds Change, and I have spent too much time breaking down good disagreements to teach how to do it [LW · GW]. But there was no short document summarizing the key takeaways. This post serves as that document.
Ethical Caveat:
This post presumes that you follow ethical advice such as:
1-Being earnestly truth-seeking
2-Getting the consent of your partner beforehand to question their beliefs
3-Not harassing people who don't want to talk
4-Choosing the right context (most of the time, 1:1 conversations)
5-Choosing the right person (not a hierarchical subordinate, expect conversations to be harder with a family member)
6-Choosing the right topic (probably not things that call for trigger warnings, such as the person's gender identity, nor things displayed during the conversation itself, such as thoughts you infer from their non-verbal cues)
7-Not Being a D*ck, in general.
A note to the reader: Reading about tennis does not teach you enough to actually play tennis fluently. Practice is key. In the same way, reading about Effectively Handling Disagreements will be less effective than training yourself at it. Workshops are listed in the comments (feel free to suggest more).
0-Actually, maybe, don’t argue.
Arguing is a choice. It can be fitting or unfitting. It can be a good choice, or a bad choice. Argument is a virtue [LW · GW] of rationalists, but it is a virtue because it coheres with all the other ones. When you argue with a stranger, surrounding virtues such as evenness or curiosity might slip away. A good way to bring them back is to refrain from counter-arguing, and to start with listening. The argument will still be there, but in a form that is softer and more pedagogical.
1-Rephrase, Rephrase, Rephrase
By Default, You Don’t Understand Your Partner. Understanding your partner does not take a large amount of pondering and ostentatious thinking as a first step: it requires at least repeating back, in your own words, what they said, genuinely asking your interlocutor if this is what they mean, and inviting them to correct you if it is not ("If I understood you right, you mean that... Is this right? If that's not what you meant, feel free to correct me."). This is fairly basic, but it is worth practicing if you’re not accustomed to it yet. Remember that you probably don’t understand your conversation partner if you didn’t rephrase what they said.
Of interest: Smart Politics.
2-Ask More Questions
By Default, You Don’t Understand Your Partner. Worse than this: You Don’t Know You Don’t Understand Your Partner. Your partner, you might think, came to their conclusion because of claim X, or person Y, or argument Z. You might follow up on those reasons without having pre-emptively checked that they are even relevant to the discussion at hand. Your conversation partner will answer with cached [LW · GW] counter-arguments, without bothering to check whether those counter-arguments bear on their actual crux at all. This is a massive waste of time and rapport.
If anything, ask for a working definition[1] of the things you’re talking about. Ask for their reasons for believing, rather than presuming what those reasons are.
3-The Typical Method Fallacy
Your partner is not necessarily an empiricist. If they tell you that God exists, and that they believe so because of personal experience, this does not mean they think (as I personally would) that their experience is statistically significant. Their method relies on a claim, and the claim is that personal experience is reliable (understood as "more reliable than science on topics where science challenges it"). You might think that this departs so much from sanity that the only dignified move is to raise an impatient eyebrow and go talk to someone worth your time. But you might as well question the claim.
Of interest: Street Epistemology.
4-How to generate a good question, fast
Note: This one is a personal observation. Although it took reading scientific literature to notice it, there are, to my knowledge, no publications on it. Addendum: I've replaced "personal experience is reliable" as an example with "Karma exists". See comments on LW.
Let's take the argument "Karma exists. For example, if I throw garbage out of the window of my car, then I'll break a nail within 24 hours".
Productively questioning such a claim might seem to leave a lot of options open, but there is a rough-and-quick way to do it.
Step 1: Identify the property that makes the inference valid in the eyes of your partner (here, the fact that the nail broke after throwing garbage out of the window).
Step 2: Ask for an example of the same (super)class that has the same property, but does not lead to the conclusion (here, "Could there be times when you break a nail even though you haven't done anything bad beforehand?").
Step 3: If you get an answer (e.g., "Yes, that would be an accident"), ask your partner how they distinguish that answer ("accident") from their initial answer ("Karma").
This is a rough outline of the process, which I’ll elaborate on in a future post. In short, ontological relationships form a kind of Socratic artillery. A true Socratic move is one that helps your interlocutor hold more than one hypothesis [LW · GW] and apply an approximation of Bayes’ rule.
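To make "an approximation of Bayes’ rule" concrete, here is a minimal sketch of my own (the framing and numbers are illustrative assumptions, not anything from the post or a study): once the interlocutor holds two hypotheses, Karma and accident, the question becomes how strongly the broken nail favors one over the other.

$$\frac{P(\text{Karma} \mid \text{broken nail})}{P(\text{accident} \mid \text{broken nail})} \;=\; \frac{P(\text{broken nail} \mid \text{Karma})}{P(\text{broken nail} \mid \text{accident})} \;\times\; \frac{P(\text{Karma})}{P(\text{accident})}$$

If nails break about as often whether or not one has just littered, the likelihood ratio is close to 1 and the observation barely shifts the odds. That is the kind of update the question in Step 2 invites, without anyone naming Bayes out loud.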
5-Personal experiences help clear out confusions.
If there is one thing I’d like you to remember, one that is the most evidence-backed and the most impressively efficient at resolving disagreement, it is that stories help people understand what you’re actually talking about. This is mostly true of short, to-the-point stories, so keep anything you refer to clear and concise.
When invited to share a story, and then when hearing one in return, people develop more trust, which helps them pay attention to your interpretation of it, the actual information you’re trying to convey. They get a lot more detail and a fleshed-out example of what you are talking about. They actually get what you’re trying to convey in a way that theoretical arguments completely fail at.
This does not mean that you should use the emotional force of stories to sway your conversation partners. It rather means that a story, and the emotions it generated in you, are crucial background information for understanding what you mean. These two situations can be hard to tell apart, yet the telltale sign that you are in the epistemically honest case is contrast: "You see, what I’ve just shared, this is what I mean when I say X."
Of Interest: Deep Canvassing.
6-Care about their underlying values.
Whatever your proposition is, it might well fit perfectly within your conversation partner’s values in some instrumental way, contrary to what they currently believe. Try to spot and bring up your partner’s motivations (say, an e/acc who cares about innovation), then, from there, point out whether the topic at hand fits with them (typically, AI Safety can be expected to contribute to innovation). Of course, do not lie about how the topic at hand fits with them ("lies" here is understood broadly and refers to epistemic obfuscation [LW · GW] in general).
Of interest: Motivational Interviewing.
7-Negotiate through Brainstorming
In the spirit of Ask More Questions, focus on your partner’s interests (or "needs" if you’re the NVC type), not positions ("Why do you want that?" instead of "What do you want?"). You’ll get building blocks for brainstorming creative win-win solutions together. Note that in real-world situations, you’ll still need to spend a lot of time building positive rapport and getting your partner to think about the solution with you.
Of Interest: Getting To Yes
8-Heal yourself to get in the right Mindset
As she stirred and opened her eyes, I saw her differently. Her freckles were more obvious now, the colors of her face more vibrant. It was like I was seeing her in high definition for the first time.
-Chris Lakin, Learning to do *real* empathy.
Empathy isn’t just a series of scripted responses. I’ve been nudging you to imitate the ways in which the right mindset manifests: through questioning, rephrasing, narrating. Yet the mindset itself is the key. Getting into a mindset is a therapeutic act: it requires practice, but also, and mainly, introspection, insight, and acknowledging denial. Getting into the right mindset does not only change your actions, it changes your perception.
Of interest:
-Chris Lakin, How Unconscious predictions update
-VIEW Mindset
-Compassion Focused Therapy
-Focusing
Finally, a word of caution:
What is shared here is not necessarily suited to all people and contexts. Of course, relying on conversation patterns also has the potential to fall within the Dark Arts: police yourself. My belief is that there is a subset of conversational attitudes that are in line with virtuous rationality. These attitudes have to be mastered in order to manage conversations with the rest of the world. Such conversations will happen regardless—so it’s best to be prepared.
Many thanks to Neil [LW · GW] for suggesting that I write this post.
- ^
Contested, see comments for discussion. My position is that people tend to focus on platonic definitions (as opposed to "working" definitions) way too much, even if it can be good in some instances.
4 comments
comment by Camille Berger (Camille Berger) · 2024-11-12T18:05:31.454Z · LW(p) · GW(p)
Workshops:
https://deepcanvass.org/ organizes introductions to Deep Canvassing regularly. My personal take is that the workshop is great, but I don't find it entirely aligned with a truth-seeking attitude (it's not appalling either), and I would suggest that rationalists bring their own twist to it.
https://www.joinsmart.org/ also organizes workshops that often vary in theme. Same remark as above.
There is a Discord server accessible from https://streetepistemology.com/; they organize regular practice sessions.
Motivational Interviewing and Principled Negotiation are common enough for you to find a workshop near where you live, I guess.
There's also the elephant in the room: my own eclectic workshop, which mostly synthesizes all of the above with (I believe) a more rationalist orientation and stricter ethics.
Someone told me about people in the US who trained on "The Art of Difficult Conversations", I'd be happy to have someone leave a reference here! If you're someone who's used to coaching for managing disagreements, feel free to drop your services below as well.
comment by deepthoughtlife · 2024-11-13T20:26:14.252Z · LW(p) · GW(p)
I have a lot of disagreements with this piece, and just wrote these notes as I read it. I don't know if this will even be a useful comment. I didn't write it as a through line. 'You' and 'your' are often used nonspecifically about people in general.
The usefulness of things like real world examples seems to vary wildly.
Rephrasing is often terrible; rephrasing done carelessly actually often leads to basically lying about what your conversation partner is saying, especially since many people will double down on the rephrasing when told that they are wrong, which obviously infuriates many people (including me, of course). People often forget that just because they rephrased it doesn't mean that they got the rephrasing right. Remember the whole thing about how you don't understand by default?
This leads into one of the primary sins of discussion, mindreading. You think you know what the other party is thinking, and you just don't. When corrected, many don't update and just keep insisting. (Of course, the corrections aren't always true either.)
A working definition may or may not be better than a theoretical one. Often times there really isn't a working definition that the person you are talking to can express (which is obviously true of theoretical at times too). People may have to argue about subjects where the definitions are inexpressible in any reasonable amount of time, or otherwise can't be shared.
Your suggestion for attacking personal experience seems very easy to do very badly. Personal experience is what we bootstrap literally every bit of our understanding of the world from. If that's not reliable, we have nothing to talk about. You have to build on some part of their personal experience or the conversation just won't work. (Luckily, a lot of our personal experiences are very similar.) It reminds me of games people play to win/look good, not to actually have a real discussion.
People don't generally use Bayes rule! Keep that in mind. When you are discussing something with someone, they aren't doing probability theory! (Perhaps very rarely.) Bayes rule can only be used by analogy to describe it.
Stories need to actually be short, clear, and to the point or they just confuse the matter more. If you spend fifty paragraphs on the life story of some random person that I don't care about, I'm just going to tune it out (despite the fact I am super long winded). (This is a problem with many articles, for instance.) Even if I didn't, I'm still going to miss your point, so get to the point. Can you tell this story in a couple hundred words? Then you can use it. No? Rethink the idea.
Caring about their underlying values is useful, but it needs to be preceded by curiosity about and understanding of them, or it does no good.
I do agree that understanding why someone wants something is obviously the best way to find out what you can offer that might be better than what they currently want to do, though I do think understanding what they want to do is useful too.
Something said in point 8 seems like the key. "Empathy isn't just a series of scripted responses." You need to adapt to the actual argument you are having. This isn't just true about empathy, but for any kind of understanding. The thing itself is the key, and the approach will have to change for each individual part. This isn't just once in attempting understanding, but recursively true with every subpart.
Replies from: Camille Berger↑ comment by Camille Berger (Camille Berger) · 2024-11-14T13:33:24.581Z · LW(p) · GW(p)
Hi! Thank you for writing this comment. I understand it can be a bit worrying to feel like your points might not be understood, but I'll give it a try nonetheless. I really genuinely want to fix any serious flaw in my approach.
However, I find myself in a slightly strange situation. Part of your feedback is very valuable. But I also believe that you misunderstood part of what I was saying. I could apply the skills I described in the post to your comment as a performative example, but I sense that you could see it as a form of implied sarcasm, and it'd be unethical, so I'll refrain from doing that. There is a last part of me that just feels like your point is "part of this post is poorly written". I've made some minor edits in the hope that they accommodate your criticism.
My suggestion would be for you to watch real-life examples of the techniques I promote (say https://www.youtube.com/watch?v=d2WdbXsqj0M and https://www.youtube.com/watch?v=_tdjtFRdbAo ) then comment on those examples instead.
Alternatively, you can just read my answers:
Rephrasing is often terrible;
Agree, I've added the detail on "genuinely asking your interlocutor if this is what they mean, and if not, feel free to offer a correction" (e.g. "If I got you right, and feel free to correct me if I didn't.... "). I think that this form makes it almost always a pleasant experience and I somehow forgot this important detail.
Your suggestion for attacking personal experience [...]
You're referring to point 4, not 5, right?
If yes, I think this is extrapolating beliefs I don't actually have. I admit however I didn't choose a good example, you can refer to the Street Epistemology video above for a better one.
I'll replace the example soonish. In the meantime, please note that I do not suggest "attacking" personal experiences. I suggest asking "What helps us distinguish reliable personal experiences from unreliable ones?". This is a valid question to ask, in my view. For a bunch of reasons, this question is more likely to bounce off, so I prefer to ask "How do you distinguish personal experiences from [delusions]?", where "[delusions]" is a term that has been deliberately imported by the conversation partner. I think most interlocutors will be tempted to answer something along the lines of intersubjectivity, repeatability or empirical experiments. But I agree this is a delicate example and I'd be better off pointing to something else.
Stories need to actually be short, clear, and to the point or they just confuse the matter more.
This was part of the details I was omitting. I'll add it.
Caring about their underlying values is useful, but it needs to be preceeded by curiousity about and understanding of, or it does no good.
Agree. This was implied in several parts of the post, e.g. "Be genuinely truth-seeking" in the ethical caveats. But I don't think it is that hard.
A working definition may or may not be better than a theoretical one.
Please note that I'm talking about conversations that happen between rationalists and non-rationalists on entry-level arguments. E.g. "We can't lose control of AI because it's made of silicon", not "Davidad has a promising alignment plan" (please note that I'm not arguing for applying these techniques to AI Safety Outreach and Advocacy, this is just an example). I think we really should not spend 15 minutes with someone not acquainted with LessWrong or even AI to define "losing control" in a way that is close to mathematically formal. I think that "What do you mean by losing control? Do you mean that, if we ask it to do something specific, then it won't do it? Or do you mean something else?" is a good enough question. I'd rather discuss the details when the person in question is more acquainted with the topic.
There will, of course, be situations where this isn't true. The law of equal and opposite advice applies. But in most entry-level arguments, I'd rather have people spend less time problematizing definitions and more time asking their interlocutor what their reasons are.
People don't generally use Bayes rule!
Of course. I'm not suggesting mentioning Bayes' Rule out loud. Nor am I suggesting people actually use Bayes' Rule in their everyday life. I'm noting that the techniques I think are more robust are the ones that lead people to apply an approximation thereof, usually by contrasting one piece of evidence under two different hypotheses. The reference to 'Bayes' comes from Bayesian psychology of reasoning; my model is closest to the one described in The Erotetic Theory of Reasoning (https://web-risc.ens.fr/~smascarenhas/docs/koralus-mascarenhas12_erotetic_theory_of_reasoning.pdf)
Something said in point 8 seems like the key.
It is the key, I thought I had made it clear with "Yet the mindset itself is the key".
However I don't want to make a post on it without explaining the ways in which it manifests, because healing myself made no sense up until I started analyzing the habits of healed people. Some people who were already healed didn't want to "give the secrets away" or scoffed at my attempts. They came across to me as snobbish and as preventing me from actually learning; I got a lot out of noting down recurrent patterns in their conversations, if only because it allowed me to do Deliberate Practice.
Finally, please remember that this post is an MVP. It is not meant to be exhaustive and cover all the nuances of the techniques -it's just that I'd rather write a post than nothing at all, and the entire sequence will take time before publication.
If you feel like I completely misunderstood your points, and are open to having my skills applied to our very conversation, feel free to DM me a calendly link and we can sort it out live. I'd describe myself as a good conversation partner and I would put the probability of the exchange going awry quite low.
PS: It would help me out if you could quote the [first sentence of the] parts you are reacting to, in order to make clear what you are talking about. I hope I'm right in understanding what parts of the post you are reacting to.
↑ comment by deepthoughtlife · 2024-11-15T02:33:48.047Z · LW(p) · GW(p)
As it often does when I write, this ended up being pretty long (and not especially well written by the standards I wish I lived up to).
I'm sure I did misunderstand part of what you are saying (that we do misunderstand easily was the biggest part of what we appear to agree on), but also, my disagreements aren't necessarily things you don't actually mention yourself. I think we disagree mostly on what outcomes the advice itself will give if adopted overly eagerly, because I see the bad way of implementing them as being the natural outcome. Again, I think your 8th point is basically the thrust of my criticism. There is no script you can actually follow to truly understand people, because people are not scripted.
Note: I like to think I am very smart and good at understanding, but in reality I think I am in some ways especially likely to misunderstand and to be misunderstood. (Possible reason: Maybe I think strangely as compared to other people?) You can't necessarily expect me to come at things from a similar angle as other people, and since advice is usually intended as implicitly altering the current state of things, I don't necessarily have a handle on that.
Importantly, since they were notes, I took them linearly, and didn't necessarily notice if my points were addressed sufficiently later.
Also, I view disagreements as part of searching for truth, not for trying to convince people you are right. Some of my distaste is that it feels like the advice is being given for persuasion more than truthseeking? (Honestly, persuasion feels a little dirty to me, though I try to ignore that since I believe there isn't actually anything inherently wrong with persuasion, and in many cases it is actually good.) Perhaps my writing would be better / make more sense if I was more interested in persuading people?
An important note on my motives for the comment is that I went through with posting it when I think I didn't do particularly well (there were obvious problems) in part to see how you would respond to it. I don't generally think my commenting actually helps so I mostly don't, but I've been trying out doing it more lately. There are also plenty of problems with this response I am making.
Perhaps it would have been useful for me to refer to what I was writing about by number more often.
Some of the points do themselves seem a bit disrespectful to me as well. (Later note: You actually mention changing this later and the new version on Karma is fine.) Like your suggestion for how to change the mind of religious people (though I don't actually remember what I found disrespectful about it at this moment). (I am not personally religious, but I find that people often try to act in these spaces like religious people are automatically wrong which grates on me.)
Watching someone else having a conversation is obviously very slow, but there is actually a lot of information in any given conversation.
Random take: The first video is about Karma, which I do have an opinion on. I believe that the traditional explanation of Karma is highly unlikely, but Karma exists if you think of it as "You have to live with who you are. If you suck, living with yourself sucks. If you're really good, living with yourself is pretty great." See also, "If you are in hell, it's a you thing. If you are in heaven, it's also a you thing." There are some things extreme enough where that isn't really true, like when being actively tortured, but your mind is what determines how your life goes even more than what events actually happen in the normal case, and it does still affect how you react to the worst (and best) things. (People sometimes use the story about a traveler asking someone what the upcoming town is like, and the person just asking the traveler what people in the previous place were like, while answering 'much the same' for multiple travelers with different outlooks, and I do think this is somewhat true.)
Also, doing bad things can often lead to direct or indirect retaliation, and good to direct or indirect reward. Indirect could definitely 'feel' like Karma.
I think that the actual key to a successful conversation is to keep in mind what the person you are talking to actually wants from the conversation, and I would guess what people mostly want from a random conversation is for the other person to think they are important (whenever they don't have an explicit personal goal from the conversation). I pretty much always want to get at the truth as my personal goal because I'm obsessive that way, but I usually have that goal as an attempt at being helpful.
It seems to work for him getting his way, and nothing he does is too bad, but the conversational tactics seem a bit off to me. (Only a bit.) It seems like he is pushing his own ideas too much on someone else who is not ready for the conversation (though they are happy enough to talk).
No, I don't know any way to make sure your conversation partner is ready for the conversation. A lot of evidence for your position is not available in a synchronous thing like a conversation, and I believe that any persuasion should attempt to be through giving them things they can think through later when not under time pressure. He didn't exactly not do that, but he also didn't do that. "You must decide now" (before the end of the conversation) seemed to be a bit of an undercurrent to things. (A classic salesman tactic, but I don't like it. And sure, the salesman will pivot toward being willing to talk to you again later if you don't bite on that most of the time, but that doesn't mean they weren't pressuring you to decide quickly.)
The comparison between 'Karma' and 'Santa' seems highly disrespectful and unhelpful. They are very different things and the analogy seems designed to make things unclear more than clearing them up. In other words, I think it is meant to confuse the issue rather than give genuine insight. You could object that part of the Santa story is literally Karma (the naughty list) but I don't think that makes the analogy work.
I don't really get the impression he was actually willing to be convinced himself. He said at one point that he was willing to, and maybe in the abstract he is, but he never seemed to seek information against his own position. Note that I don't think I would necessarily be able to tell, and I actually disapprove of 'mindreading' quite strongly.
The fact that I am strongly against 'mindreading' and actually resort to it myself automatically is actually one of the points I was trying to make about how easy it is to misuse conversational tactics. I was genuinely trying to understand what he was doing, (in service of making a response based on it) and I automatically ended up thinking he was doing the opposite of what he claimed, just based on vibes without any real evidence.
You could argue I am so against it because I notice myself doing it, and maybe it is true, but I find it infuriating when others do it badly. I don't actually have any issues with them guessing what I'm doing correctly, though I'm unlikely to always be right about that either (just more than other people about me).
He also didn't seem entirely open that he was pushing for a specific position throughout the entire conversation, when he definitely was. This wasn't a case of just helping someone update on the information they have (though there was genuinely a large amount of that too.) (People do need help updating and it can be a valuable service, but for it to really be helpful, it needs to not be skewed itself.)
The second video (about convincing someone to support trans stuff) seems pretty strange. This video seems completely different from the previous one; more propaganda than anything. Clearly an activist (and I generally dislike activists of any stripe.). (Emotional response: Activists are icky.) Also an obviously hot culture war issue (which I have an opinion on though I don't think said opinion is relevant to this discussion). It's also very edited down which makes it feel even more different.
The main tactic seems like trying to force a person to have an emotional reaction through manipulative personal stories (though he claims otherwise and there are other explanations). But he seemed to do it over and over again, so this time I am pretty sure he isn't being entirely honest about that (even though I still disapprove of mindreading like I am doing here.). I feel like he is a bad person (though not unusually so for an activist.)
The alternate explanation, which does work, is just that people like to tell stories about themselves when talking about any subject. I clearly reference myself many times in this response and my original response. I'm not saying I'm being fair in my conclusions.
Do you really see those two videos as similar? While there are some similarities, they feel quite different to me! I didn't love it, but the first video was about talking through the other person's points and having a genuine conversation. The latter was about taking advantage of their conversation partner's points for the next emotional reaction. In other words, the latter video felt a lot more like tricking someone while the former was a conversation.
Moving past the videos to the rest of the response.
Yes, the switch to the longer way of rephrasing that includes explicitly accepting that you might be wrong seems much, much better. Obviously, it is best for the person to really believe they might be wrong, and saying it both helps an honest participant remember that, and should make it easier for the person they are talking with to correct them. Saying the words isn't enough, but I like it a lot better than before.
Obviously, I'm not rephrasing your points because that still isn't how I believe it should be done, but if there is a key point this way of asking about it can be very useful. Or, to rephrase, rephrasing is a tool to be used on points that seem to be particularly important to check your understanding of.
I don't remember exactly what you said in point 4 before you changed it, but I don't particularly read point 5 as being anti personal experience in the way my comment indicates. I have no idea why I would possibly write that about point 5 so I assume you are correct in your assumption.
Since I only vaguely remember it, my memory only contains the conclusion I came to which we both agree can be faulty. But the way I remember it, the old point 4 is very clearly a direct attack on personal experience in general rather than on distinguishing between faulty and reliable personal experience. From past experience, this could be attributable to many things, including just not reading a few words along the way.
I don't really have any issues with your new point 4 (and it is clearly taken from that first video.) That is very obviously a good approach for convincing people of things that doesn't rely on anything I find distasteful. It seems very clearly like what you are saying you are going for and I think it works very well.
For the record, I think 'working definition' is no more different from 'mathematical definition' than 'theoretical definition' is from 'mathematical definition' because I am using 'theoretical definition' in a colloquial way. I was definitely not saying mathematical definitions or formal definitions are useful when talking to a layperson. (Side note: I've been paying attention to 'rationalists' of this sort for about 20 years now, but I am not one. I tend to resist becoming part of groups.) I do generally think that unless you are in the field itself that 'formal definitions' are not helpful since they take far too much time and effort that could be used on other things (and formal definitions are often harder to understand even afterward in many cases), and mathematical definitions are unnatural for most people even after they understand them.
I do not want people spending more time on definitions in conversation unless it is directly relevant, but think remembering that there are different kinds of brief definitions seems important to me.
I perhaps overreacted to the mention of Bayes Rule. It's valid enough for describing things in probabilistic circumstances, but people in this community try to stretch it to things that aren't related to probability theory and it's become a bit of a pet peeve for me. I have never once used Bayes Rule to update my own beliefs (unless you include using it formally to decide what answer to give on a probability question a few times in school), but people act like that is how it is done.
In the paper on 'Erotetic' reasoning, ... includes a pretty weird bit of jargon in their very first example (first full paragraph of the second page), which makes it hard to understand their point. And not only do they not explain it, it isn't even something I could look up with a web search because seemingly all explanations are paywalled. They claim it is a well-known example, but it clearly isn't.
As best as I can tell, the example is really just them abusing language. Because 'or else' is closer to 'exclusive or' but they are pretending it is just 'or'. (It is a clear misuse to pretend it doesn't.) I don't know philosophy jargon well, but misstating the premise is not clever. In this case, every word of the premise mattered, and they intentionally picked incorrect ones. Their poor writing wasted a great deal of time. And yes, I am actually upset at them for it. I kept looping back around to being upset about their actions and wanting to yell at them rather than considering what they were writing about. (Which is an important point I suppose, if the person you are conversing with is upset with you, things are reasonably likely to go badly regardless of whether your points are good or bad.) I think it is the most upset I've been reading a formal paper (though I mostly have only really read a small number of AI and/or Math ones.)
In the end I could tell I wasn't going to stop if I kept reading, so I quit without understanding what they were writing about. (I can definitely be a bit overboard sometimes.) All I got was that they think there is some way to ask questions that works with the basic reasoning people normally use and leads to deductively valid reasoning. I have no idea what method of questioning they are in favor of or why they think it works. (I do think the example could have been trivially changed and not bothered me.) I do think my emotional reaction is a prime example of a reason resolving disagreements often doesn't work (and even why 'fake disagreements' where the parties actually agree on the matter can fail to be resolved).
To really stress point 8 it should be point 1. I was just saying it needed to be stressed more. I did notice you saying it was important and I was agreeing with you for the most part. Generally you evaluate points based on what came before, not based on what came after (though it does happen). It's funny, people often believe they are disagreeing when they are just focusing on things they actually agree on in a different manner.
On a related note, it's not like I'm ordering this stuff in order of how important I think it is. Sometimes things fit better in a different order than importance (this is obviously in order of what I am responding to.) (Also, revising this response on a global scale would take far too long given how long it already takes me to write comments. It might be worth writing shorter but better in the same amount of time, but I don't seem inclined to it.)
You know what, since I wrote that I had a lot of disagreements, I really should have pointed out that not all of the things I was writing were disagreements! I think my writing often comes off as more negative than I mean it (and not because other people are reading it badly).
On the note of it being a minimum viable product, I think those are very easy to do badly. You aim for what you personally think is the minimum... when you already know the other stuff you are trying to say. It is then often missing many things that are actually necessary for it to work, which makes it just wrong. I get the idea, perfectionism is slow, a waste of resources, and even impossible, but aiming for the actual minimum is a bad idea. It is often useful advice for startups, but we do not want to accept the same rate of failure as a startup business! Virtually all of them fail. We should aim more for the golden mean in something like a formal post like you made. (A high rate of failure in comments/feedback seems fine though since even mostly failed comments can spark something useful.)
As far as quoting the first sentence of each thing I am responding to, that does sound like a useful idea, and I should do it, but I don't think I am going to anyway. For some reason I dread doing it (especially at this point). I also don't even know how to make a quote on lesswrong, much less a partial one. I know I don't necessarily signpost well exactly what I am responding to. (Plus, I actually write this in plaintext in notepad rather than the comment area. I am paranoid about losing comments written on a web interface since it takes me so long to write them.)