On "aiming for convergence on truth"
post by gjm · 2023-04-11T18:19:18.086Z · LW · GW · 55 comments
Background
Duncan Sabien wrote a list of proposed "basics of rational discourse [LW · GW]" guidelines. Zack M Davis disagrees with (his interpretation of) one of the guidelines [LW · GW]. I think the question is interesting and don't feel that those two posts and their comments resolve it. (Spoiler: I largely agree with Duncan on this.)
So, Duncan says that we should
aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth
and that we should care whether an interlocutor is
present in good faith and genuinely trying to cooperate.
Zack, on the other hand, holds (as I understand him) that
- we should aim for truth and not particularly care about convergence
- in cases where we have something valuable to contribute, it should be precisely because there is divergence between what others think and our view of the truth
- a good frame for thinking about this is that debates are like prediction markets
- it shouldn't matter much whether someone else is "in good faith" or "trying to cooperate"; the merits and defects of their arguments are whatever they are, regardless of intentions, and we should be responding to those
- talk of "cooperation" and "defection" implies a Prisoner's Dilemma situation, and attempted rational discourse doesn't generally have the properties of a Prisoner's Dilemma.
I will argue
- that we should aim for truth rather than for convergence as such
- that I think Zack has misunderstood what Duncan means by aiming for convergence
- that what Duncan means by "aiming for convergence" is a reasonable goal
- that debates are importantly unlike prediction markets (and even prediction markets don't necessarily have quite the incentives Zack assumes)
- that there are specific forms of not-aiming-for-convergence that are more common than Zack's "aiming for divergence"
  - and they are usually harmful when they occur
  - and when they do, it often becomes necessary to adopt modes of discourse that attend explicitly to status, politeness, intentions, etc., which makes everything worse
  - and this failure mode is made less likely if (as Duncan proposes) we avoid those forms of not-aiming-for-convergence and assume until we get strong contrary evidence that others are also avoiding them
- that while in principle we can just respond to others' arguments and ignore their (actual or perceived) intentions, in practice we can't and shouldn't
- that informal talk of cooperation and defection doesn't have to imply a PD-like situation
  - and is reasonable in this context even though (I agree with Zack) what is going on isn't altogether PD-like.
So, let's begin.
Convergence
Suppose A and B are discussing something they disagree on: A thinks X and B thinks Y. Here are some goals A might have. I've given them brief but crude names for later reference. The names do not perfectly reflect the meanings.
- WIN, by making A look clever and wise and good, and making B look stupid or ignorant or crazy or evil. (One way, but not the only one, to do this is by proving that X is right and Y is wrong.)
- LEARN, ending up believing X if X is right and Y if Y is right.
- TEACH, so that B ends up believing X if X is right and Y if Y is right.
- CONVINCE, so that B ends up believing X.
- EXPOUND, so that the audience ends up believing X.
- AVOID CONFLICT, by backing off from any point at which A and B seem likely to disagree very sharply.
- AGREE, by actually ending up believing the same thing whether right or wrong.
I think that when Duncan says we should "aim for convergence on truth" he means that A should aim to LEARN and TEACH, with the hope that if A and B pool their knowledge and understanding they will likely (1) both end up with truer views, (2) end up understanding one another better, and (3) since the truth is a single thing, end up nearer to one another. (3) here is not particularly a goal, but it is a likely outcome of this sort of discussion if pursued honestly and competently, and I think "aim for convergence on truth" means "aim for the truth, with the expectation of convergence".
(Why do I think Duncan means that rather than, e.g., "explicitly aim to converge"? Because he wrote this: "If two people disagree, it's tempting for them to attempt to converge with each other, but in fact the right move is for both of them to try to see more of what's true." And, next paragraph: "If you are moving closer to truth [...] then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth." I think that's a little too optimistic, incidentally.)
It is possible to aim for truth without aiming for convergence even in this sense. For instance, if A is very confident that X is right but has given up hope of convincing B, he may aim to EXPOUND and secondarily to WIN, promoting truth by convincing the audience that X (which A believes) is true and also making them less likely to be (as A sees it) taken in by B. Or A may not care what anyone else ends up thinking and aim only to LEARN, which will produce convergence if it happens that B is right and A is wrong but will likely do little to help B if A was right from the start.
So, Duncan thinks that the combination of LEARN and TEACH is a good goal. I don't think he's advocating AGREE as a goal. My sense is that he may favour a bit of AVOID CONFLICT where Zack largely doesn't. But what does Zack want?
Prediction markets and divergence
Well, Zack suggests that a debate is like a prediction market; you should only get into one if you think you have some knowledge or understanding that the people already there lack; you should therefore expect to diverge from others' views, to try to prove them wrong, because that's how you "win status and esteem"; to whatever extent debate converges on truth, it's by incentivizing people to contribute new knowledge or understanding in order to win status and esteem.
In terms of the goals I mentioned above, this amounts to supposing that everyone is trying to WIN, hopefully with constraints of honesty, which they do partly by trying to CONVINCE and EXPOUND. We may hope that everyone will LEARN but in Zack's presentation that doesn't seem to be present as a motive at all.
(Which is odd, because later on -- responding to Duncan's talk of cooperation and defection -- Zack instead considers someone who is simply aiming unilaterally at truth, trying to LEARN regardless of what happens to anyone else.)
Accordingly, Zack thinks it's nonsense to try to "aim at convergence", because you're only going to be able to WIN if you start out disagreeing and argue vigorously for your dissenting opinion. And because the only way for your contribution to be net-valuable is if the others are wrong and you therefore move them away from the previous consensus.
I dislike this whole framing, and I am not persuaded that any of it is incompatible with what (as I understand it) Duncan means by "aiming at convergence on truth".
When I engage in debate somewhere like LW I am not (I hope) primarily trying to gain status and esteem, and I am not (I hope) trying to WIN as such. I am hoping to LEARN and TEACH and sometimes EXPOUND; if I see that I disagree with Zack, then I don't much care about my "esteem and status" versus Zack's, but I do think it's possible that I understand something Zack doesn't, in which case maybe he'll learn something useful from me, or that Zack understands something I don't, in which case maybe I'll learn something useful from him, and that even if we fail to learn from one another whoever's reading may get something useful from the discussion. This is all rather unlike Zack's account of prediction-market-like debate.
It does all still require, as Zack suggests, that I think I know or understand something better than Zack. If I don't, I likely won't be engaging Zack in debate. So, "on average", I'm hoping that engaging Zack in debate will mean that Zack moves towards my position; if the prior state of things is that Zack and Yak and Xach are all agreeing with one another, when I enter the debate I am hoping to move them all away from the prior consensus.
But none of that means I'm not hoping for convergence! If Zack moves towards my position, then we have converged at least a bit. If instead Zack points out where I'm wrong and I move towards his position, then we have converged at least a bit. The only way we don't converge is if neither of us moves the other at all, or if one of us makes arguments so terrible that the other's contrary position gets more firmly entrenched, and I am emphatically not hoping for either of those outcomes.
And, even more so, none of it means that I'm not doing what I think Duncan means by "aiming for convergence on truth". I am hoping that Zack, or I, or more likely both, will become a bit less wrong by discussing this matter on which we have different opinions. If that happens then we will both be nearer the truth, and we will both be nearer one another.
Most of this applies to prediction markets too: it doesn't depend on what Zack may think is my naïve view of the motives of people engaging in debate. If I enter a prediction market expecting to make money, then (as Zack says) I am expecting to make the market move away from its previous consensus, but that doesn't mean I'm not anticipating convergence. I expect that the market will move towards me when I enter, because that's what markets do. If the market moves less towards me than I expected, or if it moves away from me again, then I should move towards the market, because that's evidence that the market's opinion is stronger than I thought. It's true that in the prediction-market context I'm not particularly aiming for convergence, or at least I might not be, because maybe my main goal there is to make money; but I'm still expecting convergence, and for basically the same reasons as in a debate.
And, to reiterate, a discussion is not a prediction market, and I may prefer an outcome where everyone's beliefs exactly match the truth even though at that point I no longer have any prospect of WINning. (I might prefer that even in a prediction market, if it's on some important topic: better for us all to arrive at the correct view of how to win the war / stop the plague / deflect the asteroid than for me to keep making money while our cities are getting bombed, our hospitals are overflowing with the dead, and the asteroid hurtles closer to earth.)
I confess that I am confused by some of Zack's arguments in this area, and I wonder whether I am fundamentally missing his point. He says things like this:
I would prefer to correctly diverge from the market—to get something right that the market is getting wrong, and make lots of money in the future when my predictions come true.
but this isn't about diverging-the-opposite-of-converging at all, it's using "diverge" to mean "differ at the outset", and once Zack starts participating in the market the effect of what he does will be to reduce the divergence between his position and the market's. In other words, to produce convergence, hopefully "on truth". Converging means getting closer, not starting out already in agreement.
Disagreement and disrespect
Zack says that "disagreement is disrespect":
the very fact that you're disagreeing with someone implies that you think there's something wrong with their epistemic process
Well, yes and no.
Suppose I think I have a proof of some proposition and I hear that Terry Tao has proved its negation. I check over my proof and I can't find an error. At this point, I think it's reasonable to say that TT and I have a disagreement. But TT is a better mathematician than I am, and my epistemic state might be "70% I've made a mistake that I'm currently unable to see, 25% TT has made a mistake, 5% something else, like misunderstanding what it is TT claims to have proved". It would not be accurate to say that I think there's something wrong with TT's epistemic process in this situation. I'm seriously considering the possibility that there might be, but I'm also seriously considering the possibility that the error is mine.
But suppose I have checked my own proof _very_ carefully (and also checked any other work it depends on). Maybe at this point I'm 95% confident that the error is TT's. Does that mean I think that TT isn't "aiming for convergence on truth"? Heck no. It means I think he is honestly and skilfully trying to get at the truth, just as I am, and I think that in this case he has uncharacteristically made a mistake. If I'm going to discuss the matter with him, I would do well to assume he's basically honest and competent even though I think he's slipped up somewhere along the line. (And to be aware that it's still possible that the mistake is my own.)
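(To make the arithmetic concrete: here is a minimal sketch of that update, with likelihoods I have invented purely for illustration. The point is just that "my careful re-check found nothing" is strong evidence against "the error is mine", and can carry the prior 25% on TT up to around 95%, without my ever having to think TT's epistemic process is generally broken.)

```python
# Illustrative Bayes update. The priors are the ones from the paragraph above;
# the likelihoods are made-up numbers, not anyone's real estimates.
priors = {"my mistake": 0.70, "TT's mistake": 0.25, "something else": 0.05}

# P(my very careful re-check finds no error | hypothesis). If the error were
# mine, careful checking would almost certainly have caught it.
likelihoods = {"my mistake": 0.01, "TT's mistake": 1.00, "something else": 0.10}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: round(p / total, 3) for h, p in unnormalized.items()}
print(posteriors)
# {'my mistake': 0.027, "TT's mistake": 0.954, 'something else': 0.019}
```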
I think this sort of thing is the rule rather than the exception. If I engage in a debate, then presumably in some sense I think I'm right and the others are wrong. But I may well think there's an excellent chance I'm wrong, and in the case where I'm right the others' problems needn't be "something wrong with their epistemic process" in any sense that requires disrespect, or that requires me not to think they're aiming for truth just as much as I am. And in that case, I can perfectly reasonably anticipate convergence and (in so far as it happens by us all getting nearer the truth) hope for it.
Similarly, Zack says:
So why is the advice "behave as if your interlocutors are also aiming for convergence on truth", rather than "seek out conversations where you don't think your interlocutors are aiming to converge on truth, because those are exactly the conversations where you have something substantive to say instead of already having converged"?
and it seems to me that he's conflating "not aiming to converge on truth" with "being wrong". You can aim to converge on truth and none the less be wrong, and I think that's the usual case.
Good faith, bad faith, how to win big
Duncan talks about cooperation and defection and the like. Zack responds that such language implies a Prisoner's Dilemma and (attempted) rational discourse is not a Prisoner's Dilemma: each participant should be trying to get things right unilaterally and can't possibly benefit from getting things deliberately wrong.
It seems like Zack has forgotten his own prediction-market metaphor here. If a debate is like a prediction market, where everyone is aiming for "status and esteem", then it can be quite PD-like.
Suppose Zack and I disagree on some difficult question about, say, theoretical physics. The cooperate move for either of us is to do the calculations, check them carefully, and report on the result whether it matches our prior position or not. The defect move is to give some handwavy argument, optimized for persuasiveness rather than correctness. Our audience knows (and knows it knows) less about theoretical physics than we do.
If we both cooperate, then we presumably end up each knowing the result of the calculation, having one another's calculation to cross-check, and agreeing with some confidence on the correct answer. That's pretty good.
If we both defect, then neither of us learns anything, our opinions remain unchanged, and our audience sees roughly equally plausible handwaving from both sides. The status quo persists. That's not great, but we'll cope.
If one cooperates and the other defects, the cooperator has done a ton of work and may have learned something useful, but the audience will be convinced by the handwaving (which they think they kinda follow) and not by the calculation (which they definitely don't). All the "status and esteem", which Zack proposed was the actual goal here, goes to the defector.
Is this really a Prisoner's Dilemma? Maaaaybe. It's not clear whether CD really gives the cooperator a worse outcome than DD, for instance. But I think it's PD-like enough for "cooperate" and "defect" to be reasonable labels. (Those labels do not require a strictly PD-type situation; one can make sense of them even when the inequalities between outcomes that define a Prisoner's Dilemma do not all hold.)
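(Here is a minimal sketch of that "maaaaybe", with payoff numbers I have invented for illustration; they are meant to measure the "status and esteem" that Zack proposes as the goal, and nothing in them comes from Duncan or Zack.)

```python
# Hypothetical "status and esteem" payoffs for the physics-debate game above.
T = 4  # temptation: defect against a cooperator (audience buys your handwaving)
R = 3  # reward: both cooperate (both learn; agreement with some confidence)
P = 1  # punishment: both defect (status quo; equally plausible handwaving)
S = 2  # sucker: cooperate against a defector (lots of work, some learning,
       # but the audience credit goes to the defector)

# A strict Prisoner's Dilemma requires T > R > P > S.
print(T > R > P > S)  # False: with these numbers S > P, so the lone cooperator
                      # does better than mutual defection -- PD-like, not a strict PD
```

With these numbers, whether it's a true PD turns entirely on how much the cooperator's learning is worth relative to the lost status.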
I think this is the sort of thing Duncan is gesturing towards when he talks of cooperation and defection in would-be rationalist discourse. And, informally, of course "cooperation" simply means "working together": what the parties in a discussion do if they are aiming to help one another understand and share their knowledge. Both of those are things you can do, and both are things you may be less inclined to do if your aim is "status and esteem" rather than "converging on truth". (And both, I think, are things we could always use more of.)
Zack considers the following possible exchange:
A. Hey, sorry for the weirdly blunt request, but: I get the sense that you're not treating me as a cooperative partner in this conversation. Is, uh. Is that true?
B. You don't need to apologize for being blunt! Let me be equally blunt. The sense you're getting is accurate: no, I am not treating you as a cooperative partner in this conversation. I think your arguments are bad, and I feel very motivated to explain the obvious counterarguments to you in public, partially for the education of third parties, and partially to raise my status at the expense of yours.
Zack says (and, in comments, Duncan agrees) that B's response is rude but also in good faith. I agree too. B is admitting to being uncooperative but not necessarily defecting in a PD-like sense. On the other hand, if B doesn't really care that much whether his arguments are actually better than A's but has seen a nice opportunity to raise his status at A's expense, then B is defecting as well as being uncooperative, and I think that's unfortunately common when people aren't treating one another as cooperative conversation partners; see below.
And I say (though I don't know how Duncan would feel about this) that while B's comment is in good faith, the behaviour he is owning up to is (not only rude but) less than maximally helpful, unless A really is as big a bozo as B apparently thinks. That is: if A is merely wrong and not an idiot, it's likely that A and B could have a more productive conversation if A were willing to treat B as a cooperative partner.
(My impression is that Zack is more willing than Duncan thinks he should be to decide that other people are bozos and not worth trying to engage constructively with, as opposed to dunking on them for "status and esteem".)
Not aiming for convergence, in practice
What does it usually look like in a conversation, where one party is not "aiming for convergence on truth"? I don't think it usually looks like Zack's optimistic picture, where they are just aiming unilaterally to arrive at the truth and not caring much about convergence. Much more of the time, it looks like that party engaging in status games, aiming to WIN rather than LEARN or TEACH. Picking arguments for persuasiveness more than for correctness. Cherry-picking evidence and not admitting they're doing so. Attacking the other party's character, who they associate with, what they look like, rather than their evidence and arguments. Posturing to make themselves look good to the audience. Etc., etc., etc.
I don't claim that these things are always bad. (Maybe the discussion is in a deliberately-designed adversarial system like a law court, where the hope is that by having two people skilfully arguing just for their own position the truth will become clear to disinterested third parties. Maybe you're arguing about something with urgent policy implications, and the audience is full of powerful people, and you are very very confident that you're right and the other person's wrong.) But in something like an LW discussion, I think they usually are bad.
They're bad for three reasons. One is that these behaviours are not optimized for arriving at truth. The second is that, if the other party is like most of us, such behaviour makes it psychologically harder for them to keep aiming at truth rather than at WINning. And the third is that, if the other party notices that the discussion is at risk of turning into a status-fight, they may have to waste time and energy pointing out the status-fight manoeuvres and objecting to them, rather than dealing with the actual issues.
The default outcome of any disagreement between human beings is for it to degenerate into this sort of adversarial status-fighting nonsense. Our discussion-norms should try to discourage that.
Ah, says Imaginary Zack, but if someone does that sort of thing you can just ignore it and keep attending to the actual issues, rather than either switching into status-fight mode yourself or diverting the conversation to how the other guy is being unreasonable.
In principle, perhaps. With a debate between (and an audience of) spherical rationalists in a perfect vacuum, perhaps. Here in the real world, though, (1) status-fighting nonsense is often effective and (2) the impulse to respond to status-fighting nonsense with more status-fighting nonsense is very strong and it can happen before you've noticed it's happening.
I think this is the purpose of Duncan's proposed guideline 5. Don't engage in that sort of adversarial behaviour where you want to win while the other party loses; aim at truth in a way that, if you are both aiming at truth, will get you both there. And don't assume that the other party is being adversarial, unless you have to, because if you assume that then you'll almost certainly start doing the same yourself; starting out with a presumption of good faith will make actual good faith more likely.
Disrespect, again
But what if the other party is a bozo? Do we really have to keep Assuming Good Faith and pretending that they're aiming at the truth in a way that should lead to convergence?
Arguing with bozos is often a waste of time. The best course of action in this situation may be to walk away. (Your audience will likely figure out what's going on, and if they don't then maybe you've mis-evaluated who's the bozo.) But if for whatever reason you have to continue a debate with someone you think is a bozo -- by all means, go ahead, drop the assumption of good faith, accept that there's going to be no convergence because the other guy is a bozo who has nothing to teach and is unable to learn, play to the gallery, etc., etc., etc. But know that that's what you're doing and that this is not the normal course of events. (If it is the normal course of events, then I suggest that either everyone around is just too stupid for you and you should consider going elsewhere, or else you have again mis-evaluated who's the bozo.)
Norms and guidelines have exceptions. The situation where your discussion partner is invincibly ignorant or confused should be an exceptional one; if not, the problem isn't with the norms, it's with the people.
55 comments
Comments sorted by top scores.
comment by anonymousaisafety · 2023-04-14T00:37:04.799Z · LW(p) · GW(p)
Sometimes when you work at a large tech-focused company, you'll be pulled into a required-but-boring all-day HR meeting to discuss some asinine topic like "communication styles".
If you've had the ~~misfortune~~ fun of attending one of those meetings, you might remember that the topic wasn't about teaching a hypothetically "best" or "optimal" communication style. The goal was to teach employees how to recognize when you're speaking to someone with a different communication style, and then how to tailor your understanding of what they're saying with respect to them. For example, some people are more straightforward than others, so a piece of seemingly harsh criticism like "This won't work for XYZ reason." doesn't mean that they disrespect you -- they're just not the type of person who would phrase that feedback as "I think that maybe we've neglected to consider the impact of XYZ on the design."
I have read the many pages of debate on this current disagreement over the past few days. I have followed the many examples of linked posts that were intended to show bad behavior by one side or the other.
I think Zack and gjm have done a good job of communicating with each other despite differences in their preferred communication styles, and in particular, I agree strongly with gjm's analysis:
I think this is the purpose of Duncan's proposed guideline 5. Don't engage in that sort of adversarial behaviour where you want to win while the other party loses; aim at truth in a way that, if you are both aiming at truth, will get you both there. And don't assume that the other party is being adversarial, unless you have to, because if you assume that then you'll almost certainly start doing the same yourself; starting out with a presumption of good faith will make actual good faith more likely.
And then with Zack's opinion:
That said, I don't think there's a unique solution for what the "right" norms are. Different rules might work better for different personality types, and run different risks of different failure modes (like nonsense aggressive status-fighting vs. nonsense passive-aggressive rules-lawyering). Compared to some people, I suppose I tend to be relatively comfortable with spaces where the rules err more on the side of "Punch, but be prepared to take a punch" rather than "Don't punch anyone"—but I realize that that's a fact about me, not a fact about the hidden Bayesian structure of reality. That's why, in "'Rationalist Discourse' Is Like 'Physicist Motors'", I made an analogy between discourse norms and motors or martial arts—there are principles governing what can work, but there's not going to be a unique motor, a one "correct" martial art.
I also agree with Zack when they said:
I'm unhappy with the absence of an audience-focused analogue of TEACH. In the following, I'll use TEACH to refer to making someone believe X if X is right; whether the learner is the audience or the interlocutor B isn't relevant to what I'm saying.
I seldom write comments with the intent of teaching a single person. My target audience is whoever is reading the posts, which is overwhelmingly going to be more than one person.
From Duncan, I agree with the following:
It is in fact usually the case that, when two people disagree, each one possesses some scrap of map that the other lacks; it's relatively rare that one person is just right about everything and thoroughly understands and can conclusively dismiss all of the other person's confusions or hesitations. If you are trying to see and understand what's actually true, you should generally be hungry for those scraps of map that other people possess, and interested in seeing, understanding, and copying over those bits which you were missing.
Almost all of my comments tend to focus on a specific disagreement that I have with the broader community. That disagreement is due to some prior that I hold, that is not commonly held here.
And from Said, I agree with this:
Examples?
This community is especially prone to large, overly-wordy armchair philosophy about this-or-that with almost no substantial evidence that can tie the philosophy back down to Earth. Sometimes that philosophy gets camouflaged in a layer of pseudo-math; equations, lemmas, writing as if the post is demonstrating a concrete mathematical proof. To that end, focusing the community on providing examples is a valuable, useful piece of constructive feedback. I strongly disagree that this is an unfair burden on authors.
EDIT: I forgot to write an actual conclusion. Maybe "don't expect everyone to communicate in the same way, even if we assume that all interested parties care about the truth"?
comment by Zack_M_Davis · 2023-04-13T20:13:15.378Z · LW(p) · GW(p)
Thanks for writing this!! There's a number of places where I don't think you've correctly understood my position, but I really appreciate the engagement with the text I published: if you didn't get what I "really meant", I'm happy to do more work to try to clarify.
TEACH, so that B ends up believing X if X is right and Y if Y is right.
CONVINCE, so that B ends up believing X.
EXPOUND, so that the audience ends up believing X.
I'm unhappy with the absence of an audience-focused analogue of TEACH. In the following, I'll use TEACH to refer to making someone believe X if X is right; whether the learner is the audience or the interlocutor B isn't relevant to what I'm saying.
this amounts to supposing that everyone is trying to WIN, hopefully with constraints of honesty, which they do partly by trying to CONVINCE and EXPOUND. We may hope that everyone will LEARN but in Zack's presentation that doesn't seem to be present as a motive at all. [...] you're only going to be able to WIN if you start out disagreeing and argue vigorously for your dissenting opinion
That's not what I meant. In terms of your taxonomy of motives, I would say that people who don't think they have something to TEACH mostly don't end up writing comments: when I'm only trying to LEARN and don't have anything to TEACH, I usually end up silently reading (and upvoting contributions I LEARNed from) without commenting. LEARNing can sometimes be a motive for commenting: when I think an author might have something to TEACH me, but I can't manage to infer it from the text they've already published, I might ask a question. But I do think that's a minority of comments.
The relevance of WINning is as a reward for TEACHing. I think we should be trying to engineer a culture where the gradients of status, esteem, and WINning are aligned with good epistemology—where the way to WIN is by means of TEACHing truths, rather than CONVINCEing people of falsehoods. I am fundamentally pessimistic about efforts to get people to care less about WINning (as contrasted to my approach of trying to align what WINning means in the local culture). If I claimed that my motive for commenting on this website was simply that I altruistically wanted other users of this website to have more accurate beliefs, I would be lying; I just don't think that's how human psychology works.
but this isn't about diverging-the-opposite-of-converging at all, it's using "diverge" to mean "differ at the outset" [...] Converging means getting closer, not starting out already in agreement.
You know, that's a good point! Now that you point it out, I see that the passage you quote is bad writing and sloppy thinking on my part. As a result of you pointing this out, I think it's healthy for people to think (slightly) more of you for pointing out a flaw in the text I published, and (slightly) less of me for publishing flawed text. (And very slightly more of me for conceding the point as I am doing now, rather than refusing to acknowledge it.) You WIN. I think it's okay for you to WIN, and to enjoy WINning. You've earned it!
in the case where I'm right the others' problems needn't be "something wrong with their epistemic process" in any sense that requires disrespect
Isn't it, though?—at least for some relevant sense of the word "disrespect." Previously, you thought Tao was so competent that you never expected to find yourself in the position of thinking he was wrong. Now, you think he was wrong. If you're updating yourself incrementally [LW · GW] about Tao's competence, it seems like your estimate of his competence should be going down. Not by very much! But a little. That's the sense in which disagreement is disrespect. (The phrase comes from the title of the linked Robin Hanson post, which is also linked when I used the phrase in my post; Hanson explicitly acknowledges that the degree of disrespect might be small.)
he's conflating "not aiming to converge on truth" with "being wrong"
So, I tried to clarify what I meant there in the parenthesized paragraph starting with the words "This is with respect to the sense". Did that suffice, or is this more bad writing on my part? I'm not sure; I'm happy to leave it to the reader to decide.
disagree on some difficult question about, say, theoretical physics. The cooperate move [...]
I like the physics debate analogy, but the moral depends on how you map real-world situations to a payoff matrix. When I expressed disapproval of the Prisoner's-Dilemma-like frame, it was because I was worried about things analogous to finding errors in the other person's calculation being construed as "defection" (because finding fault in other people's work feels "adversarial" rather than "collaborative").
Zack is more willing than Duncan thinks he should be to decide that other people are bozos
I would particularly emphasize that people can be bozos "locally" but not "globally". If I'm confident that someone is wrong, I don't want to pretend to be more uncertain than I actually am in order to make them feel respected. But I'm also not casting judgement on the totality of their worth as a person; I'm just saying they're wrong on this topic.
I don't think it usually looks like Zack's optimistic picture
I'm glad you noticed the optimism! Yes, I would say that I'm relatively optimistic about the possibility of keeping discussions on track despite status-fighting instincts—and also relatively pessimistic about the prospects of collaborative norms to actually fix the usual problems with status-seeking rather than merely disguising them and creating new problems [LW(p) · GW(p)].
When having a discussion, I definitely try to keep in mind the possibility that the other person is right and I'm wrong. But if the person I'm currently arguing with were to tell me, "I don't feel like you're here to collaborate with me; I think you should be putting in more effort to think of reasons I might be right," that actually makes me think it's less likely that they might be right (even though the generic advice is good). Why? Because giving the advice in this context makes me think they're bluffing: I think if they had an argument, they would stick to the argument (telling me what I'm getting wrong, and possibly condemning my poor reading comprehension if they think they were already adequately clear), rather than trying to rules-lawyer me for being insufficiently collaborative.
That said, I don't think there's a unique solution for what the "right" norms are. Different rules might work better for different personality types, and run different risks of different failure modes (like nonsense aggressive status-fighting vs. nonsense passive-aggressive rules-lawyering). Compared to some people, I suppose I tend to be relatively comfortable with spaces where the rules err more on the side of "Punch, but be prepared to take a punch" rather than "Don't punch anyone"—but I realize that that's a fact about me, not a fact about the hidden Bayesian structure of reality. That's why, in "'Rationalist Discourse' Is Like 'Physicist Motors'" [LW · GW], I made an analogy between discourse norms and motors or martial arts—there are principles governing what can work, but there's not going to be a unique motor, a one "correct" martial art.
↑ comment by gjm · 2023-04-18T01:32:44.229Z · LW(p) · GW(p)
(Content-free reply just to note that I have noticed this and do intend to reply to it properly, when unlike now I have a bit of time to give it the attention it deserves. Apologies for slowness.)
↑ comment by Zack_M_Davis · 2023-04-18T03:56:50.189Z · LW(p) · GW(p)
Don't apologize; please either take your time, or feel free to just not reply at all; I am also very time-poor at the moment.
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T07:22:05.573Z · LW(p) · GW(p)
I have only read through this once, somewhat quickly, and have not had time to deeply grok and savor it.
But from that one quick read-through, I basically endorse gjm's presentation of my position; it passes my ITT with at least a B+, and plausibly, if I read through carefully, it would prove to deserve full marks.
I'm not quite sure how I would rephrase 5, given pushback and time to mull (it is more the case that the given phrasing doesn't do a good job expressing the thing than that I think the thing isn't part of the basics of good rationalist discourse).
There are a handful of pieces to the puzzle, including but not limited to:
- Good rationalist discourse, as opposed to good unilateralist truth-seeking inquiry, values and prioritizes some kind of collaboration. There are many different ways that a good collaborative truth-seeking conversation can look, ranging from sparring/debate to long trying-to-pass-one-another's-ITT's. But there has to be some kind of being-on-the-same-page about how the conversation itself works, some sort of mutual consent regarding the local rules of engagement, or you end up failing to get anything out of the discussion at best, or end up in conflict or wasting time/energy at worst.
- Setting "convergence with your partner" as the target/goal/thing to optimize for is wrong, as clearly stated in the original post. Most ways of converging are orthogonal to truth; if your intention is to smooth or strengthen a social connection you're not doing truth-seeking.
- It is in fact usually the case that, when two people disagree, each one possesses some scrap of map that the other lacks; it's relatively rare that one person is just right about everything and thoroughly understands and can conclusively dismiss all of the other person's confusions or hesitations. If you are trying to see and understand what's actually true, you should generally be hungry for those scraps of map that other people possess, and interested in seeing, understanding, and copying over those bits which you were missing.
So from the perspective of being on a forum about clear thinking, clear communication, and collaborative truth-seeking, i.e. if you came here to play the game that is being played here, there is some combination of prescriptions that goes something like:
- Don't be an antagonistic, suspicious, hostile dick; if you don't have some amount of baseline trust and charity for the median LWer then you're in the wrong place. Good faith should not be extended infinitely or blindly; people can in fact prove that they don't deserve it, but you should start from a place of "we're probably both here to try to get closer to the truth" because if you don't, then you're going to ruin the potential for collaboration before it has a chance to be realized. Analogy: don't join a soccer team full of people you do not trust to play soccer well and then immediately refuse to coordinate with them because you think they're untrustworthy teammates.
- That being said, don't prioritize "being a good teammate." Prioritize playing a good game of soccer/doing correct things with the ball. Trust others to receive the ball from you and pass the ball to you until they show you that they don't deserve that trust; don't make "evaluating whether they deserve that trust" or "bending over backwards to prove that you deserve that trust" high priorities. Part of the point of assembling in the garden of LessWrong rather than just doing everything out on the naked internet is that there's a higher baseline of good players around and you can just get down to playing soccer quickly, without a lot of preamble. Figuring out what's true is the actual game, getting closer to truth is the actual goal, and if you're focused on that, you will tend to move closer to others who are also doing that as a side effect.
- Share the labor of making progress with your conversational partners. This loops back to the Zeroth guideline, of expecting it to take some work, but there's a thing where, like, one person sits back and keeps expecting their partner to (basically) do their bidding, and do all of the heavy lifting, and this isn't playing soccer, this is plopping yourself down in the middle of the field and getting in the way of the other people who now have to play soccer around you and tune out your attempts to call plays from a seated position.
(Another way to say this is "it isn't anybody's job to convince or educate you—it's both of your jobs to figure out what's true, and the best way to do that is to hand information back and forth to each other, not to demand that other people hand their information to you.")
"Aim for convergence on truth" was my rough attempt to try to say these all in a single mouthful, because I do think there's One Thing here, even though it has some component parts. It's the thing that people are failing to do if they're prioritizing social smoothness above all else, but it's also the thing that people are failing to do if they act like a hostile jerk; it's something like "effectively leveraging the presence of two humans each in possession of different scraps of the underlying truth."
Some people fail on this one by not even trying to leverage the presence of two humans, and others fail on this by not doing so effectively. But like. There's some potential that is present in [the interaction of two people] that is bigger and better than what a unilateralist can expect to achieve by diving into enemy territory. That's what the Fifth guideline was trying to gesture at/codify.
↑ comment by gjm · 2023-04-12T11:16:47.900Z · LW(p) · GW(p)
(Upvoted for providing useful information about the relationship between what-I-conjectured-to-be-Duncan's-position and Duncan's-actual-position. Replace "read through this once, somewhat quickly" with "read through this carefully and can confirm either that there are no major discrepancies or that I have mentioned any that there are" and it'll be a strong-upvote instead :-).)
comment by Viliam · 2023-04-12T10:23:36.668Z · LW(p) · GW(p)
This debate feels to me like husband and wife yelling at each other, starting with some object-level thing like "you didn't do the dishes again", but quickly escalating to meta and more meta ("it's not just the stupid dishes, but your general attitude towards...", "no, the real problem is how you communicate...", "no, it's you who cannot communicate, because..."), where almost everything that could be said was already said in the past.
Prediction: the only way this will be helpful is people choosing sides, based on whom they like more. This may solve the situation in the sense that the losing side may admit their loss and leave, or the winning side may feel socially empowered to ban the losing side... but if this is what actually happens, I would hope we are adult enough to realize it (even if commenting on it explicitly may not be the smartest idea, because it adds more potential fuel to the drama, yet another thing to go meta about, etc.).
What should be done instead... no idea, just a general suspicion that going meta/abstract is actually avoiding the painful part.
comment by Said Achmiz (SaidAchmiz) · 2023-04-11T19:28:25.317Z · LW(p) · GW(p)
(Why do I think Duncan means that rather than, e.g., “explicitly aim to converge”? Because he wrote this: “If two people disagree, it’s tempting for them to attempt to converge with each other, but in fact the right move is for both of them to try to see more of what’s true.” And, next paragraph: “If you are moving closer to truth [...] then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth.” I think that’s a little too optimistic, incidentally.)
I’m afraid that it remains unclear to me how this interpretation may be simultaneously described as “aiming for convergence” (in any sense) and “aiming for truth”.
Yes, it seems more or less[1] correct that if two agents are moving closer to the truth, then they will also move closer to each other. But this does not seem to require any effort whatsoever from these two agents to converge on each other. Indeed it seems to require precisely the opposite: if either agent ever stops having convergence on truth as his only goal, he may well (indeed, is likely to) veer from the path—which, by hypothesis, will actually impede his progress toward convergence with his counterpart!
You say:
It is possible to aim for truth without aiming for convergence even in this sense.
And my question remains: what does it mean to “aim for convergence” “in this sense”, other than aiming for truth? It seems to me that not only is “aim[ing] for truth without aiming for convergence” possible, it is in fact the only thing that’s possible. In other words, it does not seem possible to aim for truth while also aiming for convergence!
For instance, if A is very confident that X is right but has given up hope of convincing B, he may aim to EXPOUND and secondarily to WIN, promoting truth by convincing the audience that X (which A believes) is true and also making them less likely to be (as A sees it) taken in by B. Or A may not care what anyone else ends up thinking and aim only to LEARN, which will produce convergence if it happens that B is right and A is wrong but will likely do little to help B if A was right from the start.
… or, having given up hope (or having never had any hope or interest) of convincing B, A may pursue some other goals [LW(p) · GW(p)] which, however, are not “dishonorable” (as “EXPOUND” and/or “WIN” may be described as being). (Which includes “LEARN”, certainly. But not only that!)
[1] But not in all cases. For example, suppose that the truth lies at point 100 along some continuum. Alice and Bob start at 1 and 3 respectively (that is, Alice believes that the true value is 1, Bob believes that the true value is 3). Alice encounters some evidence/arguments/whatever, and shifts her belief to 2. Bob also encounters some (different, presumably) evidence/arguments/whatever, and shifts his belief to 10. Alice and Bob have both moved closer to the truth, but have thereby moved further apart.
(And this is not merely a toy example. It is very easy to find real-life examples of this sort of thing. Most tend to be political in nature. We may consider, however, the case of attitudes about AI risk. Alice and Bob might start out with almost-but-not-quite-identical beliefs that AI risk is approximately irrelevant, and then Alice might read some articles in mainstream news sources and grow ever-so-slightly concerned, while still believing that this is a problem which will be solved in due course and deserves no attention from her; meanwhile, Bob might read Superintelligence and become moderately concerned, and conclude that AI is a real threat and something needs to be done. Alice and Bob’s beliefs are now further apart than they were before, but both have—or so we, here at Less Wrong, would generally claim!—moved closer to the truth.)
↑ comment by gjm · 2023-04-11T20:01:12.118Z · LW(p) · GW(p)
I agree that "aiming for convergence to truth" isn't great terminology for what I think Duncan means by those words. (Maybe he will drop by and clarify whether I'm understanding his intent correctly.)
I am guessing that (1) he picked that terminology without lengthy deliberation and would not necessarily wholly endorse it given that it's been challenged, and (2) that the notions in his head when he did so were some combination of these:
- Aiming for truth, expecting / hoping for convergence (as described at length in my post).
- Aiming to work together to get at the truth. Obviously "cooperating on X" and "converging to X" are different things, but
  - they do tend to go together in this case -- if A and B are working together on finding the truth, while C and D are working independently of one another on finding the truth, on some difficult question, I would expect A and B to end up closer than C and D, on average
  - the mere fact that they sound alike may have been a factor
- Aiming to come to a common understanding. One way for A and B to have a productive discussion is that they don't end up agreeing on everything, but their models of the world have enough overlap that they can understand one another well and see what bits they still disagree about.
I am a little puzzled by your asking
what does it mean to "aim for convergence" "in this sense", other than aiming for truth?
since you do so after quoting the first sentence of a paragraph whose purpose is to illustrate the distinction between merely "aiming for truth" and "aiming for convergence on the truth", and then go on to quote the rest of the paragraph which gives some examples of doing one without the other, and to add some further examples of your own.
I'll be more explicit (while reminding you that I agree that the terminology seems suboptimal to me, as I understand it does to you, so I'm not claiming that "aiming for convergence to the truth" is a particularly great way to describe the thing we are talking about):
- To say that someone is "aiming for convergence on the truth" rather than merely "aiming for truth" is (1) to affirm that they are "aiming for truth" and (2) to say also that they are choosing modes of aiming-for-truth that are designed not only to help them get to the truth but also to help others to get to the truth along with them, especially if those modes have the property of making it especially likely that both parties not only individually approach the truth but do so in ways that bring their positions closer to one another.
- Suppose, for instance, that you and I disagree about climate change: I accept the position of e.g. the IPCC, whereas you think their position is one of unscientific alarmism and expect both less warming and less harm than they predict. In particular, we disagree about (say) the best point-estimate for the amount of warming the climate will see by 2100.
  - I could pull out a bunch of IPCC reports and look up their numbers. This will, by my lights, get me closer to the truth, because (in my view) the IPCC has looked carefully at the best available analyses and summarized them accurately. But I have no particular expectation that doing this will change your position at all, and since I don't know which direction my current guesses about the numbers are wrong in I also have no idea whether it will bring us closer or further.
  - I could suggest that you and I describe our reasons for holding the views we do, and take a look at one another's best arguments, and do our best to evaluate them honestly. Each of us might be a bit concerned that the other has spurious but plausible-sounding arguments and will lead us astray, but if we're both reasonably confident in our ability to spot bullshit we should both expect that we will understand one another's positions better, and (in expectation) that we will move (a) towards the truth and (b) together, because each of us will concede some chance that the other will correctly persuade us of something we are currently unconvinced by.
  - The first of those is "aiming for truth". The second is "aiming for convergence on the truth".
- "Aiming for convergence on the truth" also has connotations of hoping that we will end up agreeing more than we do at the outset (specifically, hoping that we do so by both becoming less wrong, but there is a psychological difference between "I hope to get less wrong" and "I hope he gets less wrong" and "I hope we come to disagree less" and I am suggesting that all three are at least a little bit present in this case).
I agree with the point in your footnote that if two people get closer to the truth they don't necessarily get closer to one another, and that was the kind of thing I had in mind when I said "I think that's a little too optimistic". I think (as, if I'm interpreting you right, you do too) that this isn't the usual case, and furthermore I think it's especially not the usual case when the way they get closer to the truth is by discussing the matter with one another. And, as I hope the discussion above has already indicated, this is part of what I think is meant by "aiming for convergence on the truth".
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-11T20:54:22.022Z · LW(p) · GW(p)
I am a little puzzled by your asking
what does it mean to “aim for convergence” “in this sense”, other than aiming for truth?
since you do so after quoting the first sentence of a paragraph whose purpose is to illustrate the distinction between merely “aiming for truth” and “aiming for convergence on the truth”, and then go on to quote the rest of the paragraph which gives some examples of doing one without the other, and to add some further examples of your own.
Well, in the cited paragraph, you say:
I think “aim for convergence on truth” means “aim for the truth, with the expectation of convergence”.
But how is that different from “aim for the truth, without the expectation of convergence”? What, concretely, do you do differently?
You explain like so:
I think that when Duncan says we should “aim for convergence on truth” he means that A should aim to LEARN and TEACH, with the hope that if A and B pool their knowledge and understanding they will likely (1) both end up with truer views, (2) end up understanding one another better, and (3) since the truth is a single thing, end up nearer to one another. (3) here is not particularly a goal, but it is a likely outcome of this sort of discussion if pursued honestly and competently …
And, again: if (3) is not a goal, then it doesn’t affect what you actually do. Right? Again, you say things like “do [things] with the hope that [whatever]”. But how is that different from doing the same things without the hope that [whatever]?
Relatedly, you describe one possible interpretation/aspect of “aiming for convergence on truth” as:
if A and B are working together on finding the truth, while C and D are working independently of one another on finding the truth, on some difficult question, I would expect A and B to end up closer than C and D, on average
Would you? But is that because you think that A and B are going to be, on average, closer to the truth than C and D? Or further away from it? Or is one just as likely as the other? (This does not seem to me to be a trivial question, by the way. It has profound implications for the structural [LW · GW] questions involved in designing mechanisms and institutions for the purpose of truth-seeking.)
And relatedly again:
To say that someone is “aiming for convergence on the truth” rather than merely “aiming for truth” is (1) to affirm that they are “aiming for truth” and (2) to say also that they are choosing modes of aiming-for-truth that are designed not only to help them get to the truth but also to help others to get to the truth along with them, especially if those modes have the property of making it especially likely that both parties not only individually approach the truth but do so in ways that bring their positions closer to one another.
This seems like a wonderful thing to me!
… until we get to the “especially” part. Why would we want this? Isn’t that goal necessarily at odds with “aiming for truth”? Whatever choice you make to serve that goal, aren’t you inevitably sacrificing pursuit of truth to do so?
What’s more, it seems to me that what “choosing modes of aiming-for-truth that are designed not only to help them get to the truth but also to help others to get to the truth along with them” actually looks like is almost the diametric opposite of most of what’s been described (by Duncan, e.g.) as “aiming for convergence on truth”! (It is, ironically, the kinds of approaches that Zack and I have described that I would consider to be better at “also helping others to get to the truth”.)
As far as your climate change debate example goes—I am fully on board with such things, and with “aiming to come to a common understanding”. I think such efforts are tremendously valuable. But:
- I don’t think that describing them as anything like “aiming for convergence on truth” makes any sense. In such a scenario, you are in one sense aiming for truth, directly (that is, truth about your interlocutor’s beliefs and so on), and in another sense (on the object level of whatever the disagreement is about) certainly not aiming for convergence on anything.
- I think that such efforts to achieve mutual understanding have almost nothing to do with the rest of what has been proposed (e.g., again, by Duncan) as allegedly being part of “aiming for convergence on truth”.
In summary, I think that what’s been described as “aiming for convergence on truth” is some mixture of:
(a) Contentless (“do X, while hoping for Y”, where the second clause is epiphenomenal)
(b) Good but basically unrelated to the rest of it (“try to achieve mutual understanding”)
(c) Bad (various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on)
I think that the contentless stuff is irrelevant-to-harmful (harmful if it distracts from good things or serves as a smokescreen for bad things, irrelevant if it’s actually epiphenomenal); the good stuff is good but does not benefit at all from the “aim for convergence on truth” rhetoric; and the bad stuff, of course, is bad.
↑ comment by gjm · 2023-04-11T22:43:53.987Z · LW(p) · GW(p)
To much of the above I don't think I have anything to say beyond what I've said already.
Before answering those parts, I think I have something possibly-useful to say on a higher-level point. Assuming for the moment that I've given a roughly correct and at-least-kinda-coherent account of what Duncan meant by "aiming for convergence on the truth", it seems to me that there are two entirely separate questions here.
Question 1: Is "aiming for convergence on the truth" a good way to describe it?
Question 2: Is it a good thing to aim for?
It seems to me that most of what you are saying is about question 1, and that question 2 matters much more. I have limited appetite for lengthy argument about question 1, especially as I have already said that the phrase Duncan used is probably not a great description for what I think he meant.
is that because you think that A and B are going to be, on average, closer to the truth than C and D?
I think the specific activity of "trying to arrive at the truth together" tends to produce outcomes where the people doing it have opinions that are closer than they would be if they put the same amount of effort into trying to arrive at the truth individually.
I don't have any particular expectation for which of those activities gets the participants individually closer to the truth.
(But if you're already in a discussion, then 1. maybe that's because for whatever reason you think that doing it collectively will help, and 2. maybe it's because you prefer to do it collectively for other reasons besides trying to optimize your individual approaches to the truth, and 3. in that context I suspect that doing something that doesn't aim at finding the truth together is more than averagely likely to have suboptimal outcomes, e.g. because it's rude and most people don't react well to rudeness.)
until we get to the "especially" part. Why would we want this? Isn't that goal necessarily at odds with "aiming for truth"?
It's necessarily different from "aiming for truth". (Just as "not being an asshole", and "having some time available for one's paid work", and "impressing other people who are reading", and "having fun", and "avoiding giving influential people the impression that we're crackpots", etc., etc., are all different from "aiming for truth" and may to greater or lesser extents conflict with it, but people commonly try to do those things as well as "aiming for truth".)
(I think that in most contexts[1] it shouldn't be a goal. But it may be a consequence of seeking truth collaboratively, and that fact is one reason why I think it's not crazy to describe seeking truth collaboratively as "aiming to converge on the truth". Even though, as I have already mentioned two or three dozen times, I do not think that "aiming to converge on the truth" is a perfect description of what I think Duncan was trying to describe.)
[1] One plausible exception: when the people involved are going to have to work together on something related to the question at issue. E.g., an engineering team may work more effectively if they are all agreed on what the product should be like than if everyone has different ideas, even if what they all agree on isn't actually the best possible version of the product.
You are welcome to believe that your-and-Zack's preferred mode of interaction[2] is more effective in helping others get to the truth along with you than Duncan's preferred mode. You might be right. I do not, so far, find myself convinced.
[2] I don't mean to imply that you and Zack would do everything the exact same way, of course.
I don't agree with your analysis of the climate-change example. The approach I argue for is not aiming only for truth about the other person's beliefs, it's aiming for truth about the object-level question too; and it's no less genuinely aiming at convergent understanding just because you can also describe that as "aiming for truth about one another's beliefs".
As for your summary, your opinion is noted. I think that in "do X while hoping for Y" the second clause is relevant because, like it or not, people's attitudes are relevant when engaging in conversation with them. I don't agree that "try to achieve mutual understanding" is at all unrelated to "the rest of it". I don't know where you get "don't ask people for examples of their claims" from, and it sounds like a straw man[3]. The norms of interaction Duncan has actually suggested in this connection are things like "if you think the other person is not arguing in good faith, give them more chances to show otherwise than you feel like giving them" and "if you find what the other person is doing frustrating, it's better to say 'you said X, I interpret it as Y, and I find that frustrating because Z' than 'you are being frustrating'"; you're welcome to find those "bad", but you, again, haven't given any reason why anyone else should agree. And at least one important component of what-I-think-Duncan-meant, namely "trying to work together to find the truth, as opposed to e.g. fighting one another about what it is or each aiming for truth independently", has completely disappeared in your summary of the components of "aiming for convergence on the truth".
[3] I think the things Duncan has actually said are more like "Said engages in unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself", and wherever that lands on a scale from "100% truth" to "100% bullshit" it is not helpful to pretend that he said "it is bad to ask people for examples of their claims".
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T00:35:27.433Z · LW(p) · GW(p)
Assuming for the moment that I’ve given a roughly correct and at-least-kinda-coherent account of what Duncan meant by “aiming for convergence on the truth”, it seems to me that there are two entirely separate questions here.
Question 1: Is “aiming for convergence on the truth” a good way to describe it?
Question 2: Is it a good thing to aim for?
It seems to me that most of what you are saying is about question 1, and that question 2 matters much more. I have limited appetite for lengthy argument about question 1, especially as I have already said that the phrase Duncan used is probably not a great description for what I think he meant.
Well… this is somewhat disheartening. It seems that we are quite far from mutual understanding, despite efforts, because I would characterize the discussion thus far in almost entirely the opposite way. It seems to me that you haven’t given anything like a coherent account of what Duncan meant by “aiming for convergence on truth”; and I would say that most of what I’ve said has been about what is and isn’t good to aim for, and only peripherally have I commented on what terminology is appropriate.
I don’t have any particular expectation for which of those activities gets the participants individually closer to the truth.
But then there is no particular reason to prefer the cooperative variant! Why would we ever want to converge with each other, if not because we believe that this will result in us both being closer to truth than if we had converged less? There is no other motivation (except ones that decidedly have nothing to do with a desire to find the truth!).
I think that in most contexts[1] it shouldn’t be a goal.
Just so…
But it may be a consequence of seeking truth collaboratively, and that fact is one reason why I think it’s not crazy to describe seeking truth collaboratively as “aiming to converge on the truth”.
I disagree entirely. If something isn’t and also shouldn’t be a goal, then we shouldn’t say that we’re aiming for it. I mean, this really doesn’t seem complicated to me.
I don’t agree with your analysis of the climate-change example. The approach I argue for is not aiming only for truth about the other person’s beliefs, it’s aiming for truth about the object-level question too …
If you think that talking with intelligent climate change skeptics is a more effective way to get closer to the truth (in expectation) than perusing IPCC reports (a reasonable enough belief, certainly), then yes, if you do that, then you are “aiming for truth” (on the object level). If you don’t think that, and are talking with the skeptic only to gain understanding of what you believe to be a mistaken worldview, then of course you’re not “aiming for truth” (on the object level).
In both cases, convergence (on the object level) is a completely irrelevant red herring.
at least one important component of what-I-think-Duncan meant, namely “trying to work together to find the truth, as opposed to e.g. fighting one another about what it is or each aiming for truth independently”, has completely disappeared in your summary of the components of “aiming for convergence on the truth”
I do not agree. Quite simply, a comment like this one [LW(p) · GW(p)] is “working together to find the truth”, and anyone who does not think so, but instead perceives it as some sort of attack, is precisely the problem.
(By the way, here is what it looks like [LW(p) · GW(p)] when someone asks “what are some examples” and you don’t treat it as an attack, but instead as a way to clarify and elaborate on your claims. There is your “working together to find the truth”!)
And by extension, anything which is described by “working together to find the truth”, but which isn’t already included in “aiming for truth” as I understand it, goes in the Bad category.
↑ comment by gjm · 2023-04-12T02:03:06.840Z · LW(p) · GW(p)
I regret your disheartenment. I'm not sure what to do about it, though, so I shall just bear in mind that apparently at least one of us is having trouble understanding at least some of what the other writes and proceed.
Why would we ever want to converge with each other, if not because we believe that this will result in us both being closer to truth than if we had converged less?
As I said in the comment you were replying to, usually convergence-as-such should not be a goal. (I did also give an example of an important class of situations in which it reasonably might be.)
However, I want to register my not-total-agreement with an assumption I think you are making, namely that the only creditable motivation is "a desire to find the truth". We all have many goals, and finding the truth on any particular issue is never going to be the only one, and there is nothing wrong or disreputable or foolish about doing something for reasons that are not all about optimizing truth-finding on the particular issue at hand.
Again, I don't think that "end up with my opinion and so-and-so's opinion closer together" is generally a worthwhile goal. But other related things may be even if optimizing truth-finding is the top-level goal. "Make this place where we try to find the truth together a pleasant place so that more truth-finding can happen here". "Come to understand one another's positions better, so that in future discussions our attempts at truth-finding aren't obstructed by misunderstandings". "Make it clear that I respect So-and-so, so that it's less likely that he or others misinterpret something I say as a personal attack". And other related things may be worthwhile goals although they have little impact on truth-finding efficacy as such. "Have an enjoyable discussion" and "Help the other person have an enjoyable discussion", for instance. (One reason why people engage in discussion at all, when seeking the truth, rather than spending the time in solitary reading, thinking, etc., is that they enjoy discussion.)
If something isn't and also shouldn't be a goal, then we shouldn't say that we're aiming for it.
I feel I've almost said everything I usefully can on this terminological question, but maybe it's worth trying the following alternative tack. Since on the whole A and B tend to converge when they both approach the truth, and since this is especially true when the way they approach the truth is by discussing the issue in a manner intended to be mutually helpful, "approaching the truth" and "converging on truth" are both descriptions of the thing they are trying to achieve. Picking "converging on truth" over "approaching the truth" does not have to mean that you advocate pursuing (1) truth and (2) convergence as separate things. It can mean, and I think Duncan did mean, that you advocate pursuing truth, and the particular ways you have in mind for doing so are ones that tend to produce convergence.
Apparently you disagree. Fine. I have already said several times that I don't think Duncan's wording was optimal, after all. Something like "Aim to seek the truth together" or "Aim to seek the truth cooperatively" or "Aim to help one another arrive at the truth" would, I think, have expressed much the same idea; of course you might still object to those goals, but we'd at least be arguing about the goals rather than about the words. Still, I think Duncan's wording is a reasonable way of gesturing at the general idea of working together, seeking mutual understanding, hoping to end up both arriving at the truth (as opposed to e.g. hoping to end up arriving at the truth oneself and rather looking forward to the other guy not getting there so one can prove one's superiority), and generally trying to treat one another more as teammates than as bitter rivals; maybe you don't; I am not sure that there is any value in further argument about the words as opposed to the ideas.
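(An aside to make the "both approach the truth, hence converge" claim concrete — a minimal formalization, under the assumption, made only for illustration, that positions can be idealized as points in a metric space with the truth at a point t: if d(a, t) ≤ ε and d(b, t) ≤ ε, then by the triangle inequality

d(a, b) ≤ d(a, t) + d(t, b) ≤ 2ε.

So any process that shrinks each party's distance to t automatically shrinks an upper bound on their distance from one another, with no separate pursuit of "convergence" required.)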
If you think that talking with intelligent climate skeptics is a more effective way to get closer to the truth (in expectation) than perusing IPCC reports [...]
I don't think it is necessary to think that in order for talking with an intelligent climate skeptic to be useful and to be reasonably described as "aiming for truth". It isn't the case that only doing the single most truth-productive thing available to you is "aiming for truth".
Suppose (this is, after all, the situation I was proposing) you are already talking with an intelligent climate skeptic. Then maybe it's true that (1) in expectation you think you will learn more for a given expenditure of time and effort by reading IPCC reports, but (2) you none the less expect to learn things by talking with the skeptic and (3) you also expect to help the skeptic learn things at the same time, and (4) since you're already having a discussion it would be rude and annoying to just drop it and say "sorry, I have decided my time would be better spent reading IPCC reports". I submit that despite (1), (2) justifies saying that the discussion is "aiming for truth" and (3) and (4) are reasons for having that discussion (and for trying to make it a constructive discussion where you try to work together, rather than just dumping quotations from IPCC reports on the skeptic).
a comment like this one is "working together to find the truth" [...] here is what it looks like when someone asks "what are some examples" and you don't treat it as an attack
I think you are replying not to me but to some imagined version of me that is claiming it's always wrong to ask people for examples, even though in the comment you are replying to I said the exact opposite of that.
anything which is described by "working together to find the truth", but which isn't already included in "aiming for truth" as I understand it, goes in the Bad category.
So, first of all, you're contradicting yourself, because previously you were putting "aiming to understand one another" not in the Bad category but in the Irrelevant category, and affirming that working towards mutual understanding is a valuable thing even though you would prefer not to describe it as "aiming for convergence on the truth".
But, aside from that, the difference between "aiming for truth" and "aiming for convergence on the truth", as I have been conjecturing Duncan meant the term, is not about doing things other than aiming for truth, it's about selecting particular ways of aiming for truth. (On the grounds 1. that they are ways of aiming for truth, and 2. that they make the place more pleasant for everyone, which 2a. is valuable in itself and 2b. encourages people to engage in discussion here, and 3. that they help not only you but the other party arrive at the truth, which again is valuable in itself, and 4. that they make future discussions more likely to be productive, etc.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T01:49:45.574Z · LW(p) · GW(p)
A comment like that one is not at all an attack.
But it's also not at all "working together" to find truth. At best, it's offering the other person a modicum of encouragement, and giving them a smidge of a reason to focus their attention here rather than in any of the other places they might have been considering putting extra words—promoting a particular opportunity to attention.
That's not valueless, but it's small. 100% of the subsequent actual work was done by ryan_b, who put forth substantial effort in a 300+ word comment, to which Said then replied:
Thanks, but I meant actual, real-world examples, of each of the claims / points / sections—not, like, fictional / imagined ones. (Preferably, multiple examples per point / claim / section.)
... which is more of the "I'll sit over here in the shade while you connect every dot for me" mentality that I personally find quite tiresome. I didn't realize that, along with imperious calls for the intellectual and interpretive labor of others (with nothing whatsoever offered in exchange, such as e.g. one's own bits of possibly-relevant information), there was also a mental story aggrandizing those calls as "working together with [the person doing all the work] to find truth."
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T04:37:05.735Z · LW(p) · GW(p)
At best, it’s offering the other person a modicum of encouragement, and giving them a smidge of a reason to focus their attention here rather than in any of the other places they might have been considering putting extra words—promoting a particular opportunity to attention.
That’s not valueless, but it’s small.
You undervalue this greatly, I think. Attention is perhaps the greatest commodity, and correctly identifying where to focus efforts is of tremendous value.
Writing a deluge of text is worth little. Concisely saying exactly what needs to be said, and no more, is the goal.
100% of the subsequent actual work was done by ryan_b, who put forth substantial effort in a 300+ word comment
Effort spent on the wrong thing is worse than useless.
… which is more of the “I’ll sit over here in the shade while you connect every dot for me” mentality that I personally find quite tiresome.
Of these options:
1. The dots remain unconnected and, indeed, not even drawn in the first place.
2. Dots are drawn by commenters; connecting them is left to authors or other commenters. (There is no law, after all, that only the OP may “connect the dots” drawn by a commenter.)
Which do you choose?
And it is no good, please note, to protest that there is a third option of some commenter drawing and connecting the dots himself. For one thing, the results tend to be worse than when the author does it… but, more importantly, empirically this simply doesn’t happen. It’s one of those “fabricated options” [LW · GW].
So: you object to #2. #3 is unavailable. That leaves #1.
And that is precisely what we see, in many cases, where no one steps up and says “hey, what are some examples”, or asks some similar should-be-obvious question.
You are, of course, free to huff and get offended, and refuse to do the “intellectual and interpretive labor” of doing something so unreasonable as to provide examples of your claims (not even unprompted, but in response to a comment). Nobody’s forcing you to do anything but ignore such comments.
But who really loses, then? Is it the asker? Or is it you, and everyone in your audience?
What does it matter that the one who asks for examples offers you no “bits of possibly-relevant information” in exchange? Does that have the slightest bearing on whether having examples is necessary in order for the claims to be meaningful or useful?
Why is it even an “exchange” in the first place? If you make some claim, and I ask for examples, and you provide them, have you done something for me (and only for me)? Have you gained nothing from the exercise? Have all your other readers gained nothing?
Truly, I find your attitude toward such things baffling. It has always seemed to me that criticism—especially concise, well-directed criticism—is a gift, and a public service. Both when giving and when receiving it, I view criticism as a selfless contribution made to the commons, and a cooperative act—a way of helping both the author or creator of a work-in-progress (as most works of any value tend to be), and any other members of a collaborative community, to perfect it, together—building on one another’s commentary (critical and otherwise), responding to one another, contributing both generative and selective effort. Criticism cannot be the whole of that process, but it is an inseparable and necessary part of it.
Your transactional view, in contrast, really strikes me as quite strange. I cannot understand it. It seems to me quite clearly to be a worse way of doing things. Are you really quite sure that you want to reject all cooperation, all contributions, that do not conform to such transactional forms?
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T05:34:23.474Z · LW(p) · GW(p)
concise, well-directed criticism—is a gift, and a public service
I agree. Please ping me if you ever offer any.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T05:49:49.020Z · LW(p) · GW(p)
In lieu of that, I’d like to offer examples of good critical comments which I’ve received:
1 [LW(p) · GW(p)] 2 [LW(p) · GW(p)] 3 [LW(p) · GW(p)] 4 [LW(p) · GW(p)] 5 [LW(p) · GW(p)] 6 [LW(p) · GW(p)]
Some of these could be briefer, of course; though I can’t entirely begrudge their authors the reluctance to put in the effort to make their comments more concise. Still, it does seem to me that, on the whole, my own comment history is not too dissimilar from the above-linked set of comments made on one of my own posts. (And these are just the most useful ones!)
Do you disagree? Do you think that some or all of these comments are worthless, bad, harmful? (I assure you, I—the author of the post to which those commenters were responding—do not see them thus!) Or do you think that they bear no resemblance at all to my own commenting style? (That, too, seems like a dubious claim!)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T06:34:48.452Z · LW(p) · GW(p)
Thanks, but I meant actual, written-by-you examples, that meet the criteria of being concise and well-directed criticism—not, like, ones written by other people. (Preferably, multiple examples that show you putting forth at least half of the effort required to bridge the inferential gap between you and the author as opposed to expecting them to connect all the dots themselves.)
(This comment is a mirroring of/reference to a specific Said comment, meant to highlight certain properties of how Said engages with people.)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T07:09:38.145Z · LW(p) · GW(p)
Well, if you insist, I’ll be glad to provide such examples; I was mostly trying to give a helpful additional perspective on the matter, while also avoiding “tooting my own horn”.
However, I do want to object to this part:
(Preferably, multiple examples that show you putting forth at least half of the effort required to bridge the inferential gap between you and the author as opposed to expecting them to connect all the dots themselves.)
My earlier comment said:
It has always seemed to me that criticism—especially concise, well-directed criticism—is a gift, and a public service.
You are asking for examples of “concise, well-directed criticism”; fair enough. But what has this business about “putting forth at least half of the effort”, etc., got to do with anything? I certainly don’t agree with the implication that good and useful criticism (whether concise or not) must necessarily show the criticizer putting forth any specific amount of effort. (As I’ve said many times, effort, as such, is of no value.)
Furthermore, you have mentioned the “inferential gap” several times, and suggested that it is the criticizer’s job, at least in part, to bridge it. I disagree.
For one thing, the “inferential gap” framing suggests a scenario where Alice writes something which is true and correct, and Bob doesn’t understand it. But surely this isn’t to be taken for granted? Couldn’t Alice be wrong? Couldn’t what she’s written be nonsense? Couldn’t Alice be, as you put it [LW · GW], “full of shit to begin with”? (Many people are, you know!)
And it quite mystifies me why you would suggest that it’s the job of interlocutors to bridge any such purported gap. Of course it’s not! A commenter who has read some post and honestly finds that its truth is not evident, or its meaning unclear, and asks the author for explanations, clarifications, examples, etc., owes the author no “half of the effort to bridge the inferential gap”. That’s the author’s job. Fair consideration, honest communication, an open but critical mind, basic courtesy—these things are owed. But doing the author’s job for them? Certainly not.
With that said, here are some examples of comments, written by me, containing criticism which is inarguably concise and arguably well-directed (at least, I judge it to be so):
1 [LW(p) · GW(p)] 2 [LW(p) · GW(p)] 3 [LW(p) · GW(p)] 4 [LW(p) · GW(p)] 5 [LW(p) · GW(p)] 6 [LW(p) · GW(p)] 7 [LW(p) · GW(p)]
(I have more examples of what I’d consider well-directed criticism; but when it comes to brevity, I’m afraid I can’t do any better than the above-linked comments… well, as they say: growth mindset!)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T07:30:50.122Z · LW(p) · GW(p)
I'm sorry, how do any of those (except possibly 4) satisfy any reasonable definition of the word "criticism?" And as for "well-directed", how does a blanket "Examples?", absent any guidance about what kind, qualify? Literally the property that these examples specifically and conspicuously lack is "direction."
(This comment is a mirroring of/reference to Said's commenting style, meant to highlight certain properties of how Said engages with people.)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T07:59:07.051Z · LW(p) · GW(p)
I’m sorry, how do any of those (except possibly 4) satisfy any reasonable definition of the word “criticism?”
Well, I think that “criticism”, in a context like this topic of discussion, certainly includes something like “pointing to a flaw or lacuna, or suggesting an important or even necessary avenue for improvement”. I’d say that a request for examples of a claim qualifies as such, by pointing out that there aren’t already examples provided, and suggesting that it’s important to have some available for consideration. I’d call this sort of thing a fairly central type of criticism.
And as for “well-directed”, how does a blanket “Examples?”, absent any guidance about what kind, qualify?
What do you mean by this? I think it is clear enough, in each of the linked cases, “what kind” of examples are needed.
In the first case [LW(p) · GW(p)], the request is for examples of bureaucracies being used in the described way.
In the second case [LW(p) · GW(p)], the request is for examples of things of the described “types”.
In the third case [LW(p) · GW(p)], the request is for examples of the described phenomenon.
I could continue, but… this really seems to me to be a quite strange objection. Is it really unclear what’s being asked for, in these cases? I rather doubt it.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T08:03:51.715Z · LW(p) · GW(p)
Oh, I'm just engaging with you in precisely the way that you engage with others. This is what it feels like to be on the receiving end of Said (except I've only done it to you for, like, two comments, instead of relentlessly for years).
(I'll be sure to keep "I could continue, but… this really seems to me to be a quite strange objection. Is it really unclear what’s being asked for, in these cases? I rather doubt it." in a pasteboard, though; it's the correct response to quite a large fraction of your comments.)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T08:09:33.500Z · LW(p) · GW(p)
You seem to be suggesting that your questions weren’t asked in good faith, but rather as some sort of act.
If that’s so, then I must protest at your characterization of such behavior as being the way that I engage with others. I don’t think that bad faith, of all things, is an accusation that it makes any sense (much less is at all fair) to apply to my comments.
If, on the other hand, I’ve misinterpreted, and you were asking for examples and explanations in good faith—well, I’d say that I’ve provided both, and more than adequately. I hope that you may take inspiration from this, and consider behaving likewise in the future.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T08:12:22.529Z · LW(p) · GW(p)
If "let's be fair and justifiable in our assessments of one another's internals" is a standard you'd like to employ, you're something like 10,000 words too late [LW(p) · GW(p)] in your treatment of me, and I find your hypocrisy ... bold.
Separately, though, I wasn't making any claims about your internals; I was mirroring your observable behavior, asking the kind of unnecessary and obtuse questions you reliably flood threads with. If I had a model of why you do this, I would've used that model to get you to stop long before this point.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T08:17:34.579Z · LW(p) · GW(p)
I don’t think that “internals” has much to do with it. In this case, the accusation of bad faith is not based on some purported “internals”, but simply on behavior—statements, made by you. As I said, you seem to actually be saying that your earlier comments were made in bad faith. What need is there of “assessment of … internals”, when we have your plain words before us?
Likewise, I certainly don’t ask for fairness in assessment of my “internals”, nor, indeed, for any assessment of my “internals” at all; but fairness in assessing my behavior seems like an entirely reasonable thing to ask. Certainly it’s a standard which I’ve tried to follow, in my interactions with all members of Less Wrong.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T08:22:10.641Z · LW(p) · GW(p)
You are confused about what my previous comment was referencing. I've added a link, since the clue of "something like 10,000 words" was apparently not enough to make clear that I wasn't referring to the last couple of entries in this exchange.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T08:24:44.680Z · LW(p) · GW(p)
Perhaps. Feel free to clarify, certainly. (EDIT: Ah, you edited in a clarification. But… why did you think that I was confused about that? It never occurred to me to think that you were referring only to this subthread. My comment is entirely unchanged by this attempted clarification, and I can’t imagine why you would expect otherwise…)
But it remains the case that, by your own admission, you seem to have been engaging in bad faith here. My response [LW(p) · GW(p)] to that admission (and to that fact) stands.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T08:29:35.687Z · LW(p) · GW(p)
If you care to describe "someone exhibiting the exact same conversational behaviors that I, Said, regularly exhibit" as bad faith, that's certainly an interesting development.
I'd refer to it as "conforming to Said's preferred norms of engagement." I'm adopting your style; if you think that's bad, then perhaps you should do things differently.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T08:36:08.596Z · LW(p) · GW(p)
This comment [LW(p) · GW(p)] seems pretty clearly to imply that you were not asking your questions in good faith. That seems to me to be the plain meaning of it, quite independent of any questions of what constitutes “my style” or “your style” or any such thing.
As I have never, to my recollection, engaged in bad faith, I must object to your characterization of such as being “my style” or “my preferred norms of engagement”.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T08:43:42.794Z · LW(p) · GW(p)
As I have never, to my recollection, engaged in bad faith, I must object to your characterization of such as being “my style” or “my preferred norms of engagement”.
Ah, but of course you would deny it. Why would you say "yeah, I flooded these threads with disingenuous whataboutery and isolated demands for rigor"? It would make you look pretty bad to admit that, wouldn't it? Why shouldn't you instead say that you asked every question because you were nobly contributing to the collaborative search for truth, or some other respectable reason? What downside is there, for you?
And given that, why in the world would we believe you when you say such things? Why would we ever believe any commenter who, after immediately identifying a certain kind of question as confusing and dubious and unlikely to be genuine, claims that when he asks such questions, they're totally for good reasons? It doesn't make any sense at all to take such a claim seriously!
(This comment is also a near-exact reproduction of a Said comment, slightly modified to be appropriate to this situation, and thus surely the kind of utterance and reasoning that Said will overall endorse. I am slow to pick up Said-style lingo and will doubtless make errors as I climb up the learning curve; this kind of discourse is deeply alien to me and will take some time to master.)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T09:05:30.979Z · LW(p) · GW(p)
after immediately identifying a certain kind of question as confusing and dubious and unlikely to be genuine
To what “certain kind of question” do you refer here?
In any case, the point remains that you admitted (as far as I can tell; and you haven’t disputed the interpretation) that you engaged in bad faith. I certainly haven’t done any such thing (naturally so, because such an “admission” would be a lie!).
Thus there is no need to believe or disbelieve me on that count; we need only check the record. As I have noted, I refer only to behavior here, not to “internals”.
why in the world would we believe you when you say such things?
But I wouldn’t say such things.
The difference between “disingenuous whataboutery” and “nobly contributing to the collaborative search for truth” hasn’t anything to do with anyone’s motivations, except insofar as they are reflected in their actions—but then we can simply examine, and discuss, those actions.
Something like, say, a request for examples of some purported claim, is good and praiseworthy, regardless of whether it is, secretly, posted for the most noble or the most nefarious of reasons.
The distinction you’re pointing to, here, is one of evaluation, not fact. So it makes no sense to speak of “believing”, or of “denying”. One may reasonably speak of “disagreeing”, of “disputing”—but that is different.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T09:09:08.786Z · LW(p) · GW(p)
why in the world would we believe you when you say such things?
But I wouldn’t say such things.
"Such things" above refers to your claim that you've never engaged in bad faith, which is a thing you just said.
I am no longer concerned with your beliefs about anything, after your blatant falsehoods in this comment [LW(p) · GW(p)], in which you explicitly claim that four different links each say something that none of them come even remotely close to saying. That is sufficient justification for me to categorize you as an intentional liar, and I will treat you as such from this point forward.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T18:34:14.547Z · LW(p) · GW(p)
“Such things” above refers to your claim that you’ve never engaged in bad faith, which is a thing you just said.
But this makes no sense. As I made clear, when speaking of “bad faith”, I was referring to your (apparent) admission of engaging in bad faith. There is no need to “believe” or “disbelieve” your subsequent claims about whether you’ve done this, after you’ve admitted doing so.
Certainly I wouldn’t say “you must believe me that I’m not engaging in bad faith [but this is entirely a matter of my internal state, and not public record]”. That is the sort of thing of which you might reasonably say “why would we believe you?”. So, indeed, I did not “just” (or ever, in my recollection) say any such thing.
your blatant falsehoods in this comment [LW(p) · GW(p)], in which you explicitly claim that four different links each say something that none of them come even remotely close to saying
On the contrary, I correctly (as far as I can tell) claim that the five provided links demonstrate that, as I said, you believe a certain thing. I did not claim that you said that thing, only that you clearly (so it seemed, and seems, to me) believe it. (Certainly your behavior is difficult to explain otherwise; and when a person says something—multiple times and in multiple ways—that sounds like X, and then they also act like they believe X, is it not reasonable to conclude that they believe X? Of course more complex explanations are possible, as they always are; but the simplest explanation remains.)
Now, you later—after I’d responded, and after the ensuing discussion thread—edited the linked comment (Wayback Machine link as evidence; screenshot)[1] to add an explicit disclaimer that you do not, in fact, hold the belief in question.
Had you included such a note at the outset, the conversation would have gone differently. (Perhaps more productively, perhaps not—but differently, in any case. For example, I would have asked you for examples of you asking for examples and of you providing examples when asked; and then—supposing that you gave them—we could have discussed those examples, and analyzed what made them different from those which I provided, and perhaps gained understanding thereby.)
But that note wasn’t there before. So, as I see it, I made a claim, which had, until that moment and to my knowledge, been contradicted by nothing—not even by any disclaimer from you. This claim seemed to me to be both true and fairly obvious. In response to a challenge (by gjm) that the claim seemed like a strawman, I provided evidence, which seemed (and continues to seem) to me to be convincing.
I do not see any way to construe this that makes it reasonable to call me an “intentional liar”. The charge, as far as I can tell, is wholly unsubstantiated.
Now, you may protest that the claim is actually false. Perhaps. Certainly I don’t make any pretensions to omniscience. But neither do I withdraw the claim entirely. While I would no longer say that I “do not think it’s controversial at all to ascribe this opinion” to you (obviously it is controversial!), your previous statements (including some in this very discussion thread) and your behavior continue (so it seems to me) to support my claim about your apparent views.
I now say “apparent”, of course, because you did say that you don’t, in fact, hold the belief which I ascribed to you. But that still leaves the question of why you write and act as though you did hold that belief. Is it that your actual views on the matter are similar to (perhaps even indistinguishable for practical purposes from) the previously-claimed belief, but differ in some nuance (whether that be important nuance or not)? Is it that there are some circumstantial factors at play, which you perceive but I do not? Something else?
I think that it would be useful—not just to you or to me, but to everyone on Less Wrong—to dig into this further.
[1] Incidentally, I find this to be a quite unfortunate habit of yours. It has happened several times in this conversation that you posted some comment, I’ve responded to it, and then you later edited the comment in such a way that my response would’ve been very different if I’d read the edited version first (or, in some cases, would never have been written at all). In ~~no cases~~ almost no cases (EDIT: correction) have you signaled the edit (as I do [LW(p) · GW(p)] when I edit for substance), leaving me no way to discover, except by vigilant watchfulness, that a comment I’d responded to now contained new and/or different words, sentences, paragraphs. I cannot but strongly disapprove of such behavior.
↑ comment by Raemon · 2023-04-12T18:45:59.867Z · LW(p) · GW(p)
Mod note: I just gave Duncan and Said a commenting rate-limit of 1-per-day, mostly as a "slow down and give mods time to actually think" measure.
(This is not my ideal technological solution to the situation, but it was the easiest one to implement quickly, apologies)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T09:12:24.752Z · LW(p) · GW(p)
Needless to say, I don’t agree with your characterization (as my comment in that thread notes).
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T07:03:29.996Z · LW(p) · GW(p)
By the way, I’d like to call to your attention several other comments on that post [LW · GW]:
The reply [LW(p) · GW(p)] to my “Thanks, but I meant actual, real-world examples” comment:
Fair request—I have a few examples for consideration, which would probably be better to break out into individual comments attached to your parent to focus discussion.
Two comments (which describe, in detail, real-world cases purporting to be examples of the claimed phenomenon) posted as follow-ups:
https://www.lesswrong.com/posts/brQwWwZSQbWBFRNvh/how-to-use-bureaucracies#j7C5JmbhpP9boBrNe [LW(p) · GW(p)]
https://www.lesswrong.com/posts/brQwWwZSQbWBFRNvh/how-to-use-bureaucracies#oshL8azidaBNEipgK [LW(p) · GW(p)]
(Do you think the discussion was made better by these two comments’ presence? Made worse? Unaffected?)
A comment [LW(p) · GW(p)] by Raemon, which says:
At the time Samo was writing his sequence, I had a hesitation about the entire thing summed up by some of Said’s comments: it’s fairly easy to armchair philosophize about society. This post would be better with clear examples, and I’d still encourage Samo to rewrite this post and others to feature examples and evidence.
Nonetheless, all of my own experiences with bureaucracy roughly match the descriptions given here. I recently explicitly linked back to this post to explain a point, and more generally, the models in this post play an important role in how I think about group coordination.
(Do you disagree with the sentiment described in the first of those paragraphs?)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T07:25:25.317Z · LW(p) · GW(p)
I don’t know where you get “don’t ask people for examples of their claims” from and it sounds like a straw man[3]
[3] I think the things Duncan has actually said are more like “Said engages in unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself”, and wherever that lands on a scale from “100% truth” to “100% bullshit” it is not helpful to pretend that he said “it is bad to ask people for examples of their claims”
If “asking people for examples of their claims” doesn’t fit Duncan’s stated criteria for what constitutes acceptable engagement/criticism, then it is not pretending, but in fact accurate, to describe Duncan as advocating for a norm of “don’t ask people for examples of their claims”. (See, for example, this subthread [LW(p) · GW(p)] on this very post, where Duncan alludes to good criticism requiring that the critic “[put] forth at least half of the effort required to bridge the inferential gap between you and the author as opposed to expecting them to connect all the dots themselves”. Similar descriptions and rhetoric can be found in many of Duncan’s recent posts and comments.)
Duncan has, I think, made it very clear that a comment that just says “what are some examples of this claim?” is, in his view, unacceptable. That’s what I was talking about. I really do not think it’s controversial at all to ascribe this opinion to Duncan.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T07:34:22.904Z · LW(p) · GW(p)
Duncan has, I think, made it very clear that a comment that just says “what are some examples of this claim?” is, in his view, unacceptable. That’s what I was talking about. I really do not think it’s controversial at all to ascribe this opinion to Duncan.
No. I have never said any such thing, and you will not be able to back this up. You are, in the comment above, either lying or stupendously bad at reading comprehension. I do not hold such a stance, and provide examples to people who ask for them on the regular, and ask people for examples myself.
(In the subthread you link, I am deliberately reflecting back at you your own mode of engagement, in which you make arbitrary demands of your conversational partner; the words of that comment are intentionally almost identical to a comment of yours.)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T08:40:52.653Z · LW(p) · GW(p)
you will not be able to back this up
But I’ve already backed it up, with the link I provided.
(In the subthread you link, I am deliberately reflecting back at you your own mode of engagement, in which you make arbitrary demands of your conversational partner.)
I’ve certainly never made any demand like the one I quoted you as making, so “reflecting back at me” can’t be the explanation there.
Here are more citations for you expressing the sentiment I ascribed to you:
https://www.lesswrong.com/posts/9vjEavucqFnfSEvqk/on-aiming-for-convergence-on-truth#4GACyhtGfgmn4e6Yt [LW(p) · GW(p)]
https://www.lesswrong.com/posts/LrCt2T5sDn6KcSJgM/repairing-the-effort-asymmetry [LW · GW]
(Of course, that last post was written on April 1st. And, clearly enough, the “PMTMYLW” stuff is a joke. But was the rest of it also a joke? It did not seem to be, but certainly such things can be difficult to be very sure of, on the internet.)
https://www.lesswrong.com/posts/SX6wQEdGfzz7GKYvp/rationalist-discourse-is-like-physicist-motors#cGYzFnbFhyCYQwNQE [LW(p) · GW(p)]
https://www.lesswrong.com/posts/SX6wQEdGfzz7GKYvp/rationalist-discourse-is-like-physicist-motors#AuhFdDDzevPiQBqAP [LW(p) · GW(p)]
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T08:50:20.557Z · LW(p) · GW(p)
The first link does not support your characterization, and in fact explicitly states that the value of asking for examples is small but definitely real/nonzero. It is disingenuous of you to pretend that this is evidence in favor of your characterization when it directly (though weakly) contradicts it.
The second link is an April Fool's Day joke, explicitly heralded as such, and it is disingenuous of you to include it, in addition to the fact that it comes nowhere near the topic of asking for examples except in one place where, in the context of an April Fool's Day joke, the asking-for-examples is clearly one small part of an objectionable pile whose problematic nature is rooted in being a pile.
The third link is to a comment that does not support your characterization, and it is disingenuous of you to pretend it does.
The fourth link is to a comment that does not support your characterization, and it is disingenuous of you to pretend it does.
None of those are remotely close to defending, justifying, or supporting the claim "a comment that just says 'what are some examples of this claim?' is unacceptable according to Duncan".
You're claiming I believe X, and none of your links are even a Y that strongly implies X, let alone being X directly. Calling them "citations for you expressing the sentiment I ascribed to you" is a blatant falsehood.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T09:09:34.811Z · LW(p) · GW(p)
I leave it to readers to determine whether my links support my characterization or not. Certainly I think that they do (including, yes, the April Fool’s Day post).
Perhaps you might clarify your views on this topic? It’s always possible that I misinterpreted what you’ve written. (Although, in this case, I think that’s unlikely; and if I am mistaken about that, then I must note that it’s a very reasonable interpretation—the most reasonable one, I’d say—and if your views are other than what they seem to be, then you may wish to clarify. Or not, of course; that’s your call. But then you ought not be surprised when you’re read thus.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-12T09:12:56.480Z · LW(p) · GW(p)
Just noting that, based on the falsehoods above, in which Said explicitly claims that four different links each say something that none of them come even remotely close to saying, I now categorize him as an intentional liar, and will be treating him as such from this point forward.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T18:37:01.956Z · LW(p) · GW(p)
Please see this comment [LW(p) · GW(p)], where I note, and demonstrate, that the accusation is unsubstantiated and wholly unfair.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-11T20:20:45.404Z · LW(p) · GW(p)
Immediately before the first quote you reproduced, gjm says:
I think "aim for convergence on truth" means "aim for the truth, with the expectation of convergence".
Within the quote you reproduced, Duncan says:
If you are moving closer to truth [...] then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth.
To put it another way, "we are not aiming for mere convergence. Nor are we aiming for mere truth! We are aiming for convergence on the truth in a community of other truth-seekers."
If you aren't in a community of fellow truth-seekers, or don't think that you are, then aiming for convergence on truth is not what you should be doing. You should default to aiming for truth, while searching for a better community.
But if you are part of a community of truth-seekers, then "aiming for truth" is something we are taking more or less for granted. Instead, we are focused on the convergence piece, since even in a community of truth-seekers, we can't take it for granted that we will all arrive on the truth, despite our seeking efforts.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-11T20:30:47.522Z · LW(p) · GW(p)
To put it another way, “we are not aiming for mere convergence. Nor are we aiming for mere truth! We are aiming for convergence on the truth in a community of other truth-seekers.”
This remains obscure. Obvious questions:
- What is “mere truth”, and how does it differ from some other (“non-mere”?) truth?
- If aiming for truth “inevitably” causes you to move closer to convergence with other truth-seekers, then why would you ever need to aim for convergence (or for anything whatsoever besides truth)?
- What exactly is “aiming for convergence on the truth”? I understand “aim for truth” and I understand “aim for convergence”, but I don’t understand “aim for convergence on the truth”. What does that look like, mechanically? How does it differ from either of the other things?
But if you are part of a community of truth-seekers, then “aiming for truth” is something we are taking more or less for granted.
What does this mean, exactly?
- We’re taking for granted that everyone else is “aiming for truth”? (But are they really?)
- Or, each of us is taking for granted that he/she is “aiming for truth”? (But what does that mean? You can know that you’re aiming for truth if you are in fact aiming for truth, to the best of your ability. I am not sure what it means to take for granted that you’re aiming for truth. Probably you meant the other thing…)
Instead, we are focused on the convergence piece, since even in a community of truth-seekers, we can’t take it for granted that we will all arrive on the truth, despite our seeking efforts.
We can’t? But then how does it help to not aim for truth, if we can’t even take for granted that we’ll get to the truth even if we aim for it? How does “aiming for convergence”… er, sorry, “focus[ing] on the convergence piece” (but how is that different from “aiming for convergence”…?) actually help?
comment by Dagon · 2023-04-11T20:02:27.042Z · LW(p) · GW(p)
The problem with this decomposition of motives and behaviors is that "truth" is actually elusive, and often poorly-defined in most real debates. In the case of propositional, testable predictions, prediction markets are awesome because they ALIGN "win" motive with "truth-publication" motive. But just having multiple participants perform the calculations or share their evidence probably works too. It's pretty easy to recognize if someone is handwaving or not actually trying to improve their prediction, so walking away is feasible as well.
For discussions of generality or framing or modeling of a situation, that's a whole lot harder. Even policy or action debates often turn on which generalized principle(s) are most salient, for which "truth" doesn't actually apply. For these, the best debates seek cruxes, not necessarily convergence. The disagreement CAN be legitimate from different priors, or different weighting of interpretations of observations, not from incorrect epistemology.
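(A minimal sketch of the incentive alignment Dagon describes, using a proper scoring rule as a stand-in for a market — the scoring-rule framing is an assumption of this illustration, not Dagon's claim: suppose you assign probability p to an event and publicly report q, receiving a payoff of log q if the event happens and log(1 − q) if it doesn't. Your expected payoff is

f(q) = p log q + (1 − p) log(1 − q), with f′(q) = p/q − (1 − p)/(1 − q),

which is zero exactly at q = p, and f″(q) < 0 everywhere on (0, 1), so honestly publishing your true probability uniquely maximizes your expected "win". Real prediction markets only approximate this mechanism, but it is the cleanest version of the alignment between the win motive and the truth-publication motive.)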
comment by Said Achmiz (SaidAchmiz) · 2023-04-11T19:07:00.774Z · LW(p) · GW(p)
Here are some goals A might have. …
Additional possible goals that A might have:
- Learn and/or refine his thinking about things other than just X/Y
- What you call “TEACH”, directed not at B but rather at the audience—so that the audience ends up believing whichever of X/Y is true
- Give the audience the opportunity to learn and/or refine their thinking about things other than just X/Y
- Signal (to the audience) his own and/or B’s status, membership in some faction, adherence to some philosophy, possession of some personal characteristics, etc.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-11T19:40:06.744Z · LW(p) · GW(p)
So, for example, OP writes:
Well, Zack suggests that a debate is like a prediction market; you should only get into one if you think you have some knowledge or understanding that the people already there lack; you should therefore expect to diverge from others’ views, to try to prove them wrong, because that’s how you “win status and esteem”; to whatever extent debate converges on truth, it’s by incentivizing people to contribute new knowledge or understanding in order to win status and esteem.
In terms of the goals I mentioned above, this amounts to supposing that everyone is trying to WIN, hopefully with constraints of honesty, which they do partly by trying to CONVINCE and EXPOUND. We may hope that everyone will LEARN but in Zack’s presentation that doesn’t seem to be present as a motive at all.
I would characterize Zack’s account as describing my second (“TEACH” the audience) and third (“give the audience the opportunity to learn and/or refine their thinking other than just X/Y”) listed goals.
I do not think that “win status and esteem” is the same thing as “WIN, by making A look clever and wise and good, and making B look stupid or ignorant or crazy or evil”, especially given that Zack takes pains to emphasize that this process works well (in the sense of producing good outcomes, i.e. more truth for everyone) only if (and only because) status and esteem are awarded to those who are right (and not, for example, merely those who seem right!).
↑ comment by gjm · 2023-04-11T22:02:26.427Z · LW(p) · GW(p)
I agree that my list of goals isn't exhaustive. (It wasn't meant to be. It couldn't be.)
I don't think there is such a thing as a process that gives status and esteem to those who are right, as opposed to one that gives it to people who seem right, because status and esteem are necessarily conferred by people, and by definition a person X cannot distinguish "is right" from "seems right to X".
comment by CronoDAS · 2023-04-12T22:07:13.135Z · LW(p) · GW(p)
Silly nitpick: There is also a non-zero probability that your proof of a statement and Terry Tao's proof of its negation are both valid, which can happen if both of you are making a false assumption. (You prove that Conjecture A implies X, he proves that Conjecture A implies not-X, together you've proven Conjecture A false.)
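(Spelled out as a one-line derivation, since the logic is what does the work in this nitpick: from A ⇒ X and A ⇒ ¬X we get A ⇒ (X ∧ ¬X), i.e. A ⇒ ⊥, hence ¬A. Both proofs can be individually valid while jointly refuting the shared assumption A.)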
comment by WilliamTerry · 2023-04-12T00:23:48.059Z · LW(p) · GW(p)
While somebody might not engage in debate with the goal of gaining status, they can do it with the goal of convincing (whether through rationally argued explanation or otherwise), i.e. with the goal of spreading their ideas. That seems to be a very, very powerful motivation in the modern psyche. People are ready to pay considerable sums to obtain the right to leave comments on such-and-such media websites, etc. I am always struck by the quasi-Darwinian aspect of this. A lot of us, it seems, seek to spread our ideas just as Darwin says we seek to spread our genes. Ideological Darwinism, I would call it—if the name were not already taken for something else.