Actually, "personal attacks after object-level arguments" is a pretty good rule of epistemic conduct
post by Max H (Maxc) · 2023-09-17T20:25:01.237Z · LW · GW · 15 comments
Background: this post is a response to a recent post [LW · GW] by @Zack_M_Davis [LW · GW], which is itself a response to a comment on another post. I intentionally wrote this post in a way that tries to decontextualize it somewhat from the original comment and its author, but without being at least a bit familiar with all of the context, it's probably not very intelligible or interesting.
This post started as a comment on Zack's post, but I am spinning it out into its own post because I think it has broader applicability and because I am interested in hearing thoughts and responses from the readers and upvoters of that post, moreso than from its author.
I originally responded to Zack's post with a comment here [LW(p) · GW(p)], but on further reflection, I want to strengthen and clarify some claims I alluded to in that comment.
I see Zack's post as making two main claims. I have medium confidence that both of the claims are false, and high confidence that they are not well-supported by the text in the post.
Claim one (paraphrased): "personal attacks (alt. negative character assessments) should only come after making object-level arguments" isn't actually a {good,useful,true} rule of epistemic conduct.
I see the primary justifications given for this claim in the text as (paraphrased):
- The person claiming this is a rule should be able to explain why they think such a rule would systematically lead to more accurate beliefs ("maps that reflect the territory").
- In fact, no such valid explanation is likely to exist, because following such a rule would not systematically lead to the formation of more accurate beliefs.
The issue with the first justification is that no one has actually claimed that the existence of such a rule is obvious or self-evident. Publicly holding a non-obvious belief does not obligate the holder to publicly justify that belief to the satisfaction of the author.
Perhaps a more charitable interpretation of the author's words ("I think [the claimant]... should be able to explain why such a rule systematically produces maps that reflect the territory...") is that the absence of a satisfactory explanation from the claimant is Bayesian evidence that no such explanation exists. But if that's what the author actually meant, then they should have said so more plainly, and acknowledged that this is often pretty weak evidence.
(Relevant background context: Zack has previously argued at great length [LW · GW] that this particular claimant's failure to adequately respond publicly about another matter is Bayesian evidence about a variety of important inferences related to that claimant's epistemics.)
The issue with the second justification is that a valid explanation for why this is a good rule very likely does exist; I gave one that I find plausible at the end of my first comment:
If more people read the beginning of an argument than the end, putting the personal attacks at the beginning will predictably lead to more people seeing the attacks than the arguments that support them. Even if such readers are not consciously convinced by attacks without argument, it seems implausible that their beliefs or impressions will not be moved at all.
Another possible explanation (the inspiration for which Raemon gives in the comments [LW(p) · GW(p)]): one interpretation of the proposed rule is that it is essentially just a restatement of avoiding the logical fallacy of Bulverism. If that interpretation is accepted, what remains to be shown is that avoiding Bulverism in particular, or avoiding logical fallacies in general, is likely to lead to more accurate beliefs in both readers and authors, independent of the truth values of the specific claims being made. This seems plausible: one of the reasons for naming and describing particular logical fallacies in the first place is that avoiding them makes it harder to write (or even think!) untrue things and easier to write true things.
Note that I am not claiming that either of these explanations is necessarily correct, just that they are plausible-to-likely, empirically testable claims for why the proposed rule (in Zack's own words [LW · GW]) "arise[s] from deep object-level principles of normative reasoning" rather than being a guideline due to "mere taste, politeness, or adaptation to local circumstances".
Things like "readers often don't read to the end of the article", and "readers are unconsciously influenced by what they read", and "writers are unconsciously influenced by the structure of their own writing" are empirical claims about how the human brain works, which could in principle be tested. I do not claim to have such experimental evidence in hand, but I know which way I would bet on the experimental results, if someone were actually running the experiment.
Supposing you accept such hypotheses as likely, I contend that the term "epistemic conduct" accurately describes the purpose of the proposed rule, according to the ordinary and widely understood meaning of those words.
Sidebar: perhaps this rule in particular, or other rules which Zack names as actual rules of epistemic conduct, will not apply to some hypothetical ideal agent, or even to all non-human minds. Maybe soon we will be naming and describing rules of reasoning which apply to LLMs but are inapplicable to humans, e.g. "always write out key facts as a list of individual sentences before writing a conclusion, in order to make your KQV vectors larger and longer, which is known to improve the accuracy of your conclusions". For now though, it seems perfectly reasonable under the ordinary meaning of the words to call any rule which seems like it plausibly applies to reasoning processes in most or all of the actual minds we know about a rule of "epistemic conduct".
(My guess is that a rule about not opening with personal attacks when you're making controversial object-level claims will actually apply to LLMs as well as humans, though probably for very different underlying reasons. In my view, that makes it a particularly good candidate for declaring it a rule of epistemic conduct, rather than merely a good rule of conduct among humans.)
The fact that the original post doesn't really consider and reject even the most obvious possible candidate explanations as invalid or unlikely on empirical or theoretical grounds looks to me like a pretty glaring omission. Following my own advice above, I state plainly that I think such an omission is strong evidence that the claim is unsupported by the text, weak evidence that the claim is false, and make no further claims about what the author "should" do.
Claim two (paraphrased): the Gould [LW · GW] post is an example of a violation of the purported rule.
(Zack dutifully and correctly notes that this claim is not actually relevant to the validity of the first claim, nor is it even an accusation of hypocrisy. Despite this, he spends some time and energy on this point, so I will attempt to refute it here and use that refutation as a frame to make my own point about decontextualization.)
The main justification given for this claim is that, sufficiently decontextualized, the arguments are similar in structure to another post [LW · GW] which is more clearly a central example of a violation of the purported rule.
While decontextualizing [LW · GW] is often a useful and clarifying exercise, it is not a universally valid, truth-preserving operation. In this case, the rule under consideration comes with an implicit context about when and how the rule is meant to be applied.
Zack correctly notes that, for this rule in particular to make sense as a rule of epistemic conduct, it should be applicable independent of at least the truth values of the claims being made, and perhaps some other context, such as local discourse norms. Therefore, decontextualizing from the truth value of the object-level claims being made is a valid step. However, removing other context is not necessarily valid, and indeed in the two cases being compared, the relevant context outside of the truth values of the object-level claims is in fact quite important.
Why, and what context am I referring to? I gave one such explanation in my own comment [LW(p) · GW(p)]: essentially, it matters who is being attacked in front of what audience, and how likely anyone is to feel personally affronted by the negative character assessments. In the Gould post, Gould is of course not likely to feel any such affront, nor is anyone in the target audience likely to feel it on Gould's behalf. The result is that readers are likely to be able to dispute the character assessments without distraction, and judging from the comments on that post, this indeed appears to have happened: many people did dispute the character assessments as incorrect, but the discussion was not derailed by accusations or speculation about the author's own motivations, e.g. that he was simply grinding an axe against Gould for unseen personal reasons.
An alternative, perhaps better explanation follows directly from the advice given by Villiam in this comment [LW(p) · GW(p)]:
This is hindsight, but next time instead of writing "I think Eliezer is often wrong about X, Y, Z" perhaps you should first write three independent articles "my opinion on X", "my opinion on Y", "my opinion on Z", and then one of two things will happen -- if people agree with you on X, Y, Z, then it makes sense to write the article "I think Eliezer is often wrong" and use these three articles as evidence... or if people disagree with you on X, Y, Z, then it doesn't really make sense to argue to that audience that Eliezer is wrong about that, if they clearly think that he actually is right about X, Y, Z. If you want to win this battle, you must first win the battles about X, Y, Z individually.
(Shortly, don't argue two controversial things at the same time. Either make the article about X, Y, Z, or about Eliezer's overconfidence and fallibility. An argument "Eliezer is wrong because he says things you agree with" will not get a lot of support.)
Note that no such similar advice need be given to the author of the Gould post, even if the claims about Gould in that post are false! The author gave his views on the object level prior to writing the Gould post [LW · GW], and those views were received mostly positively and uncontroversially.
Again note that this advice applies independently of the truth values of the claims in the posts in question, and is plausibly also independent of any local discourse norms: lots of commenters thought the claims about Gould were wrong, to varying degrees, and the norms of 2007 LW were pretty different from the norms of the 2023 EAF.
When the decontextualization operation is applied properly, rather than improperly (by blindly removing all context), it becomes apparent that the proposed rule is simply inapplicable in the case of the Gould post, and was therefore not actually violated. This looks like the rule functioning as intended: the reasoning ability of the author and readers of the Gould post (which is what the rule is meant to protect when it does apply) was not noticeably impaired by the negative character assessments within, nor by their ordering.
(This is also why Zack's example about gummed stamps falls flat: a post about licking stamps is another context in which the rule is inapplicable, rather than wrong or not useful.)
I anticipate a possible objection to this section that the applicable context for when the proposed rule applies is not explicit, legible or stated by me or the original claimant anywhere. This is true, and indeed the fact that no one has provided a clear and explicit statement of exactly in which contexts the rule is supposed to apply and how is weak Bayesian evidence that no such crisp statement exists. Feel free to update on that, though consider also that you might learn more by thinking for yourself about what contexts are relevant and why, and seeing if you can come up with a crisp statement of applicability on your own.
A final remark on the choice of Zack's phrasing, which is not central to the claims above but which I think is key to how the claims were received:
...someone who wants to enforce an alleged "basic rule of epistemic conduct" of the form...
It is implied but not stated directly in the text that the "someone" here is Eliezer; and that by writing the comment, he (Eliezer) was attempting to "enforce a rule".
I think this is an unjustified and incorrect insinuation about Eliezer's internal motivations for leaving the comment. The comment was not necessarily an attempt to enforce a rule at all. I read the comment as an attempt to explain to the upvoters of the Omnizoid post why they erred in upvoting it, and to help them avoid making similar mistakes in the future. In the course of that explanation, the comment stated (but neither explained nor attempted to enforce) a rule of epistemic conduct.
After Eliezer posted the comment in question, the votes on the EAF version of the Omnizoid post swung pretty dramatically, which I take as evidence that my interpretation of the comment's intended purpose is more likely than Zack's, and that the comment was successful in that purpose.
15 comments
Comments sorted by top scores.
comment by Martin Randall (martin-randall) · 2023-09-18T13:49:27.095Z · LW(p) · GW(p)
For me, this post suffers from excessive meta. It is a top-level response to a top-level response to a comment on a top-level post of unclear merit. As I read it I find myself continually drawn to go back up the stack to determine whether your characterizations of Zack's characterizations of Yudkowsky's characterizations of Omnizoid's characterizations seem fair. This is not a good reading experience for me.
Instead, I would prefer to see a post like this written to make positive claims for the proposed rule of epistemic conduct "personal attacks after object-level arguments". A hypothetical structure:
- What is the proposed rule? Does "after" mean chronologically, or within the structure of a single post, book, or sequence? Is it equivalent to Bulverism, poisoning the well, or some other well-known rule, or is it novel? Does it depend on whether the person being attacked is alive, or whether they are a public figure?
- What are some good, clean, uncontroversial examples of writing that follows the rules vs writing that breaks the rules?
- What are the justifications for the proposed rule? Will people unconsciously update incorrectly?
- What are the best counter-arguments against the proposed rule? Why do you think they fail?
- What are the consequences for breaking the rule? Who shall enforce the rule and its consequences?
I think this would be a better timeless contribution to our epistemic norms.
Replies from: Jiro
comment by Zack_M_Davis · 2023-09-17T22:47:12.976Z · LW(p) · GW(p)
Thanks for writing!
I am interested in hearing thoughts and responses from the readers and upvoters of that post, moreso than from its author
Cold! (But given that you feel that way, it's fine for you to say so.)
Things like "readers often don't read to the end of the article", and "readers are unconsciously influenced by what they read", and "writers are unconsciously influenced by the structure of their own writing" are empirical claims about how the human brain works [...] perhaps this rule in particular [...] will not apply to some hypothetical ideal agent [...] For now though, it seems perfectly reasonable under the ordinary meaning of the words to call any [such] rule [...] a rule of "epistemic conduct".
This seems like a key crux. In the post, I agreed that a defer-personal-attacks rule is "good writing advice, which I generally endeavor to follow." But I balk at the word "epistemic" being applied in cases where the guidance in question seems to be catering to cognitive biases rather than working to overcome them.
Consider the "Readers often don't read to the end of the article" point. I agree that, indeed, readers often don't read to the end of the article. I don't think this imposes an "epistemic conduct" obligation on authors to only write things that can be understood by such readers. Sometimes an author is trying to say something nuanced that requires a whole article to say! If readers who can't be bothered to read the whole thing won't get it, I want to say that that's "their fault."
it matters who is being attacked in front of what audience, and how likely anyone is to feel personally affronted by the negative character assessments
I would say that if it matters who is being attacked, that would seem to be an indictment of that audience's epistemic conduct, rather than the author's.
I'm not sure I understand the contrary position? Pragmatically, I get that if I'm trying to persuade an audience of neo-Nazis of something, I would be more successful in my goal of persuasion if I avoid saying anything negative about Hitler. But I don't think of that as good "epistemic conduct" on my part; I see that as catering to my audience's prejudices.
Replies from: Maxc
↑ comment by Max H (Maxc) · 2023-09-17T23:32:17.580Z · LW(p) · GW(p)
But I balk at the word "epistemic" being applied in cases where the guidance in question seems to be catering to cognitive biases rather than working to overcome them.
This is indeed a crux; I view this as not relevant to the question of whether a rule is called "epistemic" or not. I see it as less about whether you are "catering to" or trying to "overcome" cognitive biases in yourself or in your reader, and more about whether you're accurately modeling the consequences of your actions.
Most of my post was arguing for dropping less context when applying this rule or using this term, but here I will actually argue for dropping more.
Facts about cognitive biases are ordinary facts about how minds work, and thus ordinary facts about how the world works, which you can use to make predictions about the consequences of your writing on other minds. Other, less debatable rules of epistemic conduct may follow from other kinds of facts about the world that have little or nothing to do with the category of cognitive biases, but I don't see the use of distinguishing between whether a rule may be called epistemic or not based on which type of true facts it follows from.
The LLM example in the OP is intended to illustrate this point obliquely; another example which stretches my own view to its limit is the following:
Suppose including the word "fnorp" anywhere in your post would cause Omega to reconfigure the brains of your readers so that they automatically agreed with anything you said. I personally would then say that not including the word "fnorp" in your post is a good rule of epistemic conduct. But I can see how in this case, the actual rule might be "don't do things which causally result in your readers having non-consensual brain surgery performed on them", which is less clearly a rule about epistemics, and therefore the use of the word "epistemic" attached to this rule is less justified.
I'm not particularly interested in assigning blame to authors or readers when such rules are not followed, and indeed I make no claims about what the consequences should be, if any, for not following the purported rules.
Eliezer's view (apparently) is that if you don't follow the rules, you get one comment addressing a couple of your object level claims, and then no further engagement from him personally. That seems reasonable to me, but also not particularly relevant to the question of what to call such rules or how to decide whether someone is following them or not.
Replies from: SaidAchmiz, SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-18T01:37:36.342Z · LW(p) · GW(p)
Eliezer’s view (apparently) is that if you don’t follow the rules, you get one comment addressing a couple of your object level claims, and then no further engagement from him personally. That seems reasonable to me
The problem with allowing yourself to do this sort of thing is that it creates an incentive to construct arbitrary “rules of epistemic conduct”, announcing them to nobody (or else making them very difficult to follow). Then you use non-compliance as an excuse to disengage from discussions and leave criticism unaddressed. If challenged, retort that it was not you who “defected” first, but your critics—see, look, they broke “the rules”! Surely you can’t be expected to treat with such rule-breakers?!
The result is that you just stop talking to anyone who disagrees with you. Oh, you might retort, rebut, rant, or debunk—but you don’t talk. And you certainly don’t listen.
Of course there is some degree of blatant “logical rudeness” [LW · GW] which makes it impossible to engage productively with someone. And, at the same time, it’s not necessarily (and, indeed, not likely to be) worth your time to engage with all of your critics, regardless of how many “rules” they did or did not break.
But if you allow yourself to refuse engagement in response to non-compliance with arbitrary rules that you made up, you’re undermining your ability to benefit from engagement with people who disagree with you, and you’re reducing your credibility in the eyes of reasonable third parties—because you’re showing that you cannot be trusted to approach disagreement fairly.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-18T01:27:26.605Z · LW(p) · GW(p)
This is indeed a crux; I view this as not relevant to the question of whether a rule is called “epistemic” or not. I see it as less about whether you are “catering to” or trying to “overcome” cognitive biases in yourself or in your reader, and more about whether you’re accurately modeling the consequences of your actions.
Conflating epistemics with considerations like this is deadly to epistemics. If we’re going to approach epistemic rationality in this fashion, then we ought to give up immediately, as any hope for successful truthseeking is utterly unjustified with such an approach.
comment by Said Achmiz (SaidAchmiz) · 2023-09-18T01:24:58.989Z · LW(p) · GW(p)
No comment on the rest for now, but:
After Eliezer posted the comment in question, the votes on the EAF version of the Omnizoid post swung pretty dramatically, which I take as evidence that my interpretation of the comment’s intended purpose is more likely than Zack’s, and that the comment was successful in that purpose.
It seems to me that the exact opposite is true: such a vote swing is evidence against your characterization of the situation as “an attempt to explain to the upvoters of the Omnizoid post why they erred in upvoting it, and to help them avoid making similar mistakes in the future”, and for the characterization as “attempting to enforce a rule” (i.e., clearly the attempt was successful).
Eliezer’s comment introduces no new information or arguments (as his object-level rebuttals, while perfectly correct, are nothing new, and merely restate previously written things; nor was anything new needed, of course, the accusing post having contained no arguments valid or coherent enough to require such). So it seems unlikely that anyone was convinced that they had erred, after reading Eliezer’s reply, and consequently changed their vote. Much more likely that people were responding to the post, and the comment, as entries in a conflict, and were rallied to support Eliezer’s side after he came out to support himself.
There’s nothing wrong with that, really, but it’s a case of rule enforcement, not persuasion.
Replies from: Maxc
↑ comment by Max H (Maxc) · 2023-09-18T02:59:57.662Z · LW(p) · GW(p)
I disagree, but this seems like something that could be settled with an anonymous survey of EAF readers more easily than via argument. There would probably be some issues with response bias, and you would have to trust EAF readers to accurately recall and report their voting patterns and voting reasons. But even for a biased or flawed survey, we could agree on a methodology and bet on the predicted results beforehand as a way of settling the disagreement.
I personally won't take this on because the point seems pretty low stakes to me either way, but if someone else decides to, please create a Manifold market before conducting any surveys. The market description should include a description of the survey and proposed sampling method, as well as a disclaimer asking market participants not to take the survey themselves.
comment by Algon · 2023-09-17T20:42:03.631Z · LW(p) · GW(p)
(Relevant background context: Zack has previously argued at extreme length that this particular claimant's failure to adequately respond publicly about another matter is Bayesian evidence about a variety of important inferences related to that claimant's epistemics.)
I would change "extreme" to "great". "Extreme" seems more value-laden to me. I think separating observations from inferences, especially when value-laden, makes your beliefs more legible.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2023-09-17T22:47:51.688Z · LW(p) · GW(p)
I don't have a problem with "extreme". I read it in Wiktionary sense 3: "Excessive, or far beyond the norm." I can see how that's value-laden in the sense that what's excessive or far beyond the norm depends on what length you think would be adequate or within the norm. Clearly, I don't think it was excessive, or I wouldn't have written that way. But if Max H. thinks it was excessive (relative to his own perception of adequacy or norms), I think it's fine for him to indicate as much with the word "extreme". Who is being misled by this, exactly?
Replies from: Algon
↑ comment by Algon · 2023-09-17T23:47:52.818Z · LW(p) · GW(p)
My initial comment didn't express my actual problem with "extreme", because I didn't understand what I felt was wrong with "extreme". My apologies. I'll try again.
"Extreme" is quite close to "extremist", which has a bright negative halo attatched to it. Unfortunately for me, that often spills over to "extreme" when I head the word in a bunch of contexts. If I'm not paying close attention, I expect to translate someone going to an "extreme length" in a piece critical of another claim said person has made, into something with an undue negative halo. So I came away thinking a value judgement had been made on my first pass. I expect enough readers are like me, and read this article in a superficial manner like I often do, that it would be of minor benefit to change the word to "great".
Separately, saying you argued at "great length" isn't what I would call a value judgement. In my view, if you thought the average reader would go "whoa, that's huge!" you could use this phrase. Yes, the boundary of "huge" is partly socially determined. But I'd say concepts like "huge", "bright", "heavy" etc. are closer to a natural abstraction than judgements like "boo"/"yay", "good"/"bad", or "pleasant"/"unpleasant". They're useful across a wider variety of humans.
↑ comment by Zack_M_Davis · 2023-09-18T05:55:52.445Z · LW(p) · GW(p)
I think I have more faith in Less Wrong readers than you do? I trust readers of this website to be able to interpret words like "extreme" according to their literal meanings, rather than being dominated by connotational halo effects.
Replies from: Algon
↑ comment by Algon · 2023-09-18T10:46:54.660Z · LW(p) · GW(p)
Never said they'd be dominated by that effect. Nor did I say the majority of LW readers would be at all affected in that way. I think there's at best a minor effect, which would impact within an OoM of 1% of readers. Which is why I said 'it would be of minor benefit to change the word to "great"'. But it would also be pretty quick and maybe a good idea? This is all, of course, assuming Max H didn't want some negative association attached to your linked piece. I didn't know if that's what Max H was going for.
As for why I spent all this time on such a minor point, I guess something was bothering me about the word "extreme" in that sentence and I wanted to focus on it. When I focus like that, it commonly boots me out of a funk, which is what I was in, so doing this was positive EV for me.
↑ comment by Zack_M_Davis · 2023-09-19T04:43:58.376Z · LW(p) · GW(p)
I'm glad you're out of your funk!
comment by alexey · 2023-10-14T17:06:32.519Z · LW(p) · GW(p)
The issue with the first justification is that no one has actually claimed that the existence of such a rule is obvious or self-evident. Publicly holding a non-obvious belief does not obligate the holder to publicly justify that belief to the satisfaction of the author.
However, Yudkowsky also called the rule "straightforward" and said that
violating it this hugely and explicitly is sufficiently bad news that people should've been wary about this post and hesitated to upvote it for that reason alone
That is, he expected a majority of EA Forum members (at least) to also consider it a "basic rule".