"Rationalist Discourse" Is Like "Physicist Motors"
post by Zack_M_Davis · 2023-02-26T05:58:29.249Z · LW · GW
Imagine being a student of physics, and coming across a blog post proposing a list of guidelines for "physicist motors"—motor designs informed by the knowledge of physicists, unlike ordinary motors.
Even if most of the things on the list seemed like sensible advice to keep in mind when designing a motor, the framing would seem very odd. The laws of physics describe how energy can be converted into work. To the extent that any motor accomplishes anything, it happens within the laws of physics. There are theoretical ideals describing how motors need to work in principle, like the Carnot engine, but you can't actually build an ideal Carnot engine; real-world electric motors or diesel motors or jet engines all have their own idiosyncratic lore depending on the application and the materials at hand; an engineer who worked on one might not be the best person to work on another. You might appeal to principles of physics to explain why some particular motor is inefficient or poorly-designed, but you would not speak of physicist motors as if that were a distinct category of thing—and if someone did, you might quietly begin to doubt how much they really knew about physics.
As a student of rationality, I feel the same way about guidelines for "rationalist discourse." The laws of probability and decision theory describe how information can be converted into optimization power. To the extent that any discourse accomplishes anything, it happens within the laws of rationality [LW · GW].
Rob Bensinger proposes "Elements of Rationalist Discourse" [LW · GW] as a companion to Duncan Sabien's earlier "Basics of Rationalist Discourse" [LW · GW]. Most of the things on both lists are, indeed, sensible advice that one might do well to keep in mind when arguing with people, but as Bensinger notes, "Probably this new version also won't match 'the basics' as other people perceive them."
But there's a reason for that: a list of guidelines has the wrong type signature for being "the basics". The actual basics are the principles of rationality one would appeal to in order to explain which guidelines are a good idea: principles like how evidence is the systematic correlation between possible states of your observations and possible states of reality [LW · GW], how you need evidence to locate the correct hypothesis in the space of possibilities [LW · GW], how the quality of your conclusion can only be improved by arguments that have the power to change that conclusion [LW · GW].
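(As a minimal restatement of that first principle in symbols, using nothing beyond standard probability theory: an observation E is evidence about a hypothesis H exactly insofar as its probability differs between the worlds where H is true and the worlds where it is false, and the strength of the resulting update is the ratio of those two probabilities.)

```latex
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
```

If P(E | H) = P(E | ¬H), the ratio is one and the observation cannot move the conclusion at all, which is the sense in which only arguments with the power to change a conclusion can improve it.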
Contemplating these basics, it should be clear that there's just not going to be anything like a unique style of "rationalist discourse", any more than there is a unique "physicist motor." There are theoretical ideals describing how discourse needs to work in principle, like Bayesian reasoners with common priors exchanging probability estimates, but you can't actually build an ideal Bayesian reasoner. Rather, different discourse algorithms (the collective analogue of "cognitive algorithm" [LW · GW]) leverage the laws of rationality to convert information into optimization in somewhat different ways, depending on the application and the population of interlocutors at hand, much as electric motors and jet engines both leverage the laws of physics to convert energy into work without being identical to each other, and with each requiring their own engineering sub-specialty to design.
Or to use another classic metaphor [LW · GW], there's also just not going to be a unique martial art. Boxing and karate and ju-jitsu all have their own idiosyncratic lore adapted to different combat circumstances, and a master of one would easily defeat a novice of another. One might appeal to the laws of physics and the properties of the human body to explain why some particular martial arts school was not teaching their students to fight effectively. But if some particular karate master were to brand their own lessons as the "basics" or "elements" of "martialist fighting", you might quietly begin to doubt how much actual fighting they had done: either all fighting is "martialist" fighting, or "martialist" fighting isn't actually necessary for beating someone up.
One historically important form of discourse algorithm is debate, and its close variant the adversarial court system. It works by separating interlocutors into two groups: one that searches for arguments in favor of a belief, and another that searches for arguments against the belief. Then anyone listening to the debate can consider all the arguments to help them decide whether or not to adopt the belief. (In the court variant of debate, a designated "judge" or "jury" announces a "verdict" for or against the belief, which is added to the court's shared map [LW · GW], where it can be referred to in subsequent debates, or "cases.")
The enduring success and legacy of the debate algorithm can be attributed to how it circumvents a critical design flaw in individual human reasoning, the tendency to "rationalize"—to preferentially search for new arguments for an already-determined conclusion.
(At least, "design flaw" is one way of looking at it—a more complete discussion would consider how individual human reasoning capabilities co-evolved with the debate algorithm—and, as I'll briefly discuss later, this "bug" for the purposes of reasoning is actually a "feature" for the purposes of deception.)
As a consequence of rationalization, once a conclusion has been reached, even prematurely, further invocations of the biased argument-search process are likely to further entrench the conclusion, even when strong counterarguments exist (in regions of argument-space neglected by the biased search). The debate algorithm solves this sticky-conclusion bug by distributing a search for arguments and counterarguments among multiple humans, ironing out falsehoods [LW · GW] by pitting two biased search processes against each other. (For readers more familiar with artificial than human intelligence, generative adversarial networks work on a similar principle.)
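(Here is a toy simulation of that claim, under illustrative assumptions of my own: each "argument" is an independent piece of signed evidence, a motivated reasoner reports only the pieces favoring its predetermined side, and a judge who hears two opposed one-sided searches ends up much closer to the right answer than a judge who hears only one.)

```python
import random

# Toy model (illustrative assumptions throughout): the "right answer" about a
# claim is the sum of many independent argument strengths; positive pieces
# favor it, negative pieces cut against it. Searchers are imperfect and only
# find each argument on their own side with some probability.
random.seed(0)

FIND_PROB = 0.8  # how thoroughly a motivated searcher combs its own side

def run_trial(n_args: int = 40) -> tuple[float, float]:
    arguments = [random.gauss(0, 1) for _ in range(n_args)]
    truth = sum(arguments)

    # Each advocate only reports arguments that help its side.
    pro_found = [a for a in arguments if a > 0 and random.random() < FIND_PROB]
    con_found = [a for a in arguments if a < 0 and random.random() < FIND_PROB]

    one_sided_estimate = sum(pro_found)                # judge hears only "pro"
    debate_estimate = sum(pro_found) + sum(con_found)  # judge hears both sides
    return abs(one_sided_estimate - truth), abs(debate_estimate - truth)

results = [run_trial() for _ in range(10_000)]
print("mean error, one biased search:   ", sum(r[0] for r in results) / len(results))
print("mean error, two opposed searches:", sum(r[1] for r in results) / len(results))
# The adversarial setup is far less biased, because the arguments one side
# neglects are exactly the ones the other side is incentivized to find.
```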
For all its successes, the debate algorithm also suffers from many glaring flaws. For one example, the benefits of improved conclusions mostly accrue to third parties who haven't already entrenched on a conclusion; debate participants themselves are rarely seen changing their minds [LW · GW]. For another, just the choice of what position to debate has a distortionary effect even on the audience; if it takes more bits to locate a hypothesis for consideration than to convincingly confirm or refute it [LW · GW], then most of the relevant cognition has already happened by the time people are arguing for or against it. Debate is also inefficient: for example, if the "defense" in the court variant happens to find evidence or arguments that would benefit the "prosecution", the defense has no incentive to report it to the court, and there's no guarantee that the prosecution will independently find it themselves.
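(To put rough, made-up numbers on the "bits" point: singling out one hypothesis from a million candidates costs about twenty bits of selection, while even a strong argument with a 10:1 likelihood ratio supplies only about three bits, so whoever chose which proposition gets debated has already done most of the epistemic work.)

```latex
\log_2 10^6 \approx 19.9\ \text{bits to locate the hypothesis},
\qquad
\log_2 \frac{P(E \mid H)}{P(E \mid \neg H)} = \log_2 10 \approx 3.3\ \text{bits per strong argument}
```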
Really, the whole idea is so galaxy-brained that it's amazing it works at all. There's only one reality, so correct information-processing should result in everyone agreeing on the best, most-informed belief-state. This is formalized in Aumann's famous agreement theorem, but even without studying the proofs, the result is obvious. A generalization to a more realistic setting without instantaneous communication gives the result that disagreements should be unpredictable: after Bob the Bayesian tells Carol the Coherent Reasoner his belief, Bob's expectation of the difference between his belief and Carol's new belief should be zero. (That is, Carol might still disagree, but Bob shouldn't be able to predict whether it's in the same direction as before, or whether Carol now holds a more extreme position on what adherents to the debate algorithm would call "Bob's side.")
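(A quick way to see the "unpredictable" part, sketched as a toy check with hypothetical numbers of my own choosing: a Bayesian's current belief already equals the probability-weighted average of their possible future beliefs, a fact sometimes called conservation of expected evidence, so neither Bob nor anyone modeling Bob can predict which direction the next update will go.)

```python
import random

# Toy check (hypothetical numbers) of conservation of expected evidence:
# the prior equals the expectation of the posterior, so the *direction*
# of the next update is unpredictable in advance.
random.seed(0)

PRIOR = 0.3             # P(H)
P_E_GIVEN_H = 0.8       # P(E | H)
P_E_GIVEN_NOT_H = 0.4   # P(E | not-H)

def posterior(e_observed: bool) -> float:
    """Bayes' rule for a binary hypothesis and a binary observation."""
    like_h = P_E_GIVEN_H if e_observed else 1 - P_E_GIVEN_H
    like_not_h = P_E_GIVEN_NOT_H if e_observed else 1 - P_E_GIVEN_NOT_H
    return PRIOR * like_h / (PRIOR * like_h + (1 - PRIOR) * like_not_h)

total, trials = 0.0, 200_000
for _ in range(trials):
    h = random.random() < PRIOR                                    # sample a world
    e = random.random() < (P_E_GIVEN_H if h else P_E_GIVEN_NOT_H)  # sample evidence
    total += posterior(e)

print("prior:", PRIOR, "  average posterior:", round(total / trials, 3))
# The two agree up to sampling noise: you cannot expect your future self, or a
# fellow Bayesian you have just spoken with, to disagree in a known direction.
```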
That being the normative math, why does the human world's enduringly dominant discourse algorithm take for granted the ubiquity of, not just disagreements, but predictable disagreements? Isn't that crazy?
Yes. It is crazy. One might hope to do better by developing some sort of training or discipline that would allow discussions between practitioners of such "rational arts" to depart from the harnessed insanity of the debate algorithm with its stubbornly stable "sides", and instead mirror the side-less Bayesian ideal, the free flow of all available evidence channeling interlocutors to an unknown destination.
Back in the late 'aughts, an attempt to articulate what such a discipline might look like was published on a blog called Overcoming Bias. (You probably haven't heard of it.) It's been well over a decade since then. How is that going?
Eliezer Yudkowsky laments [LW · GW]:
In the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to—they did not really get Bayesianism as thermodynamics [LW · GW], say, they did not become able to see Bayesian structures [LW · GW] any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they'd spent a lot of time being exposed to over and over and over again in lots of blog posts.
"A vague spirit of how to reason and argue" seems like an apt description of what "Basics of Rationalist Discourse" and "Elements of Rationalist Discourse" are attempting to codify—but with no explicit instruction on which guidelines arise from deep object-level principles of normative reasoning, and which from mere taste, politeness, or adaptation to local circumstances, it's unclear whether students of 2020s-era "rationalism" are poised to significantly outperform the traditional debate algorithm—and it seems alarmingly possible to do worse, if the collaborative aspects of modern "rationalist" discourse allow participants to introduce errors [LW · GW] that a designated adversary under the debate algorithm would have been incentivized to correct, and most "rationalist" practitioners don't have a deep theoretical understanding of why debate works as well as it does.
Looking at Bensinger's "Elements", there's a clear-enough connection between the first eight points (plus three sub-points) and the laws of normative reasoning. Truth-Seeking, Non-Deception, and Reality-Minding, trivial. Non-Violence, because violence doesn't distinguish between truth and falsehood. Localizability, in that I can affirm the validity [LW · GW] of an argument that A would imply B, while simultaneously denying A. Alternative-Minding, because decisionmaking under uncertainty requires living in many possible worlds. And so on. (Lawful justifications for the elements of Reducibility and Purpose-Minding left as an exercise to the reader.)
But then we get this:
- Goodwill. Reward others' good epistemic conduct (e.g., updating) more than most people naturally do. Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility, unless someone has explicitly set aside a weirder or more rough-and-tumble space.
I can believe that these are good ideas for having a pleasant conversation. But separately from whether "Err on the side of forgiveness over punishment" is a good idea, it's hard to see how it belongs on the same list as things like "Try not to 'win' arguments using [...] tools that work similarly well whether you're right or wrong" and "[A]sk yourself what Bayesian evidence you have that you're not in those alternative worlds".
The difference is this. If your discourse algorithm lets people "win" arguments with tools that work equally well whether they're right or wrong, then your discourse gets the wrong answer (unless, by coincidence, the people who are best at winning are also the best at getting the right answer). If the interlocutors in your discourse don't ask themselves what Bayesian evidence they have that they're not in alternative worlds, then your discourse gets the wrong answer (if you happen to live in an alternative world).
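(Put formally, as my own gloss on the quoted guideline rather than anything on Bensinger's list: a persuasion tool that "wins" with the same probability whether the claim is true or false has a likelihood ratio of one, so the fact that it won is zero evidence, and an audience that updates on the win is updating on noise.)

```latex
\frac{P(\text{win} \mid \text{claim true})}{P(\text{win} \mid \text{claim false})} = 1
\quad\Longrightarrow\quad
P(\text{claim true} \mid \text{win}) = P(\text{claim true})
```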
If your discourse algorithm errs on the side of sticks over carrots (perhaps, emphasizing punishing others' bad epistemic conduct more than most people naturally do), then ... what? How, specifically, are rough-and-tumble spaces less "rational" [LW · GW], more prone to getting the wrong answer, such that a list of "Elements of Rationalist Discourse" has the authority to designate them as non-default?
I'm not saying that goodwill is bad, particularly. I totally believe that goodwill is a necessary part of many discourse algorithms that produce maps that reflect the territory, much like how kicking is a necessary part of many martial arts (but not boxing). It just seems like a bizarre thing to put in a list of guidelines for "rationalist discourse".
It's as if guidelines for designing "physicist motors" had a point saying, "Use more pistons than most engineers naturally do." It's not that pistons are bad, particularly. Lots of engine designs use pistons! It's just, the pistons are there specifically to convert force from expanding gas into rotational motion. I'm pretty pessimistic about the value of attempts to teach junior engineers to mimic the surface features of successful engines without teaching them how engines work, even if the former seems easier.
The example given for "[r]eward[ing] others' good epistemic conduct" is "updating". If your list of "Elements of Rationalist Discourse" is just trying to apply a toolbox [LW · GW] of directional nudges to improve the median political discussion on social media (where everyone is yelling and no one is thinking), then sure, directionally nudging people to directionally nudge people to look like they're updating probably is a directional improvement. It still seems awfully unambitious, compared to trying to teach the criteria by which we can tell it's an improvement. In some contexts (in-person interactions with someone I like or respect), I think I have the opposite problem, of being disposed to agree with the person I'm currently talking to, in a way that shortcuts the slow work of grappling with their arguments and doesn't stick after I'm not talking to them anymore; I look as if I'm "updating", but I haven't actually learned. Someone who thought "rationalist discourse" entailed "[r]eward[ing] others' good epistemic conduct (e.g., updating) more than most people naturally do" and sought to act on me accordingly would be making that problem worse.
A footnote on the "Goodwill" element elaborates:
Note that this doesn't require assuming everyone you talk to is honest or has good intentions.
It does have some overlap with the rule of thumb "as a very strong but defeasible default, carry on object-level discourse as if you were role-playing being on the same side [LW(p) · GW(p)] as the people who disagree with you".
But this seems to contradict the element of Non-Deception. If you're not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?
Other intellectual communities have a name for the behavior of role-playing being on the same side as people you disagree with: they call it "concern trolling", and they think it's a bad thing. Why is that? Are they just less rational than "us", the "rationalists"?
Here's what I think is going on. There's another aspect to the historical dominance of the debate algorithm. The tendency to rationalize new arguments for a fixed conclusion is only a bug if one's goal is to improve the conclusion. If the fixed conclusion was adopted for other reasons—notably, because one would benefit from other people believing it—then generating new arguments might help persuade those others. If persuading others is the real goal, then rationalization is not irrational; it's just dishonest. (And if one's concept of "honesty" is limited to not consciously making false statements [LW · GW], it might not even be dishonest.) Society benefits from using the debate algorithm to improve shared maps, but most individual debaters are mostly focused on getting their preferred beliefs onto the shared map.
That's why people don't like concern trolls. If my faction is trying to get Society to adopt beliefs that benefit our faction onto the shared map, someone who comes to us role-playing being on our side, but who is actually trying to stop us from adding our beliefs to the shared map just because they think our beliefs don't reflect the territory, isn't a friend; they're a double agent, an enemy pretending to be a friend, which is worse than the honest enemy we expect to face before the judge in the debate hall.
This vision of factions warring to make Society's shared map benefit themselves is pretty bleak. It's tempting to think the whole mess could be fixed by starting a new faction—the "rationalists"—that is solely dedicated to making Society's shared map reflect the territory: a culture of clear thinking, clear communication, and collaborative truth-seeking.
I don't think it's that simple. You do have interests, and if you can fool yourself into thinking that you don't, your competitors are unlikely to fall for it. Even if your claim to only want Society's shared map to reflect the territory were true—which it isn't—anyone could just say that.
I don't immediately have solutions on hand. [LW · GW] Just an intuition that, if there is any way of fixing this mess, it's going to involve clarifying conflicts rather than obfuscating them—looking for Pareto improvements, rather than pretending that everyone has the same utility function. That if something called "rationalism" is to have any value whatsoever, it's as the field of study that can do things like explain why it makes sense that people don't like concern trolling. Not as its own faction with its own weird internal social norms that call for concern trolling as a very strong but defeasible default.
But don't take my word for it.
Comments sorted by top scores.
comment by evand · 2023-02-26T17:57:28.643Z · LW(p) · GW(p)
"Physicist motors" makes little sense because that position won out so completely that the alternative is not readily available when we think about "motor design". But this was not always so! For a long time, wind mills and water wheels were based on intuition.
But in fact one can apply math and physics and take a "physicist motors" approach to motor design, which we see appearing in the 18th and 19th centuries. We see huge improvements in the efficiency of things like water wheels, the invention of gas thermodynamics, steam engines, and so on, playing a major role in the industrial revolution.
The difference is that motor performance is an easy target to measure and understand, and very closely related to what we actually care about (low Goodhart susceptibility). There are a bunch of parameters -- cost, efficiency, energy source, size, and so on -- but the number of parameters is fairly tractable. So it was very easy for the "physicist motor designers" to produce better motors, convince their customers the motors were better, and win out in the marketplace. (And no need for them to convince anyone who had contrary financial incentives.)
But "discourse" is a much more complex target, with extremely high dimensionality, and no easy way to simply win out in the market. So showing what a better approach looks like takes a huge amount of work and care, not only to develop it, but even to show that it's better and why.
If you want to find it, the "non-physicist motors" camp is still alive and well, living in the "free energy" niche on YouTube among other places.
↑ comment by M. Y. Zuo · 2023-02-26T23:54:33.318Z · LW(p) · GW(p)
If discourse has such high dimensionality, compared to motors, how can anyone be confident that any progress has been made at all?
Now, or ever?
↑ comment by sparr (sparr-risher) · 2023-02-27T11:38:30.607Z · LW(p) · GW(p)
You can describe metrics that you think align with success, which can be measured and compared in isolation. If many / most / all such metrics agree, then you've probably made progress on discourse as a whole.
↑ comment by tailcalled · 2023-02-28T16:19:13.461Z · LW(p) · GW(p)
Has anyone done this? Because I haven't seen this done.
↑ comment by M. Y. Zuo · 2023-02-27T12:28:46.229Z · LW(p) · GW(p)
Metrics are only useful for comparison if they're accepted by a sufficiently broad cross section of society, since nearly everyone engages in discourse.
Otherwise the incentive will be for the interlocutor, or groups of interlocutors, to pick a few dozen metrics they selectively prefer out of a possibility space of thousands or millions (?), which nearly everyone else will ignore.
The parent comment highlighted the fact that certain metrics measuring motor performance are universally, or near universally, agreed upon because they have a direct and obvious relation with the desired outcome. I can't think of any for discourse that could literally receive 99.XX% acceptance, unlike shaft horsepower or energy consumption.
↑ comment by jimmy · 2023-02-28T20:54:17.344Z · LW(p) · GW(p)
As someone working on designing better electric motors, I can tell you that "What exactly is this metric I'm trying to optimize for?" is a huge part of the job. I can get 30% more torque by increasing magnet strength, but it increases copper loss by 50%. Is that more better? I can drastically reduce vibration by skewing the stator but it will cost me a couple percent torque. Is that better or worse? There are a ton of things to trade between, and even if your end application is fairly well specified it's generally not specified well enough to remove all significant ambiguity in which choices are better.
It's true that there are some motor designs that are just better at everything (or everything one might "reasonably" care about), but that's true for discourse as well. For example, if you are literally just shrieking at each other, whatever you're trying to accomplish you can almost certainly accomplish it better by using words -- even if you're still going to scream those words.
The general rule is that if you suck relative to any nebulosity in where on the Pareto frontier you want to be, then there are "objective" gains to be made. In motors, simultaneous improvements in efficiency and power density will go far to create a "better" motor which will be widely recognized as such. In discourse, the ability to create shared understanding and cooperation will go far to create "better" discourse which will be widely regarded as such.
Optimal motors and discourse will look different in different contexts, getting it exactly right for your use case will always be nebulous, and there will always be weird edge cases and people deliberately optimizing for the wrong thing. But it's really not different in principle.
↑ comment by M. Y. Zuo · 2023-02-28T21:28:50.652Z · LW(p) · GW(p)
As someone working on designing better electric motors, I can tell you that "What exactly is this metric I'm trying to optimize for?" is a huge part of the job. I can get 30% more torque by increasing magnet strength, but it increases copper loss by 50%. Is that more better? I can drastically reduce vibration by skewing the stator but it will cost me a couple percent torque.
...
If you meant to reply to my comment, the point was that there is nothing for discourse that's accepted as widely as torque, magnet strength, copper loss, vibration, etc...
A sufficiently large supermajority of engineering departments on planet Earth can agree with very little effort on how to measure torque, for example. But even this scenario is superfluous because there are international standardization bodies that have literally resolved any conflict in interpretation for the fundamental metrics, like those for velocity, mass, momentum, angular momentum, magnetic strength, etc...
There's nothing even close to that for discourse.
↑ comment by jimmy · 2023-03-01T00:58:09.352Z · LW(p) · GW(p)
I hear what you're saying.
What I'm saying is that as someone whose day job is in large part about designing bleeding edge aerospace motors, I find that the distinction you're making falls apart pretty quickly in practice when I try to actually design and test a "physics motor". Even things as supposedly straightforward as "measuring torque" haven't been as straightforward as you'd expect. A few years ago we took one of our motors to a major aerospace company to test on their dyno and they measured 105% efficiency. The problem was in their torque measurements. We had to get clever in order to come up with better measurements.
Coincidentally, I have also put a ton of work into figuring out how to engineer discourse, so I also have experience in figuring out what needs to be measured, how it can be measured, and how you can know how far to trust your measurements to validate your theories. Without getting too far into it, you want to start out by calibrating against relatively concrete things like "Can I get this person, who has been saying they want to climb up this wall but are too afraid, to actually climb up the rock wall -- yes or no?". If you can do this reliably where others fail, you know you're doing something that's more effective than the baseline (even though that alone doesn't validate your specific explanation uniquely). It'd take a book to explain how to build from there, but at the end of the day if you can do concrete things that others cannot and you can teach it so that the people you teach can demonstrate the same things, then you're probably doing something with some validity to it. Probably.
I'm not saying that there's "no difference" between the process of optimizing discourse and the process of optimizing motors, but it is not nearly as black and white as you think. It's possible to lead yourself astray with confirmation bias in "discourse" related things, but you should see some of the shit engineers can convince themselves of without a shred of valid evidence. The cognitive skills, nebulosity of metric, and ease of coming up with trustable feedback are all very similar in my experience. More like "a darkish shade of gray" vs "a somewhat darker shade of gray".
Part of the confusion probably comes from the fact that what we see these days aren't "physics motors"; they're "engineering motors". An engineering motor is when someone who understands physics designs a motor and then engineers populate the world with surface level variations of this blueprint. By and large, my experience in both academic and professional engineering is that engineers struggle to understand and apply first principles and optimize anything outside of the context that was covered in their textbooks. It's true that within the confines of the textbook, things do get more "cut and dry", but it's an illusion that goes away when you look past industry practice to physics itself.
It's true that our "discourse engineering" department is in a sorry state of being and that the industry guidelines are not to be trusted, but it's not that we have literally nothing, and our relative lack is not because the subject is "too soft" to get a grip on. Motor design is hard to get a grip on too, when you're trying to tread even slightly new ground. The problem is that the principles based minds go into physics and sometimes engineering, but rarely psychology. In the few instances where I've seen bright minds approach "discourse" with an eye to verifiable feedback, they've found things to measure, been able to falsify their own predictions, and have ended up (mostly independently) coming to similar conclusions with demonstrably increased discourse abilities to show for it.
↑ comment by M. Y. Zuo · 2023-03-01T03:32:16.875Z · LW(p) · GW(p)
In the few instances where I've seen bright minds approach "discourse" with an eye to verifiable feedback, they've found things to measure, been able to falsify their own predictions, and have ended up (mostly independently) coming to similar conclusions with demonstrably increased discourse abilities to show for it.
Can you link to some examples?
↑ comment by jimmy · 2023-03-01T18:29:01.467Z · LW(p) · GW(p)
Yes, but it's worth pointing out what you can actually expect to get from it, and how easily. Most of what I'm talking about is from personal interactions, and the stuff that's online isn't like "Oh, the science is unanimous, unarguable and unambiguous" -- because we're talking about the equivalent of "physics motors" not "engineering motors". Even if our aerospace lab dyno results were publicly available you'd be right not to trust them at face value. If you have a physics degree then saying "Here's the reasoning, here are the computer simulations and their assumptions, and here's what our tests have shown so far" is easy. If you can't distinguish valid physics from "free energy" kookiness, then even though it's demonstrable and has been demonstrated to those with a good understanding of motor testing validity who have been following this stuff, it's not necessarily trivial to set up a sufficiently legible demonstration for someone who hasn't. It's real, we can get into how I know, but it might not be as easy as you'd like.
The thing that proved to me beyond a shadow of a doubt that there exist bright feedback oriented minds that have developed demonstrable abilities involved talking to one over and over and witnessing the demonstrations first hand as well as the feedback cycles. This guy used to take paying clients for some specific issue they wanted resolved (e.g. "fear of heights"), set concrete testable goals (e.g. "If I climb this specific wall, I will consider our work to have been successful"), and then track his success rate over time and as he changed his methods. He used to rack his brain about what could be causing the behavior he'd see in his failures, come up with an insight that helps to explain, play with it in "role play" until he could anticipate what the likely reactions would be and how to deal with them, and then go test it out with actual clients. And then iterate.
On the "natural discourse, not obviously connected to deliberate cultivation of skill" side, the overarching trajectory of our interactions is itself pretty exceptional. I started out kinda talking shit and dismissing his ideas in a way that would have pissed off pretty much anyone, and he was able to turn that around and end up becoming someone I respect more than just about anyone. On the "clearly the result of iterated feedback, but diverging from natural discourse" side there's quite a bit, but perhaps the best example is when I tried out his simple protocol for dealing with internal conflicts on physical pain, and it completely changed how I relate to pain to this day. I couldn't imagine how it could possibly work "because the pain would still be there" so I just did it to see what would happen, and it took about two minutes to go from "I can't focus at all because this shit hurts" to "It literally does not bother me at all, despite feeling the exact same". Having that shift of experience, and not even noticing the change as it happened.... was weird.
From there, it was mostly just recognizing the patterns, knowing where to look, and knowing what isn't actually an extraordinary claim.
This guy does have some stuff online including a description of that protocol and some transcripts, but again, my first reaction to his writings was to be openly dismissive of him so I'm not sure how much it'll help. And the transcripts are from quite early in his process of figuring things out so it's a better example of watching the mind work than getting to look at well supported and broadly applicable conclusions. Anyway, the first of his blog posts explaining that protocol is here, and other stuff can be found on the same site.
Another example that stands out to me as exceptionally clear, concise, and concrete (but pretty far from "natural discourse" towards "mind hack fuckery") is this demonstration by Steve Andreas of helping a woman get rid of her phobia. In particular, look at the woman's response and Steve's response to these responses at 0:39, 5:47, 6:12, 6:22, 6:26, and 7:44. The 25 year follow up is neat too.
↑ comment by evand · 2023-03-02T03:03:40.882Z · LW(p) · GW(p)
Metrics are only useful for comparison if they're accepted by a sufficiently broad cross section of society, since nearly everyone engages in discourse.
I note that "sufficiently broad" might mean something like "most of LessWrong users" or "most people attending this [set of] meetups". Just as communication is targeted at a particular audience, discourse norms are (presumably) intended for a specific context. That context probably includes things like intended users, audience, goals, and so on. I doubt "rationalist discourse" norms will align well with "televised political debate discourse" norms any time soon.
Nonetheless, I think we can discuss, measure, and improve rationalist discourse norms; and I don't think we should concern ourselves overly much with how well those norms would work in a presidential debate or a TV ad. I suspect there are still norms that apply very broadly, with broad agreement -- but those mostly aren't the ones we're talking about here on LessWrong.
comment by Steven Byrnes (steve2152) · 2023-02-26T19:28:58.240Z · LW(p) · GW(p)
I think a disanalogy here is that all motors do in fact follow the laws of physics (and convert electricity into rotation, otherwise we wouldn’t call it a motor). Whereas not all discourse systematically leads people towards true beliefs. So rationalist discourse is a strict subset of discourse in a way that physicist motors is not a strict subset of motors.
In general, I agree that we should be open to the possibility that there exist types of discourse that systematically lead people towards true beliefs, but that look very different from “rationalist discourse” as described by Duncan & Rob. That said, I think I’m less impressed by the truth-finding properties of debates / trials than you are. Like, in both legal trials and high-school debate, the pop-culture stereotype is that the side with a better lawyer / debater will win, not the side that is “correct”. But I don’t really know.
I also agree that it’s worth distinguishing “things that seem empirically to lead to truth-finding for normal people in practice” versus “indisputable timeless laws of truth-finding”.
I was reading “Reward others' good epistemic conduct (e.g., updating) more than most people naturally do.” as like “If somebody else says ‘Hmm, I guess I overstated that’, then I should respond with maybe ‘OK cool, we’re making progress’ and definitely not ‘Ha! So you admit you were wrong! Pfffft!’” If that was indeed the intended meaning, then that doesn’t really seem to be the opposite of what you called “the opposite problem”, I think. But I dunno, I didn’t re-read the original.
comment by Raemon · 2023-02-27T23:38:36.513Z · LW(p) · GW(p)
It looks like this post is resting on defining "rationalist" as "one who studies the laws of rationality", as opposed to "someone training to think rationally", but, hasn't really acknowledged that it's using this definition (when I think Duncan and Robby's posts seem pointed more at the latter definition)
(Actually, looking more, I think this post sort of equivocates between the two, without noting that it's done so).
I'm not 100% sure I've parsed this right, but, this looks at first glance like the sort of language trick that you (Zack) are often (rightfully) annoyed at.
(I think it's a reasonable conversational move to point out someone else's definition of a word isn't the only valid definition, and pointing out their frame isn't the only valid frame. But if you're doing that it's better to do that explicitly)
↑ comment by Zack_M_Davis · 2023-03-02T06:42:19.350Z · LW(p) · GW(p)
I'll agree that the "physicist motors" analogy in particular rests on the "one who studies" definition, although I think a lot of the points I make in this essay don't particularly depend on the analogy and could easily be written up separately.
I guess you could view the "foreign policy" motivating this post as being driven by two motives: first, I'd rather not waste precious time (in the year 2023, when a lot of us have more important things to do) fighting over the "rationalist" brand name; if someone else who also cares about thinking well thinks that I'm going about everything all wrong, I think it's fine that we just have our own separate dojos, Archipelago-style [LW · GW]. That's why the post emphasizes that there are many types of motors and many types of martial arts.
But secondly, insofar as I'm unfortunately stuck fighting over the brand name anyway because words mean what they mean in practice [LW(p) · GW(p)], I really do think that the thing that made middle Yudkowsky (circa 2005–2013) world-changingly valuable was his explanation of there being objective laws of thought (as exemplified by the "Technical Explanation", "The Bottom Line" [LW · GW], or "The Second Law of Thermodynamics, and Engines of Cognition" [LW · GW]), so when I see the brand name being used to market a particular set of discourse norms without a clear explanation of how these norms are derived from the law, that bothers me enough to quickly write an essay or two about it, even though this is probably not a great use of my time or community-drama-instigating budgets in the year 2023.
↑ comment by Rob Bensinger (RobbBB) · 2023-03-04T12:59:23.639Z · LW(p) · GW(p)
so when I see the brand name being used to market a particular set of discourse norms without a clear explanation of how these norms are derived from the law, that bothers me enough to quickly write an essay or two about it
Seems great to me! I share your intuition that Goodwill seems a bit odd to include. I think it's right to push back on proposed norms like these and talk about how justified they are, and I hope my list can be the start of a conversation like that rather than the end.
I do have an intuition that Goodwill, or something similar to Goodwill, plays an important role in the vast majority of human discourse that reliably produces truth. But I'm not sure why; if I knew very crisply what was going on here, maybe I could reduce it to other rules that are simpler and more universal.
↑ comment by Raemon · 2023-03-03T02:40:19.046Z · LW(p) · GW(p)
To be clear, I endorse you doing that, but I would like you to do it without sleight-of-hand-frame-control.
(I do agree you could probably have written the second half of the post without relying on the first half's structure, but, that's not what you did)
I have on my todo list to write up a post that's like "hey guys here is an explanation of Frame Control/Manipulation that is more rigorous and more neutrally worded than Aella's post about it, and here's why I think we should have a habit of noticing it.".
And then, maybe afterwards, a post going: "Hey, I think 'notice your own frame control, and be a bit careful about it' should graduate to a thing you are obligated to learn, as a good LW citizen. What do people think of that?", and get some sense of how The People think about it. And, depending on how that goes, maybe it becomes an actual LW norm.
I haven't done that and it doesn't seem fair to rush it or assume how that'll play out, so, currently this is more of a suggestion that I think you should probably agree to on your own terms rather than something I'm enforcing as a moderator, but, flagging that that's a longer term agenda of mine.
↑ comment by Zack_M_Davis · 2023-03-04T05:29:51.495Z · LW(p) · GW(p)
In your view, is there an important difference between frame control, and the author having a particular frame that they use in a particular essay?
I'm proud of this blog post. I think it's a good blog post that clearly explains my ideas in a way that's engaging to read. If someone wants to talk about my motivations for writing this post and why I chose the analogies I did, I'm happy to have that discussion in the comment section, like we're doing now.
But it seems to me that a blog post that talked about my objections to Bensinger's Goodwill element, without first explaining the "motors" and "martial arts" analogies as illustrations of how I'm thinking about the topic, would be worse than this post, primarily because it would be less faithful to how I'm thinking about the topic, but also because it would just be less interesting to read.
If someone thinks my choice of analogies (or "frames"; I'm not sure if there's a specific definition of "frame" I'm supposed to be familiar with in this context) is misleading for some specific reason, they're welcome to argue that in the comment section. So far, you have not persuaded me that I should have made any different writing choices.
a thing you are obligated to learn, as a good LW citizen
I mean, it's your website. (Or maybe it's Oli's or Vaniver's website? I'm not keeping track of the power structure.) If you think you can obligate your users to learn something in order to maintain some "good citizen" status, that's definitely a thing you can try to do.
For myself, I use this website because I'm interested in the study of human rationality (as that study was articulated by Yudkowsky in 2007–2009), and this seems like a pretty good website for posting my thoughts and reading other people's thoughts about that topic, and also because I have a long history here (and therefore a large investment in the local jargon, &c.).
I consider myself to have obligations under the moral law to think and write clearly. I do not consider myself to have any obligations whatsoever "as a good LW citizen".
a post that's like "hey guys here is an explanation of Frame Control/Manipulation that is more rigorous and more neutrally worded than Aella's post about it, and here's why I think we should have a habit of noticing it.".
That sounds like a good blog post! I eagerly look forward to reading it.
↑ comment by Raemon · 2023-03-04T08:13:14.106Z · LW(p) · GW(p)
In your view, is there an important difference between frame control, and the author having a particular frame that they use in a particular essay?
Yep!
Distinctions in Frame Control
I'm still working through this, which is part of why the post isn't written up yet. I'm also not sure if I'm actually going to use the phrase 'frame control' because it might just be too easy to weaponize in a way that makes it more unhelpful than helpful. (i.e. the concept I have in mind here is something it makes sense to have the norm of 'notice when you do it, and be careful with it' about, not 'don't do it ever')
But, here are my current thoughts on how I currently carve up the space here:
- having a frame, at all [i.e. set of ways to conceptualize a problem or solution-space or what questions to ask [LW · GW]]
- having a strongly held/presented frame, such as by speaking confidently/authoritatively (which many people who don't hold their own frames very strongly sometimes find disorienting)
- having an insistently held frame (where, when someone tries to say/imply 'hey, my frame is X', you're like 'no, the frame is Y', and if they're like 'no, it's X' you keep insisting on Y)
- frame manipulation (where you change someone else's frame in a subtle way without them noticing, i.e. presenting a set of assumptions in a way that aren't natural to question, or equivocating on definitions of words in ways that change what sort of questions to think about without people noticing you've done so)
#2, #3 and #4 can be mixed and matched.
The places where people tend to use the word 'frame control' most often refer to #3 and #4, frame-insistence and frame-manipulation. I'm a bit confused about how to think about 'strong frames' – I think there's nothing inherently wrong with them, but if Alice is 'weaker willed' than Bob, she may end up adopting his frame in ways that subtly hurt her. This isn't that different from, like, some people being physically bigger and more likely to accidentally hurt a smaller person. I wouldn't want society to punish people for happening-to-be-big, but it feels useful to at least notice 'bigness privilege' sometimes.
That said, strongly held frames that are also manipulative or insistent can be pretty hard for many people to be resilient against, and I think it's worth noticing that.
Re: this particular post
This post felt like (mild) frame manipulation to me, here:
But there's a reason for that: a list of guidelines has the wrong type signature for being "the basics". The actual basics are the principles of rationality one would appeal to in order to explain which guidelines are a good idea: principles like how evidence is the systematic correlation between possible states of your observations and possible states of reality [LW · GW], how you need evidence to locate the correct hypothesis in the space of possibilities [LW · GW], how the quality of your conclusion can only be improved by arguments that have the power to change that conclusion [LW · GW].
Contemplating these basics, it should be clear that there's just not going to be anything like a unique style of "rationalist discourse", any more than there is a unique "physicist motor."
Where I think this objection only really makes sense if you're considering "rationalist discourse" to be "rationalist-as-scientist-studying-laws", instead of "applied-rationalist".
A thing that would have fixed it is noting "Duncan seems to be using a different definition of 'rationalist' and 'basics' here. But I think those definitions are less useful because [reasons]."
(Note: I don't think you actually need the 'frame' frame, or the 'frame control' frame, to object to your post on these grounds. Equivocation / changing definitions also just seems, like, 'deceptive')
I do want to flag: I also think Duncan's 'The Basics of Rationalist Discourse' is also somewhat frame-manipulative while also being somewhat strongly held, by my definition here. I went out of my way to try and counter-the-frame-control in my curation notice [LW(p) · GW(p)]. (I have some complicated thoughts on whether this was fine, whether it was more or less fine than my complaint here, but it'd take awhile more to put them into legible form)
↑ comment by Zack_M_Davis · 2023-03-04T17:32:54.837Z · LW(p) · GW(p)
I'm definitely doing #2. I can see your case that the paragraph starting with "But there's a reason for that" is doing #4. But ... I'm not convinced that this kind of "frame manipulation" is particularly bad?
If someone is unhappy with the post's attempt to "grab the frame" (by acting as if my conception of rationalist is the correct one), I'm happy to explain why I did that in the comments [LW(p) · GW(p)]. Do I have to disclaim it in the post? That just seems like it would be worse writing.
↑ comment by Raemon · 2023-03-04T17:47:38.411Z · LW(p) · GW(p)
I think in isolation it wouldn't be particularly bad, no. I think it'd rise to the level of 'definitely better to avoid' (given [probably?] shared assumptions about truthseeking and honesty), but, it's within the set of mistakes I think are fairly normal to make.
I feel like it is part of a broader pattern that (I think probably) adds up to something noticeably bad, but it'd take me awhile of active effort to find all the things that felt off to me and figure out if I endorse criticizing it as a whole.
(So, like, for now I'm not trying to make a strong argument that there's a particular thing that's wrong, but, like, I think you have enough self-knowledge to notice 'yeah something is off in a sticky way here' and figure it out yourself. ((But, as previously stated, I don't have a strong belief that this makes sense to be your priority atm)))
↑ comment by Raemon · 2023-03-04T17:28:13.965Z · LW(p) · GW(p)
A thing that would have fixed it is noting "Duncan seems to be using a different definition of 'rationalist' and 'basics' here. But I think those definitions are less useful because [reasons]."
Oh, also to clarify, in my current view, you don’t need to tack on the ‘because [reasons]’ to avoid it being frame manipulation. Simply noting that you think it makes more sense to use a different definition is enough to dispel the sleight of hand feeling. (Although listing reasons may make it more persuasive that people use this definition rather than another one)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-28T00:50:51.807Z · LW(p) · GW(p)
Are these two things not intimately connected? Should we not study the laws of rationality in the course of training to think rationally (indeed, in the course of determining what it means to think rationally, and determining how to train to think rationally)? And what is the point of studying the laws of rationality, if not to apply them?
↑ comment by Raemon · 2023-03-03T02:44:07.428Z · LW(p) · GW(p)
I certainly do think they're connected, but, they are still distinct concepts, and I think part of the reason Zack is focused on "rationalists as students of the laws of rationality" vs "applicants thereof" is that a community of law-studiers should behave differently. (as I understand it, pure math and applied math are different and people make a big deal about it??)
(and, to be clear, I think this is a pretty interesting consideration I hadn't been thinking of lately. I appreciate Zack bringing it up, just want him to not be sleight-of-handy about it)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-03T04:30:30.937Z · LW(p) · GW(p)
Hmm, but I don’t think that rationality is actually analogous to math, in this respect. I think that the intimate connection between learning and applying rationality is, actually, a core property of rationality as a domain, as distinct from domains like math. Any disconnect between study and application threatens to undermine both!
↑ comment by Vladimir_Nesov · 2023-02-28T02:09:00.306Z · LW(p) · GW(p)
And what is the point of studying the laws of rationality, if not to apply them?
The beauty of the subject!
comment by Vladimir_Nesov · 2023-02-26T07:21:18.982Z · LW(p) · GW(p)
Inflation of "rationality" [LW · GW] needs more specific anchors to combat it. As it stands, any purpose that looks good for someone (especially if it's actually quite good) stands a risk of getting enshrined into a "principle of rationality", such that following that principle advances the purpose, while dismissing the principle starts sounding "irrational", a norm violation if there is one in a garden of rationality, worth discouraging [LW · GW].[1]
I think Scott's asymmetric weapons framing gestures at the concept/problem more robustly, while Eliezer's cognitive algorithms [LW · GW] framing gives practical course-correcting advice:
Similarly, a rationalist isn't just somebody who respects the Truth.
All too many people respect the Truth.
A rationalist is somebody who respects the processes of finding truth.
At the moment, LW has accumulated enough anti-epistemology directed at passing good and sensible things for rationality that a post like this gets rejected on the level of general impression. I think a post focused on explaining the problem with unreflectively rejecting posts like this, or on stratifying meaningful senses of "rationality" as distinct from all things good and true, without simultaneously relying on this being already understood, stands a better chance of unclogging this obstruction.
Like this post, voicing disapproval of the good principles of rationality such as "goodwill", right in its sanctum, the nerve! Downvote, for rationality! When I started writing this comment, the post's karma was at a shocking 0 with several votes, but it got a bit better since then. The "rationalist [LW · GW] discourse [LW · GW]" posts are at around 200. ↩︎
comment by cubefox · 2023-02-26T17:50:42.340Z · LW(p) · GW(p)
If your discourse algorithm errs on the side of sticks over carrots (perhaps, emphasizing punishing others' bad epistemic conduct more than most people naturally do), then ... what? How, specifically, are rough-and-tumble spaces less "rational", more prone to getting the wrong answer, such that a list of "Elements of Rationalist Discourse" has the authority to designate them as non-default?
In my mind the Goodwill norm has a straightforward justification: Absent goodwill, most people are prone to view disagreement as some sort of personal hostility, similar to an insult. This encourages us to view their arguments as soldiers, rather than as exchange of evidence. Which leads to a mind-killing [LW · GW] effect, i.e. it makes us irrational.
To be sure, I think that some groups of people, particularly those on the autism spectrum, do not have a lot of this "hostility bias". So the Goodwill norm is not very applicable on platforms where many of those people are. Goodwill is likely a lot more important on Twitter than on Hacker News or Less Wrong.
In general, norms which counter the effect of common biases seem to be no less about rationality than norms which have to do more directly with probability or decision theory.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-26T19:26:06.470Z · LW(p) · GW(p)
In my mind the Goodwill norm has a straightforward justification: Absent goodwill, most people are prone to view disagreement as some sort of personal hostility, similar to an insult. This encourages us to view their arguments as soldiers, rather than as exchange of evidence. Which leads to a mind-killing effect, i.e. it makes us irrational.
This seems to imply a norm of “don’t assume hostility”, rather than a norm of “express good will”.
The key difference is that the former moves you closer to the truth, by negating a bias, whereas the latter is, at best, truth-neutral (and quite likely truth-distortionary).
↑ comment by cubefox · 2023-02-26T21:17:40.889Z · LW(p) · GW(p)
Showing goodwill is better than just not assuming hostility, since it also makes your counterpart less likely to assume hostility themselves.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-26T22:26:51.642Z · LW(p) · GW(p)
But showing goodwill is worse than just not assuming hostility, because it requires that you communicate something other than whatever facts/views/models/etc. you are trying to communicate—and that’s the best case. The more common, and much worse, case is when your communication of goodwill is basically a lie, or involves other distortions of truth.
Consider the quoted guideline:
Goodwill. Reward others’ good epistemic conduct (e.g., updating) more than most people naturally do. Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility, unless someone has explicitly set aside a weirder or more rough-and-tumble space.
But why should we err at all? Should we not, rather, use as many carrots and sticks as is optimal? Should we not forgive only and precisely when forgiveness is warranted, and punish only and precisely when (and in the way that) punishment is warranted? Civility is a fine and good default, but what if incivility is called for—should we nonetheless err on the side of civility? Why?
The justification given is “empirically, this works”. Well, we may believe that or disbelieve it, but either way, the core of the approach here is “distort truth, in the service of truth”.
That alone makes the justification not “straightforward” at all.
↑ comment by dxu · 2023-02-27T00:09:54.603Z · LW(p) · GW(p)
But why should we err at all?
Because we cannot choose not to err. [? · GW] So, given that we will err, and given that we err with asymmetric frequency in a particular direction—(and given that errors in that direction also tend to have asymmetrically worse consequences)—then naturally, better to compensate for that with a push in the opposite direction, than to compensate for it not at all.
Straightforward enough, in my estimation!
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-27T00:37:45.246Z · LW(p) · GW(p)
The correct approach, it seems to me, is to determine in which direction we are in fact erring, and to rectify that error by adjusting in the opposite direction, as much as is necessary in order to counteract the error (and no more—for that, too, would be an error).
But you seem to be suggesting, first, (a) surrendering to inevitable error, not even trying to not err, and (b) correcting, not by precisely as much as is necessary (or some attempt at approximating that amount), but simply by… some arbitrary amount (trusting that it’s enough? trusting that it’s not too much?).
This would seem to be a poor principle to elevate to the rank of “guideline for rational discourse”.
Replies from: dxu↑ comment by dxu · 2023-02-27T02:40:03.789Z · LW(p) · GW(p)
But you seem to be suggesting first (a) surrendering to inevitable error, not even trying to not err
Certainly not. Recalibrating one's intuitions to better reflect reality is an admirable aim, and one in which we should all be engaged. However, as far as norms of discourse go, there is more to the matter than that: different people will unavoidably have differences of intuition regarding their interlocutor's goodwill, with certain individuals quicker to draw the line than others. How best to participate in (object-level) discourse in spite of these differences of (meta-level) opinion, without having to arbitrate that meta-level disagreement from scratch each time, is its own, separate question.
(b) correcting, not by precisely as much as is necessary (or some attempt at approximating that amount), but simply by… some arbitrary amount (trusting that it’s enough? trusting that it’s not too much?).
One of the consequences of being the type of agent that errs at all, is that estimating the precise magnitude of your error, and hence the precise size of corrective factor to apply, is unlikely to be possible.
This does not, however, mean that we are left in the dark, with no recourse but to correct by an arbitrary amount—for two reasons:
- One of the further consequences of being the type of agent that errs in a predictable direction is that whatever corrective factor you generate, by dint of having been produced by the same fallible intuitions that produced the initial misjudgment, is more likely to be too little than too much. And here I do in fact submit that humans are, by default, more likely to judge themselves too persecuted than not persecuted enough.
- Two errors of equal magnitude but opposite sign do not have the same degree of negative consequence. Behaving as if your interlocutor is engaged in cooperative truthseeking, when in fact they are not, is at worst likely to lead to a waste of time and effort attempting to persuade someone who cannot be persuaded. Conversely, misidentifying a cooperative interlocutor as some kind of bad actor will, at minimum, preemptively kill a potentially fruitful discussion, while also carrying a nonzero risk of alienating your interlocutor and any observers.
Given these two observations—that we err in a predictable direction, and that the consequences from opposing errors are not of equal magnitude—it becomes clear that if we, in our attempt to apply our corrective factor, were to (God forbid) miss the mark, it would be better to miss by way of overshooting than by undershooting. This then directly leads to the discourse norm of assuming goodwill.
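A minimal numerical sketch of the second observation (the probability and cost numbers here are illustrative assumptions, not figures from this thread): even when hostility is the more likely diagnosis, the asymmetry in consequences can make extending goodwill the cheaper default.

```python
# Illustrative assumption: an apparently-hostile interlocutor is in fact a
# cooperative truthseeker with probability p; extending goodwill to a genuine
# bad actor mostly wastes time, while rebuffing a genuine cooperator kills a
# potentially fruitful discussion and risks alienating them and onlookers.
p = 0.3                        # chance the "hostile-seeming" person is cooperative
COST_WASTED_TIME = 1.0         # goodwill extended to an actual bad actor
COST_KILLED_DISCUSSION = 10.0  # hostility shown to an actual cooperator

expected_cost_of_goodwill = (1 - p) * COST_WASTED_TIME    # 0.7
expected_cost_of_hostility = p * COST_KILLED_DISCUSSION   # 3.0

print(expected_cost_of_goodwill, expected_cost_of_hostility)
# With these assumed stakes, defaulting to goodwill is cheaper even though
# "this person is hostile" is the more probable hypothesis.
```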
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-27T02:50:19.262Z · LW(p) · GW(p)
How best to participate in discourse in spite of these differences of (meta-level) opinion, without having to arbitrate that meta-level disagreement from scratch each time, is its own, separate question.
Sure, but why should this question have an answer like “we just can’t not err, or even reduce how much we err”? Why would we expect this?
Also (and perhaps more importantly):
different people will predictably have differences of intuition regarding their interlocutor’s goodwill
Hold on, hold on. How did we get to “intuitions regarding their interlocutor’s goodwill”?
We started at “some people perceive disagreements as hostility”. This is true, some (indeed, many) people do this. The solution to this problem on an individual level is “don’t do that”. The solution to this problem on a social level is “have norms that firmly oppose doing that”.
Why are we suddenly having to have “goodwill”, to try to divine how much “goodwill” other people have, etc.? We identified a problem and then we identified the solution. Seems like we’re done.
One of the consequences of being the type of agent that errs at all, is that estimating the precise magnitude of your error, and hence the precise size of corrective factor to apply, is unlikely to be possible.
How is that a consequence of “being the type of agent that errs at all”? I don’t see it—please elaborate.
And here I do in fact submit that humans are, by default, more likely to judge themselves too persecuted than not persecuted enough.
Yes, I agree. The solution to this is… as I said above. Stop perceiving disagreement as hostility; discourage others from doing so.
The rest of your comment, from that point, seems to continue conflating the perception of others’ behavior with one’s own behavior. I think it would be good to disentangle these two things.
Replies from: dxu↑ comment by dxu · 2023-02-27T03:15:06.157Z · LW(p) · GW(p)
It seems possible at this point that some of our disagreement may stem from a difference in word usage.
When I say "goodwill" (or, more accurately, when I read "goodwill" in the context of Rob Bensinger's original post), what I take it to mean is something along the lines of "being (at least in the context of this conversation, and possibly also in the broader context of participation on LW as a whole) interested in figuring out true things, and having that as a primary motivator during discussions".
The alternative to this (of which your use of "hostility" appears to be a special case) is any situation in which that is not the case, i.e. someone is participating in the discussion with some other aim than arriving at truth. Possible alternative motivations here are too numerous to list comprehensively, but (broadly speaking) include classes such as: wanting confirmation for their existing beliefs, wanting to assert the status of some individual or group, wanting to lower the status of some individual or group, etc.
(That last case seems possibly to map to your use of "hostility", where specifically the individual or group in question includes one of the discussion's participants.)
This being the case, my response to what you say in your comment, e.g. here
We started at “some people perceive disagreements as hostility”. This is true, some (indeed, many) people do this. The solution to this problem on an individual level is “don’t do that”. The solution to this problem on a social level is “have norms that firmly oppose doing that”.
and here
Yes, I agree. The solution to this is… as I said above. Stop perceiving disagreement as hostility; discourage others from doing so.
is essentially that I agree, but that I don't see how (on your view) Rob's proposed norm of "assuming goodwill" isn't just a restatement of your "don't perceive disagreements as hostility". (Perhaps you think the former generalizes too much compared to the latter, and take issue with some of the edge cases?)
In any case, I think it'd be beneficial to know where and how exactly your usage/perception of these terms differ, and how those differences concretely lead to our disagreement about Rob's proposed norm.
↑ comment by Rob Bensinger (RobbBB) · 2023-03-04T13:06:50.988Z · LW(p) · GW(p)
But why should we err at all? Should we not, rather, use as many carrots and sticks as is optimal?
"Err on the side of X" here doesn't mean "prefer erring over optimality"; it means "prefer errors in direction X over errors in the other direction". This is still vague, since it doesn't say how much to care about this difference; but it's not trivial advice (or trivially mistaken).
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-04T19:31:39.861Z · LW(p) · GW(p)
Yes, I know what the expression means. But that doesn’t answer the objection, which is “why are we concerning ourselves with the direction of the errors, when our objective should be to not have errors?”
The actual answer has already been given elsethread (a situation where changing the sign of the error is substantially easier than reducing the magnitude of the error, plus a payoff matrix that is asymmetric w.r.t. the direction of error).
↑ comment by jimmy · 2023-02-27T06:50:09.897Z · LW(p) · GW(p)
I agree that "Err on the side of ____" is technically worse than "Try to not err", but I'd argue that it's just a somewhat sloppy and non-literal way of conveying a valid point.
The way I'd put it, if I were being more careful, is: "Insufficient and excess stick are both problems; however, there is a natural tendency toward too much stick. Additionally, excess stick quickly begets excess stick, and if you allow things to go supercritical you can quickly destroy the whole thing, so before acting on an impulse to use the stick, make really sure that you aren't just succumbing to this bias."
Or in other words "Your sights are likely out of alignment, so aim higher than seems appropriate in order to not aim too low in expectation".
Replies from: Measure↑ comment by Measure · 2023-02-27T20:52:04.599Z · LW(p) · GW(p)
Sometimes "try not to err" will result in predictably worse outcomes than "try to minimize the damage your erring causes, even if that means you are more likely or even certain to err".
Replies from: jimmy, SaidAchmiz↑ comment by jimmy · 2023-02-28T18:32:20.145Z · LW(p) · GW(p)
Agreed. You want to "try not to err" in expected value, not in "inches from the bullseye". Sometimes this means you try to put the center of your distribution offset from the bullseye.
I didn't see it as the primary point of contention so I didn't mention it, but you're right, it's probably worth pointing out explicitly.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-27T23:22:19.117Z · LW(p) · GW(p)
What are some examples of this?
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-02-28T00:36:41.459Z · LW(p) · GW(p)
Ideally, I would arrive at my workplace exactly when my shift starts (zero error, zero loss). But if I'm ten minutes late, I get in trouble with my boss (small error, large loss), and if I'm ten minutes early, I read a magazine in the breakroom (small error, small loss). Therefore, I should "err on the side of" leaving early.
That is, the "err on the side of" idiom arises from the conflation of different but related optimization problems. The correct solution to the worker's full problem (taking into account the asymmetrical costs of arriving early or late) is an incorrect solution to the "being (exactly) on time" problem.
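A small numerical sketch of that conflation (the commute-noise distribution and loss numbers are made up for illustration): once lateness costs much more than earliness and the commute is noisy, the planned arrival that minimizes expected loss is offset from the shift start, even though that makes being "wrong" about exact on-time arrival a near-certainty.

```python
import random

random.seed(0)

# Assumed numbers: arriving late costs 10 units (trouble with the boss),
# arriving early costs 1 unit (reading a magazine in the breakroom), and the
# commute varies by a few minutes from day to day.
LOSS_LATE, LOSS_EARLY = 10.0, 1.0

def expected_loss(planned_minutes_early, trials=100_000):
    total = 0.0
    for _ in range(trials):
        delay = random.gauss(0, 5)                # random commute variation (minutes)
        lateness = delay - planned_minutes_early  # > 0 means arriving late
        total += LOSS_LATE if lateness > 0 else LOSS_EARLY
    return total / trials

for slack in (0, 5, 10, 15):
    print(f"plan to arrive {slack:>2} min early -> expected loss ~{expected_loss(slack):.2f}")
# Aiming for exactly on time (slack = 0) is the worst of these plans; with a
# flat cost for being early, every amount of slack here beats it.
```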
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-28T00:49:18.397Z · LW(p) · GW(p)
I see, thanks.
Do you think that this dynamic appears in the problem which is the subject of the top-level discussion here?
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-02-28T01:12:52.038Z · LW(p) · GW(p)
Yes. If my comments are too mean, I might start an unpleasant and unproductive flame war (small error, large loss). If my comments are too nice, they might be slightly less clear than a less nice comment, but nothing dramatically bad like a flame war happens (small error, small loss). Therefore I (arguably) should "err on the side of carrots over sticks."
If "Elements of Rationalist Discourse"'s Goodwill item had explicitly laid out the logic of asymmetric costs rather than taking "err on the side of" as a primitive, I'd still be skeptical, but this post's discussion of it wouldn't be written the same way (and it's possible that I might not have bothered to write the post at all).
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-28T05:25:09.270Z · LW(p) · GW(p)
Doesn’t this assume that the cost of writing comments does not vary with “niceness” of the comments (e.g., because it is zero)?
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-02-28T05:59:41.248Z · LW(p) · GW(p)
That's one reason someone might object to the asymmetrical-costs argument for niceness, but I'm skeptical that it's the real reason [LW(p) · GW(p)].
I think what's more typically going on is that there's a conflict between people who want to enforce politeness norms and people who want the freedom to be blunt. In venues where the polite faction has the upper hand (by karma voting weight, moderator seats, &c.), blunt people have an incentive to dishonestly claim that writing polite comments is more expensive than it actually is, because polite voters and moderators might be partially swayed by that argument, whereas the polite people would not be sympathetic if the blunt people said what they were actually thinking.
Replies from: SaidAchmiz, jimmy↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-28T07:52:34.787Z · LW(p) · GW(p)
In venues where the polite faction has the upper hand (by karma voting weight, moderator seats, &c.), blunt people have an incentive to dishonestly claim that writing polite comments is more expensive than it actually is, because polite voters and moderators might be partially swayed by that argument, whereas the polite people would not be sympathetic if the blunt people said what they were actually thinking.
Of course this is true, but that doesn’t actually mean that there isn’t, in fact, a cost differential; it only means that claims of such a differential constitute weaker evidence for it than they would in the absence of such an incentive.
And there are good reasons to believe that the cost differential exists. We may presumably discount (alleged) evidence from introspection, as it’s unreliable for two reasons (unreliability of introspection in the presence of incentives for self-deception; unreliability of reports of introspection, in the presence of incentives for deception). But that’s not all we’ve got. For example, in the linked comment, you write:
Like, it didn’t actually take me very much time to generate the phrase “accountability for alleged harm from simplifications” rather than “pandering to idiots”
This, in the context of disclaiming (indeed, of arguing against) a “cost differential” argument! And yet even there, you didn’t go so far as to claim that it took you zero time to come up with the “nice” wording. (And no surprise there; that would’ve been a quite implausible claim!) Surely these things add up…?
But none of that constitutes the strongest reply to your argument, which is this:
Why do you assume that the only cost is to the comment-writer?
At the very least, “accountability for alleged harm from simplifications” is longer, and more complex, than “pandering to idiots”. It takes more time and effort to read and understand the former than the latter! And the complexity of the former means more chance of misunderstanding. (It’s a persistent delusion, in these parts, that taking more words to explain something, and choosing words more precisely, results in clearer communication.)
And these are only the obvious costs. There are also second-order costs imposed by upholding, rather than subverting, norms which dictate such “nicer” writing…
Replies from: SomeoneYouOnceKnew↑ comment by SomeoneYouOnceKnew · 2023-02-28T09:01:47.285Z · LW(p) · GW(p)
Not gonna lie, I lost track of the argument on this line of comments, but pushing back on word-bloat is good.
↑ comment by jimmy · 2023-03-01T01:22:43.738Z · LW(p) · GW(p)
I'll go on record as a counterexample here; I very much want politeness norms to be enforced here, and in my personal life I will pay great costs in order to preserve or create my freedom to be blunt. The requirement for me to be cautious of how I say things here is such a significant cost that I post here far less than I otherwise would. The cost is far from insignificant.
The reason I don't bitch about it is that I recognize that it's necessary. Changing norms to allow people to be relatively more inconsiderate wouldn't actually make things better. It's not just that "pandering to idiots" calls for a euphemism; it's that it probably calls for a mindset that is less dismissive of people who are going to be in your audience, or close enough to it to be offended. Like, actually taking them into consideration and figuring out how to bridge that gap. It's costly. It's also necessary, and often pays off.
I would like to be able to say "Zack, you stupid twat" without having to worry about getting attacked for doing so, but until I've proven to you that I respect you enough for it to be taken as an affectionate insult between friends, phrasing things that way wouldn't actually accomplish the goals I'd have for "less polite" speech. If I can earn that recognition then there's probably some flexibility in the norms, and if I can't earn that recognition or haven't earned that recognition, then that's kinda on me.
There does have to be some level at which we stop bending over backwards to spare feelings and just say what's true [to the best of our ability to tell] dammit, but it actually has to be calibrated to what pushes away only those we'd be happy to create friction with and lose. It's one of those things where if you don't actively work to broaden your scope of who you can get along with without poking at, you end up poking too indiscriminately, so I'm happy to see the politeness norms about where they are now.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-01T05:36:55.292Z · LW(p) · GW(p)
(Worth noting that I used to spend a great deal of effort and energy on putting up with the headache of wading through Zack's style, for the occasional worth-it nugget of insight; the moment when that constant expenditure of effort became clearly not worth it anymore was when Zack started off a(n also-otherwise-flawed) critique by just asserting "This is insane."
Even if it had, in fact, been insane, Zack would've been more effective if he'd been willing to bother with even the tiniest of softenings (e.g. "this sounds insane to me," which, in addition to being socially smoother is also literally more true, as a reflection of the actual state of affairs [LW · GW]).
As it was, though, he was just so loudly + overconfidently + rudely wrong that it was enough to kill my last remaining willingness-to-tolerate his consistent lack-of-epistemic-hygiene-masquerading-as-a-preference-for-directness.)
Replies from: Zack_M_Davis, gworley, sil-ver, jimmy, SaidAchmiz↑ comment by Zack_M_Davis · 2023-03-01T07:09:13.740Z · LW(p) · GW(p)
Would it help if I apologized? I do, actually, regret that comment. (As you correctly point out here, it wasn't effective; it didn't achieve my goals at all.)
The reason I was reluctant to apologize earlier [LW(p) · GW(p)] is because I want to be clear that the honest apology that I can offer has to be relatively narrowly-scoped: I can sincerely apologize specifically for that particular pointlessly rude blog comment, and I can sincerely make an effort to conform to your preferred norms when I'm writing a reply to you specifically (because I know that you specifically don't like the punchy-attacky style I often use), but I'm not thereby agreeing to change my commenting behavior when talking to people who aren't you, and I'm not thereby agreeing that your concept of epistemic hygiene is the correct one.
I'm worried that a narrowly-scoped apology will be perceived as insufficient, but I think being explicit about scope is important, because fake apologies don't help anyone: I only want to say "I'm sorry; I won't do it again" about the specific things that I'm actually sorry for and actually won't do again.
So—if it helps—I hereby apologize for my comment of 4 December 2021 on an earlier draft of "Basics of Rationalist Discourse". In context (where the context includes the draft itself being about your preferred discourse norms, the fact that the draft appeared to have been posted prematurely, the fact that the draft had instructions about how readers should read carefully before reacting to any of the summary points, and our previous interactions), it was an unambiguously bad comment, and I should have known better. I'm sorry.
↑ comment by Gordon Seidoh Worley (gworley) · 2023-03-02T02:07:29.939Z · LW(p) · GW(p)
Even if it had, in fact, been insane, Zack would've been more effective if he'd been willing to bother with even the tiniest of softenings (e.g. "this sounds insane to me," which, in addition to being socially smoother is also literally more true, as a reflection of the actual state of affairs).
Softening like this is one of those annoying things I wish we could do away with, because it's smurf naming. Saying that something is insane is literally a claim that I think it's insane, and it's only because of naive epistemology that we think some other meaning is possible.
I only started adding softening because Duncan wouldn't shut up about the lack of smurfs in my comments.
Replies from: philh, Duncan_Sabien↑ comment by philh · 2023-03-03T16:30:52.834Z · LW(p) · GW(p)
But Duncan's suggested softening was "this sounds insane to me", not "I think this is insane".
Like, consider the dress. We might imagine someone saying any of
- "The dress is (white/blue)."
- "I think the dress is (white/blue)."
- "The dress looks (white/blue) to me."
I think that in practice (1) and (2) mean different things; on a population level, they'll be said by people whose internal experiences are different, and they'll justly be interpreted differently by listeners.
But even if you disagree with that, surely you'd agree that (3) is different? Like, "I think the dress is white but it's actually blue" is admittedly a kind of weird thing to say, but "the dress looks white to me but it's actually blue" is perfectly normal, as is "...but I think it's actually blue", or "...but I don't know what color it actually is".
It may be that the dress looks blue to you and also you think it's actually blue, but these are two importantly different claims!
I would further suggest that if the dress happens to look blue to you, but also you're aware that it looks blue to a lot of people and white to a lot of people, and you don't know what's going on, and you nonetheless believe confidently that the dress is blue, you are doing something wrong. (Even though you happen to be correct, in this instance.)
When it comes to insanity, I think something similar happens. Whether or not something sounds insane to me is different from whether it actually is insane. Knowing that, I can hold in my head ideas like "this sounds insane to me, but I might be misinterpreting the idea, or I might be mistaken when I think some key premise that it rests on is obviously false, or or or... and so it might not be insane".
And so we might stipulate that Zack's "this is insane" was no more or less justifiable than "I think this is insane" would have been. But we should acknowledge the possibility that in thinking it insane, he was doing something wrong; and that thinking and saying "this sounds insane to me" while maintaining uncertainty about whether or not it was actually insane would have been more truth-tracking.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2023-03-03T19:10:44.221Z · LW(p) · GW(p)
My point is that something cannot actually be insane; it can only be insane by some entity's judgment. Insanity exists in the map, not the territory. In the territory there's just stuff going on. We're the ones who decide to call it insane. Maybe that's because there's some stable pattern in the world we want to put the label "insanity" on, and we develop some collective agreement about what things to call insane, but we're still the ones that do it.
If you take this view, these statements don't have much difference between them on a fundamental level, because "The dress is X" means something like "I assess the dress to be X": you're the one speaking and making this call. We do have things that mean something different, like "I think other people think the dress is X", but that's making a different type of claim than your 3 statements. Those I see as making essentially the same fundamental claim, with minor differences in how it's expressed that try to convey something about the process by which you made the claim, so that others can understand your epistemic state. That is sometimes useful, but you can also just say it more directly with something like "I'm 80% sure the dress is X".
Replies from: Duncan_Sabien, philh↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-03T20:00:48.857Z · LW(p) · GW(p)
Insanity exists in the map, not the territory.
A big part of what I'm often doing in my head is simulating a room of 100-1000 people listening, and thinking about what a supermajority of them are thinking or concluding.
When you go from e.g. "that sounds insane to me" or "I think that's crazy" to "that is crazy," most of what I think is happening is that you're tapping into something like "...and 70+ out of 100 neutral observers would agree."
Ditto with word usage; one can use a word weirdly and that's fine; it doesn't become a wrong usage until it's a usage that would reliably confuse 70+% of people/reliably cause 70+% of people to conclude the wrong thing, hearing it.
"Wrong in the territory" in this case being "wrong in the maps of a supermajority" + "it's a socially constructed thing in the first place."
↑ comment by philh · 2023-03-03T22:21:49.054Z · LW(p) · GW(p)
I'm baffled by this, and kinda just going to throw a bunch of reactions out there without trying to build them into a single coherent reply.
If you take this view, these statements don’t have much difference between them on a fundamental level because “The dress is X” means something like “I assess the dress to be X” since you’re the one speaking and are making this call.
If someone says "the dress looks white to me, but I think it's actually blue"... how would you analyze that? From this it sounds like you'd think they're saying "I assess the dress to be white, but I assess it to be blue"?
To me it has a perfectly natural meaning, along the lines of "when I look at this picture my brain tells me that it's white. But I'm reliably informed that it's actually blue, and that the white appearance comes from such-and-such mental process combined with the lighting conditions of the photo".
(edit: actually, "along the lines of" isn't quite what I mean there. It's more like "this is the kind of thing that might cause someone to say those words".)
It sounds to me like you're trying to say there's, on some level, no meaningful distinction between how something is and how we assess it to be? But how something appears to be and how we assess it to be are still very different!
I see as making essentially the same fundamental claim with minor differences about how its expressed to try to convey something about the process by which you made the claim so that others can understand your epistemic state, which is sometimes useful but you can also just say this more directly with something like “I’m 80% sure the dress is X”.
But "I'm 80% sure the dress is X" doesn't convey anything about the process by which I came to believe it? It's simply a conclusion with no supporting argument.
Meanwhile "the dress looks X" is an argument with no ultimate conclusion. If a person says that and nothing else, we might reasonably guess that they probably think the dress is X, similar to how someone who answers "is it going to rain?" with "the forecast says yes" probably doesn't have any particular grounds to disbelieve the forcast. But even if we assume correctly that they think that, both the explicit and implicit information they've conveyed to us are still different versus "I'm _% confident the dress is X" or "I'm _% confident it's going to rain".
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T04:57:05.879Z · LW(p) · GW(p)
Words mean what they mean, in practice.
In practice, humans (en masse) assign genuinely different weights/strengths to "This is insane" and "This sounds insane to me." The response shows that they are meaningfully different.
I agree (?) with you (assuming you concur with the following) that it would be nice if we had better and more functional terminology, and could make clearer distinctions without spending words that do indeed feel extraneous.
But that's not the world we live in, and given the world we live in, I disagree that it's smurf naming.
Replies from: Ninety-Three↑ comment by Ninety-Three · 2023-03-10T02:31:03.814Z · LW(p) · GW(p)
I agree that people hearing Zack say "I think this is insane" will believe he has a lower P(this is insane) than people hearing him say "This is insane", but I'm not sure that establishes that the words mean that.
If Alice goes around saying "I'm kinda conservative" it would be wise to infer that she is probably conservative. If Bob goes around saying "That's based" in the modern internet sense of the term, it would also be wise to infer that he is probably a conservative. But "based" doesn't mean Bob is conservative; semantically it just means something like "cool", and then it happens to be the case that this particular synonym for "cool" is used more often by conservatives than liberals.
If it turned out that Alice voted party line Democrat and loved Bernie Sanders, one would have a reasonable case that she had used words wrong when she said she was kinda conservative; those words describe basically the opposite of her actual position. If it turned out that Bob voted party line Democrat and loved Bernie Sanders, then one might advise him "your word choice is causing people to form a false impression, you should maybe stop saying based", but it would be weird to suggest this was about what based means. There's just an observable regularity of our society that people who say "based" tend to be conservative, like how people who say "edema" tend to be doctors.
If Zack is interested in accurately conveying his level of confidence, he would do well to reserve "That's insane" for cases where he is very confident and say "That seems insane" when he is less confident. If he instead decided to use "That's insane" in all cases, that would be misleading. But I think it is significant that this would be a different kind of misleading than if he were to use the words "I am very confident that is insane", even if the statements cause observers to make the exact same updates.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-10T02:49:38.446Z · LW(p) · GW(p)
(My point in the comment above is merely "this is not contentless filler; these distinctions are real in practice; if adding them feels onerous or tedious it's more likely because one is blind to, or does not care about, a real distinction, than because there's no real difference and people want you to waste time adding meaningless words." A lot of people act along lines that go something like "well these words SHOULD be taken to mean X, even though they predictably and reliably get interpreted to mean Y, so I'm going to keep saying them and when other people hear 'Y' I'll blame them, and when other people ask me to say something different I will act put-upon." <—That's a caricature/extremer version of the actual position the actual Gordon takes; I'm not claiming Gordon's saying or doing anything anywhere near that dumb, but it's clear that there really are differences in how these different phrases are perceived, at the level of hundreds-of-readers.)
Replies from: Ninety-Three↑ comment by Ninety-Three · 2023-03-10T05:50:47.810Z · LW(p) · GW(p)
Is it wrong for Bob the Democrat to say "based" because it might lead people to incorrectly infer he is a conservative? Is it wrong for Bob the plumber to say "edema" because it might lead people to incorrectly infer he is a doctor? If I told Bob to start saying "swelling" instead of "edema" then I feel like he would have some right to defend his word use: no one thinks edema literally means "swelling, and also I am a doctor", even if they update in a way that kind of looks like it does.
I don't think we have a significant disagreement here, I was merely trying to highlight a distinction your comment didn't dwell on, about different ways statements can be perceived differently. "There is swelling" vs "There is swelling and also I am a doctor" literally means something different while "There is swelling" vs "There is edema" merely implies something different to people familiar with who tends to use which words.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-10T06:00:25.644Z · LW(p) · GW(p)
"There is swelling" vs "There is swelling and also I am a doctor" literally means something different while "There is swelling" vs "There is edema" merely implies something different to people familiar with who tends to use which words.
Yes, but I don't think this is particularly analogous, specifically because the difference in interpretation, in practice, between "swelling" and "edema" seems to me like it's likely at least an order of magnitude smaller than the difference in interpretation, in practice, between "this is crazy" and "this sounds crazy to me."
As for whether either of these usages are wrong, it depends entirely on whether you want to successfully communicate or not. If you reliably cause your listener to receive concepts that are different than those you were trying to transmit [LW · GW], and this is down to utterly predictable boring simple truths about your language usage, it's certainly your call if you want to keep doing a thing you know will cause wrong beliefs in the people around you.
Separately, 100% of the people I've encountered using the word "based" are radical leftist transfolk, and there are like twelve of them?
Replies from: Ninety-Three↑ comment by Ninety-Three · 2023-03-12T01:15:10.883Z · LW(p) · GW(p)
I understood "based" to be a 4chan-ism but I didn't think very hard about the example, it is possible I chose a word that does not actually work in the way I had meant to illustrate. Hopefully the intended meaning was still clear.
↑ comment by Rafael Harth (sil-ver) · 2023-03-04T11:12:43.587Z · LW(p) · GW(p)
I think I should just add my own data point here, which is that Zack and I have been on polar opposite sides of a pretty emotional debate before, and I had zero complaints about their conduct. In fact, ever since then, I think I'm more likely to click on a post if I see that Zack wrote it.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-03-04T16:54:55.713Z · LW(p) · GW(p)
Thanks for chiming in; this is encouraging to hear. I'm imagining the pretty emotional debate you're thinking of is the one on "My Dating Plan ala Geoffrey Miller" in July 2020 [LW(p) · GW(p)]? Interestingly, I think my behavior there was much ruder than anything Duncan's objected to from me, so I think your reaction is evidence that there's a lot of interpersonal variation in how much "softening" different people think is desirable or necessary.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2023-03-04T17:35:53.804Z · LW(p) · GW(p)
It was that general debate about content moderation. Pretty sure it wasn't all in the comments of that post (though that may have been the start); I don't remember the details. It's also possible that my recollection includes back and forth you had with [other people who defended my general position].
↑ comment by jimmy · 2023-03-01T18:41:13.772Z · LW(p) · GW(p)
I'm confused. It seems to me that you, Zack, and I all have similar takes on the example you bring up, but the fact that you say this here suggests that you don't see us all as in clear agreement?
Replies from: Duncan_Sabien, Zack_M_Davis↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-01T20:10:54.202Z · LW(p) · GW(p)
I don't see us all as in clear agreement; I think we're at least somewhat in nominal agreement but I have found Zack to be ... I don't mean this as a contentless insult, I mean it as a literal attempt-to-model ... irrationally fixated on being anti-polite, and desperately fending off attempts to validate or encode any kind of standard or minimum bar of politeness.
By "irrationally" I mean that he seems to me to do so by irresistible reflex, with substantial compulsion/motive force, even when the resulting outcome is unambiguously contra his explicitly stated goals or principles.
To put things in Zack's terminology, you could say that he's (apparently) got some kind of self-reinforcing algorithmic intent to be abrasive and off-putting and over-emphatic. Even where more reserved language would be genuinely truer, less misleading to the audience, and more in line with clear and precise word usage (all goals which Zack ostensibly ranks pretty high in the priority list), there's (apparently) some kind of deep psychological pressure that reliably steers him in the other direction, and makes him vehemently object to putting forth the (often pretty minimal) effort required.
Similarly, even where marginally more polite language would be predictably and substantially more effective at persuading his audience of the truth of some point, or updating social consensus in his preferred direction, he (apparently) cannot help himself; cannot bear to do it; responds with a fervor that resembles the fervor of people trying to repel actual oppression. It is as if Zack sees, in the claim "hey, 'this seems insane to me' is both truer and more effective than 'this is insane', you should consider updating your overall language heuristics to account for this delta across all sorts of utterances" an attempt to imprison or brainwash him, much like the more stringent objections to pronoun preferences ("you can't MAKE ME SEE A WOMAN WHERE I SEE A MAN, GTFO OF MY HEAD, THERE ARE FOUR LIGHTS!"). On the surface, it has a lot in common with a trauma-response type overreaction.
This aspect of Zack's behavior seems to me to be beyond his control; it has enough motive force that e.g. he has now been inspired to write two separate essays taking [the dumbest and least-charitable interpretations of me and Rob recommending "maybe don't be a total dick?"] and then railing against those strawmen, at length.
(I would have a different reaction if either of Zack's so-called responses were self-aware about it, e.g. if they explicitly claimed "what Rob/Duncan recommends will degrade to this in practice, and thus discussing the strawman is material," or something like that. Zack does give a little lip-service to the idea that he might not have properly caught our points, saying stuff like ~"I mean, maybe they meant something not megadumb, but if so I'm utterly incapable of figuring out what" but does not evince any kind of Actual Effort to think along lines of "okay okay but if it were me who was missing something, and they had a real point, what might it be?" This lack of any willingness to put forth such effort is a major part of why I don't bother with effortfully and patiently deconfusing him anymore; if someone's just really lost then I'm often willing to help but if they're really lost and insisting that what I'm saying is badwrongdumbcrazy then I tend to lose interest in helping them connect the dots.)
Even in Zack's apology above, he's basically saying "I won't do this to you, Duncan, anymore, because you'll push back enough to make me regret it, but I refuse to entertain the possibility that maybe there's a generally applicable lesson for me, here." He's digging in his heels on this one; his mistakes repeat themselves and resist correction and reliably steer him in a consistent direction. There's something here that uses Zack as its puppet and mouthpiece, as opposed to some source of motive energy which Zack uses in pursuit of his CEV.
Replies from: Raemon, Zack_M_Davis, jimmy, SaidAchmiz, SaidAchmiz↑ comment by Raemon · 2023-03-01T20:41:45.690Z · LW(p) · GW(p)
FYI, having recently stated [LW(p) · GW(p)] "man I think Duncan and Zack should be seeing themselves more as allies", I do want to note I agree pretty strongly with this characterization. I think Zack probably also agrees with the above during his more self-aware moments, but often not in the middle of a realtime discussion.
I do think Zack should see this fact about himself as a fairly major flaw according to his own standards, although it's not obvious to me that the correct priority for him should be "fixing the surface-visible-part of the flaw", and I don't know what would actually be helpful.
My reasoning for still thinking it's sad for Zack/Duncan to not see each other more as allies routes primarily through what I think 'allyship' should mean, given the practicalities of the resources available in the world. I think the people who are capable of advancing the art of rationality are weird and spiky and often come with weird baggage, and... man, sorry, those are the only people around, it's a very short list; if you wanna advance the art of rationality you need to figure out some way of dealing with that. (When I reflect a bit, I don't actually think Duncan should necessarily be doing anything different here; I think not engaging with people who are obnoxious to deal with is fine. Upon reflection I'm mostly sad about The Relational Stance [LW · GW] here, and idk, maybe that just doesn't matter.)
...
(Also, I think the crux Zack lists in his other recent reply [LW(p) · GW(p)] is probably also pretty close to a real crux between him and Duncan, although not as much of a crux between Zack and me)
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-03-02T02:13:37.592Z · LW(p) · GW(p)
I also think it's sad that Duncan and I apparently can't be allies (for my part, I like a lot of Duncan's work and am happy to talk with him), but I think there's a relevant asymmetry.
When my weird baggage leaks into my attempted rationality lessons, I think there's a corrective mechanism insofar as my weird baggage pushes me to engage with my critics even when I think they're being motivatedly dumb: if I get something wrong, gjm [LW · GW] will probably tell you about it. Sometimes I don't [LW(p) · GW(p)] have [LW(p) · GW(p)] time to reply, but I will never, ever ban gjm from commenting on my posts, or insist that he pre-emptively exert effort trying to think of reasons that he's the one who's missing something, or complain that interacting with him doesn't feel cooperative or collaborative.
When Duncan's weird baggage [LW · GW] leaks into his attempted rationality lessons, I think there's much less of a corrective mechanism insofar as Duncan feels free to ban critics that he thinks are being motivatedly dumb. If Duncan's judgements about this are correct, he saves a lot of time and emotional energy that he can spend doing other things. (I'm a bit jealous.) But if his judgements are ever wrong, he loses a chance to discover his mistakes.
Of course, I would say that! (The two paragraphs I just typed were clearly generated from my ideology; someone else with a different way of thinking might be able to think of reasons why I'm wrong, that I can't see by myself.)
If you do, I hope you'll let me know in the comments!
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T07:17:58.880Z · LW(p) · GW(p)
This whole comment is a psy-op. It was a mistake for me to leave a comment up above in the first place, and I came to my senses and deleted it literally less than a minute after I hit "enter," but that didn't stop Zack from replying twenty minutes later and now we have a thread so fine, whatever.
When Zack's weird baggage leaks into his attempted rationality lessons, he calls people insane and then writes multi-thousand-word screeds based on his flawed interpretations, which he magnanimously says the other person is perfectly welcome to correct! leaving people the following options:
- Spend hours and hours of their scant remaining lifetimes laboriously correcting the thousand-yard sprint he already took down the wrong trailhead, or
- Leave his uncanny valley misinterpretation there, unaddressed, where it will forever anchor subsequent interpretations, pulling them toward an attractor, and also make the author seem churlish or suspiciously unable-to-rebut (which lends his interpretation further apparent strength)
... which makes being here exhausting and intolerable.
Zack could, of course, just not do this. It's entirely within his power! He could (for instance), when he forms a knee-jerk interpretation of someone else's statement that he finds crazy or upsetting, simply ask whether that was the interpretation the author intended, before charging full-steam ahead with a preemptive critique or rebuttal.
(You know, the way you would if you were here to collaborate, and had a shred of respect for your interlocutors.)
This is even easier! It requires less effort! It doesn't require e.g. being charitable, which for some reason Zack would rather die than do.
But Zack does not do this, because, for whatever reason, Zack values [preserving his god-given right to be a jump-to-conclusions asshole] over things like that. He'll claim to be sad about our inability to communicate well, but he's not actually sad enough to cut it out, or even just cut back a little.
(I think it's wise to be suspicious of people who claim to feel an emotion like sorrow or regret, but do not behave in the ways that someone-feeling-sorrow or someone-feeling-regret would behave; often they have mislabeled something in their experience.)
After cutting him more slack than I cut anybody else for months, the essay where I finally gave up was clearly a partial draft posted by mistake (it petered out after two sections into random fragments and bullet points and scraps of unconnected paragraphs), and it literally said, at the top, sentences to the effect of "these initial short summaries allow for multiple interpretations, some of which I do not intend; please do not shoot off an objection before reading the expansion below, where that interpretation might already have been addressed."
But within mere minutes, Zack, willfully ignoring that straightforward request, ignoring the obvious incomplete nature of the essay, and ignoring the text wherein his objection was, in fact, addressed, shot off a comment saying that what I was recommending was insane, and then knocking over a strawman.
This was not an aberrant event. It was typical. It was one more straw on the camel's back. The most parsimonious explanation for why Zack does this instead of any number of more productive moves is that Zack wants his interlocutors to be wrong, so that Zack can be the one to go "look! Look!" and everyone will appreciate how clever he was in pointing out the hole in the argument. He wants me to be wrong badly enough that he'll distort my point however far he needs to in order to justify writing an essay allegedly "in response."
(I'm perfectly free to correct him! All I have to do is let him take my hours and my spoons hostage!)
And given that, and given that I eventually could not stand his counterproductive anti-epistemic soul-sucking time wastery any longer, he has the temerity to insinuate that he's morally superior because he'd never block anyone, no, sir. Or that it's my fault that we don't have a line of communication, because he's happy to talk with me.
FUCK.
Enjoy your sanctimony. You are choosing to make the world worse.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-03-02T17:57:04.026Z · LW(p) · GW(p)
Thanks for your thoughts. (Strong-upvoted.)
the essay where I finally gave up [...] This was not an aberrant event. [...] one more straw on the camel's back
Yes, that December 2021 incident was over the line. I'm sorry. In retrospect, I wish I hadn't done that—but if I had taken a few more moments to think, I would have been able to see it without retrospect. That was really stupid of me, and it made things worse for both of us.
You're also correct to notice that the bad behavior that I don't endorse on reflection can be seen as a more extreme version of milder behavior that I do endorse on reflection. (Thus the idiom "over the line", suggesting that things that don't go over the line are OK.) I wish I had been smart enough to only do the mild version, and never overshoot into the extreme version.
ignoring the text wherein his objection was, in fact, addressed
Are you referring to the paragraph that begins, "If two people disagree, it's tempting for them to attempt to converge with each other [...]"? In a comment to Phil H., I explained why that paragraph didn't satisfy me [LW(p) · GW(p)]. (Although, as I acknowledged to Phil, it's plausible that I should have quoted and acknowledged that paragraph in my post, to make it clearer to readers what you weren't saying; I'll probably do so if I get around to revising the post.)
If you're not referring to that paragraph, I'm not sure where you think my objection has been addressed.
(I think I would have noticed if that paragraph had been in the December 2021 version, but if you say it was, I'll take your word for it—which would imply that my December 2021 behavior was even worse than I've already admitted; I owe you a much bigger apology in that case.)
leaving people the following options
I agree that this is an unpleasant dilemma for an author to face, but to me, it seems like an inextricable feature of intellectual life? Sometimes, other authors have a perspective contrary to yours that they argue for in public, and they might sometimes refer to your writings in the course of arguing for their perspective. I don't see any "policy" solution here.
simply ask whether that was the interpretation the author intended
Why isn't this relevantly similar to what Said does [LW(p) · GW(p)], though?
I'm worried that your preferred norms make it way too easy for an author to censor legitimate criticisms. If the critic does too little interpretive labor (just asking questions and expecting the author to be able to answer, like Said), the author can dismiss them for not trying hard enough. If the critic does too much interpretive labor (writing multi-thousand word posts explaining in detail what they think the problem is, without necessarily expecting the author to have time to reply, like me), the author can dismiss them for attacking a strawman.
I imagine you don't agree with that characterization, but I hope you can see why it looks like a potential problem to me?
if you were here to collaborate
I'm not always here to collaborate. Sometimes I'm here to criticize. I endorse this on reflection.
had a shred of respect for your interlocutors.
I mean, I respect some of your work. For two concrete examples, I link to "In My Culture" sometimes, and I think "Split and Commit" [LW · GW] is a useful reminder to try to live in multiple possible worlds instead of assuming that your "max likelihood" map is equal to the territory. (I think I need to practice this one more.)
But more generally, when people think they're entitled to my respect rather than having to earn it, that does, actually, make me respect them less than I otherwise would. That might be what you're picking up on when you perceive that I don't have a shred of respect for my interlocutors?
suspicious of people who claim to feel an emotion like sorrow or regret, but do not behave in the ways that someone-feeling-sorrow or someone-feeling-regret would behave; often they have mislabeled something in their experience
That's a good observation, thanks. I think the thing I initially labeled as "sad" is really just ... wishing we weren't in this slapfight?
I would prefer to live in one of the possible worlds where you either addressed my objections, or silently ignored my posts, or said that you thought my posts were bad in ways that you didn't have the time to explain but without suggesting I'm culpable of strawmanning without specifically pointing to what I'm getting wrong [LW · GW].
Since you did try to hold me culpable, I had an interest in responding to that ... and here we are. This seems like a bad possible world to live in. I think it's unpleasant for both of us, and I think it's wasting a lot of time that both of us could spend doing more productive things.
I could have avoided this outcome by not writing posts about my objections to your Fifth Guideline or Rob's Ninth Element ... but then the Fifth Guideline and the Ninth Element would have been accepted into the local culture unchallenged. That would be a problem for me. I think it was worth my effort to try to prevent that outcome, even though it resulted in this outcome, which is unpleasant and expensive.
I think that's what I meant by "sad", in more words.
so that Zack can be the one to go "look! Look!" and everyone will appreciate how clever he was in pointing out the hole in the argument
... I think I'm going to bite this bullet? Yes, I sometimes take pride and pleasure in criticizing work that I think is importantly mistaken. The reason this sits okay with my conscience is because I think I apply it symmetrically (I think it's fine if someone gets excited about poking holes in my arguments), and because I think the combination of me and the community is pretty good at distinguishing good criticisms from bad criticisms.
He wants me to be wrong badly enough
I don't think it's personal. As you've noticed, I'm hyper-sensitive to attempts to validate or encode any kind of standard or minimum bar of politeness, because I'm viscerally terrified of what I think they'll degrade to in practice. You'll notice that I also argued against Rob's Ninth Element, not just your Fifth Guideline. It's not about Rob. It's not about you. It's about protecting my interests.
You are choosing to make the world worse.
We're definitely steering the world in different directions. It looks like some of the things I think are net-positive (having more good aspects than bad aspects) are things that you think are net-negative, which puts us in conflict sometimes. That's unfortunate.
Replies from: Duncan_Sabien, Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T19:20:25.033Z · LW(p) · GW(p)
(I think I would have noticed if that paragraph had been in the December 2021 version, but if you say it was, I'll take your word for it—which would imply that my December 2021 behavior was even worse than I've already admitted; I owe you a much bigger apology in that case.)
It was. That's why I was (and remain) so furious with you (Edit: and also am by default highly mistrustful of your summaries of others' positions).
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-03-04T06:04:16.482Z · LW(p) · GW(p)
Thanks for telling me (strong-upvoted). That makes sense as a reason for you to be furious with me. As the grandparent says, I owe you a bigger apology than my previous apology, which appears below.
I hereby apologize for my blog comment of 4 December 2021, on an earlier revision of "Basics of Rationalist Discourse" [LW · GW]. In addition to the reasons that it was a bad comment in context that I listed in my previous apology [LW(p) · GW(p)], it was also a bad comment for failing to acknowledge that the text of the post contained a paragraph addressing the comment's main objection, which is a much more serious error. I am embarrassed at my negligence. To avoid such errors in the future, I will endeavor to take some time to emotionally cool down and read more carefully before posting a comment, when I notice that I'm tempted to post a comment while emotionally activated.
If you'd like me to post a variation of this in a more prominent location (like Facebook or Twitter), I'd be willing to do that. (I think I'd want to spend a few more minutes to rewrite the lesser reasons that the comment was bad in context as its own sentence, rather than linking to the previous apology.)
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-22T04:35:57.551Z · LW(p) · GW(p)
I don't know what to say in response. Empirically, this apology did zero to reduce the extremely strong deterrent of "God dammit, if I try to post something on LessWrong, one way or another Zack and Said are going to find a way to make that experience miserable and net negative," which, in combination with the energy that this thread burned up, has indeed resulted in me not posting, where counterfactually I would've posted three essays.
(I'm only here now because you're bumping the threads.)
(Like, there are three specific, known essays that I have not posted, because of my expectations coming off of this thread and the chilling effect of "I'll have to deal with Zack and Said's responses.")
(Also the reason my Basics post ended up being so long-winded was because, after my experience with the partial draft going up by mistake, I was trying quite hard to leave a future Zack no ways to make me regret publishing/no exposed surfaces upon which I could be attacked. I ended up putting in about 20 extra hours because of my past experience with you, which clearly did not end up paying off; I underestimated just how motivated you would be to adversarially interpret and twist things around.)
I tried blocking, and that wasn't enough to get you to leave me alone.
Sounds like you win.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T18:25:42.700Z · LW(p) · GW(p)
I'm worried that your preferred norms make it way too easy for an author to censor legitimate criticisms. If the critic does too little interpretive labor (just asking questions and expecting the author to be able to answer, like Said), the author can dismiss them for not trying hard enough. If the critic does too much interpretive labor (writing multi-thousand word posts explaining in detail what they think the problem is, without necessarily expecting the author to have time to reply, like me), the author can dismiss them for attacking a strawman.
Literally only you and Said have these twin problems (among long-lasting prolific LW participants). This is you saying "but but but if you claim ZERO is too little and a BILLION is too much, then how is there any room for legitimate criticism to exist?"
It's somewhere between zero and a billion, like every other person on LessWrong manages to do just fine all the time.
Late edit: we have a term for this thing; it's called "fallacy of the grey."
Replies from: Vaniver, Zack_M_Davis↑ comment by Vaniver · 2023-03-02T18:47:57.055Z · LW(p) · GW(p)
Literally only you and Said have these twin problems (among long-lasting prolific LW participants). This is you saying "but but but if you claim ZERO is too little and a BILLION is too much, then how is there any room for legitimate criticism to exist?"
It's somewhere between zero and a billion, like every other person on LessWrong manages to do just fine all the time.
I think it's important to note survivorship bias here; I think there are other people who used to post on LessWrong and do not anymore, and perhaps this was because of changes in norms like this one.[1] It also seems somewhat likely to me that Said and Zack think that there's too little legitimate criticism on LW. (I often see critical points by Zack or Said that I haven't yet seen made by others and which I agree with; are they just faster or are they counterfactual? I would guess the latter, at least some of the time.)
As well, Zack's worry is that even if the guideline is written by people who have a sense that criticism should be between 4 and 12, establishing the rule with user-chosen values (like, for example, LW has done for a lot of post moderation) will mean there's nothing stopping someone from deciding that criticism has to be above 8 and below 6; if it will be obvious to you when some other post author has adopted that standard, and you'll call them out on it in a way that protects Zack's ability to criticize them, that seems like relevant info from Zack's perspective.
(From this comment I instead have a sense that your position is "look, we're over here playing a game with everyone who understands these rules, and Zack and Said don't, which means they should stop playing our game.")
[1] To be clear, I don't miss everyone who has stopped posting on LW; the hope with rules and guidelines like this is that you filter well. I think that, to the extent you're trying to make the case that Said and Zack should shape their behavior or leave / not comment on your posts (and other people should feel social cover to block them from commenting as well), you should expect them to take exception to the rules that would cause them to change the most, and it's not particularly fair to request that they hold the debate over what rules should apply under your rules instead of neutral rules.
Replies from: Duncan_Sabien, SaidAchmiz, Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T19:33:18.620Z · LW(p) · GW(p)
I think that, to the extent you're trying to make the case that Said and Zack should shape their behavior or leave / not comment on your posts (and other people should feel social cover to block them from commenting as well), you should expect them to take exception to the rules that would cause them to change the most, and it's not particularly fair to request that they hold the debate over what rules should apply under your rules instead of neutral rules.
I don't think I am making this request.
I do strongly predict that if I made free to verbally abuse Zack in the same fashion Zack verbally abuses others, I would be punished more for it, in part because people would be like "well, yeah, but Zack just kinda is like that; you should do better, Duncan" and in part because people would be like "DUDE, Zack had a traumatic experience with the medical system, you calling him insane is WAY WORSE than calling someone else insane" and "well, if you're not gonna follow your own discourse rules, doesn't that make you a hypocrite?"
It's an asymmetric situation that favors the assholes; people tend not to notice "oh, Duncan rearmed with these weapons he advocates disarming because his interlocutors refused to join the peace treaty."
Replies from: Vaniver↑ comment by Vaniver · 2023-03-03T00:26:42.714Z · LW(p) · GW(p)
Sure, I buy that any functional garden doesn't just punish hypocrisy, but also failing to follow the rules of the garden, which I'm imputing as a motivation for your second and third paragraphs. (I also buy that lots of "let people choose how to be" approaches favor assholes.)
But... I think there's some other message in them, that I can't construct correctly? It seems to me like we're in a broader cultural environment where postmodern dissolution of moral standards means the only reliable vice to attack others for is hypocrisy. I see your second and third paragraphs as, like, a mixture of disagreeing with this ('I should not be criticized for hypocrisy as strongly as I predict I would be if I were hypocritical') and maybe making a counteraccusation of hypocrisy ('if there were evenly applied standards of conduct, I would be protected from Zack's misbehavior, but as is I am prevented from attacking Zack but the reverse is not true').
But I don't think I really agree with either of those points, as I understand them. I do think hypocrisy is a pretty strong argument against the proposed rules, and also that double standards can make sense (certainly I try to hold LW moderators to higher standards than LW users).
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-03T00:44:07.580Z · LW(p) · GW(p)
I'm saying:
"I'd like for us to not have a culture wherein it's considered perfectly kosher to walk around responding to other users' posts with e.g. 'This is insane' without clearing a pretty high bar of, y'know, the thing actually being insane. To the extent that Zack is saying 'hey, it's fine, you can verbally abuse me, too!' this is not a viable solution."
Fortunately, it seems that LessWrong generally agrees; both my suggested norms and Robbie's suggested norms were substantially more popular than either of Zack's weirdly impassioned defenses-of-being-a-jerk.
I guess I don't know what you mean by "neutral norms" if you don't mean either "the norms Duncan's proposing, that are in line with what Julia Galef and Scott Alexander and Dan Keys and Eric Rogstad and Oliver Habryka and Vaniver and so on and so forth would do by default," or "the norms Zack is proposing, in which you act like a dick and defend it by saying 'it's important to me that I be able to speak plainly and directly.'"
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-02T19:30:30.945Z · LW(p) · GW(p)
I think that, to the extent you’re trying to make the case that Said and Zack should shape their behavior or leave / not comment on your posts (and other people should feel social cover to block them from commenting as well), you should expect them to take exception to the rules that would cause them to change the most, and it’s not particularly fair to request that they hold the debate over what rules should apply under your rules instead of neutral rules.
I endorse this observation.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T19:15:11.738Z · LW(p) · GW(p)
I instead have a sense that your position is "look, we're over here playing a game with everyone who understands these rules, and Zack and Said don't, which means they should stop playing our game."
No, I'm not saying Zack and Said should stop playing the game, I'm saying they should stop being sanctimonious about their inability to do what the vast majority of people have a pretty easy time doing ("checking interpretations" and "sharing any of the interpretive labor at all", respectively).
I would be surprised to hear you claim that the valid critical points that Zack and Said make are contingent on them continuing to do the shitty things of (respectively) leaping to conclusions about A definitely implying B, or refusing to believe that A implies A until someone logically proves A→A. The times I've seen Zack and Said being useful or perceptive were when they weren't doing these useless and unproductive moves, but rather just saying what they thought.
When Zack says what he thinks, instead of going "hey, everybody, look how abhorrent my strawman of Rob's position is!" and trying to trick everyone into thinking that was Rob's position and that he is the sole bastion of epistemic virtue holding back the tides of evil, it's often useful.
When Said says what he thinks, instead of demanding that people rigorously define "sky," "blue," and "is" before allowing the conversation to move on from the premise "the sky is blue today," it's often useful.
There's absolutely nothing that Zack is currently accomplishing that couldn't have been accomplished if he'd first written a comment to Rob saying "did you mean X?"
He could've even gone off and drafted his post while waiting on an answer; it needn't have even delayed his longer rant, if Rob failed to reply.
Acting like a refusal to employ that bare minimum of social grace is a virtue is bullshit, and I think Zack acts like it is. If you're that hostile to your fellow LWers, then I think you are making a mistake being here.
Replies from: Zack_M_Davis, SaidAchmiz, SaidAchmiz↑ comment by Zack_M_Davis · 2023-03-21T04:25:54.989Z · LW(p) · GW(p)
There's absolutely nothing that Zack is currently accomplishing that couldn't have been accomplished if he'd first written a comment to Rob saying "did you mean X?" [...] Acting like a refusal to employ that bare minimum of social grace is a virtue is bullshit
It's not that I think refusing to employ the bare minimum of social grace is a virtue. It's that I wasn't aware—in fact, am still not aware—that confirming interpretations with the original author before publishing a critical essay constitutes the bare minimum of social grace. The idea that it's somehow bad behavior for intellectuals to publish essays about other intellectuals' essays without checking with the original author first is something I've never heard before; I think unilaterally publishing critical essays is a completely normal thing that intellectuals do all the time, and I see no particular reason for self-identified "rationalist" intellectuals to behave any differently.
For an arbitrary example from our local subculture, Yudkowsky once wrote "A Reply to Francois Chollet" criticizing Chollet's essay on the purported impossibility of an intelligence explosion. Did Yudkowsky first write an email to Chollet saying "did you mean X"? I don't know, but I would guess not; if Chollet stands by the text he published, and Yudkowsky doesn't feel uncertain about how to interpret the text, it's not clear how either of their interests would be served by Yudkowsky sending an email first rather than just publishing the post.
As far as my own work goes, "Aiming for Convergence" and "'Physicist Motors'" aren't the first times I've written reaction [LW(p) · GW(p)] posts to popular Less Wrong posts that I didn't like. Previously, I wrote "Relevance Norms" [LW · GW] in reaction to Chris Leong (following John Nerst) on contextualizing vs. decoupling norms [LW · GW], and "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think" [LW · GW] in reaction to Yudkowsky on meta-honesty [LW · GW].
I've also written other commentary posts that said some critical things about an article, without being so negative overall, such as "Comment on 'Endogenous Epistemic Factionalization'" [LW · GW] (reacting to an article by University of California–Irvine professors James Weatherall and Cailin O'Connor) and "Comment on 'Propositions Concerning Digital Minds and Society'" [LW · GW] (reacting to an article by Nick Bostrom and Carl Shulman).
I didn't check with Leong beforehand. I didn't check with Yudkowsky beforehand. I didn't check with Weatherall or O'Connor or Bostrom or Shulman beforehand. No one told me I should have checked with Leong or Yudkowsky or Weatherall or O'Connor or Bostrom or Shulman beforehand. It's just never been brought up as a problem or an offense before, ever.
Most of these authors are much more important people than me who are probably very busy. If someone had told me I should have checked with the authors beforehand, I think I would have said, "Wouldn't that be disrespectful of their time?"
I do often notify the author after I've published a reaction piece. In the case of the current post, I unfortunately neglected to do so, but after seeing your comment, I did reach out to Rob, and he left a [LW(p) · GW(p)] few [LW(p) · GW(p)] comments [LW(p) · GW(p)]. Notably, in response to my comment about my motivations for writing this post, Rob writes [LW(p) · GW(p)]:
Seems great to me! I share your intuition that Goodwill seems a bit odd to include. I think it's right to push back on proposed norms like these and talk about how justified they are, and I hope my list can be the start of a conversation like that rather than the end.
This would seem to be pretty strong counterevidence against the claim that I failed to employ the bare minimum of social grace (at least as that minimum is construed by Rob himself)?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-02T19:50:50.181Z · LW(p) · GW(p)
… inability to do what the vast majority of people have a pretty easy time doing (“checking interpretations” and “sharing any of the interpretive labor at all”, respectively).
My objection to this sort of claim is basically the same as my objection to this, from an earlier comment of yours:
[Interacting with Said] has never once felt cooperative or collaborative; I can make twice the intellectual progress with half the effort with a randomly selected LWer
And similar to my objection in a much earlier discussion (which I can’t seem to find now, apologies) about Double Crux (I think), wherein (I am summarizing from memory) you said that you have usually been able to easily explain and apply the concept when teaching it to people in person, as a CFAR instructor; to which I asked how you could distinguish between your interlocutor/student really understanding you, vs. the social pressure of the situation (the student/teacher frame, your personal charisma, etc.) causing them, perhaps, to persuade themselves that they’ve understood, when in fact they have not.
In short, the problem is this:
If “sharing interpretive labor”, “making intellectual progress”, etc., just boils down to “agreeing with you, without necessarily getting any closer to (or perhaps even getting further away from) the truth”, then of course you would observe exactly what you say you observe, yes?
And yet it would, in this scenario, be very bad if you self-selected into discussions where everyone had (it would seem to you) an easy time “sharing interpretive labor”, where you routinely made (or so you would think) plenty of “intellectual progress”, etc.
No doubt you disagree with this view of things. But on what basis? How can you tell that this isn’t what’s happening?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-02T19:32:53.988Z · LW(p) · GW(p)
refusing to believe that A implies A until someone logically proves A→A.
demanding that people rigorously define “sky,” “blue,” and “is” before allowing the conversation to move on from the premise “the sky is blue today,”
I object to this characterization, which is inaccurate and tendentious.
↑ comment by Zack_M_Davis · 2023-03-21T04:26:41.078Z · LW(p) · GW(p)
That's not what I meant. I affirm Vaniver's interpretation ("Zack's worry is that [...] establishing the rule with user-chosen values [...] will mean there's nothing stopping someone from deciding that criticism has to be above 8 and below 6").
(In my culture, it's important that I say "That's not what I meant" rather than "That's a strawman", because the former is agnostic about who is "at fault". In my culture, there's a much stronger duty on writers to write clearly than there is on readers to maintain uncertainty about the author's intent; if I'm unhappy that the text I wrote led someone to jump to the wrong conclusion, I more often think that I should have written better text, rather than that the reader shouldn't have jumped.)
Another attempt to explain the concern (if Vaniver's "above 8 and below 6" remark wasn't sufficient): suppose there were a dishonest author named Mallory, who never, ever admitted she was wrong, even when she was obviously wrong. How can Less Wrong protect against Mallory polluting our shared map with bad ideas?
My preferred solution (it's not perfect, but it's the best I have) is to have a culture that values unilateral criticism and many-to-many discourse. That is, if Mallory writes a post that I think is bad, I can write a comment (or even a top-level reply or reaction [LW(p) · GW(p)] post, if I have a lot to say) explaining why I think the post is bad. The hope is that if my criticism is good, then people will upvote my criticism and downvote Mallory's post, and if my criticism is bad—for example, by mischaracterizing the text of Mallory's post—then Mallory or someone else can write a comment to me explaining why my reply mischaracterizes the text of Mallory's post, and people will upvote the meta-criticism and downvote my reply.
It's crucial to the functioning of this system that criticism does not require Mallory's consent. If we instead had a culture that enthusiastically supported Mallory banning commenters who (in Mallory's personal judgement) aren't trying hard enough to see reasons why they're the one that's missing something and Mallory is in the right, or who don't feel collaborative or cooperative to interact with (to Mallory), or who are anchoring readers with uncanny-valley interpretations (according to Mallory), I think that would be a problem, because there would be nothing to stop Mallory from motivatedly categorizing everyone who saw real errors in her thinking as un-collaborative and therefore unfit to speak.
The culture of unilateral criticism and many-to-many discourse isn't without its costs, but if someone wanted to persuade me to try something else, I would want to hear about how their culture reacts to Mallory.
Replies from: Duncan_Sabien, Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-22T03:28:07.718Z · LW(p) · GW(p)
This is ignoring the fact that you're highly skilled at deluding and confusing your audience into thinking that what the original author wrote was X, when they actually wrote a much less stupid or much less bad Y.
(e.g. repeatedly asserting that Y is tantamount to X and underplaying or outright ignoring the ways in which Y is not X; if you vehemently shout "Carthage delenda est" enough times, people do indeed start becoming more and more afraid of Carthage regardless of whether or not this is justified.)
You basically extort effort from people, with your long-winded bad takes, leaving the author with a choice between:
a) allowing your demagoguery to take over everyone's perceptions of their point, now that you've dragged it toward a nearby (usually terrible) attractor, such that even though it said Y everybody's going to subsequently view it through the filter of your X-interpretation, or
b) effortfully rebutting every little bit of your flood of usually-motivated-by-antipathy words.
Eventually, this becomes exhausting enough that the correct move is to kick Mallory out of the garden, where they do not belong and are making everything worse far disproportionate to their contribution.
Mallory can go write their rebuttals in any of the other ten thousand places on the internet that aren't specifically trying to collaborate on clear thinking, clear communication, and truth-seeking.
The garden of LessWrong is not particularly well-kept, though.
Replies from: dxu↑ comment by dxu · 2023-03-22T04:24:28.236Z · LW(p) · GW(p)
This is ignoring the fact that you're highly skilled at deluding and confusing your audience into thinking that what the original author wrote was X, when they actually wrote a much less stupid or much less bad Y.
This does not seem like it should be possible for arbitrary X and Y, and so if Zack manages to pull it off in some cases, it seems likely that those cases are precisely those in which the original post's claims were somewhat fuzzy or ill-characterized—
(not necessarily through the fault of the author! perhaps the subject matter itself is simply fuzzy and hard to characterize!)
—in which case it seems that devoting more cognitive effort (and words) to the topic might be a useful sort of thing to do, in general? I don't think one needs to resort to a hypothesis of active malice or antipathy to explain this effect; I think people writing about confusing things is generally a good thing (and if that writing ends up being highly upvoted, I'm generally suspicious of explanations like "the author is really, really good at confusing people" when "the subject itself was confusing to begin with" seems like a strictly simpler explanation).
Replies from: Zack_M_Davis, Duncan_Sabien↑ comment by Zack_M_Davis · 2023-03-26T21:52:46.841Z · LW(p) · GW(p)
(Considering the general problem of how forum moderation should work, rather than my specific guilt or innocence in the dispute at hand) I think positing non-truth-tracking motivations (which can be more general than "malice or antipathy") makes sense, and that there is a real problem here: namely, that what I called "the culture of unilateral criticism and many-to-many discourse" in the great-grandparent grants a structural advantage to people who have more time to burn arguing on the internet, analogously to how adversarial court systems grant a structural advantage to litigants who can afford a better lawyer.
Unfortunately, I just don't see any solutions to this problem that don't themselves have much more serious problems? Realistically, I think just letting the debate or trial process play out (including the motivated efforts of slick commenters or lawyers) results in better shared maps than trusting a benevolent moderator or judge to decide who deserves to speak.
To the extent that Less Wrong has the potential to do better than other forums, I think it's because our culture and userbase is analogous to a court with a savvier, more intelligent jury (that requires lawyers to make solid arguments, rather than just appealing to their prejudices), not because we've moved beyond the need for non-collaborative debate (even though idealized Bayesian reasoners would not need to debate).
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-22T04:27:06.929Z · LW(p) · GW(p)
(It's not a hypothesis; Zack makes his antipathy in these cases fairly explicit, e.g. "this is the egregore I'm fighting against tooth and nail" or similar. Generally speaking, I have not found Zack's writing to be confusion-inducing when it's not coming from his being triggered or angry or defensive or what-have-you.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-22T05:53:26.765Z · LW(p) · GW(p)
Separately: I'm having a real hard time finding a coherently principled position that says "that's a strawman" is off-limits because it's too accusatory and reads too much into the mind of the author, but is fine with "this is insane."
↑ comment by Zack_M_Davis · 2023-03-02T01:15:24.376Z · LW(p) · GW(p)
Thanks (strong-upvoted), this is a pretty good psychoanalysis of me; I really appreciate it. I have some thoughts about it which I will explain in the remainder of this comment, but I wouldn't particularly expect you to read or reply to it unless it's interesting to you; I agree that it makes sense for you to not expend patience and effort on people you don't think are worth it.
fending off attempts to validate or encode any kind of standard or minimum bar of politeness [...] trauma-response type overreaction. [...] two separate essays
Given that my traumatic history makes me extremely wary that attempts to validate or encode any kind of standard or minimum bar of politeness will in practice be weaponized to shut down intellectually substantive discussions, I think it makes sense for me to write critical essays in response to such attempts? It's true that someone without my traumatic history probably wouldn't have thought of the particular arguments I did. But having thought of the arguments, they seemed like a legitimate response to the text that was published.
The reason this sits okay with my conscience is because I think I apply it symmetrically. If someone else's traumatic history makes them motivated to come up with novel counterarguments to text that I published, I think that's great: if the counterarguments are good, then I learn something, and if the counterarguments are bad, then that's how I know I did a good job (that even someone motivated to find fault with my work, couldn't come up with anything good).
The reason I keep saying "the text" rather than "my views" is because I don't think my readers are under an obligation to assume that I'm not being megadumb, because sometimes I am being megadumb, and I think that insisting readers exert effort to think of reasons why I'm not, would be bad for my intellectual development.
As a concrete example of how I react to readers thinking I'm being megadumb, let's consider your reply to "Aiming for Convergence Is Like Discouraging Betting" [LW(p) · GW(p)].
Here, I think I have a very strong case that you were strawmanning me when you complained about "the implicit assertion [...] that because Zack can't think of a way to make [two things] compatible, they simply aren't." You can't seriously have thought that I would endorse "if Zack can't think of a way, there is no way" as a statement of my views!
But it didn't seem intellectually productive to try to prosecute that as a violation of anti-strawmanning norms.
In the next paragraph, you contest my claim that disagreements imply distrust of the other's epistemic process, offering "because you think they've seen different evidence, or haven't processed that evidence yet" as counterexamples.
And that's a totally legitimate criticism of the text I published! Sometimes people just haven't talked enough to resolve a disagreement, and my post was wrong to neglect that case as if it were unimportant or could go without saying. In my reply to you, I asked if inserting the word "persistent" (persistent disagreement) would suffice to address the objection, but on further thought, I don't think that's good enough; I think that whole section could use a rewrite to be clearer. I might not get around to it, but if I do, I'll thank you in a footer note.
And just—this is how I think intellectual discourse works: I post things; people try to point out why the things I posted were megadumb; sometimes they're right, and I learn things. Sometimes I think people are strawmanning me, and that's annoying, but I usually don't try to prosecute it except in the most egregious cases, because I don't think it's feasible to clamp down on strawmanning without shutting out legitimate objections that I just don't understand yet.
if they explicitly claimed "what Rob/Duncan recommends will degrade to this in practice, and thus discussing the strawman is material,"
That sounds like a great idea for a third essay! (Talking about how I'm worried about how things like your Fifth Guideline or Rob's ninth element will be degraded in practice, rather than arguing with the text of the guidelines themselves.) Thanks! I almost certainly won't get around to writing this (having lots of more important things to do in 2023), but if I ever do, I'll be sure to thank you for the idea in the footer.
He's digging in his heels on this one
The reason I'm digging in my heels is because I perceive a legitimate interest in defending my socially-legitimized right to sometimes say, "That's crazy" (as a claim about the territory) rather than "I think that's crazy" (as a claim about my map). I don't think I say this particularly often, and sometimes I say it in error, but I do think it needs to be sayable.
Again, the reason this sits okay with my conscience is because I think I apply it symmetrically: I also think people should have the socially-legitimized right to tell me "That's crazy" when they think I'm being crazy, even though it can hurt to be on the receiving end of that.
An illustrative anecdote: Michael Vassar is even more abrasive than I am, in a way that has sometimes tested my ideal of being thick-skinned. I once told him that I might be better at taking his feedback if he could "try to be gentler sometimes, hopefully without sacrificing clarity."
But in my worldview, it was important that that was me making a selfish request of him, asking him to accommodate my fragility. I wouldn't claim that it was in his overall interests to update his overall language heuristics to suit me. Firstly, because how would I know that? And secondly, because even if I were in a position to know that, that wouldn't be my real reason for telling him.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-03T01:03:48.512Z · LW(p) · GW(p)
The reason I'm digging in my heels is because I perceive a legitimate interest in defending my socially-legitimized right to sometimes say, "That's crazy" (as a claim about the territory) rather than "I think that's crazy" (as a claim about my map). I don't think I say this particularly often, and sometimes I say it in error, but I do think it needs to be sayable.
The problem is, you are an extremely untrustworthy judge of the difference between things being crazy in the actual territory versus them being crazy in your weird skewed triggered perceptions, and you should know this about yourself.
I agree 100% that sometimes things are crazy, and that when they are crazy it's right and proper to label them as such. "This is crazy" and "this seems crazy to me" are different statements, with different levels of confidence attached, just as "you are lying" and "it seems like you're lying" are different statements. This is how words work, in practice; if you expose similar populations to "X is lying" and "X seems like they're lying" the two populations will come away with reliably different impressions.
Your speech, though, erodes and invalidates this distinction; you say "X is crazy" when the actual claim you're justified to make is "X seems crazy to me." You are sufficiently blind to the distinction that you even think that me saying "treat these statements differently" is me generically trying to forbid you from saying one of them.
I'm not asking you to stop saying true things, I'm asking you to stop lying, where by lying I mean making statements that are conveniently overconfident [LW · GW]. When you shot from the hip with your "this is insane" comment at me, you were lying, or at the very least culpably negligent and failing to live up to local epistemic hygiene norms. "This sounds crazy to me" would have been true.
Replies from: Raemon, Zack_M_Davis↑ comment by Raemon · 2023-03-03T02:50:41.510Z · LW(p) · GW(p)
Speaking somewhat in my mod voice, I do basically also want to say "yes, Zack, I also would like you to stop lying by exaggeration/overconfidence".
My hesitation about speaking in mod voice is that I don't think "overconfidence as deceit" has really graduated to a site norm (I know other LW team members who expressly don't agree with it, or have qualms about it). I think I feel kinda okay applying some amount of moderator force behind it, but not enough to attach a particular warning of moderator action at this point.
(I don't endorse Duncan's entire frame here, and I don't think I endorse how upset he is. I honestly think this thread has a number of good points on both sides, which I don't expect Duncan to agree (much?) with right now. But, when evaluating this complaint at Zack in particular, I do think Zack should acknowledge that his judgment here has not been good and that the result is not living up to the standards that flow fairly naturally from the sequences)
Replies from: SaidAchmiz, Zack_M_Davis, Duncan_Sabien↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-03T04:51:40.967Z · LW(p) · GW(p)
Er, sorry, can you clarify—what, exactly, has Zack said that constitutes “lying by exaggeration/overconfidence”? Is it just that one “this is insane” comment, or are we talking about something else…?
Replies from: Raemon↑ comment by Raemon · 2023-03-04T08:12:21.760Z · LW(p) · GW(p)
Thinking a bit more: while I do have at least one [LW · GW] more example of Zack doing this thing in mind, and am fairly confident I would find more (and think they add up to being bad), I'm not confident that, if I were writing this comment for myself without replying to Duncan, I'd have ended up wording the notice the same way (which in this case I think was fairly overshadowed by Duncan's specific critique).
I'm fairly confident there are a collection of behaviors that add up to something Zack's stated values should consider a persistent problem, but not sure I have a lot of examples of any-particular-pattern that I can easily articulate offhand.
I do think Zack fairly frequently does a "Write a reply to a person's post as if it's a rebuttal to the post, which mostly goes off and talks about an unrelated problem/frame that Zack cares about without engaging with what the original author was really talking about." In this particular post, I think there's a particular sleight-of-hand about word definitions I can point to as feeling particularly misleading. In Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think [LW · GW], I don't think there's a concrete thing that's deceptive, but something about it does feel slightly off.
Replies from: SaidAchmiz, Zack_M_Davis↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-04T09:22:05.001Z · LW(p) · GW(p)
while I do have at least one [LW · GW] more example of Zack doing this thing in mind
Did you mean to link to this comment [LW(p) · GW(p)]? Or another of his comments on that post…? It is not clear to me, on a skim of the comments, which specific thing that Zack wrote there might be an example of “lying by exaggeration/overconfidence” (but I could easily have missed it; there’s a good number of comments on that post).
I do think Zack fairly frequently does a “Write a reply to a person’s post as if it’s a rebuttal to the post, which mostly goes off and talks about an unrelated problem/frame that Zack cares about without engaging with what the original author was really talking about.”
Hmm. Certainly the first part of that is true, but I’m not convinced of the second part (“without engaging with what the original author was really talking about”). For example, you mention the post “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” [LW · GW]. I found that said post expressed objections and thoughts that I had when reading Eliezer’s “Meta-Honesty” [LW · GW] post, so it seems strange to say that Zack’s post didn’t engage with what Eliezer wrote! (Unless you take the view that what Eliezer was “really talking about” was something different than anything that either Zack or I took from his post? But then it seems to me that it’s hardly fair to blame Zack / me / any other reader of the post; surely the reply to complaints should be “well, you failed to get across what you had in mind, clearly; unfortunate, but perhaps try again”.)
Of course, you do say that you “don’t think there’s a concrete thing that’s deceptive” about Zack’s “Firming Up Not Lying” post. Alright, then is there some non-concrete thing that’s deceptive? Is there any way in which the post can be said to be “deceptive”, such that a reasonable person would agree with the usage of the word, and that it’s a bad thing? The accusation can’t just be “it feels slightly off”. That’s not anything.
In this particular post, I think there’s a particular sleight-of-hand about word definitions I can point to as feeling particularly misleading.
This seems like exactly the sort of problem that’s addressed by writing a critical comment in the post’s comments section! (Which comment can then be replied to, by the post’s author and by other commenters, by means of which discussion we might all—or so one hopes—become less wrong.)
↑ comment by Zack_M_Davis · 2023-03-04T16:44:19.596Z · LW(p) · GW(p)
fairly frequently does a "Write a reply to a person's post as if it's a rebuttal to the post, which mostly goes off and talks about an unrelated problem/frame that Zack cares about
Would it help if we distinguished between a "reply" (in which a commentator explains the thoughts that they had in reaction to a post, often critical or otherwise negative thoughts) and a "rebuttal" (in which the commentator directly contradicts the original post, such that the original post and the rebuttal can't "both be right")? I often write replies that are not rebuttals, but I think this is fine.
Replies from: Ninety-Three, philh↑ comment by Ninety-Three · 2023-03-08T02:51:22.184Z · LW(p) · GW(p)
Everyone sometimes issues replies that are not rebuttals, but there is an expectation that replies will meet some threshold of relevance. Injecting "your comment reminds me of the medieval poet Dante Alighieri" into a random conversation would generally be considered off-topic, even if the speaker genuinely was reminded of him. Other participants in the conversation might suspect this speaker of being obsessed with Alighieri, and they might worry that he was trying to subvert the conversation by changing it to a topic no one but him was interested in. They might think-but-be-too-polite-to-say "Dude, no one cares, stop distracting from the topic at hand".
The behaviour Raemon was trying to highlight is that you soapbox. If it is in line with your values to do so, it still seems like choosing to defect rather than cooperate in the game of conversation.
↑ comment by Zack_M_Davis · 2023-03-12T04:49:52.232Z · LW(p) · GW(p)
I mean, I agree that I have soapbox-like tendencies (I often have an agenda, and my contributions to our discourse often reflect my agenda), but I thought I've been meeting the commonsense relevance standard—being an Alighieri scholar who only brings it up when there happens to be a legitimate Alighieri angle on the topic, and not just randomly derailing other people's discussions.
I could be persuaded that I've been getting this wrong, but, again [LW(p) · GW(p)], I'm going to need more specific examples (of how some particular post I made misses the relevance standard) before I repent or change anything.
↑ comment by philh · 2023-03-06T10:52:15.818Z · LW(p) · GW(p)
We might distinguish between
- Reaction: I read your post and these are the thoughts it generated in me
- Reply: ...and these thoughts seem relevant to what the post was talking about
- Rebuttal: ...and they contradict what you said.
I've sometimes received comments where I'd have found it helpful to know which of these was intended.
(Of course a single comment can be all of these in different places. Also a reaction should still not misrepresent the original post.)
↑ comment by Zack_M_Davis · 2023-03-04T06:52:26.088Z · LW(p) · GW(p)
I do think Zack should acknowledge his judgment here has not been good and the result is not living up to the standards that flow fairly naturally from the sequences
Sorry, I'm going to need more specific examples of me allegedly "lying by exaggeration/overconfidence" before I acknowledge such a thing. I'm eager to admit my mistakes, when I've been persuaded that I've made a mistake. If we're talking specifically about my 4 December 2021 comment that started with "This is insane", I agree that it was a very bad comment that I regret very much [LW(p) · GW(p)]. If we're talking about a more general tendency to "lie by exaggeration/overconfidence", I'm not persuaded yet.
(I have more thoughts about things people have said in this thread, but they'll be delayed a few days, partially because I have other things to do, and partially because I'm curious to see whether Duncan will accept my new apology for the "This is insane" comment [LW(p) · GW(p)].)
Replies from: Raemon↑ comment by Raemon · 2023-03-04T08:12:30.430Z · LW(p) · GW(p)
The previous example I had on hand was in a private conversation where you described someone as "blatantly lying" [LW · GW] (you're anonymized in the linked post), and we argued a bit and (I recall) you eventually agreed that 'blatantly lying' was not an accurate characterization of 'not-particularly-blatantly-rationalizing' (even if there was something really important about that rationalizing that people should notice). I think I recall you using pretty similar phrasing a couple weeks later, which suggested there was something sticky about your process that generated the objection in the first place. I don't remember this second part very clearly though.
(I agree this is probably still not enough examples for you to update strongly at the moment if you're going entirely off my stated examples, and they don't trigger an 'oh yeah' feeling that prompts you to notice more examples on your own)
Replies from: Zack_M_Davis, Duncan_Sabien↑ comment by Zack_M_Davis · 2023-03-04T17:11:40.059Z · LW(p) · GW(p)
I think it's significant that the "blatant lying" example was an in-person conversation, rather than a published blog post. I think I'm much more prone to exaggerate in real-time conversations (especially emotionally-heated conversations) than I am in published writing that I have time to edit.
Replies from: Raemon↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-22T03:38:19.887Z · LW(p) · GW(p)
Here's one imo
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-03T04:33:23.759Z · LW(p) · GW(p)
(I'm not sure I quite endorse my level of anger either, but there really is something quite rich about the combination of:
- Zack having been so cavalier and rude that I blocked him because he was singlehandedly making LessWrong a miserable place to be, and making "publishing an essay" feel like touching an electric fence
- Zack then strawmanning exactly the part of my post that points out "hey, it's nice when people don't do that"
- Zack, rather than just making his arguments on their own merit, and pointing out the goodness of good things and the badness of bad things, instead painting a caricature of me (and later Rob) as the opposition and thus inextricably tying his stuff to mine/making it impossible to just get away from him
- (I do in fact think that his post anchored and lodestoned people toward his interpretation of that guideline; I recall you saying that after you read his summary/description you nodded and said to yourself "seems about right" but I'd bet $100 to somebody's $1 that if we had a time machine you wouldn't have produced that interpretation on your own; I think you got verbal overshadow'd into it and I think Zack optimizes his writing to verbally overshadow people/cast a spell of confusion in this way; he often relentlessly says "X is Y" in a dozen different ways in his pieces until people lose track of the ways in which X is not Y.)
- (Which confusion Zack then magnanimously welcomed me to burn hours of my life laboriously cleaning up.)
- Zack then being really smug about how he'd never block anybody and how he'd never try to force anybody to change (never mind that I tried to insulate myself from him in lieu of forcing him to change, and would've happily let him be shitty off in his own shitty corner forever)
... it really is quite infuriating. I don't know a better term for it than "rich;" it seems to be a central example of the sort of thing people mean when they say "that's rich.")
↑ comment by Zack_M_Davis · 2023-03-21T04:28:33.200Z · LW(p) · GW(p)
I agree that it often makes sense to write "This seems X to me" rather than "This is X" to indicate uncertainty or that the people I'm talking to are likely to disagree.
you even think that me saying "treat these statements differently" is me generically trying to forbid you from saying one of them.
Thanks for clarifying that you're not generically trying to forbid me from saying one of them. I appreciate it.
When you shot from the hip with your "this is insane" comment at me, you were [...] culpably negligent
Yes, I again agree that that was a bad comment on my part [LW(p) · GW(p)], which I regret.
(Thanks to Vaniver for feedback on an earlier draft of this comment.)
↑ comment by jimmy · 2023-03-05T19:18:55.882Z · LW(p) · GW(p)
I guess I meant "as it applies here, specifically", given that Zack was already criticizing himself for that specific thing, and arguing for rather than against politeness norms in the specific place that I commented. I'm aware that you guys haven't been getting along too well and wouldn't expect agreement more generally, though I hadn't been following closely.
It looks like you put some work and emotional energy into this comment, so I don't want to just not respond, but it also seems like this whole thing is upsetting enough that you don't really want to be having these discussions. I'm going to err on the side of not getting into any object-level response that you might not want, but if you want to know how to get along with Zack and not find it infuriating, I think I do understand his perspective (having found myself in similar shoes) well enough to explain how you can do it.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-01T22:47:01.540Z · LW(p) · GW(p)
It is as if Zack sees, in the claim “hey, ‘this seems insane to me’ is both truer and more effective than ‘this is insane’, you should consider updating your overall language heuristics to account for this delta across all sorts of utterances” an attempt to imprison or brainwash him, much like the more stringent objections to pronoun preferences …
Isn’t it, though?
Probabilistically speaking, I mean. Usually, when people say such things (“you should consider updating your overall language heuristics”, etc.) to you, they are in fact your enemies, and the game-theoretically correct response is disproportionate hostility.
Now, that’s “usually”, and not “always”; and such things are in any case a matter of degree; and there are different classes of “enemies”; and “disproportionate hostility” may have various downsides, dictated by circumstances; and there are other caveats besides.
But, at the very least, you cannot truthfully claim that the all-caps sort of hostile response is entirely irrational in such cases—that it can only be caused by “a trauma-response type overreaction” (or something similar).
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T00:00:36.034Z · LW(p) · GW(p)
There probably exists a word or phrase for the disingenuous maneuver Said is making, here, of pretending as if we're not talking about interactions between me and Zack and Rob (or more broadly, interactions between individuals in the filtered bubble of LessWrong, with the explicit context of arguing about norms within that highly filtered bubble) and acting as if just straightforwardly importing priors and strategies from [the broader internet] or [people in general] is reasonable.
Probably, but I'm not calling it to mind as easily as I'm calling the word "strawmanning," which is what's happening in Said's last paragraph, where he pretends as if I had claimed that the all-caps sort of hostile response was entirely irrational in such cases, so as to make it seem like his assertion to the contrary is pushing back on my point.
(Strawmanning being where you pretend that your interlocutor said something sillier than they did, or that what they said is tantamount to something silly, so that you can easily knock it down; in this case, he's insinuating that I made a much bolder claim than I actually did, since that bolder claim is easier to object to than what I actually said.)
I gave Zack the courtesy of explicitly informing him that I was done interacting with him directly; I haven't actually done that with Said so I'll do it here:
I find that interacting with Said is overwhelmingly net negative; most of what he seems to me to do is sit back and demand that his conversational partners connect every single dot for him, doing no work himself while he nitpicks with the entitlement of a spoiled princeling. I think his mode of engagement is super unrewarding and makes a supermajority of the threads he participates in worse, by dint of draining away all the energy and recursively proliferating non-cruxy rabbitholes. It has never once felt cooperative or collaborative; I can make twice the intellectual progress with half the effort with a randomly selected LWer. I do not care to spend any more energy whatsoever correcting the misconceptions that he is extremely skilled at producing, ad infinitum, and I shan't do so any longer; he's welcome to carry on being however confused or wrong he wants to be about the points I'm making; I don't find his confusion to be a proxy for any of the audiences whose understanding I care about.
(I will not consider it rude or offensive or culturally incorrect for people to downvote this comment! It's not necessarily the-kind-of-comment I want to see more of on LW, either, but downvoted or not I feel it's worth saying once.)
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-02T00:17:25.681Z · LW(p) · GW(p)
… pretending as if we’re not talking about interactions between me and Zack and Rob (or more broadly, interactions between individuals in the filtered bubble of LessWrong, with the explicit context of arguing about norms within that highly filtered bubble) and acting as if just straightforwardly importing priors and strategies from [the broader internet] or [people in general] is reasonable
But why do you say that I’m pretending this…? I don’t think that I’ve said anything like this—have I?
(Also, of course, I think you somewhat underestimate the degree to which importing priors from the broader internet and/or people in general is reasonable…)
… pretends as if I had claimed that the all-caps sort of hostile response was entirely irrational in such cases, so as to make it seem like his assertion to the contrary is pushing back on my point
Sorry, what? Were you not intending to suggest that such a response is irrational…? That was my understanding of what you wrote. On a reread, I don’t see what other interpretation might be reasonable.
If you meant something else—clarify?
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-02T00:32:35.117Z · LW(p) · GW(p)
Thank you for such a crisp, concise demonstration of exactly the dynamic. Goodbye, Said.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-01T22:51:21.655Z · LW(p) · GW(p)
[the dumbest and least-charitable interpretations of me and Rob recommending “maybe don’t be a total dick?”]
This is an extremely tendentious summary of the posts in question.
I find it very implausible to suppose that you’ve never encountered the sort of thing where someone says “all we’re saying is don’t be a dick, man”, but what they’re actually saying is something much more specific and also much more objectionable. Such things are a staple of modern political discourse on these here interwebs.
Well, now you’re doing it, and it’s no less dishonest when you do it than when random armchair feminists on Twitter do it. Surely this sort of distortion is not necessary.
↑ comment by Zack_M_Davis · 2023-03-01T19:53:12.206Z · LW(p) · GW(p)
I don't think Duncan and I are in clear agreement more generally (even if we agree that the particular comment I wrote that caused Duncan to give up on me was in fact a bad comment).
Here's my quick attempt to pass Duncan's Ideological Turing Test on what our feud is about: "one of the most important obstacles to having a culture of clear thinking and clear communication is the tendency for interlocutors to misrepresent one another, to jump to conclusions about what the other person is saying, and lash out at that strawman, instead of appropriately maintaining uncertainty, or split-and-committing [LW · GW] pending further evidence. These skills are a prerequisite for being able to have a sane discussion. Empirically, Zack doesn't seem to care about these skills much, if at all. As a result, his presence makes the discussion spaces he's in worse."
(I probably didn't pass, but I tried.)
My response to my-attempt-to-pass-Duncan's-ITT (which probably didn't succeed at capturing Duncan's real views) is that I strongly disagree that pro-actively modeling one's interlocutors should be a prerequisite for being able to have a discussion. As an author, it's often frustrating when critics don't understand my words the way I hoped they would, but ultimately, I think it's my responsibility to try to produce text that stands up to scrutiny. I would never tell a critic that they're not passing my ITT, because in my view, passing my ITT isn't their job; their job is to offer their real thoughts on the actual text I actually published. I don't accuse critics of strawmanning unless I expect to be able to convince third parties with an explanation of how the text the critic published substantively misrepresents the text I published [LW(p) · GW(p)]. I'm extremely wary that a culture that heavily penalizes not-sufficiently-modeling-one's-interlocutor, interferes with the process of subjecting each other's work to scrutiny.
Again, that's my interpretation of what the feud is about. I'm not claiming to have accurately understood Duncan. If he happens to see this comment and wants to correct where my ITT is falling short, he's welcome to. If not, that's fine, too: people are busy; no one is under any obligation to spend time arguing on the internet when they have better things to do!
Replies from: jimmy↑ comment by jimmy · 2023-03-05T19:17:11.744Z · LW(p) · GW(p)
Yeah, I didn't mean that I thought you two agreed in general, just on the specific thing he was commenting on. I didn't mean to insert myself into this feud and I was kinda asking how I got here, but now that I'm here we might as well have fun with it. I think I have a pretty good feel for where you're coming from, and actually agree with a lot of it. However, agreement isn't where the fun is so I'm gonna push back where I see you as screwing up and you can let me know if it doesn't fit.
These two lines stand out to me as carrying all the weight:
I strongly disagree that pro-actively modeling one's interlocutors should be a prerequisite for being able to have a discussion.
I'm extremely wary that a culture that heavily penalizes not-sufficiently-modeling-one's-interlocutor, interferes with the process of subjecting each other's work to scrutiny.
These two lines seem to go hand in hand in your mind, but my initial response to the two is very different.
To the latter, I simply agree that there's a failure mode there and don't fault you for being extremely wary of it. To the former, though... "I disagree that this thing should be necessary" is kinda a "Tough?". Either it's necessary or it isn't, and if you're focusing on what "should" be, you're neglecting what is.
I don't think I have to make the case that things aren't going well as is. And I'm not going to try to convince you that you should drop the "should" and attend to the "is" so that things run more smoothly -- that one is up to you to decide, and as much as "should" intentionally looks away from "is" and is in a sense fundamentally irrational in that way, it's sometimes computationally necessary or prudent given constraints.
But I will point out that this "should" is a sure sign that you're looking away from truth, and that it fits Duncan's accusations of what you're doing to a T. "I shouldn't have to do this in order to be able to have a discussion" sounds reasonable enough if you feel able to back up the idea that your norms are better, and it has a strong tendency to lead towards not doing the thing you "shouldn't have to" do. But when you look back at reality, that combination is "I actually do have to do this in order to have a (productive) discussion, and I'm gonna not do it, and I'm going to engage anyway". When you're essentially telling someone "Yeah, I know what I'm doing is going to piss you off, and not only am I going to do it anyway, I am going to show that pissing you off doesn't even weigh into my decisions because your feelings are wrong", then that's pretty sure to piss someone off.
It's clear that you're willing to weigh those considerations as a favor to Duncan, the way you recount asking Michael Vassar for such a favor, and that in your mind if Duncan wants you to accommodate his fragility, he should admit that this is what he's asking for and that it's a favor not an obligation -- you know, play by your rules.
And it's clear that by just accommodating everyone in this way without having the costs acknowledged (i.e. playing by his rules), you'd be giving up something you're unwilling to give up. I don't fault you there.
I agree with your framing that this is actually a conflict. And there are inherent reasons why that isn't trivially avoidable, but that doesn't mean that there isn't a path towards genuine cooperation -- just that you can't declare same-sidedness by fiat.
Elsewhere in the comments you gave an example of "stealing bread" as a conflict that causes "disagreements" and lying. The solution here isn't to "cooperatively" pursue conflicting goals, it's to step back and look at how to align goals. Specifically, notice that everyone is better off if there's less thieving, and cooperate on not-thieving and punishing theft. And if you've already screwed up, cooperate towards norms that make confession and rehabilitation more appealing than lying but less appealing than not-thieving in the first place.
I don't think our problems are that big here. There are conflicts of values, sure, but I don't think the attempts to push one's values over others' are generally so deliberately antisocial. In this case, for example, I think you and Duncan both more or less genuinely believe that it is the other party who is doing the antisocial acts. And so rather than "One person is knowingly trying to get away with being antisocial, so of course they're not going to cooperate", I think it's better modeled as an actual disagreement that can't be trivially resolved, because people are resorting to conflict rather than cooperation to advance their (perceived as righteous) goals, and then missing the fact that they're doing this because each is so open to cooperating (within the norms that are objectively correct, according to themselves) while the other person irrationally and antisocially isn't (by rules they don't agree with)!
I don't agree with the way that he used it, but Duncan is spot on calling your behavior "trauma response". I don't mean it as a big-T "Trauma" like "abused as a child", but trauma in the "1 grain is a 'heap'" sense is at the core of this kind of conflict and many, many other things -- and it is more or less necessary for trauma response to exist on both sides for these things to not fizzle out. The analogy I like to give is that psychological trauma is like plutonium and hostile acts are like neutrons.
As a toy example to illustrate the point, imagine someone steps on your toes; how do you respond? If it's a barefoot little kid, you might say "Hey kid, you're standing on my toes" and they might say "Didn't mean to, sorry!" and step off. No trauma, no problem. If it's a 300lb dude with cleats, you might shove him as hard as you can because the damage incurred from letting him stand on your toes until you can get his attention is less acceptable. And if he's sensitive enough, he might get pissed at you for shoving him and deck you. If it becomes a verbal argument, he might say "your toes shouldn't have been there", and now it's an explicit conflict about where you get to put your toes and whether he has a right to step on them anyway if they are where you put them.
In order to not allow things to degenerate into conflict as the less-than-perfectly-secure cleat-wearing giant steps on your toes, you have to be able to withstand that neutron blast without retaliating with your own so much that it turns into a fight instead of an "I'm sorry, I didn't realize your toes were there. I'll step off them for now because I care about your toes, but we need to have a conversation about where your feet are okay to be".
This means:
1) orienting to the truth that your toes are going to take damage whether you like it or not, and that "should" can't make this untrue or unimportant.
2) maintaining connection with the larger perspective that tracks what is likely to cause conflict, what isn't, and how to cause the minimal conflict and maximum cooperation possible so that you best succeed at your goals with least sacrifice of your formerly-sacred-and-still-instrumentally-important values.
In some cases, the most truth-oriented and most effective response is going to be politely tapping the big guy on the shoulder while your feet bleed, and having a conversation after the fact about whether he needs to be more careful where he's stepping -- because acting like shoving this guy makes sense is willful irrationality.
In other cases he's smaller and more shove-able and it doesn't make sense to accept the damage, but instead of coming off like "I'm totally happy to apologize for anything I actually did wrong. I'm sorry I called you a jerk while shoving you; that was unnecessary and inappropriate [but I will conspicuously not even address the fact that you didn't like being shoved or that you spilled your drink, because #notmyproblem. I'll explain why I'm right to not give a fuck if you care to ask]", you'll at least be more able to see the value in saying things like "I'm sorry I had to shove you. I know you don't like being shoved, and I don't like doing it. You even spilled your drink, and that sucks. I wish I saw another way to protect our community's ability to receive criticism without shoving you".
This shouldn't need to be said but probably does (for others, probably not for you), so I'll say it. This very much is not me taking sides on the whole thing. It's not a "Zack is in the wrong for not doing this" or an "I endorse Duncan's norms relatively more" -- nor is it the opposite. It's just an "I see Zack as wanting me to argue that he's screwing up in a way that might end up giving him actionable alternatives that might get him more of what he wants, so I will".
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-01T06:53:53.916Z · LW(p) · GW(p)
It’s not clear to me that there’s any more “lack of epistemic hygiene” in Zack’s posts than in anyone else’s, yours (and mine) included. If the claim here is that Zack exhibits significantly less epistemic hygiene than… who? You? The average Less Wrong commenter? Either way, it does not seem plausible to me. In most of the cases where you’ve claimed him to be something like “loudly + overconfidently + rudely wrong”, it has always seemed to me that, at best, there was nothing more than a case of “reasonable people might disagree on this one”.
Do you disagree with this characterization?
comment by Ben (ben-lang) · 2023-02-27T11:01:43.716Z · LW(p) · GW(p)
Some people want motors that are efficient, high-power, or similar. Some people might instead be making a kinetic sculpture out of lego, and they actually are primarily interested in whether the motor's cycle looks psychedelic and makes a pleasing noise. Neither group is wrong.
Some people want arguments that lead efficiently to a better view of the base reality. Some people are more interested in understanding the opposing side's philosophy and how they reason about it. Some people want the argument to be engaging, fun or dramatic. Some people prioritise still being friends when the argument is over. Some people like the idea of 'winning' an argument like winning a game. None of them are wrong.
Putting a label on the people who actually want their motors to move energy efficiently (calling them "physicists" or "engineers") and contrasting them with "artists" or something might be a useful line to draw. Similarly, "rationalist discourse" might be a poor label, but if it was called "truth-seeking discussion" or similar, I think it is actually carving out a fairly specific sub-part of the possible space.
comment by Rob Bensinger (RobbBB) · 2023-03-04T13:28:55.343Z · LW(p) · GW(p)
But this seems to contradict the element of Non-Deception. If you're not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?
This is a good question!! Note that in the original footnote in my post, "on the same side [LW · GW]" is a hyperlink going to a comment by Val:
"Some version of civility and/or friendliness and/or a spirit of camaraderie and goodwill seems like a useful ingredient in many discussions. I'm not sure how best to achieve this in ways that are emotionally honest ('pretending to be cheerful and warm when you don't feel that way' sounds like the wrong move to me), or how to achieve this without steering away from candor, openness, 'realness', etc."
I think the core thing here is same-sidedness.
That has nothing to do directly with being friendly/civil/etc., although it'll probably naturally result in friendliness/etc.
(Like you seem to, I think aiming for cheerfulness/warmth/etc. is rather a bad idea [LW · GW].)
If you & I are arguing but there's a common-knowledge undercurrent of same-sidedness, then even impassioned and cutting remarks are pretty easy to take in stride. "No, you're being stupid here, this is what we've got to attend to" doesn't get taken as an actual personal attack because the underlying feeling is of cooperation. Not totally unlike when affectionate friends say things like "You're such a jerk."
This is totally different from creating comfort. I think lots of folk get this one confused. Your comfort is none of my business, and vice versa. If I can keep that straight while coming from a same-sided POV, and if you do something similar, then it's easy to argue and listen both in good faith.
I think this is one piece of the puzzle. I think another piece is some version of "being on the same side in this sense doesn't entail agreeing about the relevant facts; the goal isn't to trick people into thinking your disagreements are small, it's to make typical disagreements feel less like battles between warring armies".
I don't think this grounds out in simple mathematics that transcends brain architecture, but I wouldn't be surprised if it grounds out in pretty simple and general facts about how human brains happen to work. (I do think the principle being proposed here hasn't been stated super clearly, and hasn't been argued for super clearly either, and until that changes it should be contested and argued about rather than taken fully for granted.)
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-03-04T23:44:53.568Z · LW(p) · GW(p)
Note that in the original footnote in my post, "on the same side" is a hyperlink going to a comment by Val
Thanks for pointing this out. (I read Val's comment while writing my post, but unfortunately neglected to add the hyperlink when pasting the text of the footnote into my draft.) I have now edited the link into my post.
the goal isn't to trick people into thinking your disagreements are small, it's to make typical disagreements feel less like battles between warring armies
I think the fact that disagreements often feel like battles between warring armies is because a lot of apparent "disagreements" are usefully modeled as disguised conflicts. That is, my theory about why predictable disagreements are so ubiquitous in human life (despite the fact that Bayesian reasoners can't foresee to disagree) is mostly conflict-theoretic rather than mistake-theoretic [LW · GW].
A simple example: I stole a loaf of bread. A policeman thinks I stole the bread. I claim that I didn't steal the bread. Superficially, this looks like a "disagreement" to an outside observer noticing the two of us reporting different beliefs, but what's actually going on is that I'm lying. Importantly, if I care more about not going to jail than I do about being honest, lying is rational. Agents have an incentive to build maps that reflect the territory because those are the maps that are most useful for computing effective plans ... but they also sometimes have an incentive to sabotage the maps of other agents with different utility functions.
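(A minimal sketch of that incentive calculation, with made-up payoff numbers; the only thing it shows is that once the thief's utility for staying free outweighs their utility for honest reporting, the "disagreement" with the policeman falls directly out of the payoffs.)

```python
# Toy model of the bread-thief's reporting incentive.
# All payoff numbers are invented for illustration; only their ordering matters.

def expected_utility(report_honestly: bool,
                     p_lie_succeeds: float = 0.6,
                     u_freedom: float = 10.0,
                     u_honesty: float = 1.0) -> float:
    """Thief's expected utility for a given reporting strategy."""
    if report_honestly:
        # Confess: keep the value of honesty, lose freedom for sure.
        return u_honesty
    # Lie: forfeit honesty, keep freedom only if the lie isn't seen through.
    return p_lie_succeeds * u_freedom

print(expected_utility(True))   # 1.0
print(expected_utility(False))  # 6.0 -> misreporting maximizes expected utility
```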
Most interesting real-world disagreements aren't so simple as the "one party is lying" case. But I think the moral should generalize: predictable disagreements are mostly due to at least some parts of some parties' brains trying to optimize for conflicting goals, rather than just being "innocently" mistaken.
I'm incredibly worried that approaches to "cooperative" or "collaborative truth-seeking" that try to cultivate the spirit that everyone is on the same side and we all just want to get to the truth, quickly collapse in practice to, "I'll accept your self-aggrandizing lies, if you accept my self-aggrandizing lies"—not because anyone thinks of themselves as telling self-aggrandizing lies, but because that's what the elephant in the brain does by default. I'm more optimistic about approaches that are open to the possibility that conflicts exist, in the hopes that exposing hidden conflicts (rather than pretending they're "disagreements") makes it easier to find Pareto improvements.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-05T20:59:08.137Z · LW(p) · GW(p)
I’m incredibly worried that approaches to “cooperative” or “collaborative truth-seeking” that try to cultivate the spirit that everyone is on the same side and we all just want to get to the truth, quickly collapse in practice to, “I’ll accept your self-aggrandizing lies, if you accept my self-aggrandizing lies”—not because anyone thinks of themselves as telling self-aggrandizing lies, but because that’s what the elephant in the brain does by default.
Very strongly seconding this. (I have noticed this pattern on Less Wrong in the past, in fact, and more than once. It is no idle worry, but a very real thing that already happens.)
comment by Drake Morrison (Leviad) · 2023-02-27T22:01:56.480Z · LW(p) · GW(p)
Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:
- Measure twice before you cut the steel
- Double-check your fittings before you test the engine
- Keep track of which direction the axle is supposed to be turning for the type of engine you are making
- etc.
The point of the guidelines isn't to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable mistakes. The guidelines are about helping people move past those mistakes so they can actually build an engine that has a chance of working.
The point of "rationalist guidelines" isn't to enforce a norm of making particular types of beliefs. They exist to help groups of people stay connected to reality at all. People make consistent, predictable mistakes. The guidelines are for helping people avoid them. Regardless of what those beliefs are.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-27T23:30:25.508Z · LW(p) · GW(p)
Well, for one thing, we might reasonably ask whether these guidelines (or anything sufficiently similar to these guidelines to identifiably be “the same idea”, and not just “generic stuff that many other people have said before”) are, in fact, needed in order for a group of people to “stay connected to reality at all”. Indeed we might go further and ask whether these guidelines do, in fact, help a group of people “stay connected to reality at all”.
In other words, you say: “The guidelines are for helping people avoid [consistent, predictable mistakes]” (emphasis mine). Yes, the guidelines are “for” that—in the sense that they are intended to fulfill the stated function. But are the guidelines good for that purpose? It’s an open question, surely! And it’s one that merely asserting the guidelines’ intent does not do much to answer.
But, perhaps even more importantly, we might, even more reasonably, ask whether any particular guideline is any good for helping a group of people “stay connected to reality at all”. Surely we can imagine a scenario where some of the guidelines are good for that, but others of the guidelines aren’t—yes? Indeed, it’s not out of the question that some of the guidelines are good for that purpose, but others of the guidelines are actively bad for it! Surely we can’t reject that possibility a priori, simply because the guidelines are merely labeled “guidelines for rationalist discourse, which are necessary in order to avoid consistent, predictable mistakes, and stay connected to reality at all”—right?
Replies from: Leviad↑ comment by Drake Morrison (Leviad) · 2023-02-28T20:40:30.338Z · LW(p) · GW(p)
I agree wholeheartedly that the intent of the guidelines isn't enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline?
If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better.
An analogy I keep thinking of is the typescript vs javascript tradeoffs when programming with a team. Unless you have a weird special case, it's just straight up more useful to work with other people's code where the type signatures are explicit. There's less guessing, and therefore fewer mistakes. Yes, there are tradeoffs. You gain better understanding at a slight cost in implementation code.
The thing is, you pay that cost anyway. You either pay it upfront, and people can make smoother progress with fewer mistakes, or they make mistakes and have to figure out the type signatures the hard way.
Either people distinguish between their observations and inferences explicitly, or everyone spends extra time, and makes predictable mistakes, until the participants in the discourse figure out the distinction during the course of the conversation. If they can't, then the conversation doesn't go anywhere on that topic.
I don't see any way of getting around this if you want to avoid making dumb mistakes in conversation. Not every change is an improvement, but every improvement is necessarily a change. If we want to raise the sanity waterline and have discourse that more reliably leads to us winning, we have to change things.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-28T21:18:30.164Z · LW(p) · GW(p)
If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better.
Yes, sure, we shouldn’t throw away the concept; but that’s not at all a reason to start with the presumption that these particular guidelines are any good!
As far as examples go… well, quite frankly, that’s what the OP is all about, right?
An analogy I keep thinking of is the typescript vs javascript tradeoffs when programming with a team.
Apologies, but I am deliberately not responding to this analogy and inferences from it, because adding an argument about programming languages to this discussion seems like the diametric opposite of productive.
comment by Yoav Ravid · 2023-02-26T14:10:11.308Z · LW(p) · GW(p)
That being the normative math, why does the human world's enduringly dominant discourse algorithm take for granted the ubiquity of, not just disagreements, but predictable disagreements?
Well, the paper says disagreement is only unpredictable between agents with the same priors, so it seems like that explains at least part of this?
comment by LVSN · 2023-02-26T20:24:28.664Z · LW(p) · GW(p)
Debate is also inefficient: for example, if the "defense" in the court variant happens to find evidence or arguments that would benefit the "prosecution", the defense has no incentive to report it to the court, and there's no guarantee that the prosecution will independently find it themselves.
Reporting such evidence will make you exceptional among people who typically hold the defense position; it will no longer be fair for people to say of you "well of course the defense would say that either way". And while you may care very much about the conclusion of the debate, you may also expect so strongly that reality will vindicate you that sharing such "harmful" information will bring you no harm.
If my faction is trying to get Society to adopt beliefs that benefit our faction onto the shared map, someone who comes to us role-playing being on our side, but who is actually trying to stop us from adding our beliefs to the shared map just because they think our beliefs don't reflect the territory, isn't a friend; they're a double agent, an enemy pretending to be a friend, which is worse than the honest enemy we expect to face before the judge in the debate hall.
But you'd only want to be on the side that you're on, I hope, because you believed it was The Good Side. Above all, the most important side I want to take over all conflicts is the good side. I think almost everyone would agree on that even if they had not been thinking of it in advance. Those who think they don't want to be on the side of good are defining 'good' without respect for the reflective equilibrium of all their memories, I expect.
A good friend might in fact pretend to be on what you have designated as your nominal side so that they can bring you closer to the good side. If your sincerely joined nominal side is opposed to good then you are worse at being a friend to yourself than someone who is trying to bring you to the good side.
comment by transhumanist_atom_understander · 2023-10-03T01:23:14.119Z · LW(p) · GW(p)
I think as usual with rationality stuff there's a good analogy to statistics.
I'm very happy I never took Stats 101, and instead learned what a p-value was in a math department "Theory of Statistics" class. Because as I understood it, Stats 101 teaches recipes: rules for when a conclusion is allowed. In the math department, I learned properties of algorithms for estimation and decision. There's a certain interesting property of an estimation algorithm for the size of an effect: how large will that estimate be, if the effect is not there? Of a decision rule, you can ask: how often will the decision "effect is there" be made, if the effect is not there?
Frequentist statistical inference is based entirely on properties like these, and sometimes that works, and sometimes it doesn't. But frequentist statistical inference is like a set of guidelines. Whether or not you agree with those guidelines, these properties exist. And if you understand what they mean, you can understand when frequentist statistical inference works decently and when it will act insanely.
I think what statistics, and LessWrong-style rationality have in common, is taking the procedure itself as an object of study. In statistics, it's some algorithm you can run on a spreadsheet. On LessWrong, it tends to be something more vague, a pattern of human behavior.
My experience as a statistician among biologists was, honestly, depressing. One problem was power calculations. People want to know what power to plug into the sample size calculator. I would ask them, what probability are you willing to accept that you do all this work, and find nothing, even though the effect is really there? Maybe the problem is me, but I don't think I ever got any engagement on this question. Eventually people look up what other people are doing, which is 80%. If I ask, are you willing to accept a 20% probability that your work results in nothing, even though the effect you're looking for is actually present, I never really get an answer. What I wanted was not for them to follow any particular rule, like "only do experiments with 80% power", especially since that can always be achieved by plugging in a high enough effect size in the calculation they put in their grant proposal. I wanted them to actually think through whether their experiment will actually work.
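(For concreteness, a minimal sketch of the kind of calculation I mean; the sample size, effect size, and noise level below are placeholders rather than anyone's actual study. The "probability that you find nothing even though the effect is really there" is just one minus the power.)

```python
# Estimate power by simulation under an assumed effect size.
# Placeholder numbers throughout -- the point is the question being asked.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(n_per_group=20, effect=0.5, sd=1.0, alpha=0.05, n_sims=10_000):
    """Fraction of simulated experiments that reach p < alpha when the effect is real."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_group)
        treated = rng.normal(effect, sd, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            hits += 1
    return hits / n_sims

power = simulated_power()
print(f"power ~ {power:.2f}; chance of a null result despite a real effect ~ {1 - power:.2f}")
```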
Another problem: whenever they had complex data but were still just testing for a difference between groups, my answer was always "make up a measure of difference, then do a permutation test". Nobody ever took me up on this. They were looking for a guideline to get it past the reviewers. It doesn't matter that the made-up test has exactly the same guarantee as whatever test they eventually find: it comes up positive only 5% of the time when used in the absence of a real difference. But they don't even know that's the guarantee that frequentist tests come with.
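(A minimal sketch of the "make up a measure of difference, then do a permutation test" recipe; the data and the made-up difference measure below are invented for illustration, since the whole point is that the guarantee holds no matter which measure you choose.)

```python
# Permutation test for an arbitrary, made-up measure of difference between two groups.
import numpy as np

rng = np.random.default_rng(1)

def permutation_test(group_a, group_b, statistic, n_perms=10_000):
    """P-value: how often shuffled group labels give a difference at least as large as observed."""
    observed = statistic(group_a, group_b)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    count = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        if statistic(pooled[:n_a], pooled[n_a:]) >= observed:
            count += 1
    return (count + 1) / (n_perms + 1)

# A made-up measure of difference: the gap between 10%-trimmed means.
def trimmed_mean_gap(a, b, trim=0.1):
    def tmean(x):
        x = np.sort(x)
        k = int(len(x) * trim)
        return x[k:len(x) - k].mean()
    return abs(tmean(a) - tmean(b))

a = rng.normal(0.0, 1.0, 30)  # placeholder data
b = rng.normal(0.8, 1.0, 30)
print(permutation_test(a, b, trimmed_mean_gap))
```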
I don't really get what was going on. I think the biologists saw statistics as some confusing formality where people like me would yell at them if they got it wrong. Whereas if they follow the guidelines, nobody will yell at them. So they come to me asking for the guidelines, and instead I tell them some irrelevant nonsense about the chance that their conclusion will be correct.
I just want people to have the resources to think through whether the process by which they're reaching a conclusion will reach the right conclusion. And use those resources. That's all I guess.
comment by SomeoneYouOnceKnew · 2023-02-26T07:05:14.310Z · LW(p) · GW(p)
As a relatively new person to lesswrong, I agree.
The number of conversations I've read that end in either party noticeably updating one way or the other has been relatively small. The one point I'm not sure I agree with is that being able to predict a particular disagreement is a problem.
I suppose being able to predict the exact way in which your interlocutors will disagree is the problem? If you can foresee someone disagreeing in a particular way, and account for it in your argument, and then they disagree anyway, in the exact way you tried to address, that's generally just bad faith.
(though sometimes I do skim posts, by god)
Replies from: jimmy↑ comment by jimmy · 2023-02-27T06:32:45.290Z · LW(p) · GW(p)
Introducing "arguments" and "bad faith" can complicate and confuse things, and neither are necessary.
As a simple model, say we're predicting whether the next ball drawn from an urn is black, and we've each seen our own set of draws. When I learn that your initial prediction is a higher probability than mine, I can infer that you've seen a higher ratio of black than I have, so in order to take that into account I should increase my own probability of black. But how much? Maybe I don't know how many draws you've witnessed.
On the next iteration, maybe the other person says "Oh shoot, you said 30%? In that case I'm going to drop my guess from 95% to 35%". In that case, they're telling you that they expect you've seen many more draws than they have. Alternatively, they could say "I guess I'll update from 95% to 94%", telling you the opposite. If you knew in advance which side of your new estimate they were likely to end up on, then you could have taken that into account last time, and updated further or less far accordingly, until you can't expect to know what you will learn next time.
If you *know* that they're going to stick to 95% and not update based on your guess, then you know they don't view your beliefs as saying much. If *that* doesn't change your mind and make you think "Wow, they must really know the answer then!" and update to 95%, then you don't view their beliefs as saying much either. When you can predict that beliefs won't update towards convergence, you're predicting a mutual lack of respect and a mutual lack of effort to figure out whose lack of respect is misplaced.
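(A stripped-down sketch of the urn story. One simplifying assumption here is mine and not part of the comment above: each agent knows how many draws the other has seen, so a single honest exchange of estimates lets each back out the other's evidence, and both land on the same pooled answer. The point is just that if reports are actually treated as evidence, the stable outcome is convergence; a predictable failure to converge means the reports aren't being treated as evidence.)

```python
# Two agents share a uniform prior over the urn's black fraction and see private draws.
from fractions import Fraction

def posterior_black(black, total):
    """Posterior mean of P(next draw is black) under a uniform Beta(1,1) prior."""
    return Fraction(black + 1, total + 2)

def infer_black_count(reported, total):
    """Invert an honest report back into the (exact) black-draw count it implies."""
    return int(reported * (total + 2) - 1)

# Private evidence: A saw 2 black in 10 draws, B saw 18 black in 20 draws.
n_a, k_a = 10, 2
n_b, k_b = 20, 18

report_a = posterior_black(k_a, n_a)   # 1/4
report_b = posterior_black(k_b, n_b)   # 19/22

# After hearing each other (and knowing the other's sample size),
# both compute the same pooled posterior: no predictable disagreement remains.
pooled_a = posterior_black(k_a + infer_black_count(report_b, n_b), n_a + n_b)
pooled_b = posterior_black(k_b + infer_black_count(report_a, n_a), n_a + n_b)
print(float(report_a), float(report_b), float(pooled_a), float(pooled_b))
```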
Replies from: SomeoneYouOnceKnew↑ comment by SomeoneYouOnceKnew · 2023-02-27T06:42:33.650Z · LW(p) · GW(p)
When you can predict that beliefs won't update towards convergence, you're predicting a mutual lack of respect and a mutual lack of effort to figure out whose lack of respect is misplaced.
Are you saying that the interlocutors should instead attempt to resolve their lack of mutual respect?
Replies from: jimmy↑ comment by jimmy · 2023-02-27T07:08:10.288Z · LW(p) · GW(p)
Whether it's worth working to resolve any disagreement over appropriate levels of respect is going to depend on the context, but certainly below a certain threshold object level discourse becomes predictably futile. And certainly high levels of respect are *really nice*, and allow for much more efficient communication because people are actually taking each other seriously and engaging with each other's perspective.
There are definitely important caveats, but I generally agree with the idea that mutual respect and the ability to sort out disagreements about the appropriate level of respect are worth deliberately cultivating. Certainly if I am in a disagreement that I'd like to actually resolve and I'm not being taken as seriously as I think I ought to be, I'm going to seek to understand why, and see if I can't pass their "ideological Turing test" on the matter.
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-26T06:51:22.921Z · LW(p) · GW(p)
Just noting that this entire post is an overt strawman; its title and central thesis rest on the exactly backward implication that both Rob's and my posts were based on ungrounded theory when they were both built entirely out of studying and attempting to model what actually works in practice, i.e. what are the observable behaviors of people who actually-in-practice consistently and reliably produce both a) clear thinking and b) effective communication of that clear thinking, in a way that is relatively domain-agnostic. In the analogy of "physicists" vs. "engineers," those posts were not written by physicists.
There are other flaws with it beyond that, but repeated past experience shows that further engagement would be extremely un-worthwhile; I just felt called to note aloud the core confusion-inducing move the author of this post is making, in case other people failed to recognize the spell being cast.
Replies from: SaidAchmiz, Zack_M_Davis↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-26T09:50:51.105Z · LW(p) · GW(p)
[the post’s] title and central thesis rest on the exactly backward implication that both Rob’s and my posts were based on ungrounded theory
Er, where does the OP say this…? I see no such implication. (Indeed, if anything, the OP seems to be saying that the posts in question are based on, so to speak, un-theory’d ground…)
Rob’s and my posts … were both built entirely out of studying and attempting to model what actually works in practice, i.e. what are the observable behaviors of people who actually-in-practice consistently and reliably produce both a) clear thinking and b) effective communication of that clear thinking, in a way that is relatively domain-agnostic.
Well… sure, you can say that. But then… anyone could say that, right? I could write a post that recommended the opposite of any given thing you recommend (e.g., “cultivating an adversarial attitude is good, while cultivating a cooperative attitude leads to worse outcomes”), and I could also claim that this recommendation was “built entirely out of studying and attempting to model what actually works in practice”. And then what would we have? Two competing claims, both backed up by exactly the same thing (i.e., nothing except assertion—“trust me, guys, I know what I’m talking about”), right?
So, Zack (as I understand him) is saying, roughly: “nah, that doesn’t seem like a good guideline, actually, for these-and-such reasons”. Clearly you have a different view, but what is the use of claiming that your recommendation is grounded in experience? I have my own view—and my view is grounded in experience. Zack has his view—and his view is presumably also grounded in experience. We can all claim this, with some justification. Zack is also providing an explanation for his view of the matter. No doubt you disagree with it, and that’s fine, but where is the confusion?
Replies from: Yoav Ravid, Duncan_Sabien↑ comment by Yoav Ravid · 2023-02-26T15:22:22.555Z · LW(p) · GW(p)
There's a question of whether there really is disagreement. If there isn't, then we can both trust that Duncan and Rob really based their guidelines on their experience (which we might also especially appreciate), and notice that it fits our own experience. If there's disagreement then it's indeed time to go beyond saying "it's grounded in experience" and exchange further information.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-26T16:37:55.585Z · LW(p) · GW(p)
Well, I certainly disagree! So, yes, there is disagreement.
Replies from: Yoav Ravid↑ comment by Yoav Ravid · 2023-02-26T19:20:39.232Z · LW(p) · GW(p)
Ok then. I'm glad the last two paragraphs weren't just hypothetical for the sake of devil's advocacy.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-26T17:22:07.090Z · LW(p) · GW(p)
Er, where does the OP say this…? I see no such implication. (Indeed, if anything, the OP seems to be saying that the posts in question are based on, so to speak, un-theory’d ground…)
Literally the title (and then the first few paragraphs).
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-26T19:13:29.927Z · LW(p) · GW(p)
I don’t understand. How does the title imply this? How do the first few paragraphs?
I suppose you could read this implication into the title and introduction, if you were so inclined…? I didn’t, however. I don’t think your claim that “[the post’s] title and central thesis rest on” this implication is well-grounded in what the OP actually says.
↑ comment by Zack_M_Davis · 2023-02-26T18:34:57.961Z · LW(p) · GW(p)
Thanks for commenting!
I don't think the typical reader would interpret the title and opening paragraphs as claiming that you and Rob haven't tried to study and model what works in practice?
My intent was to play off an analogy Yudkowsky made between "rational-ists" (those who study rationality) and "physics-ists" (those who study physics). I'm saying that I don't want the study of rationality itself as a subject matter to be conflated with any particular set of discourse norms, because I think different discourse norms have different use-cases, much like how different motor designs or martial arts have different use-cases. That's totally compatible with you and Rob having put a lot of work into studying and modeling what actually works in practice!
The martial arts analogy seems apt: if I point out that different styles of martial arts exist, I'm not saying that some particular karate master (whose post happened to inspire mine) hasn't tried to study what works. I'm saying that ju-jitsu, boxing, tae kwon do, &c. also exist. The subject matter of "fighting" is bigger than what any one karate master knows.
(We might have a substantive disagreement about this, if you don't think a competing school of "rationalists" could have grounds to contest your guidelines?)
There are other flaws with it beyond that, but repeated past experience shows that further engagement would be extremely un-worthwhile
If you think it would help readers not be misled by my mistakes, feel free to point out the other flaws, too! Writing a comment under my post doesn't put you under any obligation to engage with me.
comment by Ben Pace (Benito) · 2024-12-05T20:36:31.686Z · LW(p) · GW(p)
I disagree with the first half of this post, and agree with the second half.
"Physicist Motors" makes sense to me as a topic. If I imagine it as a book, I can contrast it with other books like "Motors for Car Repair Mechanics" and "Motors for Hobbyist Boat Builders" and "Motors for Navy Contract Coordinators". These would focus on other aspects of motors such as giving you advice for materials to use and which vendors to trust or how to evaluate the work of external contractors, and give you more rules of thumb for your use case that don't rely on a great deal of complex mathematical calculations (e.g. "how to roughly know if a motor is strong enough for your boat as a function of the weight and surface area of the boat"). The "Physicist Motors" book would focus on the math of ideal motors and doing experiments to see the basic laws of physics at play.
Similarly, many places want norms of discourse, or have goals for discourse, and a rationalist focus would connect it to principles of truth-seeking more directly (e.g. in contrast with norms of "YouTube Discourse" or "Playful/Friendly Discourse").
So I don't believe that it is a confused thing to do, to outline practical heuristics or norms for rationalist discourse as opposed to other kinds of discourse or other goals one might have with discourse.
In contrast, this critique seems of a valid type:
"A vague spirit of how to reason and argue" seems like an apt description of what "Basics of Rationalist Discourse" and "Elements of Rationalist Discourse" are attempting to codify—but with no explicit instruction on which guidelines arise from deep object-level principles of normative reasoning, and which from mere taste, politeness, or adaptation to local circumstances
Arguing that the principles/heuristics proposed are in conflict with the underlying laws of probability theory and such is a totally valid kind of critique. And I think the critique of the "goodwill" heuristic is pretty good.
My take is that if you positively vote on Bensinger's "Elements of Rationalist Discourse [LW · GW]" then it makes sense to also upvote this post in the review as it is a counterpoint that has a good critique, but I wouldn't otherwise, as I disagree with the core analogy.
comment by Shmi (shminux) · 2023-02-26T08:41:25.967Z · LW(p) · GW(p)
Hmm, when there is a disagreement somewhere, it is worth going back to first principles, isn't it?
If I remember correctly, Eliezer's motivation for starting the whole series of posts back on Overcoming Bias was "raising the sanity waterline" or something like that. Basically, realizing that you are an imperfect reasoner and striving to see your reasoning flaws and do better. This is an uphill battle: humans did not evolve to reason well at all, and different people have different classes of flaws; some are too combative, some are too accepting; the list is long and well documented all over the internet. Because not everyone fails to be "rational" in the same way, different techniques work better depending on the person, and teaching an unsuitable "rationalist" technique can actually make the person "less rational". Scott Alexander alluded to it in https://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/.
I suspect that the standard rationalist lore and curriculum is aimed at a specific audience, and your argument is that it does not work for a different audience.
comment by romeostevensit · 2023-02-26T16:06:05.301Z · LW(p) · GW(p)
I've personally gotten the most out of people displaying epistemic technique in investigating their own problems so that I have existence proofs for all the myriad spot checks it's possible to run on one's own reasoning.
comment by TekhneMakre · 2023-02-26T07:34:13.626Z · LW(p) · GW(p)
If you're not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?
Because there's ambiguity, and there are self-fulfilling prophecies. When there's potential for self-fulfilling prophecies, there's a free variable that's not a purely epistemic question; e.g. "Are we on the same side?". Giving any answer to that question is in some cases implicitly deciding to add your weight to the existence of a conflict.
You role-play to add some driving force to the system--driving towards fixed points that involve sustained actual discourse. This leads to more correct conclusions. But you're right that this is a very different sort of justification than Bayesian information processing, and needs better theorizing, and is mixed in with behavior that's simply deceptive.
↑ comment by Zack_M_Davis · 2023-03-02T06:04:14.528Z · LW(p) · GW(p)
Then it would appear that we're in a conflict over a shared resource: I want to post "Zack-type" things on Less Wrong—including long-form criticism of other posts on Less Wrong—and (assuming I'm reading your comment correctly; feel free to correct me if not) it seems like you want me to not do that.
It looks like we can't both get what we want at the same time. That's a very unfortunate situation for us to be in. If you have any suggestions for Pareto improvements, I'm listening. I'm not sure what else I can say.
comment by johnlawrenceaspden · 2023-02-27T15:19:58.492Z · LW(p) · GW(p)
A distant relative of mine (I assume, the name is rare), Dr Harold Aspden, by all accounts a well-respected and successful engineer, spent the latter part of his life advocating an 'over-unity motor'.
There are quite a lot of people who think that you can use a system of mirrors to concentrate sunlight in order to achieve temperatures higher than the surface of the sun. I myself am not sufficiently confident that this is impossible to actually be seriously surprised if someone works out a way to do it.
I think 'non-physicist motors' are a thing.
comment by Review Bot · 2024-02-13T18:52:31.586Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by Mo Putera (Mo Nastri) · 2023-02-27T11:31:27.089Z · LW(p) · GW(p)
if there is any way of fixing this mess, it's going to involve clarifying conflicts rather than obfuscating them
This immediately brought to mind John Nerst's erisology. I've been paying attention to it for a while, but I don't see it much here (speaking as a decade-long lurker); I wonder why.
Replies from: Frederic Janssens↑ comment by Frederic Janssens · 2023-03-01T05:31:15.195Z · LW(p) · GW(p)
Thanks for the pointer. John Nerst's approach is similar to mine.
The way I would formulate it here:
De facto, people have different priors.
If there is a debate/discussion, the most fruitful result would come from constructing, in common if possible, a more encompassing reference frame, where both sets of priors can be expressed to their respective satisfaction.
It is not easy. Some priors will be incompatible as such.
A real dialogue presupposes a readiness to examine one's priors and, if necessary, adjust them to be less restrictive.
A static defense of one's priors is mostly a waste of time (or a show).
Caveat: bad faith exists; people and groups have vulnerabilities they will protect. So a real dialogue is not always possible, or is possible only very partially.
The idea is to at least try.
comment by thefirechair · 2023-02-27T02:23:42.702Z · LW(p) · GW(p)
This place uses upvote/downvote mechanics, and authors of posts can ban commenters from writing there... which man, if you want to promote groupthink and all kinds of ingroup hidden rules and outgroup forbidden ideas, that's how you'd do it.
You can see it at work - when a post is upvoted is it because it's well-written/useful or because it's saying the groupthink? When a post is downvoted is it because it contains forbidden ideas?
When you talk about making a new faction - that is what this place is. And naming it Rationalists says something very direct to those who don't agree - they're Irrationalists.
Perhaps looking to other communities is the useful path forward. Over on reddit there's science and also askhistorians. Both have had "scandals" of a sort that resulted in some of the most iron-fisted moderation that site has to offer. The moderators are all in alignment about what is okay and not. Those communities function extremely well because a culture is maintained.
LessWrong has posts where nanites will kill us all. A post where someone is afraid, apparently, of criticizing Bing ChatGPT because it might come kill them later on.
There is moderation here, but I can't help but think of those reddit communities and ask whether a post claiming someone is scared of criticizing Bing ChatGPT should be here at all.
When I read posts like that I think this isn't about rationality at all. Some of them are a kind of written cosplay, hyped-up fiction, which, when it remains, attracts others. Then we end up with someone claiming to be an AI running on a meat substrate... when in fact they're just mentally ill.
I think those posts should have been removed entirely. Same for those gish gallop posts of AI takeover where it's nanites or bioweapons and whatever else.
But at the core of it, they won't be and will remain in the future because the bottom level of this website was never about raising the waterline of sanity - it was AI is coming, it will kill us, and here's all the ways it will kill us.
It's a keystone, a basic building block. It cannot be removed. It's why you see so few posts here saying "hey, AI probably won't kill us and even if something gets out of hand, we'll be able to easily destroy it".
When you have fundamental keystones in a community, sure there will be posts pointing out things but really the options become leave or stay.
Replies from: SomeoneYouOnceKnew↑ comment by SomeoneYouOnceKnew · 2023-02-27T07:59:01.989Z · LW(p) · GW(p)
Do you believe encouraging the site maintainers to implement degamification [LW · GW] techniques on the site would help with your criticisms?
comment by TekhneMakre · 2023-02-26T07:41:45.146Z · LW(p) · GW(p)
How, specifically, are rough-and-tumble spaces less "rational", more prone to getting the wrong answer, such that a list of "Elements of Rationalist Discourse" has the authority to designate them as non-default?
You may be right that this one sticks out and hasn't been abstracted properly. But I do think there are truth-tracking reasons for this that are pretty general. (I think whether these reasons actually hold water is pretty dubious; rough-and-tumble spaces would very plausibly be significantly more truth-tracking than current rationalist norms; I'm just saying that it's not a type error to put this one on the list.)
1. Carrots are maybe less likely to alienate people than sticks. Alienating people decreases the amount of computing power and information that you have available, making it harder to get the truth.
2. Sticks are more likely to cause schisms and moloch traps, where punishment of non-punishment takes flight into coalitions.
comment by TekhneMakre · 2023-02-26T07:35:47.830Z · LW(p) · GW(p)
doesn't stick after I'm not talking to them anymore
"Aim for long-run mental engineering / truth-tracking information processing, not short term appearance of rule-following", or some better version, seems like an important element of truth-tracking discourse.