Sam Altman and Ezra Klein on the AI Revolution 2021-06-27T04:53:17.219Z
Reply to Nate Soares on Dolphins 2021-06-10T04:53:15.561Z
Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems 2021-05-03T04:31:23.547Z
Communication Requires Common Interests or Differential Signal Costs 2021-03-26T06:41:25.043Z
Less Wrong Poetry Corner: Coventry Patmore's "Magna Est Veritas" 2021-01-30T05:16:26.486Z
Unnatural Categories Are Optimized for Deception 2021-01-08T20:54:57.979Z
And You Take Me the Way I Am 2020-12-31T05:45:24.952Z
Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda 2020-12-10T08:30:19.126Z
Scoring 2020 U.S. Presidential Election Predictions 2020-11-08T02:28:29.234Z
Message Length 2020-10-20T05:52:56.277Z
Msg Len 2020-10-12T03:35:05.353Z
Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem 2020-09-17T02:23:58.869Z
Maybe Lying Can't Exist?! 2020-08-23T00:36:43.740Z
Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle 2020-07-14T06:03:17.761Z
Optimized Propaganda with Bayesian Networks: Comment on "Articulating Lay Theories Through Graphical Models" 2020-06-29T02:45:08.145Z
Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning 2020-06-07T07:52:09.143Z
Comment on "Endogenous Epistemic Factionalization" 2020-05-20T18:04:53.857Z
"Starwink" by Alicorn 2020-05-18T08:17:53.193Z
Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis 2020-05-11T06:00:24.836Z
A Book Review 2020-04-28T17:43:07.729Z
Brief Response to Suspended Reason on Parallels Between Skyrms on Signaling and Yudkowsky on Language and Evidence 2020-04-16T03:44:06.940Z
Zeynep Tufekci on Why Telling People They Don't Need Masks Backfired 2020-03-18T04:34:09.644Z
The Heckler's Veto Is Also Subject to the Unilateralist's Curse 2020-03-09T08:11:58.886Z
Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability 2020-02-09T00:34:39.680Z
Book Review—The Origins of Unfairness: Social Categories and Cultural Evolution 2020-01-21T06:28:33.854Z
Less Wrong Poetry Corner: Walter Raleigh's "The Lie" 2020-01-04T22:22:56.820Z
Don't Double-Crux With Suicide Rock 2020-01-01T19:02:55.707Z
Speaking Truth to Power Is a Schelling Point 2019-12-30T06:12:38.637Z
Stupidity and Dishonesty Explain Each Other Away 2019-12-28T19:21:52.198Z
Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think 2019-12-27T05:09:22.546Z
Funk-tunul's Legacy; Or, The Legend of the Extortion War 2019-12-24T09:29:51.536Z
Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk 2019-12-21T00:49:02.862Z
Curtis Yarvin on A Theory of Pervasive Error 2019-11-26T07:27:12.328Z
Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary 2019-11-22T06:18:59.497Z
Algorithms of Deception! 2019-10-19T18:04:17.975Z
Maybe Lying Doesn't Exist 2019-10-14T07:04:10.032Z
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists 2019-09-24T04:12:07.560Z
Schelling Categories, and Simple Membership Tests 2019-08-26T02:43:53.347Z
Status 451 on Diagnosis: Russell Aphasia 2019-08-06T04:43:30.359Z
Being Wrong Doesn't Mean You're Stupid and Bad (Probably) 2019-06-29T23:58:09.105Z
What does the word "collaborative" mean in the phrase "collaborative truthseeking"? 2019-06-26T05:26:42.295Z
The Univariate Fallacy 2019-06-15T21:43:14.315Z
Tal Yarkoni: No, it's not The Incentives—it's you 2019-06-11T07:09:16.405Z
"But It Doesn't Matter" 2019-06-01T02:06:30.624Z
Minimax Search and the Structure of Cognition! 2019-05-20T05:25:35.699Z
Where to Draw the Boundaries? 2019-04-13T21:34:30.129Z
Blegg Mode 2019-03-11T15:04:20.136Z
Change 2017-05-06T21:17:45.731Z
An Intuition on the Bayes-Structural Justification for Free Speech Norms 2017-03-09T03:15:30.674Z
Dreaming of Political Bayescraft 2017-03-06T20:41:16.658Z


Comment by Zack_M_Davis on How to Sleep Better · 2021-07-17T19:49:34.476Z · LW · GW

Strongly recommended: rose-colored glasses are Redshift/f.lux for the whole world, not just screens.

I'm mildly disappointed with my Oura ring, although I still use it and don't actively regret purchasing it; I like having "objective" information on my sleep (gathered from actual sensors on the ring, not just me looking at the clock before going to bed and having to guess how long it took for me to go down), but the distillation from objective sensor data to the information the app shows isn't super high quality: the sleep-stages breakdown barely registers any REM at all (which I don't think is biologically plausible), and sometimes it fails to register periods of sleep (e.g., if I woke up at 5 a.m., but am sure I dozed off again from about 6 to 7:30, the latter doesn't register as part of my night's sleep, although sometimes it gets picked up as "Rest"). It's interesting that I seem to spend less time "objectively" asleep than I thought, while still seeming to function OK: the app claims I'm averaging 4 hours, 9 minutes of sleep in 5 hours, 21 minutes in bed per night, which, while surely an underestimate, is less than I would have expected after taking into account how often it seems I was clearly asleep but the app didn't pick it up. I appreciate the developer API on principle, even if I don't have any practical need for it. (I did write a program to Tweet about my sleep, but that was also just on principle.)

Comment by Zack_M_Davis on Permitted Possibilities, & Locality · 2021-07-04T04:18:15.587Z · LW · GW
  1. Programmers operating with partial insight, create a mind that performs a number of tasks very well, but can't really handle self-modification let alone AI theory [...] This scenario seems less likely to my eyes, but it is not ruled out by any effect I can see.

Twelve and a half years later, does new evidence for the scaling hypothesis make this scenario more plausible? If we're in the position of being able to create increasingly capable systems without really understanding how they work by throwing lots of compute at gradient descent, then won't those systems themselves likely also be in the position of not understanding themselves enough to "close the loop" on recursive self-improvement?

Comment by Zack_M_Davis on Musings on general systems alignment · 2021-07-02T01:27:31.696Z · LW · GW

Yeah, the exaggeration didn't seem like a crux for anything important.

Comment by Zack_M_Davis on Musings on general systems alignment · 2021-07-01T16:32:59.617Z · LW · GW

Okay, I can see that, but as a writing tip for the future, rhetoric in the vein of "We are the great hope of our civilization" looks heavily optimized for the feeling-good-about-group-identification thing, rather than merely noticing the startling fact of being somewhat influential. And the startling fact of being somewhat influential makes it much more critical not to fall into the trap of valuing the group's brand, if the reputational pressures of needing to protect the brand make us worse at thinking.

Comment by Zack_M_Davis on Musings on general systems alignment · 2021-07-01T16:31:57.670Z · LW · GW
Comment by Zack_M_Davis on Musings on general systems alignment · 2021-06-30T22:23:03.324Z · LW · GW

us, this community [...] We are the great hope of our civilization. Us, here, in this community

This kind of self-congratulatory bluster helps no one. You only get credit for having good ideas and effective plans, not being "One of us, the world-saving good guys."

Comment by Zack_M_Davis on [Letter] Imperialism in the Rationalist Community · 2021-06-25T06:10:13.169Z · LW · GW

I have claimed in the past that moving an argument from LW to a private blog constitutes an escalation so hostile the mere threat of doing so constitutes adequate grounds for banning a user from LW

What's the reasoning here?? I usually consider "not on this website" a de-escalation.

Comment by Zack_M_Davis on Reply to Nate Soares on Dolphins · 2021-06-21T04:24:11.165Z · LW · GW

Thanks. I regret letting my emotions get the better of me. I apologize.

Comment by Zack_M_Davis on Reply to Nate Soares on Dolphins · 2021-06-18T00:15:53.977Z · LW · GW

haha yeah

For the record, the poor capitalization and informal tone there (and in the preceding tweet) were intended to be tells that those tweets were still being written from the "shitposting" frame

I just want to say that this "haha yeah" is really disrespectful. Straightening out the so-called "rationalist" community's collective position on the cognitive function of categorization (culminating in January's 10,000-word capstone post "Unnatural Categories Are Optimized for Deception") has been the major project of my life for the past forty months, with dolphins in particular as my specific central example. You don't know how many tears I've cried and how long I've suffered over this.

How would you feel if you sunk forty months of your life into deconfusing a philosophical issue that had huge, life-altering practical stakes for you, and the response to your careful arguments from community authorities was a dismissive "haha yeah"? Would you, perhaps, be somewhat upset?

I've barely been able to accomplish anything at my dayjob for the past ten days because I've been so furious about this. I think I want to develop my unpublished draft reply in progress into a followup post that will more carefully explain the case that paraphyletic categories are doing useful cognitive work, the fact that the colloquial and botanical senses of the word "berry" have coexisted for some time, and my socio-psychological theory of how we got in this absurd situation in the first place.

I don't know how long this will take me to finish. It's possible that I should take a break from this topic for a week—or two—and finish the draft when I'm in a more stable state of mind. But when I do—and I will—if you have any scrap of human decency in your brain, you will not shitpost at me. You will reply with the seriousness implied by the fact that your fellow rationalists and any interested ancestor-simulators are watching you. If your time is sufficiently valuable to you that you have no further interest in this matter without additional incentives, the $2000 cheerful price offer mentioned above will remain open.

This isn't a "pretend to agree with me to appease my untreated mental illness" move. I don't want people to pretend to agree with me if I'm wrong! If I get things wrong and you or others point out the specific things that I'm actually wrong about, that's great! That's how we all become less wrong together. But the process of using the beautiful weapons of reasoned argument to become less wrong together, only works if both sides are being honest; the discourse algorithm doesn't produce accurate maps if one side is allowed to shitpost.

I have the honor to be your obedient servant.

Comment by Zack_M_Davis on Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning · 2021-06-15T05:56:52.971Z · LW · GW

As a longtime Internet ratsphere person, but not a traditional philosophy nerd, the idea [...] never occurred to me.

Are you sure that's not the other way around?? (I don't think Brian Skyrms is a traditional philosopher.)

Comment by Zack_M_Davis on Reply to Nate Soares on Dolphins · 2021-06-13T17:58:19.911Z · LW · GW

(I've drafted a 3000-word reply to this, but I'm waiting on feedback from a friend before posting it.)

Comment by Zack_M_Davis on Reply to Nate Soares on Dolphins · 2021-06-12T20:29:17.507Z · LW · GW

you tend to get a bit worked up sometimes

Well, yes. I've got Something to Protect.

Comment by Zack_M_Davis on Reply to Nate Soares on Dolphins · 2021-06-12T18:51:20.973Z · LW · GW

Thanks, you are right and the thing I originally typed is wrong. I edited the comment.

Comment by Zack_M_Davis on Reply to Nate Soares on Dolphins · 2021-06-11T03:57:27.880Z · LW · GW

Thanks for the reply! (Strong-upvoted.) I've been emotionally trashed today and didn't get anything done at my dayjob, which arguably means I shouldn't be paying attention to Less Wrong, but I feel the need to type this now in the hopes of getting it off my mind so that I can do my dayjob tomorrow.

In your epistemic-status thread, you express sadness at "the fact that nobody's read A Human's Guide to Words or w/e". But, with respect, you ... don't seem to be behaving as if you've read it? Specifically, entry #30 on the list of "37 Ways Words Can Be Wrong" is—I'll quote it in full—

  30. Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list. (Where to Draw the Boundary?)

That is, in 2008, as part of "A Human's Guide to Words", Eliezer Yudkowsky explicitly uses this specific example of whether dolphins are fish, and characterizes the position that dolphins are fish as "playing nitwit games" (!). This didn't seem particularly controversial at the time?

Then, thirteen years later, in the current year, you declare that "The definitional gymnastics required to believe that dolphins aren't fish are staggering" (staggering!), and Yudkowsky retweets you. (In general, retweets are not necessarily endorsements—sometimes people just want to draw attention to some content without further comment or implied approval—but I'm inclined to read this instance as implying approval, partially because this doesn't seem like the kind of thing someone would retweet for attention-without-approval, and partially because of the working relationship between you and Yudkowsky.)

But this is pretty strange, right? It would seem that sometime between 2008 and the current year, the rationalist "party line" (as observed in the public statements of SingInst/MIRI leadership) on whether dolphins are fish shifted from (my paraphrases) "No; despite the surface similarities, that categorization doesn't carve reality at the joints; stop playing nitwit games" to "Yes, because of the surface similarities; those who contend otherwise are the ones playing nitwit games." A complete 180° reversal, on this specific example! Why? What changed? Surely if "cognitively useful categories should carve reality at the joints, and dolphins being fish doesn't do that" was good philosophy in 2008, it should still be good philosophy in 2021?

It would make sense if people's opinions changed due to new arguments—if people's opinions changed because of reasons. Indeed, Yudkowsky's original "stop playing nitwit games" dismissal was sloppy and flawed, and I ended up having the occasion to elaborate on the specific senses in which dolphins both do, and do not, cluster with fish in my 2019 "Where to Draw the Boundaries?"

(Get it? "... Boundaries?", plural, in contrast to "... Boundary?", singular, because I'm talking about how you can legitimately have multiple different category systems depending on which subspace of configuration space is decision-relevant in a particular context.)

But when I look at the thing you posted and Yudkowsky retweeted (even if it was a shitpost, your epistemic-status followup thread still contends "but also y'all know i'm right"), it doesn't look like the party line about dolphins changed because of reasons. You didn't even acknowledge the reversal, despite explicitly lamenting (in the followup thread) that people haven't read "A Human's Guide to Words".

Am I the only one creeped out by this? To illustrate why I'm freaked out—why I've been freaked out to a greater or lesser degree almost constantly for the past five years—imagine that in a fictional 2008, the Singularity Institute for Artificial Intelligence were at war with Eastasia for harboring the terrorist unFriendly AI Emmanuel GoldstAIn. It would make sense if, on 21 November 2014, Luke Muehlhauser were to announce:

We're making some changes! First, we're now going to be the Machine Intelligence Research Institute, or MIRI for short, instead of SingInst. And the reason for that is, the old name is no longer appropriate because we're no longer unambiguously "for Artificial Intelligence" after we figured out that it's probably going to destroy all value in our future lightcone. Second, a leaked pastebin revealed that Emmanuel GoldstAIn is actually being harbored by Eurasia, not Eastasia. Whoops! We'll be winding down our war with Eastasia with the hope to be ready to declare war on Eurasia in time for our winter fundraiser. Third, we're calling it "aligned" instead of "Friendly" AI now, and the reason for that is because Stuart Russell convinced us it's a less goofy name.

That would make sense, because in this story, Luke is acknowledging the changes, and giving reasons for why it's correct for the things to change. If Luke were to just say out of the blue on 21 November 2014 that the war with Eurasia is going well, without any indication that anything had changed for any reason, you would expect someone to notice.

Or, imagine if in 2014, Yudkowsky suddenly started saying the Copenhagen interpretation of quantum mechanics is correct, without acknowledging that anything had changed. That's how weird this is. (Revised: Adele Lopez points out that this is wrong.)

And on this classification-of-dolphins issue (specifically, literally, dolphins in particular), it seems like something has changed, and everyone is pretending not to have noticed. Why? What changed? I have my theory, but I could be biased—I want to hear yours! I want to hear yours in public. Do you have a cheerful price for this? I could go up to $2000 for a public reply.

Comment by Zack_M_Davis on How concerned are you about LW reputation management? · 2021-05-19T04:37:42.642Z · LW · GW

Did ... did you save this table a long time ago?? Weak 3-votes have been gone since February 2020 for privacy reasons.

Comment by Zack_M_Davis on How concerned are you about LW reputation management? · 2021-05-19T04:27:25.357Z · LW · GW

The karma-to-strong-vote-power-mapping can be found in the site's open-sourced codebase, and Issa Rice's alternative viewer has the list of actual user vote-powers.

Comment by Zack_M_Davis on Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda · 2021-05-16T04:23:48.782Z · LW · GW

Okay. I give up. I really liked your 11 May comment, and it made me optimistic that this conversation would lead somewhere new and interesting, but I'm not feeling optimistic about that anymore. (You probably aren't, either.) This was fun, though: thanks! You're very good at what you do!

Comment by Zack_M_Davis on Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda · 2021-05-16T02:12:40.362Z · LW · GW

I'm not sure exactly what distinction you're appealing to

Thanks for asking! More detail: if you're building a communication system to transmit information from one place to another, the signals/codewords you use are arbitrary in the sense that it doesn't matter which you use as long as the receiver of the signals knows what they mean (the conditions under which they are sent).

(Well, the codeword lengths turn out to matter, but not the codewords themselves.)

If I'm publishing a weather report on my website about whether it's "sunny" or "cloudy" today, it doesn't matter whether I give it to you in JSON and English ({"weather": "sunny"}/{"weather": "cloudy"}), or HTML and Spanish <h1>soleado</h1>/<h1>nublado</h1>: whichever one I choose, you can use it to make the same predictions about what you'll experience when you go outside.

In contrast, the choice of where I draw the boundary between what constitutes a "sunny" vs. a "cloudy" day does make a difference to your predictions. What a signal like {"weather": "sunny"} means is different if I only send it when there's not a single cloud in the sky, or if I only don't send it when it's completely overcast.

The choice between Fred and George, or between Uluru and Ayers Rock, is analogous to the difference between {"weather": "sunny"} and <h1>soleado</h1>; I consider the psychology of why a human might prefer the sound of one name over another to be out of scope of the hidden-Bayesian-structure-of-cognition thing I've been trying to talk about for the last thirty-eight months.
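The codeword-versus-boundary distinction above can be made concrete with a minimal sketch (my own illustration for this comment; the codebooks and the cloud-cover threshold numbers are made up for the example):

```python
# Two "arbitrary" codebooks for the same underlying weather category.
# Which codebook you use doesn't matter, so long as the receiver knows it:
json_english = {"sunny": '{"weather": "sunny"}', "cloudy": '{"weather": "cloudy"}'}
html_spanish = {"sunny": "<h1>soleado</h1>", "cloudy": "<h1>nublado</h1>"}

def decode(signal, codebook):
    # Invert the codebook: the receiver recovers the same category either way.
    return {v: k for k, v in codebook.items()}[signal]

# Same information transmitted, regardless of which codewords were chosen:
assert decode('{"weather": "sunny"}', json_english) == "sunny"
assert decode("<h1>soleado</h1>", html_spanish) == "sunny"

# In contrast, the category boundary is not arbitrary: "sunny" means something
# different if it's only sent below 10% cloud cover than if it's sent at
# anything short of total overcast.
def encode(cloud_cover, threshold):
    return "sunny" if cloud_cover < threshold else "cloudy"

# The same sky (50% cloud cover) gets different labels under different
# boundaries, so the receiver's predictions about the world differ:
assert encode(0.5, threshold=0.1) == "cloudy"
assert encode(0.5, threshold=1.0) == "sunny"
```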

being asked to use pronouns that you find inappropriate for the people they refer to, which you say amounts to asking you to lie

Not "lying" exactly, but rather that actually-existing-English-speakers naturally interpret she and he as conveying sex-category information, even if that would seem like a weird or bad design choice if we were somehow in the position of designing a natural language from scratch. (You could propose that it shouldn't work that way, but the language is already "widely deployed"; the act of proposing a change doesn't automatically update how 370,000,000 people interpret their native language.)

That's why trans people care about being referred to with the correct pronoun in the first place! If the pronouns didn't convey sex-category information and you could just choose one or the other arbitrarily with no difference in meaning, then there would be no reason to care, unless you had an æsthetic preference for the voiceless palato-alveolar fricative, or for words with two letters rather than three.

because in the particular case that sparked this particular discussion that didn't happen

Yes, it did: did you see the "Related: Timeless Slate Star Codex / Astral Codex Ten piece" link at the bottom of the post? That's why I commented. Do I feel a little guilty now that the OP author expressed dissatisfaction with the semi-derailed thread? A little! But, ultimately, I think it's morally right to pay the cost of being a little annoying from time to time to try to halt the spread of this bonkers meme that continues to exert influence to this day! (I'm guessing that in the counterfactual where "... Not Man for the Categories" was never published, but Eukaryote still wrote the (excellent) post about trees, it wouldn't have contained the advice to "Acknowledge that all of our categories are weird and a little arbitrary", which is a very different and (I claim) much worse pedagogical emphasis from what was laid out in the Sequences.)

Eliezer Yudkowsky once wrote about dark side epistemology: wrong lessons about how to think that people only have an incentive to invent in order to force a conclusion that they can't get on the merits. That's what I think is happening here: if we actually had magical sex-change technology such that people who wanted to change sex could do so, then everyone else would use the corresponding language because it was straightforwardly true, and no one would have invented this deranged "gerrymandered categories are Actually Fine" argument in the first place!

neither rationality nor rationalism includes any obligation to optimize your words for the exact same thing as your thoughts.

But—don't you want the language you speak to your friends to be the same as the language you use to organize your own thoughts? How can you accept a wall between the world you see, and the world you're allowed to talk about? Doesn't your soul die a little bit?

"person whose gender-related characteristics are, collectively, more like those of the average woman than like those of the average man". (That definition is kinda-circular but in a way that does no actual harm

Tangential, but—what's even the motivation for the circularity here? What's wrong with "adult human female"?

Okay, I get it: we want to be trans-inclusive. But the clean way to do that is, "Adult human females, plus sufficiently successful male mimics thereof in the context that I'm using the word." (See the explanation of mimicry and story about robot ducks in "... Optimized for Deception.") We can accommodate mimics in the domain in which their mimicry is successful without trashing our ability to acknowledge the existence of the original thing!

ambiguities around the edges unless the person using that word tells you explicitly and in detail where they draw the boundaries. [...] exactly how "towns" shade into "villages" and "cities" for me [...] how "beautiful" shades into weaker terms like "pretty" or "sexy" or "elegant", or how I categorize turquoise-ish colours

Sex actually seems significantly disanalogous to all of these examples, because municipality size, beauty, and color are all continuous (you know, like, for all ε, there exists a δ, such that if |color1 − color2| < δ, then |greenness(color1) − greenness(color2)| < ε), whereas sex is functionally binary: there's a morph that produces eggs, and a morph that produces sperm, but no continuum between them that produces a continuum of intermediate-sized gametes. The existence of various intersex conditions (which some authors call "disorders of sex development"), and cross-sex hormone replacement therapy, don't substantially change this picture, because they're the result of some specific thing "going wrong" with one of the two-and-only-two evolved developmental processes: you end up with various DSDs and females-on-masculinizing-HRT and males-on-feminizing-HRT each being their own tiny clusters in configuration space that you can draw a category boundary around. True, there is going to be a continuum of HRT dosages (or, say, the degree of how severe a polycystic ovary syndrome case is, if you want to count that), but my point is that the taxonicity means that we don't need to specify edge cases in detail in order to stick a category boundary between the taxons: males-on-feminizing-HRT aren't part of the female taxon! They just aren't!
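A quick numerical sketch of the taxonicity point (my own illustration, with made-up Gaussian clusters rather than real biological data): when a trait is bimodal with a gap between the clusters, a single boundary stuck in the gap classifies almost everything correctly, without any need to adjudicate edge cases in detail—unlike a genuinely continuous trait.

```python
import random

random.seed(0)
# Two well-separated morphs (hypothetical trait values; units arbitrary):
morph_a = [random.gauss(-3, 1) for _ in range(1000)]
morph_b = [random.gauss(3, 1) for _ in range(1000)]

# Stick the category boundary in the gap between the two clusters:
boundary = 0.0
misclassified = (sum(x >= boundary for x in morph_a)
                 + sum(x < boundary for x in morph_b))

# With taxonic (bimodal) structure, ambiguous cases are vanishingly rare;
# a continuous trait like municipality size has no such gap to exploit.
assert misclassified / 2000 < 0.01
```

(The clusters here sit three standard deviations on either side of the boundary, so the overlap is a fraction of a percent; the specific numbers are made up, but that's the structure a "taxon" claim asserts.)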

whether that person would prefer to be treated as male or female.

What does that even mean? (Also, isn't that kind of sexist?) If Official Gender doesn't matter once you get to know someone, why would people have this mysterious deep-seated preference to be "treated as male or female"?

Comment by Zack_M_Davis on There’s no such thing as a tree (phylogenetically) · 2021-05-15T19:26:59.350Z · LW · GW

Yeah, not-loving the way this thread turned out makes sense. Sorry. Please make sure to downvote any comments that you think are bad.

Comment by Zack_M_Davis on There’s no such thing as a tree (phylogenetically) · 2021-05-15T02:43:39.078Z · LW · GW

Let's take this to my containment thread.

Comment by Zack_M_Davis on Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda · 2021-05-15T02:43:18.571Z · LW · GW

(A reply to gjm, split off from the comments on "There's No Such Thing as a Tree")

would you care either to argue for that principle or explain what weaker principle you are implicitly appealing to here?

No, not really. What actually happened here was, I was annoyed at being accused of not understanding something I've been obsessively explaining and re-explaining for multiple years—notice the uncanny resemblance between your comment ("If I and the people I need to talk to about pumpkins spend our days [...]") and one of my replies from March 2019 (!) to you (!!) ("If I want to use sortable objects [...] If I'm running a factory [...]")—so I fired off a snippy reply daring you to engage with my latest work. It wasn't very principled. (It worked, though!)

Affirm your summary points 1–6.

Suppose someone's legal name, given by their parents, is George, but they hate the way that sounds [...] Suppose our interlocutor actually thinks his name is Fred [...] the big rock in Australia named Uluru, but your interlocutor is an Englishman stuck in the past who insists that it is, and must always be, called Ayers Rock

But these examples are all about which proper name to use, which is not the philosophy-of-language subtopic I've been writing about at all! The communicative function of proper names (an arbitrary symbol/"pointer" to refer to some entity) is different from the cognitive function of categories (whereby different entities are considered instances of the "same kind" of thing)! Why did any of these examples seem relevant to you?

where everyone else seems to be drawing them

Yes, I wrote about this in "Schelling Categories, and Simple Membership Tests". (Pet-topic object-level application: "Self-Identity Is a Schelling Point".)

I no longer think that's quite what's going on, but I do think you're objecting to more than your more nuanced analyses of category boundaries (e.g., in UCAOFD) justify

I think the actual pouncing algorithm is, "If someone favorably cites Scott Alexander's 'The Categories Were Made for Man, Not Man for the Categories', then pounce."

I don't feel guilty about this because that post is utterly pants-on-fire mendacious. To Scott's credit, the disinformation situation there is at least slightly less bad after he added the edit-note at the bottom after I spent Christmas Day 2019 yelling at him, but I think most readers will fail to notice how much the edit-note undermines the grand moral of the post: if "[i]n most cases plausible definitions will be limited to a few possibilities suggested by the territory" (as the edit-note finally admits), then it's not true that one "ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female [...] There's no rule of rationality saying that I shouldn't" (as the main text claims). There are rules!

Comment by Zack_M_Davis on There’s no such thing as a tree (phylogenetically) · 2021-05-11T20:53:49.782Z · LW · GW

the point is that all these things require some sort of notion of distance, size, etc., in concept-space.

I agree. Did ... did you read "Unnatural Categories Are Optimized for Deception"? The post says this very explicitly in quite a lot of detail with specific numerical examples! (Ctrl-F for "metric".)

If you're going to condescend to me like this, I think I deserve an answer: did you read the post, yes or no? I know, it's kind of long (just under 10,000 words). But ... if you're going to put in the effort to write 500 words allegedly disproving what I "keep saying", isn't it worth ... actually reading what I say?

Comment by Zack_M_Davis on There’s no such thing as a tree (phylogenetically) · 2021-05-08T19:39:05.246Z · LW · GW

Or I'm speaking a slightly different dialect of English from you?? As a point of terminology, I think "fuzzy" is a better word than "arbitrary" for this kind of situation, where I agree that, as a human having a casual conversation, my response to "Is a pumpkin a fruit?" is usually going to be something like "Whatever; if it matters in context, I'll ask for more specifics", but as a philosopher of science, I claim that there are definite mathematical laws governing the relationship between what communication signals are sent, and what probabilistic inferences a receiver can draw, and the laws permit things like soft k-means clustering, where given some set of data points representing data about plants, the algorithm could say that this-and-such plant has a membership coefficient of 0.34 in the "shrub" cluster and 0.66 in the "tree" cluster, and there would be nothing arbitrary about those numbers as the definite, precise result of what happens when you run this particular clustering algorithm against that particular data. (But the number 0.34 in this blog comment is arbitrary, because I made it up for concreteness while trying to explain what fuzzy clustering is; there's no reason I couldn't have chosen a different coefficient.)
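For concreteness, here's a sketch of the kind of membership computation I have in mind (my own illustration; the cluster centers and the 5.0 datum are made up): standard fuzzy c-means memberships with fuzzifier m = 2. Given fixed cluster centers, the coefficients are a deterministic function of the data—fuzzy, but not arbitrary.

```python
def memberships(x, centers, m=2):
    """Fuzzy c-means membership coefficients of point x in each cluster."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:  # point sits exactly on a center
        return [1.0 if di == 0.0 else 0.0 for di in d]
    # u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
    return [1 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(len(centers)))
            for i in range(len(centers))]

# A hypothetical plant's height datum, relative to a "shrub" cluster centered
# at 2m and a "tree" cluster centered at 10m:
u_shrub, u_tree = memberships(5.0, centers=[2.0, 10.0])
assert abs(u_shrub + u_tree - 1.0) < 1e-9  # coefficients sum to 1
assert u_shrub > u_tree  # 5m is closer to the shrub center (≈ 0.74 vs. 0.26)
```

Run the same algorithm against the same data and you get the same coefficients every time; the only discretion was in choosing the algorithm and its inputs, which is a different (and inspectable) kind of "arbitrary".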

But then when I actually look up "arbitrary" and "fuzzy" on Wiktionary, it seems common usage is not unequivocally on my side: your usage of arbitrary fits with the first part of definition 1 ("Based on individual discretion or judgment"), whereas my usage is centered on the second part of definition 1 ("not based on any objective distinction, perhaps even made at random"), with influence from the mathematician's usage, definition 3 ("Any, out of all that are possible"). And the meaning of fuzzy I want barely even makes the list as a technical reference ("Employing or relating to fuzzy logic") ...

Agreed on weirdness.

Comment by Zack_M_Davis on There’s no such thing as a tree (phylogenetically) · 2021-05-04T14:51:25.581Z · LW · GW

On the specific example of trees, John Wentworth recently pointed out that neural networks tend to learn a "tree" concept: a small, local change to the network can add or remove trees from generated images. That kind of correspondence between human and unsupervised (!) machine-learning model concepts is the kind of thing I'd expect to happen if trees "actually exist", rather than trees being weird and a little arbitrary. (Where things are closer to "actually existing" rather than being arbitrary when different humans and other AI architectures end up converging on the same concept in order to compress their predictions.)

(Now I'm wondering if there's some sort of fruitful analogy to be made between convergence of tree concepts in different maps, and convergent evolution in the territory; in some sense, the fact that evolution keeps rediscovering the tree strategy makes them less "arbitrary" than if trees had only been "invented once" and all descended from the same ur-tree ...)

Comment by Zack_M_Davis on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-03T16:48:07.063Z · LW · GW

I expected you to realize how wrong everything you said was

What parts, specifically, are wrong? What is the evidence that shows that those parts are wrong? Please tell me! If I'm wrong about everything, I want to know!

Comment by Zack_M_Davis on There’s no such thing as a tree (phylogenetically) · 2021-05-03T04:54:10.840Z · LW · GW

Acknowledge that all of our categories are weird and a little arbitrary

That is not the moral! The moral is that the cluster-structure of similarities induced by phylogenetic relatedness exists in a different subspace from the cluster-structure of similarities induced by convergent evolution! (Where the math jargon "subspace" serves as a precise formalization of the idea that things can be similar in some aspects ("dimensions") while simultaneously being different in other aspects.) This shouldn't actually be surprising if you think about what the phrase "convergent evolution" means!

For more on the relevant AI/philosophy-of-language issues, see "Where to Draw the Boundaries?" and "Unnatural Categories Are Optimized for Deception".

Comment by Zack_M_Davis on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T20:06:36.735Z · LW · GW

actual trans people, or perverts willing to pretend to be trans if it allows them to sneak into female toilets

It gets worse: if the dominant root cause of late-onset gender dysphoria in males is actually a paraphilic sexual orientation, this is a false dichotomy! (It's not "pretending" if you sincerely believe it.)

Comment by Zack_M_Davis on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T17:51:14.516Z · LW · GW

So, I started writing an impassioned reply to this (draft got to 850 words), but I've been trying to keep my culture war efforts off this website (except for the Bayesian philosophy-of-language sub-campaign that's genuinely on-topic), so I probably shouldn't take the bait. (If nothing else, it's not a good use of my time when I have lots of other things to write for my topic-specific blog.)

If I can briefly say one thing without getting dragged into a larger fight, I would like to note that aggressively encouraging people to consider whether they might be trans is potentially harmful if the popular theory of what "trans" is, is actually false; even if you're a liberal who wants people to have the freedom to decide how to live their lives unencumbered by oppressive traditions, people might make worse decisions in an environment full of ideologically-fueled misinformation. (I consider trans activism to have been extremely harmful to me and people like me on this account.)

Comment by Zack_M_Davis on A Brief Review of Current and Near-Future Methods of Genetic Engineering · 2021-04-13T18:15:00.346Z · LW · GW

The effective altruist case for regime change??

Comment by Zack_M_Davis on Why We Launched LessWrong.SubStack · 2021-04-01T16:39:29.790Z · LW · GW

Has anyone tried buying a paid subscription? I would assume the payment attempt just fails unless your credit card has a limit over $60,000, but I'm scared to try it.

Comment by Zack_M_Davis on On future people, looking back at 21st century longtermism · 2021-03-23T01:21:21.336Z · LW · GW

I imagine them going: "Whoa. Basically all of history, the whole thing, all of everything, almost didn't happen."

But this kind of pre-many-worlds thinking is already obsolete. It won't be that it "almost" didn't happen; it's that it mostly didn't happen. (The future will have the knowledge and compute to say what the distribution of outcomes was for a specified equivalence class of Earth-analogues across the multiverse.)

Comment by Zack_M_Davis on Unnatural Categories Are Optimized for Deception · 2021-03-20T01:03:19.862Z · LW · GW

This gave me a blog story idea!

Comment by Zack_M_Davis on Viliam's Shortform · 2021-03-19T23:39:37.853Z · LW · GW

YouTube lets me watch the video (even while logged out). Is it a region thing?? (I'm in California, USA). Anyway, the video depicts

dirt, branches, animals, &c. getting in Rapunzel's hair as it drags along the ground in the scene where she's frolicking after having left the tower for the first time, while Flynn Rider offers disparaging commentary for a minute, before declaring, "Okay, this is getting weird; I'm just gonna go."

If you want to know how it really ends, check out the sequel series!

Comment by Zack_M_Davis on Unnatural Categories Are Optimized for Deception · 2021-03-19T21:22:08.227Z · LW · GW

So, I like this, but I'm still not sure I understand where features come from.

Say I'm an AI, and I've observed a bunch of sensor data that I'm representing internally as the points (6.94, 3.96), (1.44, -2.83), (5.04, 1.1), (0.07, -1.42), (-2.61, -0.21), (-2.33, 3.36), (-2.91, 2.43), (0.11, 0.76), (3.2, 1.32), (-0.43, -2.67).

The part where I look at this data and say, "Hey, these datapoints become approximately conditionally independent if I assume they were generated by a multivariate normal with mean (2, -1), and covariance matrix [[16, 0], [0, 9]][1]; let me allocate a new concept for that!" makes sense. (In the real world, I don't know how to write a program to do this offhand, but I know how to find what textbook chapters to read to tell me how.)

But what about the part where my sensor data came to me already pre-processed into the list of 2-tuples?—how do I learn that? Is it just, like, whatever transformations of a big buffer of camera pixels let me find conditional independence patterns probably correspond to regularities in the real world? Is it "that easy"??

  1. In the real world, I got those numbers from the Python expression ', '.join(str(d) for d in [(round(normal(2, 4), 2), round(normal(-1, 3), 2)) for _ in range(10)]) (using scipy.random.normal). ↩︎
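(For concreteness, the "allocate a new concept for that" step can be sketched in a few lines. This is just plain maximum-likelihood parameter estimation for an axis-aligned Gaussian, not a full test of conditional independence.)

```python
# Sketch: maximum-likelihood estimates of the mean and per-axis variance
# from the ten sample points quoted above.
points = [(6.94, 3.96), (1.44, -2.83), (5.04, 1.1), (0.07, -1.42),
          (-2.61, -0.21), (-2.33, 3.36), (-2.91, 2.43), (0.11, 0.76),
          (3.2, 1.32), (-0.43, -2.67)]

n = len(points)
mean = [sum(p[i] for p in points) / n for i in range(2)]
var = [sum((p[i] - mean[i]) ** 2 for p in points) / n for i in range(2)]
# With only ten samples, these are noisy estimates of the true mean (2, -1)
# and the true diagonal covariance (16, 9).
print(mean, var)
```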

Comment by Zack_M_Davis on Unnatural Categories Are Optimized for Deception · 2021-03-19T18:18:35.800Z · LW · GW

(Thinking out loud about how my categorization thing will end up relating to your abstraction thing ...)

200-word recap of my thing: I've been relying on our standard configuration space metaphor, talking about running some "neutral" clustering algorithm on some choice of subspace (which is "value-laden" in the sense that what features you care about predicting depends on your values). This lets me explain how to think about dolphins: they simultaneously cluster with fish in one subspace, but also cluster with other mammals in a different subspace, no contradiction there. It also lets me explain what's wrong with a fake promotion to "Vice President of Sorting": the "what business cards say" dimension is a very "thin" subspace; if it doesn't cluster with anything else, then there's no reason to care. As my measurement of what makes a cluster "good", I'm using the squared error, which is pretty "standard"—that's basically what, say, k-means clustering is doing—but also pretty ad hoc: I don't have a proof of why squared error and only squared error is the right calculation to be doing given some simple desiderata, and it probably isn't. (In contrast, we can prove that if you want a monotonic, nonnegative, additive measure of information, you end up with entropy: the only free choice is the base of the logarithm.)
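(The dolphin point can be made concrete with a toy computation. The feature names and 0/1 values are invented for illustration, not drawn from any real dataset.)

```python
import math

# Made-up feature vectors: (warm-blooded-ish, live-birth-ish,
# fin-shape-ish, lives-in-water-ish).
animals = {
    "dolphin": (1.0, 1.0, 1.0, 1.0),
    "cow":     (1.0, 1.0, 0.0, 0.0),
    "salmon":  (0.0, 0.0, 1.0, 1.0),
}

def dist(a, b, dims):
    # Euclidean distance restricted to a chosen subspace of feature indices.
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in dims))

physiology = (0, 1)     # the "mammal-ish" subspace
shape_habitat = (2, 3)  # the "fish-ish" subspace

d, c, s = animals["dolphin"], animals["cow"], animals["salmon"]
# In the physiology subspace, dolphins cluster with cows ...
print(dist(d, c, physiology) < dist(d, s, physiology))        # True
# ... but in the shape/habitat subspace, dolphins cluster with salmon.
print(dist(d, s, shape_habitat) < dist(d, c, shape_habitat))  # True
```

Same animals, same territory; which clustering you get depends on which subspace you project into.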

What I'm hearing from the parent and your reply to my comment on "... Ad Hoc Mathematical Definitions?": talking about looking for clusters in some pre-chosen subspace of features is getting the actual AI challenge backwards. There are no pre-existing features in the territory; rather, conditional-independence structure in the territory is what lets us construct features such that there are clusters. Saying that we want categories that cluster in a "thick" subspace that covers many dimensions is like saying we want to measure information with "a bunch of functions like X², sin(Y), eˣ, &c., and require that those also be uncorrelated": it probably works, but there has to be some deeper principle that explains why most of the dimensions and ad hoc information measures agree, why we can construct a "thick" subspace.

To explain why "squiggly", "gerrymandered" categories are bad, I said that if you needed to make a decision that depended on how big an integer is, categorizing by parity would be bad: the squared-error score quantifies the fact that 2 is more similar to 3 than 12342. But notice that the choice of feature (the decision quality depending on magnitude, not parity) is doing all the work: 2 is more similar to 12342 than 3 in the mod-2 quotient space!

So maybe the exact measure of "closeness" in the space (squared error, or whatever) is a red herring, an uninteresting part of the problem?—like the choice of logarithm in the definition of entropy. We know that there isn't any principled reason why base 2 or base e is better than any others. It's just that we're talking about how uncertainty relates to information, so if we use our standard representation of uncertainty as probabilities from 0 to 1 under which independent events multiply, then we have a homomorphism from multiplication (of probability) to addition (of information), which means you have to pick a base for the logarithm if you want to work with concrete numbers instead of abstract nonsense.

If this is a good analogy, then we're looking for some sort of deeper theorem about "closeness" and conditional independence "and stuff" that explains why the configuration space metaphor works—after which we'll be able to show that the choice of metric on the "space" will be knowably arbitrary??

Comment by Zack_M_Davis on What's So Bad About Ad-Hoc Mathematical Definitions? · 2021-03-16T06:58:06.088Z · LW · GW

This is related to something I never quite figured out in my cognitive-function-of-categorization quest. How do we quantify how good a category is at "carving reality at the joints"?

Your first guess would be "mutual information between the category-label and the features you care about" (as suggested in the Job parable in April 2019's "Where to Draw the Boundaries?"), but that actually turns out to be wrong, because information theory has no way to give you the "partial credit" for getting close to the right answer that we want. Learning whether a number between 1 and 10 inclusive is even or odd gives you the same amount of information (1 bit) as learning whether it's over or under 5½, but if you need to make a decision whose goodness depends continuously on the magnitude of the number, then the high/low category system is useful and the even/odd system is not: we care about putting probability-mass "close" to the right answer, not just assigning more probability to the exact answer.
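(The even/odd vs. high/low point can be checked directly: both partitions carry exactly one bit, but their expected squared errors differ. A sketch following the numbers in the text.)

```python
import math

numbers = list(range(1, 11))

def entropy(partition):
    # Shannon entropy (bits) of which cell a uniformly random number lands in.
    n = sum(len(cell) for cell in partition)
    return -sum((len(c) / n) * math.log2(len(c) / n) for c in partition)

def expected_squared_error(partition):
    # After learning the cell, guess the cell mean; average the squared error.
    n = sum(len(cell) for cell in partition)
    total = 0.0
    for cell in partition:
        mean = sum(cell) / len(cell)
        total += sum((x - mean) ** 2 for x in cell)
    return total / n

even_odd = [[x for x in numbers if x % 2 == 0], [x for x in numbers if x % 2 == 1]]
low_high = [[x for x in numbers if x <= 5], [x for x in numbers if x > 5]]

print(entropy(even_odd), entropy(low_high))   # both exactly 1.0 bit
print(expected_squared_error(even_odd))       # 8.0
print(expected_squared_error(low_high))       # 2.0
```

Information-theoretically the two category systems are indistinguishable; the squared-error score is what registers that high/low puts your guess close to the truth and even/odd doesn't.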

In January 2021's "Unnatural Categories Are Optimized for Deception", I ended up going with "minimize expected squared error (given some metric on the space of features you care about)", which seems to work, but I didn't have a principled justification for that choice, other than it solving my partial-credit problem and it being traditional. (Why not the absolute error? Why not exponentiate this feature and then, &c.?)

Another possibility might have been to do something with the Wasserstein metric, which reportedly fixes the problem of information theory not being able to award "partial credit". (The logarithmic score is the special case of the Kullback–Leibler divergence when the first distribution assigns Probability One to the actual answer, so if there's some sense in which Wasserstein generalizes Kullback–Leibler for partial credit, then maybe that's what I want.)
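(A toy illustration of the partial-credit property, with a hand-rolled one-dimensional computation rather than any particular library, and made-up distributions: the log score can't distinguish two predictions that both assign zero probability to the truth, but the Wasserstein distance can.)

```python
# Two predictive distributions over integers, both assigning zero
# probability to the true answer 7: one concentrated nearby, one far away.
true_answer = 7
near_miss = {6: 0.5, 8: 0.5}
far_miss = {1: 0.5, 2: 0.5}

def wasserstein_to_point(dist, point):
    # In one dimension, the W1 distance between a distribution and a point
    # mass at `point` reduces to the expected absolute error E|X - point|.
    return sum(p * abs(x - point) for x, p in dist.items())

print(wasserstein_to_point(near_miss, true_answer))  # 1.0
print(wasserstein_to_point(far_miss, true_answer))   # 5.5
# The log score, by contrast, is -log(0) = infinity for both predictions:
# no partial credit for being close.
```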

My intuition doesn't seem adequate to determine which (or something else) formalization captures the true nature of category-goodness, to which other ideas are a mere proxy.

Comment by Zack_M_Davis on Trapped Priors As A Basic Problem Of Rationality · 2021-03-13T04:43:20.538Z · LW · GW

Maybe it's unfortunate that the same word is overloaded to cover "prior probability" (e.g., probability 0.2 that dogs are bad), and "prior information" in the sense of "a mathematical object that represents all of your starting information plus the way you learn from experience."

Comment by Zack_M_Davis on Where does the phrase "central example" come from? · 2021-03-12T06:20:29.139Z · LW · GW

Implied by "the noncentral fallacy"? (I'm surprised at the search engine results (Google, DuckDuckGo); I didn't realize this was a Less Wrong-ism.)

Comment by Zack_M_Davis on Defending the non-central fallacy · 2021-03-10T06:11:14.083Z · LW · GW

And a more natural clustering would reflect that.

What subspace are you doing your clustering in, though? Both the pro-capital-punishment and anti-capital-punishment side should be able to agree that capital punishment and "central" murder are similar in the "intentional killing of a human" aspects, but differ in the "motives and decision mechanism of the killer" aspects (where the "central" murderer is an individual, rather than a judicial institution). Each side has an incentive to try to bind the murder codeword in their shared language to a subspace that makes their own side's preferred policy look natural.

Comment by Zack_M_Davis on Unconvenient consequences of the logic behind the second law of thermodynamics · 2021-03-07T19:34:01.059Z · LW · GW

if entropy is decreasing maybe your memory is just working "backwards"

I think the key to the puzzle is likely to be here: there's likely to be some principled reason why agents embedded in physics will perceive the low-entropy time direction as "the past", such that it's not meaningful to ask which way is "really" "backwards".

Comment by Zack_M_Davis on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2021-03-03T04:12:07.346Z · LW · GW

Cade Metz hadn't had this much trouble with a story in years. Professional journalists don't get writer's block! Ms. Tam had rejected his original draft focused on the subject's early warnings of the pandemic. Her feedback hadn't been very specific ... but then, it didn't need to be.

For contingent reasons, the reporting for this piece had stretched out over months. He had tons of notes. It shouldn't be hard to come up with a story that would meet Ms. Tam's approval.

The deadline loomed. Alright, well, one sentence at a time. He wrote:

In one post, he aligned himself with Charles Murray, who proposed a link between race and I.Q. in "The Bell Curve."

Metz asked himself: Is this statement actually and literally true?

Yes! The subject had aligned himself with Charles Murray in one post: "The only public figure I can think of in the southeast quadrant with me is Charles Murray.".

In another, he pointed out that Mr. Murray believes Black people "are genetically less intelligent than white people."

Metz asked himself: Is this statement actually and literally true?

Yes! The subject had pointed that out in another post: "Consider Charles Murray saying that he believes black people are genetically less intelligent than white people."

Having gotten started, the rest of the story came out easily. Why had he been so reluctant to write the new draft, as if in fear of some state of sin? This was his profession—to seek out all the news that's fit to print, and bring it to the light of the world!

For that was his mastery.

Comment by Zack_M_Davis on Anna and Oliver discuss Children and X-Risk · 2021-02-27T20:35:24.596Z · LW · GW

being the-sort-of-person-who-chooses-to-have-kids

What years were most of these biographies about? Sexual marketplace and family dynamics have changed a lot since, say, 1970ish. (Such that a lot of people today who don't think of themselves as the-sort-of-person-who-chooses-to-have-kids would absolutely be married with children had someone with their genotype grown up in an earlier generation.)

Comment by Zack_M_Davis on Anna and Oliver discuss Children and X-Risk · 2021-02-27T19:32:31.906Z · LW · GW

Two complementary pro-natalist considerations I'd like to see discussed:

  • Eugenics! It doesn't seem like there are any technical barriers to embryo selection for IQ today. If longtermist parents disproportionately become early adopters of this tech in the 2020s, could that help their children be a disproportionate share of up-and-coming AI researchers in the 2040s?

  • Escaping our Society's memetic collapse. We are the children of a memetic brood-parasite strategy. It's a lot easier to recruit new longtermists out of universal culture than it is from Mormonism, but universal culture triumphed not because its adherents had more children than everyone else, but by capturing the school and media institutions that socialize everyone else's children: horizontal meme transmission rather than vertical. If social-media-era universal culture is no longer as conducive to Reason as its 20th-century strain, maybe we need to switch to a more Mormon-like strategy (homeschooling, &c.) if we want there to be top reasoners in the 2040s.

Comment by Zack_M_Davis on Above the Narrative · 2021-02-26T08:26:22.270Z · LW · GW

Consider adapting this into a top-level post? I anticipate wanting to link to it (specifically for the "smaller audiences offer more slack" moral).

Comment by Zack_M_Davis on Google’s Ethical AI team and AI Safety · 2021-02-22T02:10:29.649Z · LW · GW

people are afraid to engage in speech that will be interpreted as political [...] nobody is actually making statements about my model of alignment deployment [...] try to present the model at a further disconnect from the specific events and actors involved

This seems pretty unfortunate insofar as some genuinely relevant real-world details might not survive the obfuscation of premature abstraction.

Example of such an empirical consideration (relevant to the "have some members that keep up with AI Safety research" point in your hopeful plan): how much overlap and cultural compatibility is there between AI-ethics-researchers-as-exemplified-by-Timnit-Gebru and AI-safety-researchers-as-exemplified-by-Paul-Christiano? (By all rights, there should be overlap and compatibility, because the skills you need to prevent your credit-score AI from being racist (with respect to whatever the correct technical reduction of racism turns out to be) should be a strict subset of the skills you need to prevent your AGI from destroying all value in the universe (with respect to whatever the correct technical reduction of value turns out to be).)

Have you tried asking people to comment privately?

Comment by Zack_M_Davis on “PR” is corrosive; “reputation” is not. · 2021-02-17T08:05:47.579Z · LW · GW

Thanks for the detailed reply! I changed my mind; this is kind of interesting.

This is not about "tone policing." This is about the fundamental thrust of the engagement. "You're wrong, and I'mm'a prove it!" vs. "I don't think that's right, can we talk about why?"

Can you say more about why this distinction seems fundamental to you? In my culture, these seem pretty similar except for, well, tone?

"You're wrong" and "I don't think that's right" are expressing the same information (the thing you said is not true), but the former names the speaker rather than what was spoken ("you" vs. "that"), and the latter uses the idiom of talking about the map rather than the territory ("I think X" rather than "X") to indicate uncertainty. The semantics of "I'mm'a prove it!" and "Can we talk about why?" differ more, but both indicate that a criticism is about to be presented.

In my culture, "You're wrong, and I'mm'a prove it!" indicates that the critic is both confident in the criticism and passionate about pursuing it, whereas "I don't think that's right, can we talk about why?" indicates less confidence and less interest.

In my culture, the difference may influence whether the first speaker chooses to counterreply, because a speaker who ignores a confident, passionate, correct criticism may lose a small amount of status. However, the confident and passionate register is a high variance strategy that tends to be used infrequently, because a confident, passionate critic whose criticism is wrong loses a lot of status.

the exact same information cooperatively/collaboratively

Can you say more about what the word collaborative means to you in this context? I asked a question about this once!

implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you're being adversarial because you genuinely believe that's the best way forward [...] you tell yourself that it's virtuous so that you don't have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas

Oh, it's definitely not a sober weighing of costs and benefits! Probably more like a reinforcement-learned strategy?—something that's been working well for me in my ecological context, that might not generalize to someone with a different personality in a different social environment. Basically, I'm positing that Eric and Julia and Benya are playing a different game with a harsher penalty for alienating people. If someone isn't interested in trying to change a trait in themselves, are they therefore claiming it a "virtue"? Ambiguous!

I defy you to say, with a straight face, "a supermajority of rationalists

Hold on. I categorically reject the epistemic authority of a supermajority of so-called "rationalists". I care about what's actually true, not what so-called "rationalists" think.

To be sure, there's lots of specific people in the "rationalist"-branded cluster of the social graph whose sanity or specific domain knowledge I trust a lot. But they each have to earn that individually; the signal of self-identification or social-graph-affiliation with the "rationalist" brand name is worth—maybe not nothing, but certainly less than, I don't know, graduating from the University of Chicago.

the hypothesis which best explains my first response

Well, my theory is that the illegible pattern-matching faculties in my brain returned a strong match between your comment, and what I claim is a very common and very pernicious instance of dark side epistemology where people evince a haughty, nearly ideological insistence that all precise generalizations about humans are false, which looks optimized for protecting people's false stories about themselves, and that I in particular am extremely sensitive to noticing this pattern and attacking it at every opportunity as part of the particular political project I've been focused on for the last four years.

You can't rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.

EpicNamer's comment seems bad (the -7 karma is unsurprising), but I don't feel strongly about it, because, like Oli, I don't understand it. ("[A]t the expense of A"? What is A?) In contrast, I object really strongly to the (perceived) all-precise-generalizations-about-humans-are-false pattern. So, I think my word expenditure is representative of my concerns.

it's disingenuous and sneaky to act like what's being requested here is that you "obfuscate your thoughts through a gentleness filter."

In retrospect, I actually think the (algorithmically) disingenuous and sneaky part was "actually helps anyone", which assumes more altruism or shared interests than may actually be present. (I want to make positive contributions to the forum, but the specific hopefully-positive-with-respect-to-the-forum-norms contributions I make are realistically going to be optimized to achieve my objectives, which may not coincide with minimizing exhaustingness to others.) Sorry!

Comment by Zack_M_Davis on “PR” is corrosive; “reputation” is not. · 2021-02-15T23:41:30.999Z · LW · GW

I also object to "would be very bad" in the subjunctive ... I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback. Smacks of "I apologize IF I offended anybody," when one clearly did offend.

So, I think it's important to notice that the bargaining problem here really is two-sided: maybe the one giving offense should be nicer, but maybe the one taking offense shouldn't have taken it personally?

I guess I just don't believe that thoughts end up growing better than they would otherwise by being nurtured and midwifed? Thoughts grow better by being intelligently attacked. Criticism that persistently "plays dumb" with lame "gotcha"s in order to appear to land attacks in front of an undiscriminating audience is bad, but I think it's not hard to distinguish between persistently playing dumb, and "clapback that pointedly takes issue with the words that were actually typed, in a context that leaves open the opportunity for the speaker to use more words/effort to write something more precise, but without the critic being obligated to proactively do that work for them"?

We might actually have an intellectually substantive disagreement about priors on human variation! Exploring that line of discussion is potentially interesting! In contrast, tone-policing replies about not being sufficiently nurturing is ... boring? I like you, Duncan! You know I like you! I just ... don't see how obfuscating my thoughts through a gentleness filter actually helps anyone?

more willing to believe that your nitpicking was principled if you'd spared any of it for the top commenter

Well, I suppose it's not "principled" in the sense that my probability of doing it varies with things other than the severity of the "infraction". If it's not realistic for me to not engage in some form of "selective enforcement" (I'm a talking monkey that types blog comments when I feel motivated, not an AI neutrally applying fixed rules over all comments), I can at least try to be transparent about what selection algorithm I'm using?

I'm more motivated to reply to Duncan Sabien (former CfAR instructor, current MIRI employee) than I am to EpicNamer27098 (1 post, 17 comments, 20 karma, joined December 2020). (That's a compliment! I'm saying you matter!)

I'm more motivated to reply to appeals to assumed-to-exist individual variation, than the baseline average of comments that don't do that, because that's a specific pet peeve of mine lately for psychological reasons beyond the scope of this thread.

I'm more motivated to reply to comments that seem to be defending "even the wonderful cream-of-the-crop rationalists" than the baseline average of comments that don't do that, for psychological reasons beyond the scope of this thread.

Comment by Zack_M_Davis on “PR” is corrosive; “reputation” is not. · 2021-02-15T22:52:41.004Z · LW · GW

there are humans who do not laugh [...] humans who do not shiver when cold

Are there? I don't know! Part of where my comment was coming from is that I've grown wary of appeals to individual variation that are assumed to exist without specific evidence. I could easily believe, with specific evidence, that there's some specific, documented medical abnormality such that some people never develop the species-typical shiver, laugh, cry, &c. responses. (Granted, I am relying on the unstated precondition that, say, 2-week-old embryos don't count.) If you show me the Wikipedia page about such a specific, documented condition, I'll believe it. But if I haven't seen the specific Wikipedia page, should I have a prior that every variation that's easy to imagine, actually gets realized? I'm skeptical! The word human (referring to a specific biological lineage with a specific design specified in ~3·10⁹ bases of the specific molecule DNA) is already pointing to a very narrow and specific set of configurations (relative to the space of all possible ways to arrange 10²⁷ atoms); by all rights, there should be lots of actually-literally universal generalizations to be made.

Comment by Zack_M_Davis on “PR” is corrosive; “reputation” is not. · 2021-02-15T20:46:10.731Z · LW · GW

Oh. I agree that introducing a burden on saying anything at all would be very bad. I thought I was trying to introduce a burden on the fake precision of using the phrase "many orders of magnitude" without being able to supply numbers that are more than 100 times larger than other numbers. I don't think I would have bothered to comment if the great-grandparent had said "a sign that you're wrong" rather than "a sign that you are many orders of magnitude more likely to be wrong than right".

The first paragraph was written from an adversarial perspective, but, in my culture, the parenthetical and "I can empathize with ..." closing paragraph were enough to display overall prosocial and cooperative intent on my part? An opposing lawyer's nitpicking in the courtroom is "adversarial", but the existence of adversarial courts (where opposing lawyers have a duty to nitpick) is "prosocial"; I expect good lawyers to be able to go out for friendly beers after the trial, secure in the knowledge that uncharity while court is in session is "part of the game", and I expect the same layered structure to be comprehensible within a single Less Wrong comment?