Schelling Categories, and Simple Membership Tests 2019-08-26T02:43:53.347Z · score: 52 (19 votes)
Diagnosis: Russell Aphasia 2019-08-06T04:43:30.359Z · score: 47 (13 votes)
Being Wrong Doesn't Mean You're Stupid and Bad (Probably) 2019-06-29T23:58:09.105Z · score: 16 (11 votes)
What does the word "collaborative" mean in the phrase "collaborative truthseeking"? 2019-06-26T05:26:42.295Z · score: 27 (7 votes)
The Univariate Fallacy 2019-06-15T21:43:14.315Z · score: 27 (11 votes)
No, it's not The Incentives—it's you 2019-06-11T07:09:16.405Z · score: 91 (29 votes)
"But It Doesn't Matter" 2019-06-01T02:06:30.624Z · score: 47 (31 votes)
Minimax Search and the Structure of Cognition! 2019-05-20T05:25:35.699Z · score: 15 (6 votes)
Where to Draw the Boundaries? 2019-04-13T21:34:30.129Z · score: 79 (32 votes)
Blegg Mode 2019-03-11T15:04:20.136Z · score: 18 (13 votes)
Change 2017-05-06T21:17:45.731Z · score: 1 (1 votes)
An Intuition on the Bayes-Structural Justification for Free Speech Norms 2017-03-09T03:15:30.674Z · score: 4 (8 votes)
Dreaming of Political Bayescraft 2017-03-06T20:41:16.658Z · score: 1 (1 votes)
Rationality Quotes January 2010 2010-01-07T09:36:05.162Z · score: 3 (6 votes)
News: Improbable Coincidence Slows LHC Repairs 2009-11-06T07:24:31.000Z · score: 7 (8 votes)


Comment by zack_m_davis on G Gordon Worley III's Shortform · 2019-09-12T15:06:30.513Z · score: 13 (4 votes) · LW · GW

as clearly noted in my original objection

Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters, and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)

there is absolutely a time and a place for this

Exactly—and this is the place for people to report on their models of reality, which includes their models of other people's minds as a special case.

Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness! But in my rationalist culture, while standing in the Citadel of Truth, we're not allowed to care whether a map is marginalizing or dismissive; we're only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the "rationalist" brand name, then my culture is at war with them.)

My whole objection is that Gordon wasn't bothering to

Great! Thank you for criticizing people who don't justify their beliefs with adequate evidence and arguments. That's really useful for everyone reading!

(I believe as a cover for not being able to).

In context, it seems worth noting that this is a claim about Gordon's mind, and your only evidence for it is absence-of-evidence (you think that if he had more justification, he would be better at showing it). I have no problem with this (as we know, absence of evidence is evidence of absence), but it seems in tension with some of your other claims?

Comment by zack_m_davis on G Gordon Worley III's Shortform · 2019-09-12T02:13:09.913Z · score: 13 (8 votes) · LW · GW

leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move."

Nesov scooped me on the obvious objection, but as long as we're creating common knowledge, can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what's going on in those other people's heads when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).

the existence of bisexuals

As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of "Don't decide that you know other people's minds better than they do" performs better or worse than other inference procedures.

  1. J. Michael Bailey, "What Is Sexual Orientation and Do Women Have One?", section titled "Sexual Arousal Patterns vs. the Kinsey Scale: The Case of Male Bisexuality" ↩︎

Comment by zack_m_davis on Matthew Barnett's Shortform · 2019-09-08T20:16:19.735Z · score: 10 (5 votes) · LW · GW

People who don't understand the concept of "This person may have changed their mind in the intervening years", aren't worth impressing. I can imagine scenarios where your economic and social circumstances are so precarious that the incentives leave you with no choice but to let your speech and your thought be ruled by unthinking mob social-punishment mechanisms. But you should at least check whether you actually live in that world before surrendering.

Comment by zack_m_davis on gilch's Shortform · 2019-08-26T03:51:35.462Z · score: 3 (2 votes) · LW · GW

What are the notable differences between Hissp and Hy? (Hyperlink to "Hy" in previous sentence is just for convenience and the benefit of readers; as you know, we're both former contributors.)

Comment by zack_m_davis on Schelling Categories, and Simple Membership Tests · 2019-08-26T02:45:59.937Z · score: 6 (3 votes) · LW · GW

Okay, Said, I used LaTeX for the numerical subscripts this time. I hope you're happy! (No, really—I actually mean it.)

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-21T02:16:56.536Z · score: 4 (2 votes) · LW · GW

P.S. (to sister comment), I'm going to be traveling through the 25th and probably won't check this website, in case that information helps us break out of this loop of saying "Let's stop the implicitly-emotionally-charged back-and-forth in the comments here," and then continuing to do so anyway. (I didn't get anything done at my dayjob today, which is an indicator of me also suffering from the "Highly tense conversations are super stressful and expensive" problem.)

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-20T06:25:50.104Z · score: 4 (2 votes) · LW · GW

but I'm guessing your wording was just convenient shorthand rather than a disagreement with the above


As I said, even in the Judge example, Carol has to understand Alice's claims.

Yes, trivially; Jessica and I both agree with this.

Jessica's Judge example still feels like a nonsequitor [sic] that doesn't have much to do with what I was talking about.

Indeed, it may not have been relevant to the specific thing you were trying to say. However, be that as it may, I claim that the judge example is relevant to one of the broader topics of conversation: specifically, "what norms and/or principles should Less Wrong aspire to." The Less Wrong karma and curation systems are functionally a kind of Judge, insofar as ideas that get upvoted and curated "win" (get more attention, praise, general acceptance in the rationalist community, &c.).

If Alice's tendency to lie, obfuscate, rationalize, play dumb, report dishonestly, filter evidence, &c. isn't an immutable feature of her character, but depends on what the Judge's behavior incentivizes (at least to some degree), then it really matters what kind of Judge you have.

We want Less Wrong specifically, and the rationalist community more generally, to be a place where clarity wins, guided by the beauty of our weapons. If we don't have that—if we live in a world where lies and bullshit outcompete truth, not just in the broader Society, but even in the rationalist community—then we're dead. (Because you can't solve AI alignment with lies and bullshit.)

As a moderator and high-karma user of Less Wrong, you, Raymond Arnold, are a Judge. Your strong-upvote is worth 10 karma; you have the power to Curate a post; you have the power to tell Alice to shape up or ship out. You are the incentives. This is a huge and important responsibility, your Honor—one that has the potential to influence 10¹⁴ lives per second. It's true that truthtelling is only useful insofar as it generates understanding in other people. But that observation, in itself, doesn't tell you how to exercise your huge and important responsibility.

If Jessica says, "Proponents of short AI timelines are lying, but not necessarily consciously lying; I mostly mean covert deception hidden from conscious attention," and Alice says, "Huh? I can't understand you if you're going to use words in nonstandard ways," then you have choices to make, and your choices have causal effects.

If you downvote Jessica because you think she's drawing the category boundaries of "lying" too widely in a way that makes the word less useful, that has causal effects: fewer people will read Jessica's post; maybe Jessica will decide to change her rhetorical strategy, or maybe she'll quit the site in disgust.

If you downvote Alice for pretending to be stupid when Jessica explicitly explained what she meant by the word "lying" in this context, then that has causal effects, too: maybe Alice will try harder to understand what Jessica meant, or maybe Alice will quit the site in disgust.

I can't tell you how to wield your power, your Honor. (I mean, I can, but no one listens to me, because I don't have power.) But I want you to notice that you have it.

If they're so motivatedly-unreasonable that they won't listen at all, the problem may be so hard that maybe you should go to some other place where more reasonable people live and try there instead. (Or, if you're Eliezer in 2009, maybe you recurse a bit and write the Sequences for 2 years so that you gain access to more reasonable people).

I agree that "retreat" and "exert an extraordinary level of interpretive labor" are two possible strategies for dealing with unreasonable people. (Personally, I'm a huge fan of the "exert arbitrarily large amounts of interpretive labor" strategy, even though Ben has (correctly) observed that it leaves me incredibly vulnerable to certain forms of trolling.)

The question is, are there any other strategies?

The reason "retreat" isn't sufficient, is because sometimes you might be competing with unreasonable people for resources (e.g., money, land, status, control of the "rationalist" and Less Wrong brand names, &c.). Is there some way to make the unreasonable people have to retreat, rather than the reasonable people?

I don't have an answer to this. But it seems like an important thing to develop vocabulary for thinking about, even if that means playing in hard mode.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-18T22:54:58.077Z · score: 25 (6 votes) · LW · GW

(You said you didn't want more back-and-forth in the comments, but this is just an attempt to answer your taboo request, not prompt more discussion; no reply is expected.)

We say that clarity wins when contributing to accurate shared models—communicating "clearly"—is a dominant strategy: agents that tell the truth, the whole truth, and nothing but the truth do better (earn more money, leave more descendants, create more paperclips, &c.) than agents that lie, obfuscate, rationalize, play dumb, report dishonestly, filter evidence, &c.

Creating an environment where "clarity wins" (in this sense) looks like a very hard problem, but it's not hard to see that some things don't work. Jessica's example of a judged debate where points are only awarded for arguments that the opponent acknowledges is an environment where agents who want to win the debate have an incentive to play dumb—or be dumb—never acknowledging when their opponent made a good argument (even if the opponent in fact made a good argument). In this scenario, being clear (or at least, clear to the "reasonable person", if not your debate opponent) doesn't help you win.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T20:02:48.215Z · score: 4 (2 votes) · LW · GW

The behavior I think I endorse most is trying to avoid continuing the conversation in a comment thread at all

OK. Looking forward to future posts.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T19:43:34.342Z · score: 12 (6 votes) · LW · GW

The reason it's still tempting to use "deception" is because I'm focusing on the effects on listeners rather than the self-deceived speaker. If Winston says, "Oceania has always been at war at Eastasia" and I believe him, there's a sense in which we want to say that I "have been deceived" (even if it's not really Winston's fault, thus the passive voice).

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T18:27:04.946Z · score: 14 (4 votes) · LW · GW

It might also be part of the problem that people are being motivated or deceptive. [...] the evidence for the latter is AFAICT more like "base rates".

When we talked 28 June, it definitely seemed to me like you believed in the existence of self-censorship due to social pressure. Are you not counting that as motivated or deceptive, or have I misunderstood you very badly?

Note on the word "deceptive": I need some word to talk about the concept of "saying something that has the causal effect of listeners making less accurate predictions about reality, when the speaker possessed the knowledge to not do so, and attempts to correct the error will be resisted." (The part about resistance to correction is important for distinguishing "deception"-in-this-sense from simple mistakes: if I erroneously claim that 57 is prime and someone points out that it's not, I'll immediately say, "Oops, you're right," rather than digging my heels in.)

I'm sympathetic to the criticism that lying isn't the right word for this; so far my best alternatives are "deceptive" and "misleading." If someone thinks those are still too inappropriately judgey-blamey, I'm eager to hear alternatives, or to use a neologism for the purposes of a particular conversation, but ultimately, I need a word for the thing.

If an Outer Party member in the world of George Orwell's 1984 says, "Oceania has always been at war with Eastasia," even though they clearly remember events from last week, when Oceania was at war with Eurasia instead, I don't want to call that deep model divergence, coming from a different ontology, or weighing complicated tradeoffs between paradigms. Or at least, there's more to the story than that. The divergence between this person's deep model and mine isn't just a random accident such that I should humbly accept that the Outside View says they're as likely to be right as me. Uncommon priors require origin disputes, but in this case, I have a pretty strong candidate for an origin dispute that has something to do with the Outer Party member being terrified of the Ministry of Love. And I think that what goes for subjects of a totalitarian state who fear being tortured and murdered, also goes in a much subtler form for upper-middle class people in the Bay Area who fear not getting invited to parties.

Obviously, this isn't license to indiscriminately say, "You're just saying that because you're afraid of not getting invited to parties!" to any idea you dislike. (After all, I, too, prefer to get invited to parties.) But it is reason to be interested in modeling this class of distortion on people's beliefs.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T03:00:17.476Z · score: 2 (1 votes) · LW · GW

I think Benquo has often mixed the third thing in with the first thing (and sort of skipped over the second thing?), which I consider actively harmful to the epistemic health of the discourse.

Question: do you mean this as a strictly denotative claim (Benquo is, as a matter of objective fact, mixing the things, which is, as a matter of fact, actively harmful to the discourse, with no blame whatsoever implied), or are you accusing Benquo of wrongdoing?

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T02:58:34.475Z · score: 16 (8 votes) · LW · GW

That makes sense. I, personally, am interested in developing new terminology for talking about not-necessarily-conscious-and-yet-systematically-deceptive cognitive algorithms, where Ben and Jessica think that "lie"/"fraud"/&c. are fine and correct.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-14T22:56:48.025Z · score: 22 (15 votes) · LW · GW

I define clarity in terms of what gets understood, rather than what gets said.

Defining clarity in terms of what gets understood results in obfuscation winning automatically, by effectively giving veto power to motivated misunderstandings. (As Upton Sinclair put it, "It is difficult to get a man to understand something when his salary depends upon his not understanding it," or as Eliezer Yudkowsky put it more recently, "politically motivated incomprehension makes people dumber than cassette tape recorders.")

If we may be permitted to borrow some concepts from law (while being wary of unwanted transfer of punitive intuitions), we may want concepts of willful blindness, or clarity to the "reasonable person".

politics backpropagates into truthseeking, causes people to view truthseeking norms as a political weapon.

Imagine that this had already happened. How would you go about starting to fix it, other than by trying to describe the problem as clearly as possible (that is, "invent[ing] truthseeking-politics-on-the-fly")?

Comment by zack_m_davis on Could we solve this email mess if we all moved to paid emails? · 2019-08-13T03:59:24.877Z · score: 4 (2 votes) · LW · GW

3 karma in 21 votes (including a self-strong-upvote)

Actually, jeez, the great-grandparent doesn't deserve that self-strong-upvote; let me revise that to a no-self-vote.

Comment by zack_m_davis on Could we solve this email mess if we all moved to paid emails? · 2019-08-13T03:52:58.837Z · score: 17 (6 votes) · LW · GW

Thanks, the hobbyhorse/derailing concern makes sense. (I noticed that too, but only after I posted the comment.) I think going forward I should endeavor to be much more reserved about impulsively commenting in this equivalence class of situation. A better plan: draft the impulsive comment, but don't post it, instead saving it as raw material for the future top-level post I was planning to eventually write anyway.

Luckily the karma system was here to keep me accountable and prevent my bad blog comment from showing up too high on the page (3 karma in 21 votes (including a self-strong-upvote), a poor showing for me).

Comment by zack_m_davis on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T03:28:26.946Z · score: -1 (27 votes) · LW · GW

someone in rationality [...] the community [...] many rationalists [...] the collective action problem of how to allocate our attention as a community. [...] within the rationality community [...] positive effects on the community

What community?

The problems with email that you mention are real and important. I'm glad that people are trying to solve them. If you think one particular solution is unusually good and you want it to win, then it might make sense for you to do some marketing work on its behalf, such as the post you just wrote.

What I don't understand (or rather, what I understand all too well and now wish to warn against after realizing just how horribly it's fucked with my ability to think in a way that I am only just now beginning to recover from) is this incestuous CliqueBot-like behavior that makes people think in terms of sending email to "someone in rationality", rather than just sending email to someone.

In the late 'aughts, Eliezer Yudkowsky wrote a bunch of really insightful blog posts about how to think. I think they got collected into a book? I can't recommend that book enough—it's really great stuff. ("AI to Zombies" is a lame subtitle, though.) Probably there are some other good blog posts on the website, too? (At least, I like mine.)

But this doesn't mean you should think of the vague cluster of people who have been influenced by that book as a coherent group, "rationalists", the allocation of whose attention is a collective action problem (more so than any other number of similar clusters of people like "biologists", or "entrepreneurs", or "people with IQs above 120"). Particularly since mentally conflating rationality (the true structure of systematically correct reasoning) with the central social tendency of so-called "rationalists" (people who socially belong to a particular insular Bay Area-centered subculture) is likely to cause information cascades, as people who naïvely take the "rationalist" brand name literally tend to blindly trust the dominant "rationalist" opinion as the correct one, without actually checking whether "the community" is doing the kind of information processing that would result in systematically correct opinions.

And if you speak overmuch of the Way you will not attain it.

Comment by zack_m_davis on Power Buys You Distance From The Crime · 2019-08-12T00:11:57.895Z · score: 11 (3 votes) · LW · GW

In general, it shouldn't be possible to expect well-known systematic distortions for any reason, because they should've been recalibrated away immediately.

Hm. Is "well-known" good enough here, or do you actually need common knowledge? (I expect you to be better than me at working out the math here.) If it's literally the case that everybody knows that we're not talking about conflict theories, then I agree that everyone can just take that into account and not be confused. But the function of taboos, silencing tactics, &c. among humans would seem to be maintaining a state where everyone doesn't know.

Comment by zack_m_davis on Power Buys You Distance From The Crime · 2019-08-11T19:25:19.486Z · score: 23 (9 votes) · LW · GW

Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.

But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community's beliefs.

The distortion is probably fine for most human communities: keeping the peace with your co-religionists is more important than doing systematically correct reasoning, because religions aren't trying to succeed by means of doing systematically correct reasoning. But if there is to be such a thing as a rationality community specifically, maybe communal resources that can be destroyed by the truth, should be.

(You said this elsewhere in the thread: "the goal is to have one's beliefs correspond to reality—to use a conflict theory when that's true, a mistake theory when that's true".)

Comment by zack_m_davis on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-08-10T06:30:08.227Z · score: 33 (9 votes) · LW · GW

(If you don't like the phrase "pandering to idiots", feel free to charitably pretend I said something else instead; I'm afraid I only have so much time to edit this comment.)

You know, it's kind of dishonest of you to appeal to your comment-editing time budget when you really just wanted to express visceral contempt for the idea that intellectuals should be held accountable for alleged harm from simplifications of what they actually said. Like, it didn't actually take me very much time to generate the phrase "accountability for alleged harm from simplifications" rather than "pandering to idiots", so comment-editing time can't have been your real reason for choosing the latter.

More generally: when the intensity of norm enforcement depends on the perceived cost of complying with the norm, people who disagree with the norm (but don't want to risk defying it openly) face an incentive to exaggerate the costs of compliance. It takes more courage to say, "I meant exactly what I said" when you can plausibly-deniably get away with, "Oh, I'm sorry, that's just my natural writing style, which would be very expensive for me to change." But it's not the expenses—it's you!

Except you probably won't understand what I'm trying to say for another three days and nine hours.

Comment by zack_m_davis on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-08-07T18:49:38.487Z · score: 7 (3 votes) · LW · GW

Oh, I see; the slightly-higher-resolution version makes a lot more sense to me. When working out the game theory, I would caution that different groups pushing different norms is more like an asymmetric "Battle of the Sexes" problem, which is importantly different from the symmetric Stag Hunt. In Stag Hunt, everyone wants the same thing, and the problem is just about risk-dominance vs. payoff-dominance. In Battle of the Sexes, the problem is about how people who want different things manage to live with each other.
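(A toy sketch of that asymmetry, with the conventional textbook payoffs rather than anything from this thread: enumerating pure-strategy Nash equilibria shows that in Stag Hunt both players rank the same equilibrium first, while in Battle of the Sexes the two equilibria are ranked oppositely.)

```python
# Pure-strategy Nash equilibria of Stag Hunt vs. Battle of the Sexes.
# Payoffs are (row player, column player); numbers are illustrative.

stag_hunt = {  # both players agree (Stag, Stag) is the best equilibrium
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),
}

battle_of_sexes = {  # the players prefer *different* equilibria
    ("Opera", "Opera"): (3, 2),
    ("Opera", "Fight"): (0, 0),
    ("Fight", "Opera"): (0, 0),
    ("Fight", "Fight"): (2, 3),
}

def pure_nash(game):
    """Return profiles where neither player gains by deviating unilaterally."""
    rows = {r for r, _ in game}
    cols = {c for _, c in game}
    equilibria = []
    for r, c in game:
        row_ok = all(game[(r, c)][0] >= game[(r2, c)][0] for r2 in rows)
        col_ok = all(game[(r, c)][1] >= game[(r, c2)][1] for c2 in cols)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return sorted(equilibria)

print(pure_nash(stag_hunt))        # two equilibria, same one preferred by both
print(pure_nash(battle_of_sexes))  # two equilibria, preferences opposed
```

Both games have two pure-strategy equilibria; the difference is whether the players agree about which one they want, which is exactly the risk-dominance-vs.-payoff-dominance question versus the living-with-each-other question.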

Comment by zack_m_davis on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-08-06T23:19:37.537Z · score: 9 (3 votes) · LW · GW


trying to coordinate with 1/10/100/1000+ people [...] not everyone on LW is actually trying to coordinate with anyone, which is fine.

I wonder if it might be worth writing a separate post explaining why the problems you want to solve with 10/100/1000+ people have the structure of a coordination problem (where it's important not just that we make good choices, but that we make the same choice), and how much coordination you think is needed?

In World A, everyone has to choose Stag, or the people who chose Stag fail to accomplish anything. The payoff is discontinuous in the number of people choosing Stag: if you can't solve the coordination problem, you're stuck with rabbits.

In World B, the stag hunters get a payoff of n stags, where n is the number of people choosing Stag. The payoff is continuous in n: it would be nice if the group was better-coordinated, but it's usually not worth sacrificing on other goals in order to make the group better-coordinated. We mostly want everyone to be trying their hardest to get the theory of hunting right, rather than making sure that everyone is using the same (possibly less-correct) theory.
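(A toy sketch of the two payoff structures, with made-up payoff functions: in World A the group payoff jumps discontinuously at full coordination, whereas in World B it grows smoothly with each additional cooperator.)

```python
# Toy model: group payoff as a function of n, the number of people
# (out of N) who choose Stag. Payoff functions are made up for illustration.

N = 10

def payoff_world_a(n):
    # Discontinuous: the hunt fails unless everyone coordinates.
    return N if n == N else 0

def payoff_world_b(n):
    # Continuous: each additional stag-hunter adds value.
    return n

# In World A, going from 9 to 10 cooperators is everything;
# in World B, it's just one more increment.
print([payoff_world_a(n) for n in range(N + 1)])
print([payoff_world_b(n) for n in range(N + 1)])
```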

I think I mostly perceive myself as living in World B, and tend to be suspicious of people who seem to assume we live in World A without adequately arguing for it (when "Can't do that, it's a coordination problem" would be an awfully convenient excuse for choices made for other reasons).

Comment by zack_m_davis on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-08-06T21:11:52.609Z · score: 4 (4 votes) · LW · GW

this is important enough that it should be among the things we hold thought leaders accountable to

I would say that this depends on what kind of communicator or thought leader we're talking about. That is, there may be a need for multiple, differently-specialized "communicator" roles.

To the extent that you're trying to build a mass movement, then I agree completely and without reservations: you're accountable for the monster spawned by the five-word summary of your manifesto, because pandering to idiots who can't retain more than five words of nuance is part of the job description of being a movement leader. (If you don't like the phrase "pandering to idiots", feel free to charitably pretend I said something else instead; I'm afraid I only have so much time to edit this comment.)

To the extent that you're actually trying to do serious intellectual work, then no, absolutely not. The job description of an intellectual is, first, to get the theory right, and second, to explain the theory clearly to whosoever has the time and inclination to learn. Those two things are already really hard! To add to these the additional demand that the thinker make sure that her concepts won't be predictably misunderstood as something allegedly net-harmful by people who don't have the time and inclination to learn, is just too much of a burden; it can't be part of the job description of someone whose first duty (on which everything else depends) is to get the theory right.

The tragedy of the so-called "effective altruism" and "rationalist" communities is that we're trying to be both mass movements and intellectually serious, and we didn't realize until too late the extent to which this presents incompatible social-engineering requirements. I'm glad we have people like you thinking about the problem now, though!

Comment by zack_m_davis on Raemon's Scratchpad · 2019-07-31T02:40:28.846Z · score: 15 (5 votes) · LW · GW

In that case Sarah later wrote up a followup post that was more reasonable and Benquo wrote up a post that articulated the problem more clearly. [Can't find the links offhand].

"Reply to Criticism on my EA Post", "Between Honesty and Perjury"

Comment by zack_m_davis on Being Wrong Doesn't Mean You're Stupid and Bad (Probably) · 2019-07-24T14:43:29.644Z · score: 8 (6 votes) · LW · GW

The rationalist community [...] rationalist standards [...] in this community

Uh, remind me why I'm supposed to care what some Bay Area robot cult thinks? (Although I heard there was an offshoot in Manchester that might be performing better!) The scare quotes around "rationalist" "community" in the second paragraph are there for a reason.

The OP is a very narrowly focused post, trying to establish a single point (Being Wrong Doesn't Mean You're Stupid and Bad, Probably) by appealing to probability theory as normative reasoning (and some plausible assumptions). If you're worried about someone thinking you're stupid and bad because you were wrong, you should just show them this post, and if they care about probability theory as normative reasoning, then they'll realize that they were wrong and stop mistakenly thinking that you're stupid and bad. On the other hand, if the person you're trying to impress doesn't care about probability theory as normative reasoning, then they're stupid and bad, and you shouldn't care about impressing them.

outside cultural baggage

Was there ever an "inside", really? I thought there was. I think I was wrong.

people will only raise their estimate of incompetence by a Bayesian 0.42%.

But that's the correct update! People who update more or less than the Bayesian 0.42% are wrong! (Although that doesn't mean they're stupid or bad, obviously.)
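(For concreteness, here's what "the correct update" looks like as a Bayes's-theorem calculation. The probabilities below are hypothetical placeholders, not the OP's actual inputs, so the resulting figure differs from the quoted 0.42%; the point is only that a small likelihood ratio mechanically yields a small posterior shift.)

```python
# Bayes's theorem: P(incompetent | wrong) =
#     P(wrong | incompetent) * P(incompetent) / P(wrong).
# All numbers are hypothetical, chosen only to illustrate a small update.

prior = 0.10                 # P(incompetent)
p_wrong_given_incomp = 0.30  # P(this error | incompetent)
p_wrong_given_comp = 0.28    # competent people make this error almost as often

p_wrong = p_wrong_given_incomp * prior + p_wrong_given_comp * (1 - prior)
posterior = p_wrong_given_incomp * prior / p_wrong

print(f"posterior: {posterior:.4f}")
print(f"update: {(posterior - prior) * 100:.2f} percentage points")
```

Because the two likelihoods are close, the posterior barely moves off the prior; anyone who updates much more than that on a single error is overreacting relative to the Bayesian answer.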

they are referring to things with standard definitions that are precise enough to do math with.

This is an isolated demand for rigor and I'm not going to fall for it. I shouldn't need to have a reduction of what brain computations correspond to people's concept of "stupid and bad" in order to write a post like this.

using a small sample of data is worse than defaulting to base rates

What does this mean? If you have a small sample of data and you update on it the correct amount, you don't do worse than you would have without the data.

you're at a tech conference and looking for interesting people to talk to, do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?

Analyzing the signaling game governing how people choose to dress at tech conferences does look like a fun game-theory exercise; thanks for the suggestion! I don't have time for that now, though.

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-24T05:44:18.486Z · score: 13 (4 votes) · LW · GW

I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking

Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn't?

I think many of those norms originally caused the site to decline and people to go elsewhere.

I mean, that's one hypothesis. In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we've traditionally avoided), all the "intellectual energy" followed Scott to SSC.

Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?

I also got that this is a subject you care a lot about.

Yes. Thanks for listening.

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-23T07:21:40.118Z · score: 43 (12 votes) · LW · GW

When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer.

I don't think "reasonable" is the correct word here. You keep assuming away the possibility of conflict. It's easy to find a peaceful answer by simulating other people using empathy, if there's nothing anyone cares about more than not rocking the boat. But what about the least convenient possible world where one party has Something to Protect which the other party doesn't think is "reasonable"?

The shared values and culture serve to make sure those heuristics are calibrated similarly between people.

Riiiight, about that. The OP is about robust organizations in general without mentioning any specific organization, but given the three mentions of "truthseeking", I'd like to talk about the special case of this website, and set it in the context of a previous discussion we've had.

I don't think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong. I was there (first comment December 22, 2007). If the Less Wrong and "rationalist" brand names are now largely being held by a different culture with different values, I and the forces I represent have an interest in fighting to take them back.

Let me reply to your dialogue with another. To set the scene, I've been drafting a forthcoming post (working title: "Schelling Categories, and Simple Membership Tests") in my nascent Sequence on the cognitive function of categories, which is to refer back to my post "The Univariate Fallacy". Let's suppose that by the time I finally get around to publishing "Schelling Categories" (like the Great Teacher, I suffer from writer's molasses), the Jill from your dialogue has broken out of her simulation, instantiated herself in our universe, and joined the LW2 moderation team.

Jill: Zack, I've had another complaint—separate from the one in May—about your tendency to steer conversations towards divisive topics, and I'm going to ask you to tone it down a bit when on Frontpage posts.

Zack: What? Why? Wait, sorry—that was a rhetorical question, which I've been told is a violation of cooperative discourse norms. I think I can guess what motivated the complaint. But I want to hear you explain it.

Jill: Well, you mentioned this "univariate fallacy" again, and in the context of some things you've Tweeted, there was some concern that you were actually trying to allude to gender differences, which might make some community members of marginalized genders feel uncomfortable.

Zack: (aside) I guess I'm glad I didn't keep calling it Lewontin's fallacy.

(to Jill) So ... you're asking me to tone down the statistics blogging—on less wrong dot com—because some people who read what I write elsewhere can correctly infer that my motivation for thinking about this particular statistical phenomenon was that I needed it to help me make sense of an area of science I've been horrifiedly fascinated with for the last fourteen years, and that scientific question might make some people feel uncomfortable?

Jill: Right. Truthseeking is very important. However, it's clear that just choosing one value as sacred and not allowing for tradeoffs can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth, and the value of making a space that people actually want to have intellectual discussions at. I'm only asking you to give equal weight to your own needs, the needs of the people you're interacting with, and the needs of the organization as a whole.

Zack: (aside) Wow. It's like I'm actually living in Atlas Shrugged, just like Michael Vassar said. (to Jill) No.

Jill: What?

Zack: I said No. As a commenter on this website, my duty and my only duty is to try to make—wait, scratch the "try"—to make contributions that advance the art of human rationality. I consider myself to have a moral responsibility to ignore the emotional needs of other commenters—and symmetrically, I think they have a moral responsibility to ignore mine.

Jill: I'd prefer that you be more charitable and work to steelman what I said.

Zack: If you think I've misunderstood what you've said, I'm happy to listen to you clarify whatever part you think I'm getting wrong. The point of the principle of charity is that people are motivated to strawman their interlocutors; reminding yourself to be "charitable" to others helps to correct for this bias. But to tell others to be charitable to you without giving them feedback about how, specifically, you think they're misinterpreting what you said—that doesn't make any sense; it's like you're just trying to mash an "Agree with me" button. I can't say anything about what your conscious intent might be, but I don't know how to model this behavior as being in good faith—and I feel the same way about this new complaint against me.

Jill: Contextualizing norms are valid rationality norms!

Zack: If by "contextualizing norms" you simply mean that what a speaker means needs to be partially understood from context, and is more than just what the sentence the speaker said means, then I agree—that's just former Denver Broncos quarterback Brian Griese—I mean, philosopher of language H. P. Grice's theory of conversational implicature. But when I apply contextualizing norms to itself and look at the context around which "contextualizing norms" was coined, it sure looks like the entire point of the concept is to shut down ideologically inconvenient areas of inquiry. It's certainly understandable. As far as the unwashed masses are concerned, it's probably for the best. But it's not what this website is about—and it's not what I'm about. Not anymore. I am an aspiring epistemic rationalist. I don't negotiate with emotional blackmailers, I don't double-crux with Suicide Rock, and I've got Something to Protect.

Jill: (baffled) What could possibly incentivize you to be so unpragmatic?

Zack: It's not the incentives! (aside) It's me!


Comment by zack_m_davis on Where is the Meaning? · 2019-07-23T06:47:57.110Z · score: 11 (4 votes) · LW · GW

With all due respect to the immortal Scott Alexander, I think he's getting the moral deeply wrong when he characterizes category boundaries as value-dependent (although I agree that the ancient Hebrews had good reason to group dolphins and fish under their category dag, given their state of knowledge). For what I think the correct theory looks like (with a focus on dolphins), see "Where to Draw the Boundaries?".

Comment by zack_m_davis on The AI Timelines Scam · 2019-07-23T04:45:45.244Z · score: 5 (3 votes) · LW · GW

Yes, thank you; the intended target was the immortal Scott Alexander's "Against Lie Inflation" (grandparent edited to fix). I regret the error.

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-22T16:01:56.440Z · score: 4 (2 votes) · LW · GW

In this type of environment, it's safe to say "This conversation is making me feel unsafe, so I need to leave".

I mean, in the case of a website that people use in their free time, you don't necessarily even need an excuse: if you don't find a conversation valuable (because it's making you feel unsafe or for any other reason), you can just strong-downvote them and stop replying.

There was a recent case on Less Wrong where one of two reasons I gave for calling for end-of-conversation was that I was feeling "emotionally exhausted", which seems similar to feeling unsafe. But that was me explaining why I didn't feel like talking anymore. I definitely wasn't saying that my interlocutor should give equal weight to his needs, my needs, and the needs of the forum as a whole. I don't see how anyone is supposed to compute that.

Comment by zack_m_davis on The AI Timelines Scam · 2019-07-22T15:39:46.506Z · score: 32 (9 votes) · LW · GW

Exercise for those (like me) who largely agreed with the criticism that the usage of "scam" in the title was an instance of the noncentral fallacy (drawing the category boundaries of "scam" too widely in a way that makes the word less useful): do you feel the same way about Eliezer Yudkowsky's "The Two-Party Swindle"? Why or why not?

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T08:20:18.012Z · score: 16 (7 votes) · LW · GW

I find myself wanting to question your side more

Thanks, I appreciate it a lot! You should be questioning my "side" as harshly as you see fit, because if you ask questions I can't satisfactorily answer, then maybe my side is wrong, and I should be informed of this in order to become less wrong.

Why do you think this prior is right?

The mechanism by which saying true things leads to more knowledge is at least straightforward: you present arguments and evidence, and other people evaluate those arguments and evidence using the same general rules of reasoning that they use for everything else, and hopefully they learn stuff.

In order for saying true things to lead to less knowledge, we need to postulate some more complicated failure mode where some side-effect of speech disrupts the ordinary process of learning. I can totally believe that such failure modes exist, and even that they're common. But lately I seem to be seeing a lot of arguments of the form, "Ah, but we need to coordinate in order to create norms that make everyone feel Safe, and only then can we seek truth." And I just ... really have trouble taking this seriously as a good faith argument rather than an attempt to collude to protect everyone's feelings? Like, telling the truth is not a coordination problem? You can just unilaterally tell the truth.

associated signals such as exasperation and incredulity

Hm, I think there's a risk of signal miscalibration here. Just because I feel exasperated and this emotion leaks into my writing, doesn't necessarily mean implied probabilities close to 1? (Related: Say It Loud. See also my speculative just-so story about why the incredulity is probably non-normative.)

(It's 1:20 a.m. on Sunday and I've used up my internet quota for the weekend, so it might take me a few days to respond to future comments.)

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T08:10:48.187Z · score: 14 (4 votes) · LW · GW

Just, the sort of thing that you should say 'ah, that makes sense. I will work on that' for the future.

It's actually not clear to me that I should work on that. As a professional hazard of my other career, I'm pretty used to people trying to use "You would be more persuasive if you were nicer" as an attempted silencing tactic; if I just believed everyone who told me that, I would never get anything done.

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T07:01:01.058Z · score: 5 (7 votes) · LW · GW

What is your incredulity here aiming to accomplish?

I genuinely feel incredulous and am trying to express what I'm actually thinking in clear language? I mean, it's also totally going to be the case that the underlying generator of "genuinely felt incredulity" is no doubt going to be some sort of subconscious monkey-politics status move designed by evolution to make myself look good at the expense of others. It's important to notice that! But the mere fact of having noticed that doesn't make the feeling go away, and given that the feeling is there, it's probably going to leak into my writing. I could expend more effort doing a complicated System-2 political calculation that tries to simulate you and strategically compute what words I should say in order to have the desired effect on you. But not only is that more work than saying what I'm actually thinking in clear language, I also expect it to result in worse writing. Use the native architecture!

I mean, if it'll help, we can construct a narrative in which my emotion of incredulity that was designed by evolution to make me look good, actually makes me look bad in local social reality? That's a win-win Pareto improvement: I don't have to mutilate my natural writing style in the name of so-called "cooperative" norms, and you don't have to let my monkey-politics brain get away with "winning" the interaction.

How about this? Incredulity is, definitionally, a failed prediction. The fact that I felt incredulous means that my monkey status instincts are systematically distorting my anticipations about the world, making me delusionally perceive things as "obvious" exactly when they're things that I coincidentally happened to already know, and not because of their actual degree-of-obviousness as operationalized by what fraction of others know them. (And conversely, I'll delusionally perceive things as "nonobvious" exactly when I coincidentally happened to not-know them.)

(Slaps forehead) Hello, Megan! Ten years into this "rationality" business, and here I am still making rookie mistakes like this! How dumb can I get?

I think you should prioritize learning to simulate other minds a bit

Thanks, this is a good suggestion! I probably am below average at avoiding the typical mind fallacy. You should totally feel superior to me on this account!

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T06:00:53.318Z · score: 2 (1 votes) · LW · GW

Okay, but I thought the idea was that instrumental rationality and epistemic rationality are very closely related. Two sides of the same coin, not two flavors of good thing that sometimes trade off against each other. That agents achieve their goals by means of building accurate models, and using those models to "search out paths through probability" that steer the world into the desired goal-state. If the models aren't accurate, the instrumental probability-bending magic doesn't work and cannot work.

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T04:00:13.399Z · score: 23 (6 votes) · LW · GW

I definitely agree that there could exist perverse situations where there are instrumental tradeoffs to be made in truthseeking of the kind I and others have been suspicious of. For lack of a better term, let me call these "instrumentally epistemic" arguments: claims of the form, "X is true, but the consequences of saying it will actually result in less knowledge on net." I can totally believe that some instrumentally epistemic arguments might hold. There's nothing in my understanding of how the universe works that would prevent that kind of scenario from happening.

But in practice, with humans, I expect that a solid supermajority of real-world attempts to explicitly advocate for norm changes on "instrumentally epistemic" grounds are going to be utterly facile rationalizations with the (typically unconscious) motivation of justifying cowardice, intellectual dishonesty, ego-protection, &c.

I (somewhat apologetically) made an "instrumentally epistemic" argument in a private email thread recently, and Ben seemed super pissed in his reply (bold italics, incredulous tone, "?!?!?!?!?!" punctuation). But the thing is—even if I might conceivably go on to defend a modified form of my original argument—I can't blame Ben for using a pissed-off tone in his reply. "Instrumentally epistemic" arguments are an enormous red flag—an infrared flag thirty meters wide. Your prior should be that someone making an "instrumentally epistemic" argument can be usefully modeled as trying to undermine your perception of reality and metaphorically slash your tires (even if their conscious phonological loop never contains the explicit sentence "And now I'm going to try to undermine Ray Arnold's perception of reality").

Now, maybe that prior can be overcome for some arguments and some arguers! But the apparent failure of the one making the "instrumentally epistemic" argument to notice the thirty-meter red flag, is another red flag.

I don't think the hospital example does the situation justice. The trade-off of choosing whether to spend money on a heart transplant or nurse salaries doesn't seem analogous to choosing between truth and the occasional allegedly-instrumentally-epistemic lie (like reassuring your interlocutor that you respect them even when you don't, in fact, respect them). Rather, it seems more closely analogous to choice of inquiry area (like whether to study truths about chemistry, or truths about biology), with "minutes of study time" as the resource to be allocated rather than dollars.

If we want a maximally charitable medical analogy for "instrumentally epistemic" lies, I would instead nominate chemotherapy, where we deliberately poison patients in the hope of hurting cancer cells more than healthy cells. Chemotherapy can be good if there's solid evidence that you have a specific type of cancer that responds well to that specific type of chemotherapy. But you should probably check that people aren't just trying to poison you!

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T03:56:23.862Z · score: 4 (2 votes) · LW · GW

Over the past couple years, I have updated to "yes, LessWrong should be the place focused on truthseeking."

Updated to? This wording surprises me, because I'm having trouble forming a hypothesis as to what your earlier position could have been. (I'm afraid I haven't studied your blogging corpus.) What else is this website for, exactly?

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T00:33:56.530Z · score: 2 (1 votes) · LW · GW

I got the impression that statements of the sort "yay truth as the only sacred value" received strong support; personally I find that off-putting in many contexts.

I also find it off-putting in many contexts—perhaps most contexts. But if there's any consequentialist value in having one space in the entire world where (within the confines of that space) truth is the only sacred value, perhaps this website is a Schelling point?

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-20T19:06:26.439Z · score: 2 (1 votes) · LW · GW

The decision procedure IS the values [...] taught through examples and rituals and anecdotes and example and the weights on the neural nets in people's heads

That makes sense; I agree that culture (which is very complicated and hard to pin down) is a very important determinant of outcomes in organizations. One thing that's probably important to study (that I wish I understood better) is how subcultures develop over time: as people leave and exit the organization over time, the values initially trained into the neural net may drift substantially.

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-20T15:08:14.052Z · score: 10 (7 votes) · LW · GW

or they're doing some sort of socratic move (in the latter case, this is a style of conversation I'd rather not have on my posts

Very well. I will endeavor to be more direct.

there are clear answers to them if you spend a few minutes steelmanning how the aforementioned organization would work well

The fourth virtue is evenness! If you first write at the bottom of a sheet of paper, "And therefore, the aforementioned organization would work well!", it doesn't matter what arguments you write above it afterward—the evidential entanglement between your position and whatever features-of-the-world actually determine organizational success, was fixed the moment you determined your conclusion. After-the-fact steelmanning that selectively searches for arguments supporting that conclusion can't help you design better organizations unless they have the power to change the conclusion. Yes requires the possibility of no.

they're looking for impossible certainty in an obviously context specific and highly variable situation

We're looking for a decision procedure. "It's context-specific; it depends" is a good start, but a useful proposal needs to say more about what it depends on.

A simple example of a decision procedure might be "direct democracy." People vote on what to do, and whichever proposal has more votes is implemented. This procedure provides a specific way to proceed when people don't agree on what to do: they vote!

In both the OP and your response to me, you tell a story about people successfully talking out their differences, but a robust institution needs to be able to function even when people can't talk it out—and the game theory of "What happens if we can't talk it out" probably ends up shaping people's behavior while talking it out.

For example, suspects of a police investigation might be very cooperative with the "good cop" who speaks with a friendly demeanor and offers the suspect a cup of coffee: if you look at the radically transparent video of the interview, you'll just see two people having a perfectly friendly conversation about where the suspect was at 8:20 p.m. on the night of the seventeenth and whether they have witnesses to support this alibi. But the reason that conversation is so friendly is because the suspect can predict that the good cop's partner might not be so friendly.

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-20T05:33:38.747Z · score: 14 (7 votes) · LW · GW

our norm of radical transparency means that this and all similar conversations I have like this will be recorded and shared with everyone, and any such political moves by me will be laughably transparent.

And the decision algorithm that your brain uses to decide who to sit down is also recorded, one imagines? In accordance with our norm of radical transparency.

The general rule is that people should give equal weight to their own needs, the needs of the people they're interacting with, and the needs of the organization as a whole.

I'm terribly sorry, but I'm afraid I'm having a little bit of trouble working out the details of exactly how this rule would be applied in practice—could you, perhaps, possibly, help me understand?

Suppose Jill comes to Jezebel and says, "Jezebel, by mentioning the hidden Bayesian structure of language and cognition so often, you're putting your own needs above the needs of those you're interacting with, and those of the organization as a whole."

Jezebel says, "Thanks, I really value your opinion! However, I've already taken everyone's needs into account, and I'm very confident I'm already doing the right thing."

What happens?

Comment by zack_m_davis on Yes Requires the Possibility of No · 2019-07-19T04:05:27.682Z · score: 2 (1 votes) · LW · GW

10 is vague, and lacks examples.

That's fair. For a more concrete example, see the immortal Scott Alexander's recent post "Against Lie Inflation" (itself a reply to discussion with Jessica Taylor on her Less Wrong post "The AI Timelines Scam"). Alexander argues:

The word "lie" is useful because some statements are lies and others aren't. [...] The rebranding of lying is basically a parasitic process, exploiting the trust we have in a functioning piece of language until it's lost all meaning[.]

I read Alexander as making essentially the same point as "10." in the grandparent, with G = "honest reports of unconsciously biased beliefs (about AI timelines)" and H = "lying".

Comment by zack_m_davis on Drowning children are rare · 2019-07-17T06:47:50.404Z · score: 20 (4 votes) · LW · GW

a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben's claim is neither dishonest nor false.

While I agree that this is a sufficient rebuttal of Ray's "dishonest and/or false" charge (Ben said that GiveWell publishes such numbers, and GiveWell does, in fact, publish such numbers), it seems worth acknowledging Ray's point about context and reduced visibility: it's not misleading to publish potentially-untrustworthy (but arguably better than nothing) numbers surrounded by appropriate caveats and qualifiers, even when it would be misleading to loudly trumpet the numbers as if they were fully trustworthy.

That said, however, Ray's "GiveWell goes to great lengths to hide those numbers" claim seems false to me in light of an email I received from GiveWell today (the occasion of my posting this belated comment), which reads, in part:

GiveWell has made a lot of progress since your last recorded gift in 2015. Our current top charities continue to avert deaths and improve lives each day, and are the best giving opportunities we're aware of today. To illustrate, right now we estimate that for every $2,400 donated to Malaria Consortium for its seasonal malaria chemoprevention program, the death of a child will be averted.

(Bolding mine.)

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-16T06:12:08.084Z · score: 14 (6 votes) · LW · GW

In order to combat publication bias, I should probably tell the Open Thread about a post idea that I started drafting tonight but can't finish because it looks like my idea was wrong. Working title: "Information Theory Against Politeness." I had drafted this much—

Suppose the Quality of a Blog Post is an integer between 0 and 15 inclusive, and furthermore that the Quality of Posts is uniformly distributed. Commenters can roughly assess the Quality of a Post (with some error in either direction) and express their assessment in the form of a Comment, which is also an integer between 0 and 15 inclusive. If the True Quality of a Post is q, then the assessment expressed in a Comment on that Post follows the probability distribution P(C = (q−1) mod 16) = P(C = q) = P(C = (q+1) mod 16) = 1/3.

(Notice the "wraparound" between 15 and 0: it can be hard for a humble Commenter to tell the difference between brilliance-beyond-their-ken, and utter madness!)

The entropy of the Quality distribution is lg(16) = 4 bits: in order to inform someone about the Quality of a Post, you need to transmit 4 bits of information. Comments can be thought of as a noisy "channel" conveying information about the post.

The mutual information between a Comment, and the Post's Quality, is equal to the entropy of the distribution of Comments (which is going to be 4 bits, by symmetry), minus the entropy of a Comment given the Post's Quality (which is lg(3) ≈ 1.58 bits). So the "capacity" of a single Comment is around 4 − 1.58 = 2.42 bits. On average, in expectation across the multiverse, &c., we only need to read 4/2.42 ≈ 1.65 Comments in order to determine the Quality of a Post. Efficient!
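(A numerical sanity check, as a Python sketch. It assumes the noise model is Comment = Quality ± 1 or Quality itself, each with probability 1/3 and with mod-16 wraparound—the model that matches the ≈1.58-bit conditional entropy above.)

```python
import math

# Quality Q is uniform on 0..15; Comment C = (Q + noise) mod 16,
# with noise in {-1, 0, +1}, each with probability 1/3 (assumed model).
N = 16
p_joint = {}
for q in range(N):
    for eps in (-1, 0, 1):
        c = (q + eps) % N
        p_joint[(q, c)] = p_joint.get((q, c), 0.0) + (1 / N) * (1 / 3)

# Marginal distribution of Comments (uniform, by symmetry).
p_c = {}
for (q, c), p in p_joint.items():
    p_c[c] = p_c.get(c, 0.0) + p

# Mutual information I(Q; C) = sum over (q, c) of
#   p(q, c) * lg(p(q, c) / (p(q) * p(c))).
mi = sum(p * math.log2(p / ((1 / N) * p_c[c])) for (q, c), p in p_joint.items())
print(round(mi, 2))  # 2.42, i.e., lg(16) - lg(3)
```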

Now suppose the Moderators introduce a new Rule: it turns out Comments below 10 are Rude and hurt Post Authors' Feelings. Henceforth, all Comments must be an integer between 10 and 15 inclusive, rather than between 0 and 15 inclusive!

... and then I was expecting that the restricted range imposed by the new Rule would decrease the expressive power of Comments (as measured by mutual information), but now I don't think this is right: the mutual information is about the noise in Commenter's perceptions, not the "coarseness" of the "buckets" in which it is expressed: lg(16) − lg(3) has the same value as lg(8) − lg(1.5).

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-15T16:35:02.446Z · score: 1 (3 votes) · LW · GW


Yeah, I would have expected Jessica to get it, except that I suspect she's also executing a strategy of habitual Socratic irony (but without my additional innovation of immediately backing down and unpacking the intent when challenged), which doesn't work when both sides of a conversation are doing it.

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-15T03:32:51.854Z · score: 19 (9 votes) · LW · GW

You caught me—introspecting, I think the grandparent was written in a spirit of semi-deliberate irony. ("Semi" because it just felt like the "right" thing to say there; I don't think I put a lot of effort into modeling how various readers would interpret it.)

Roland is speculating that the real reason for intentionally incomplete explanations in the handbook is different from the stated reason, and I offered a particularly blunt phrasing ("we don't want to undercut our core product") of the hypothesized true reason, and suggested that that's what the handbook would have said in that case. I think I anticipated that a lot of readers would find my proposal intuitively preposterous: "everyone knows" that no one would matter-of-factly report such a self-interested rationale (especially when writing on behalf of an organization, rather than admitting a vice among friends). That's why the earlier scenes in the 2009 film The Invention of Lying, or your post "Act of Charity", are (typically) experienced as absurdist comedy rather than an inspiring and heartwarming portrayal of a more truthful world.

But it shouldn't be absurd for the stated reason and the real reason to be the same! Particularly for an organization like CfAR which is specifically about advancing the art of rationality. And, I don't know—I think sometimes I talk in a way that makes me seem more politically naïve than I actually am, because I feel as if the "naïve" attitude is in some way normative? ("You really think someone would do that? Just go on the internet and tell lies?") Arguably this is somewhat ironic (being deceptive about your ability to detect deception is probably not actually the same thing as honesty), but I haven't heretofore analyzed this behavioral pattern of mine in enough detail to potentially decide to stop doing it??

I think another factor might be that I feel guilty about being "mean" to CfAR in the great-great-great grandparent comment? (CfAR isn't a person and doesn't have feelings, but my friend who works there is and does.) Such that maybe the emotional need to signal that I'm still fundamentally loyal to the "mainstream rationality" tribe (despite the underlying background situation where I've been collaborating with you and Ben and Michael to discuss what you see as fatal deficits of integrity in "the community" as presently organized) interacted with my preëxisting tendency towards semi-performative naiveté in a way that resulted in me writing a bad blog comment? It's a good thing you were here to hold me to account for it!

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-14T22:40:43.605Z · score: -6 (7 votes) · LW · GW

I suspect that this is the real reason.

It's pretty uncharitable of you to just accuse CfAR of lying like that! If the actual reason were "Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don't want to undercut our core product," then the handbook would have just said that.

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-14T22:34:02.786Z · score: 9 (8 votes) · LW · GW

There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc.

Yes, I agree: for these subjects, the "there's a lot of stuff we don't know how to teach in writing" disclaimer I suggested in the grandparent would be a big understatement.

a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers

Useless, I can believe. (The extreme limiting case of "there's a lot of stuff we don't know how to teach in this format" is "there is literally nothing we know how to teach in this format.") But harmful? How? Won't the unexpected syllabus section titles at least disabuse them of their bad assumptions?

Reading the sequences [...] are unlikely to have much relevance to what CFAR teaches.

Really? The tagline on the website says, "Developing clear thinking for the sake of humanity’s future." I guess I'm having trouble imagining a developing-clear-thinking-for-the-sake-of-humanity's-future curriculum for which the things we write about on this website would be irrelevant. The "comfort zone expansion" exercises I've heard about would qualify, but Sequences-knowledge seems totally relevant to something like, say, double crux.

(It's actually pretty weird/surprising that I've never personally been to a CfAR workshop! I think I've been assuming that my entire social world has already been so anchored on the so-called "rationalist" community for so long, that the workshop proper would be superfluous.)

Comment by zack_m_davis on No nonsense version of the "racial algorithm bias" · 2019-07-14T01:00:31.483Z · score: 8 (4 votes) · LW · GW

I like the no-nonsense section titles!

I also like the attempt to graphically teach the conflict between the different fairness desiderata using squares, but I think I would need a few more intermediate diagrams (or probably, to work them out myself) to really "get it." I think the standard citation here is "Inherent Trade-Offs in the Fair Determination of Risk Scores", but that presentation has a lot more equations and fewer squares.
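To see the trade-off concretely, here is a minimal numeric sketch (with invented numbers, not taken from the paper or the post) of the impossibility result's core case: when two groups have different base rates, a score can be perfectly calibrated in both groups and still produce different false-positive rates.

```python
# Sketch: calibration vs. equal false-positive rates under unequal base rates.
# All numbers below are hypothetical, chosen only to make the arithmetic clean.

def rates(groups):
    """Compute base rate and false-positive rate per group.

    Each group maps a risk score to (n_people, n_truly_positive).
    The classifier flags everyone whose score is >= 0.5.
    """
    out = {}
    for name, buckets in groups.items():
        total = sum(n for n, _ in buckets.values())
        positives = sum(p for _, p in buckets.values())
        # False positives: flagged people who are truly negative.
        fp = sum(n - p for s, (n, p) in buckets.items() if s >= 0.5)
        out[name] = {
            "base_rate": positives / total,
            "fpr": fp / (total - positives),
        }
    return out

# Both groups are perfectly calibrated: among people scored 0.7,
# exactly 70% are truly positive; among people scored 0.3, exactly 30% are.
groups = {
    "A": {0.7: (60, 42), 0.3: (40, 12)},  # base rate 0.54
    "B": {0.7: (20, 14), 0.3: (80, 24)},  # base rate 0.38
}

r = rates(groups)
print(r)  # group A's false-positive rate is about four times group B's
```

Even though the score means the same thing in both groups, group A's higher base rate puts more of its members in the high-score bucket, so A's innocent members get flagged more often. Equalizing the false-positive rates would require breaking calibration, which is the conflict the squares are depicting.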

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-13T22:48:46.337Z · score: 27 (18 votes) · LW · GW

a harder time grasping a given technique if they've already anchored themselves on an incomplete understanding

This is certainly theoretically possible, but I'm very suspicious of it on reversal test grounds: if additional prior reading is bad, then why isn't less prior reading even better? Should aspiring rationalists not read the Sequences for fear of an incomplete understanding spoiling themselves for some future $3,900 CfAR workshop? (And is it bad that I know about the reversal test without having attended a CfAR workshop?)

I feel the same way about schoolteachers who discourage their students from studying textbooks on their own (because they "should" be learning that material by enrolling in the appropriate school course). Yes, when trying to learn from a book, there is some risk of making mistakes that you wouldn't make with the help of a sufficiently attentive personal tutor (which, realistically, you're not going to get from attending lecture classes in school anyway). But given the alternative of placing my intellectual trajectory at the mercy of an institution that has no particular reason to care about my welfare, I think I'll take my chances.

Note that I'm specifically reacting to the suggestion that people not read things for their own alleged benefit. If the handbook had just said, "Fair warning, this isn't a substitute for the workshop because there's a lot of stuff we don't know how to teach in writing," then fine; that seems probably true. What I'm skeptical of is the hypothesized non-monotonicity whereby additional lower-quality study allegedly damages later higher-quality study. First, because I just don't think it's true on the merits: I falsifiably predict that, e.g., math students who read the course textbook on their own beforehand will do much better in the course than controls who haven't. (Although the pre-readers might annoy teachers whose jobs are easier if everyone in the class is obedient and equally ignorant.) And second, because the general cognitive strategy of waiting for the designated teacher to spoonfeed you the "correct" version carries massive opportunity costs when iterated (even if spoonfeeding is generally higher-quality than autodidacticism, and could be much higher-quality in some specific cases).