Comment by zack_m_davis on Drowning children are rare · 2019-07-17T06:47:50.404Z · score: 20 (4 votes) · LW · GW

a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben's claim is neither dishonest nor false.

While I agree that this is a sufficient rebuttal of Ray's "dishonest and/or false" charge (Ben said that GiveWell publishes such numbers, and GiveWell does, in fact, publish such numbers), it seems worth acknowledging Ray's point about context and reduced visibility: it's not misleading to publish potentially-untrustworthy (but arguably better-than-nothing) numbers surrounded by appropriate caveats and qualifiers, even when it would be misleading to loudly trumpet the numbers as if they were fully trustworthy.

That said, however, Ray's "GiveWell goes to great lengths to hide those numbers" claim seems false to me in light of an email I received from GiveWell today (the occasion of my posting this belated comment), which reads, in part:

GiveWell has made a lot of progress since your last recorded gift in 2015. Our current top charities continue to avert deaths and improve lives each day, and are the best giving opportunities we're aware of today. To illustrate, right now we estimate that for every $2,400 donated to Malaria Consortium for its seasonal malaria chemoprevention program, the death of a child will be averted.

(Bolding mine.)

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-16T06:12:08.084Z · score: 12 (5 votes) · LW · GW

In order to combat publication bias, I should probably tell the Open Thread about a post idea that I started drafting tonight but can't finish because it looks like my idea was wrong. Working title: "Information Theory Against Politeness." I had drafted this much—

Suppose the Quality of a Blog Post is an integer between 0 and 15 inclusive, and furthermore that the Quality of Posts is uniformly distributed. Commenters can roughly assess the Quality of a Post (with some error in either direction) and express their assessment in the form of a Comment, which is also an integer between 0 and 15 inclusive. If the True Quality of a Post is q, then the assessment expressed in a Comment on that Post follows the probability distribution P(Comment = c | Quality = q) = 1/3 for c ∈ {q − 1, q, q + 1} (mod 16), and 0 otherwise.

(Notice the "wraparound" between 15 and 0: it can be hard for a humble Commenter to tell the difference between brilliance-beyond-their-ken, and utter madness!)

The entropy of the Quality distribution is lg(16) = 4 bits: in order to inform someone about the Quality of a Post, you need to transmit 4 bits of information. Comments can be thought of as a noisy "channel" conveying information about the Post.

The mutual information between a Comment and the Post's Quality is equal to the entropy of the distribution of Comments (which is going to be 4 bits, by symmetry), minus the entropy of a Comment given the Post's Quality (which is lg(3) ≈ 1.58 bits). So the "capacity" of a single Comment is around 4 − 1.58 = 2.42 bits. On average, in expectation across the multiverse, &c., we only need to read 4/2.42 ≈ 1.65 Comments in order to determine the Quality of a Post. Efficient!

Now suppose the Moderators introduce a new Rule: it turns out Comments below 10 are Rude and hurt Post Authors' Feelings. Henceforth, all Comments must be an integer between 10 and 15 inclusive, rather than between 0 and 15 inclusive!

... and then I was expecting that the restricted range imposed by the new Rule would decrease the expressive power of Comments (as measured by mutual information), but now I don't think this is right: the mutual information is about the noise in Commenters' perceptions, not the "coarseness" of the "buckets" in which it is expressed: lg(16) − lg(3) has the same value as lg(8) − lg(1.5).
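(For anyone who wants to check the arithmetic, here's a minimal numerical sketch of the channel described above—the variable names and the brute-force joint-distribution computation are mine:)

```python
import math
from itertools import product

N = 16  # Quality and Comments are integers 0..15

# P(Comment = c | Quality = q): uniform over {q−1, q, q+1} mod 16
def channel(c, q):
    return 1 / 3 if (c - q) % N in (N - 1, 0, 1) else 0.0

# Joint distribution of (Quality, Comment), with Quality uniform
joint = [(1 / N) * channel(c, q) for q, c in product(range(N), repeat=2)]

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

# I(Q; C) = H(Q) + H(C) − H(Q, C), where H(Q) = H(C) = 4 bits by symmetry
mutual_information = 4 + 4 - entropy(joint)
print(mutual_information)  # ≈ 2.415 = lg(16) − lg(3)
print(math.log2(16) - math.log2(3), math.log2(8) - math.log2(1.5))  # same value
```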

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-15T16:35:02.446Z · score: 0 (2 votes) · LW · GW

obvious

Yeah, I would have expected Jessica to get it, except that I suspect she's also executing a strategy of habitual Socratic irony (but without my additional innovation of immediately backing down and unpacking the intent when challenged), which doesn't work when both sides of a conversation are doing it.

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-15T03:32:51.854Z · score: 18 (8 votes) · LW · GW

You caught me—introspecting, I think the grandparent was written in a spirit of semi-deliberate irony. ("Semi" because it just felt like the "right" thing to say there; I don't think I put a lot of effort into modeling how various readers would interpret it.)

Roland is speculating that the real reason for intentionally incomplete explanations in the handbook is different from the stated reason, and I offered a particularly blunt phrasing ("we don't want to undercut our core product") of the hypothesized true reason, and suggested that that's what the handbook would have said in that case. I think I anticipated that a lot of readers would find my proposal intuitively preposterous: "everyone knows" that no one would matter-of-factly report such a self-interested rationale (especially when writing on behalf of an organization, rather than admitting a vice among friends). That's why the earlier scenes in the 2009 film The Invention of Lying, or your post "Act of Charity", are (typically) experienced as absurdist comedy rather than an inspiring and heartwarming portrayal of a more truthful world.

But it shouldn't be absurd for the stated reason and the real reason to be the same! Particularly for an organization like CfAR which is specifically about advancing the art of rationality. And, I don't know—I think sometimes I talk in a way that makes me seem more politically naïve than I actually am, because I feel as if the "naïve" attitude is in some way normative? ("You really think someone would do that? Just go on the internet and tell lies?") Arguably this is somewhat ironic (being deceptive about your ability to detect deception is probably not actually the same thing as honesty), but I haven't heretofore analyzed this behavioral pattern of mine in enough detail to potentially decide to stop doing it??

I think another factor might be that I feel guilty about being "mean" to CfAR in the great-great-great grandparent comment? (CfAR isn't a person and doesn't have feelings, but my friend who works there is and does.) Such that maybe the emotional need to signal that I'm still fundamentally loyal to the "mainstream rationality" tribe (despite the underlying background situation where I've been collaborating with you and Ben and Michael to discuss what you see as fatal deficits of integrity in "the community" as presently organized) interacted with my preëxisting tendency towards semi-performative naiveté in a way that resulted in me writing a bad blog comment? It's a good thing you were here to hold me to account for it!

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-14T22:40:43.605Z · score: -4 (6 votes) · LW · GW

I suspect that this is the real reason.

It's pretty uncharitable of you to just accuse CfAR of lying like that! If the actual reason were "Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don't want to undercut our core product," then the handbook would have just said that.

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-14T22:34:02.786Z · score: 8 (6 votes) · LW · GW

There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc.

Yes, I agree: for these subjects, the "there's a lot of stuff we don't know how to teach in writing" disclaimer I suggested in the grandparent would be a big understatement.

a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers

Useless, I can believe. (The extreme limiting case of "there's a lot of stuff we don't know how to teach in this format" is "there is literally nothing we know how to teach in this format.") But harmful? How? Won't the unexpected syllabus section titles at least disabuse them of their bad assumptions?

Reading the sequences [...] are unlikely to have much relevance to what CFAR teaches.

Really? The tagline on the website says, "Developing clear thinking for the sake of humanity’s future." I guess I'm having trouble imagining a developing-clear-thinking-for-the-sake-of-humanity's-future curriculum for which the things we write about on this website would be irrelevant. The "comfort zone expansion" exercises I've heard about would qualify, but Sequences-knowledge seems totally relevant to something like, say, double crux.

(It's actually pretty weird/surprising that I've never personally been to a CfAR workshop! I think I've been assuming that my entire social world has already been so anchored on the so-called "rationalist" community for so long, that the workshop proper would be superfluous.)

Comment by zack_m_davis on No nonsense version of the "racial algorithm bias" · 2019-07-14T01:00:31.483Z · score: 6 (3 votes) · LW · GW

I like the no-nonsense section titles!

I also like the attempt to graphically teach the conflict between the different fairness desiderata using squares, but I think I would need a few more intermediate diagrams (or probably, to work them out myself) to really "get it." I think the standard citation here is "Inherent Trade-Offs in the Fair Determination of Risk Scores", but that presentation has a lot more equations and fewer squares.

Comment by zack_m_davis on Open Thread July 2019 · 2019-07-13T22:48:46.337Z · score: 29 (14 votes) · LW · GW

a harder time grasping a given technique if they've already anchored themselves on an incomplete understanding

This is certainly theoretically possible, but I'm very suspicious of it on reversal test grounds: if additional prior reading is bad, then why isn't less prior reading even better? Should aspiring rationalists not read the Sequences for fear of an incomplete understanding spoiling themselves for some future $3,900 CfAR workshop? (And is it bad that I know about the reversal test without having attended a CfAR workshop?)

I feel the same way about schoolteachers who discourage their students from studying textbooks on their own (because they "should" be learning that material by enrolling in the appropriate school course). Yes, when trying to learn from a book, there is some risk of making mistakes that you wouldn't make with the help of a sufficiently attentive personal tutor (which, realistically, you're not going to get from attending lecture classes in school anyway). But given the alternative of placing my intellectual trajectory at the mercy of an institution that has no particular reason to care about my welfare, I think I'll take my chances.

Note that I'm specifically reacting to the suggestion that people not read things for their own alleged benefit. If the handbook had just said, "Fair warning, this isn't a substitute for the workshop because there's a lot of stuff we don't know how to teach in writing," then fine; that seems probably true. What I'm skeptical of is hypothesized non-monotonicity whereby additional lower-quality study allegedly damages later higher-quality study. First, because I just don't think it's true on the merits: I falsifiably predict that, e.g., math students who read the course textbook on their own beforehand will do much better in the course than controls who haven't. (Although the pre-readers might annoy teachers whose jobs are easier if everyone in the class is obedient and equally ignorant.) And second, because the general cognitive strategy of waiting for the designated teacher to spoonfeed you the "correct" version carries massive opportunity costs when iterated (even if spoonfeeding is generally higher-quality than autodidacticism, and could be much higher-quality in some specific cases).

Comment by zack_m_davis on No, it's not The Incentives—it's you · 2019-07-13T19:48:01.046Z · score: 4 (2 votes) · LW · GW

As it happens, the case of speeding also came up in the comments on the OP. Yarkoni writes:

[...] I think the point I'm making actually works well for speeding too: when you get pulled over by a police officer for going 10 over the limit, nobody is going to take you seriously if your objection to the ticket is "but I'm incentivized to go 10 over, because I can get home a little faster, and hardly anyone ever gets pulled over at that speed!" The way we all think about speeding tickets is that, sure, there may be reasons we choose to break the law, but it's still our informed decision to do so. We don't try to shirk the responsibility for speeding by pretending that we're helpless in the face of the huge incentive to get where we're going just a little bit faster than the law actually allows. I think if we looked at research practice the same way, that would be a considerable improvement.

Comment by zack_m_davis on You Are A Brain · 2019-07-07T18:01:49.518Z · score: 15 (7 votes) · LW · GW

So, I agree with this criticism, but you really should have led with the criticism, instead of starting out with the impudent demand (well, "request"—you did say "please") that Liron change his presentation, and then only explaining the rationale when questioned. A criticism that is stated can then be argued with (I bet you didn't anticipate that Liron was presenting to a boys' group!), whereas a request backed by an unstated rationale (of which it is assumed that "everyone knows") is more likely to be functioning as a social threat: "Do as I say, or I'll attack your moral character in the ensuing interaction (rather than arguing in good faith)."

Understanding these dynamics may turn out to be surprisingly relevant to your interests—although you probably won't understand what I'm talking about for another ten years, two months.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-07-04T20:24:25.134Z · score: 8 (4 votes) · LW · GW

I thought it made sense to use the word "cult" pejoratively in the specific context of what the grandparent was trying to say, but it was a pretty noncentral usage (as the hyperlink to "Every Cause Wants To Be ..." was meant to indicate); I don't think the standard advice is going to directly apply well to the case of my disappointment with what the rationalist community is in 2019—although the standard advice might be a fertile source of ideas for how to diversify my "portfolio" of social ties, which is definitely worth doing independently of the Sorites problem about where to draw the category boundary around "cults". (I was wondering if anyone was going to notice the irony of the grandparent mentioning the sunk cost fallacy!)

I have at least two more posts to finish about the cognitive function of categories (working titles: "Schelling Categories, and Simple Membership Tests" and "Instrumental Categories, and War") that need to go on this website because they're part of a Sequence and don't make sense anywhere else. After that, I might reallocate attention back to my other avocations.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-07-04T20:19:50.080Z · score: 6 (3 votes) · LW · GW

(This may be another case where it would make sense to detach this derailed thread into its own post in order to avoid polluting the comments on "Causal Reality vs. Social Reality", if that's cheap to do.)

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-07-04T09:30:39.058Z · score: 25 (10 votes) · LW · GW

I do not expect people namespace considers interesting to be afraid of making their interesting contributions due to fear of being banned

It's important to think on the margin—not only do actions short of banning (e.g., "mere" threats of banning) have an impact on users' behavior (as Said pointed out), they can also have different effects on users with different opportunity costs. I expect the people Namespace is thinking of face different opportunity costs than me: their voice/exit trade-off between writing for Less Wrong and their second-best choice of forum looks different from mine.

In the past month-and-a-half, we've had:

  • A 135-comment meta trainwreck that started because a MIRI Research Associate found a discussion-relevant reference to my work on the philosophy of language "unpleasant" (because my interest in that area of philosophy was motivated by my need to think about something else); and,

  • A 34-comments-and-counting meta trainwreck that started because a Less Wrong moderator found my use of a rhetorical question, exclamation marks, and reference hyperlinks to be insufficiently "collaborative."

Neither of these discussions left me with a fear of being banned—insofar as both conversations had an unfortunately inextricable political component, I count them both as decisive "victories" for me (judging by the karma scores and what was said)—but they did suck up an enormous amount of my time and emotional energy that I could have spent doing other things. Someone otherwise like me but with lower opportunity costs would probably be smarter to just leave and try to have intellectual discussions in some other venue where it wasn't necessary to decisively win a political slapfight on whether philosophers should consider each other's feelings while discussing philosophy. Arguably I would be smarter to leave, too, but I'm stuck, because I joined a cult ten years ago when I was twenty-one years old, and now the cult owns my soul and I don't have anywhere else to go.

I was at the first Overcoming Bias meetup in Millbrae in February 2008. I did the visual design for the 2009 and 2010 Singularity Summit program booklets. The first time I was paid money for programming work was when I wrote some Python scripts to help organize the Singularity Institute's donor database in 2011. In 2012, I designed PowerPoint slides for the preliminary "kata" (about the sunk cost fallacy) for what would soon be dubbed the Center for Applied Rationality, to which I would later donate $16,500 between 2013 and 2016 after I got a real programming job. Today, I live in Berkeley and all of my friends are "rationalists."

I mention all this (mostly, hopefully) not to try to pull rank—you really shouldn't be making moderation decisions based on seniority!—but to illustrate exactly how serious a threat "removal from our communal places of discussion" is to me. My entire adult life only makes sense in the context of this website. If the forces of blandness want me gone because I use too many exclamation points (or perhaps some other reason), I in particular have an unusually strong incentive to either stand my ground or die trying.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-07-03T05:47:56.292Z · score: 19 (7 votes) · LW · GW

I accept your apology.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-07-01T20:20:52.013Z · score: 29 (7 votes) · LW · GW

This comment contains no italics and no exclamation points. (I didn't realize that was the implied request—as Wei intuited, I was trying to show that that's just how I talk sometimes for complicated psychological reasons, and that I didn't think it should be taken personally. Now that you've explicitly told me to not do that, I will. As you've noticed, I'm not always very good at subtext, but I should hope to be capable of complying with explicit requests.)

That is persuasive that you respect my ability to think and even flattering. I would have also taken it as strong evidence if you'd simply said "I respect your thinking" at some earlier point.

I don't think that would be strong evidence. Anyone could have said "I respect your thinking" in order to be nice (or to deescalate the conflict), even if they didn't, in fact, respect you. The Mnemosyne cards are stronger evidence because they already existed.

you'd come in order to do me the favor of informing me I was flat-out, no questions about it, wrong

I came to offer relevant arguments and commentary in response to the OP. Whether or not my arguments and commentary were persuasive (or show that you were "wrong") is up for each individual reader to decide for themselves.

I am strongly tempted to ban you from commenting on any of my posts to save myself further aggravation

That's fine with me. (I've done this once with one user whose comments I didn't like; it would be hypocritical for me to object if someone else did it to me because they didn't like my comments.)

this further exchange about conversational norms has been the absolute lowlight of my weekend (indeed, receiving your comments has made my whole week feel worse) [...] I'm at my limit of willingness to talk to you.

Yes, this meta exchange about discourse norms has been quite stressful for me, too. (The conversation about the post itself was fine for me.) I hope you feel better soon.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-07-01T13:03:39.345Z · score: 2 (1 votes) · LW · GW

I thought of a way to provide evidence that I respect you as a thinker! I liked your "planning is recursive" post back in March, to the extent that I made two flashcards about it for my Mnemosyne spaced-repetition deck, so that I wouldn't forget. Here are some screenshots—

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-07-01T04:01:06.477Z · score: 11 (5 votes) · LW · GW

Thanks for the informative writing feedback!

As you said yourself, this was rhetorical

I think the occasional rhetorical question is a pretty ordinary part of the way people naturally talk and discuss ideas? I can avoid it if the discourse norms in a particular space demand it, but I tend to feel like this is excessive optimization for politeness at the cost of expressivity. Perhaps different writers place different weights on the relative value of politeness, but I should hope to at least be consistent in what behavior I display and what behavior I expect from others: if you see me tone-policing others over statements whose tone is as harsh as statements I've made in comparable situations, then I would be being hypocritical and you should criticize me for it!

The tone of these sentences, appending an exclamation mark to a trivial statements [...] adding energy and surprise to your lessons

I often use a "high-energy" writing style with lots of italics and exclamation points! I think it textually mimics the way I talk when I'm excited! (I think if you scan over my Less Wrong contributions, my personal blog, or my secret ("secret") blog, you'll see this a lot.) I can see how some readers might find this obnoxious, but I don't think it's accurate to read it as an indicator of contempt for my present interlocutor. (It probably correlates somewhat with contempt, but not nearly as much as you seem to be assuming?)

you're maybe leaning a bit too much on your sources/references/links for credibility in a way that also registers as condescending [...] despite those links going to elementary resources and concepts

Likewise, I think lots of hyperlinks to jargon and concepts are a pretty persistent feature of my writing style? (To a greater extent in public forum posts like this rather than private emails.) In-body hyperlinks are pretty unobtrusive—readers who are interested in the link can click it, and readers who aren't can not-click it.

I wouldn't denigrate the value of having "elementary" resources easily at hand! I often find myself, e.g., looking up the definition of words I ostensibly "already know," not because I can't successfully use the word in a sentence, but to "sync up" my learned understanding of what the word means with what the dictionary says. (For example, I looked up brusque while composing this comment.)

You're using a lot of examples to back up a very simple point (that clamoring in streets isn't an effective strategy).

The intent wasn't just to back up the point that clamoring in the streets is ineffective, but to illustrate what I thought cause-and-effect (causal reality) reasoning would look like in contrast to social (social reality) reasoning—I took "clamoring in the streets" to be an example of the kind of action that social-reality reasoning would recommend. I thought such illustration could provide value to the comment thread, even though you've doubtlessly already heard of earning to give. (I didn't mean to falsely imply you hadn't.)

In practice, you spent 400 words

Yes, it was a bit of a tangent. (Once I start excitedly explaining something, it can be hard to know exactly when to stop! The 29 karma (in 13 votes) suggests that the voters seemed to like it, at least?)

I won't assume you're paying attention, but you might have noticed that I post a moderate amount on LessWrong and am in fact a member of the LessWrong 2.0 team.

I noticed, yes. I don't think this should affect my writing that much? Certainly, how I write should depend on my model of who I'm talking to, but my model of you is mostly informed by the text you've written. (I think we also met at a party once? Aren't you Miranda's husband?) The fact that you work for Less Wrong doesn't alter my perception much.

As another example, this is dismissive and rude too

I wouldn't say "dismissive", exactly, but it's definitely brusque, which, in the context of the surrounding thread, was an awful writing choice on my part. I'm sorry about that! Now that you've correctly pointed out that I made a terrible writing decision, let me try to make partial amends for it by exerting some more interpretive labor to unpack what I meant—

I suspect we have a pretty large disagreement on the degree to which respect is a necessary prerequisite for whether a conversation with someone will be productive? I think if someone is making good arguments, then I consider it my responsibility to update on the information content of what they're saying. Because I'm a social monkey, I certainly find it harder to update (especially publicly) if someone's good arguments are phrased in a way that doesn't seem to respect me. Correspondingly, for my own emotional well-being, I prefer discussion spaces with strong politeness norms. But from the standpoint of minds as inference engines, I consider this a bug in my cognition: I expect to perform better if I can somehow muster the mental toughness to learn from people who hate my guts. (As it is written of the fifth virtue: "Do not believe you do others a favor if you accept their arguments; the favor is to you.")

From that perspective (which you might disagree with!), can you see why it might be tempting to metaphorically characterize the respectful-behavior-is-necessary mindset as "expecting to be marketed to"?

I doubt neither of us is going to start feeling more warmly towards the other with further comments, nor do I expect us to communicate much more information than we already have.

I take that as a challenge! I hope this comment has succeeded at making you feel more warmly towards me and communicating much more information than we already have! But, I'm also assigning a substantial probability that I failed in this ambition. I'm sorry if I failed.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-30T22:01:00.887Z · score: 9 (3 votes) · LW · GW

Even if the contrast is "transhumanist social reality", I ask how did that social reality come to be and how did people join it? I'm pretty sure most transhumanists weren't born to transhumanist families, educated in transhumanist schools, or surrounded at transhumanist friends. Something at some point prompted them to join this new social group

This isn't necessarily a point in transhumanism's favor! At least vertically-transmitted memeplexes (spread from parents to children, like established religions) face selective pressures tying the fitness of the meme to the fitness of the host. (Where evolutionary fitness isn't necessarily good from a humane perspective, but there are at least bounds on how bad it can be.) Horizontally-transmitted memeplexes (like cults or mass political movements) don't face this constraint and can optimize for raw marketing appeal independent of long-term consequences.

"Terrible" is a moral judgment. The anticipated experience is that when I point my "moral evaluator unit" at a morally terrible thing, it outputs "terrible."

Isn't this kind of circular? Compare: "A Vice President is anyone whose job title is vice-president. That's a falsifiable prediction because it constrains your anticipations of what you'll see on their business card." It's true, but one is left with the sense that some important part of the explanation is being left out. What is the moral evaluator unit for?

I think moral judgements are usually understood to have a social function—if I see someone stealing forty cakes and say that that's terrible, there's an implied call-to-action to punish the thief in accordance with the laws of our tribe. It seems weird to expect this as an alternative to social reality.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-30T21:58:52.156Z · score: 13 (6 votes) · LW · GW

I think some means of communicating are going to be more effective than others

Yes, marketing is important.

I think there is still some prior that you are correct and I curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't.

You can just directly respond to your interlocutor's arguments. Whether or not you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack!

this can go a lot better if you're open to the fact that you could be the wrong one

Your degree of openness to the hypothesis that you could be the wrong one should be proportional to the actual probability that you are, in fact, the wrong one. Rules that require people to pretend to be more uncertain than they actually are (because disagreement is disrespect) run a serious risk of degenerating into "I accept a belief from you if you accept a belief from me" social exchange.

can you link to any examples of this on LessWrong from the past few years?

For example, I'm not sure how I'm supposed to rewrite my initial comment on this post to be more collaborative without making it worse writing.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-30T18:15:06.197Z · score: 7 (5 votes) · LW · GW

it feels like it comes from a place of "You are obviously wrong. Your reasoning is obviously wrong. I want you and everyone else to know that you're wrong and your beliefs should be dismissed."

I would think that if someone's reasoning is obviously wrong, then that person and everyone else should be informed that they are wrong (and that the particular beliefs that are wrong should be dismissed), because then everyone involved will be less wrong, which is what this website is all about!

Certainly, one would be advised to be very careful before asserting that someone's reasoning is obviously wrong. (Obvious mistakes are more likely to be caught before publication than subtle ones, so if you think you've found an obvious mistake in someone's post, you should strongly consider the alternative hypotheses that either you're the one who is wrong, or that you're, e.g., erroneously expecting short inferential distances.)

More generally, I'm in favor of politeness norms where politeness doesn't sacrifice expressive power, but I'm wary of excessive emphasis on collaborative norms (what some authors would call "tone-policing") being used to obfuscate information exchange or even shut it down (via what Yudkowsky characterized as appeal-to-egalitarianism conversation-halters).

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-30T17:45:04.205Z · score: 5 (2 votes) · LW · GW

I don't think the disagreement here is about the feasibility of life extension. (I agree that it looks feasible.) I think the point that Benquo and I have been separately trying to make is that admonishing people to be angry independently of their anger having some specific causal effect on a specific target, doesn't make sense in the context of trying to explain the "causal reality vs. social reality" frame. "People should be angrier about aging" might be a good thesis for a blog post, but I think it would work better as a different post.

And you needn't be absolutely certain that curing death and aging is possible to demand we try. A chance should be enough.

The magnitude of the chance matters! Have you read the Overly Convenient Excuses Sequence? I think Yudkowsky explained this well in the post "But There's Still a Chance, Right?".

Being Wrong Doesn't Mean You're Stupid and Bad (Probably)

2019-06-29T23:58:09.105Z · score: 17 (9 votes)
Comment by zack_m_davis on How to deal with a misleading conference talk about AI risk? · 2019-06-28T06:21:39.567Z · score: 11 (3 votes) · LW · GW

(Note: posted after the parent was retracted.)

consider how you might take it if you came across a forum discussion calling your talk misleading and not well-reasoned, without going into any specifics

I would be grateful for the free marketing! (And entertainment—internet randos' distorted impressions of you are fascinating to read.) Certainly, it would be better for people to discuss the specifics of your work, but it's a competitive market for attention out there: vague discussion is better than none at all!

there have been cases before where researchers looked at how their work was being discussed on LW, picked up a condescending tone, and decided that LW/AI risk people were not worth engaging with

If I'm interpreting this correctly, this doesn't seem very consistent with the first paragraph? First, you seem to be saying that it's unfair to Sussman to make him the target of vague criticism ("consider how you might take it"). But then you seem to be saying that it looks bad for "us" (you know, the "AI risk community", Yudkowsky's robot cult, whatever you want to call it) to be making vague criticisms that will get us written off as cranks ("not worth engaging with"). But I mostly wouldn't expect both concerns to be operative in the same world—in the possible world where Sussman feels bad about being named and singled out, that means he's taking "us" seriously enough for our curt dismissal to hurt, but in the possible world where we're written off as cranks, then being named and singled out doesn't hurt.

(I'm not very confident in this analysis, but it seems important to practice trying to combat rationalization in social/political thinking??)

Comment by zack_m_davis on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-06-28T05:30:48.918Z · score: 17 (5 votes) · LW · GW

if Kevin really manages to be wrong about everything, you'd be able to get the right answer just by taking his conclusions and inverting them

That only works for true-or-false questions. In larger answer spaces, he'd need to be wrong in some specific way such that there exists some simple algorithm (the analogue of "inverting") to compute the right answers from those wrong ones.
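(A toy illustration—the "off by one" error pattern here is invented for the example:)

```python
# True/false: a reliably-wrong oracle is exactly as informative as a
# reliably-right one; the decoder is just negation.
def decode_boolean(wrong_answer: bool) -> bool:
    return not wrong_answer

# Four options: "wrong" merely rules out one answer. Decoding only works
# if you know the specific structure of the wrongness—say, hypothetically,
# that he's always off by +1 mod 4.
def decode_off_by_one(wrong_answer: int) -> int:
    return (wrong_answer - 1) % 4
```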

Comment by zack_m_davis on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-06-27T03:03:37.732Z · score: 28 (9 votes) · LW · GW

Collaborative: "I don't know if that's true, what about x" Adversarial "you're wrong because of x".

Culturally 99% of either is fine as long as all parties agree on the culture and act like it.

Okay, but those mean different things. "I don't know if that's true, what about x" is expressing uncertainty about one's interlocutor's claim, and entreating them to consider x as an alternative. "You're wrong because of x" is a denial of one's interlocutor's claim for a specific reason.

I find myself needing to say both of these things, but in different situations, each of which probably occurs more than 1% of the time. This would seem to contradict the claim that 99% of either is fine!

A culture that expects me to refrain from saying "You're wrong because of x" even if someone is in fact wrong because of x (because telling the truth about this wouldn't be "collaborative") is trying to decrease the expressive power of language and is unworthy of the "rationalist" brand name.

I advocate for collaboration over adversarial culture because of the bleed through from epistemics to inherent interpersonal beliefs.

I advocate for a culture that discourages bleed-through from epistemics to inherent interpersonal beliefs, except to whatever limited extent such bleed-through is epistemically justified.

"You're wrong about this" and "You are stupid and bad" are distinct propositions. It is not only totally possible, but in fact ubiquitously common, for the former to be true but the latter to be false! They're not statistically independent—if Kevin is wrong about everything all the time, that does raise my subjective probability that Kevin is stupid and bad. But I claim that any one particular instance of someone being wrong is only a very small amount of evidence about that person's degree of stupidity or badness! It is for this reason it is written that you should Update Yourself Incrementally!

Humans are not perfect arguers or it would not matter so much.

I agree that humans are not perfect arguers! However, I remember reading a bunch of really great blog posts back in the late 'aughts articulating a sense that it should be possible for humans to become better arguers! I wonder whatever happened to that website!

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-26T07:27:28.121Z · score: 1 (4 votes) · LW · GW

In this case, though, the "What? Why?" actually was rhetorical on my part. (Note the link to "Fake Optimization Criteria", which was intended to suggest that I don't think the optimization criterion of defeating death recommends the policy of clamoring in the streets.) It's not that I didn't understand the "cishumanists accept Death because they believe that the customs of their tribe are the laws of nature" point, it was that I disagreed with its attempted use as an illustration of the concept of social reality (because I think transhumanists similarly fail to understand that the customary optimism of their tribe is no substitute for engineering know-how), and was trying to use "naïve" Socratic questioning/inquiry to illustrate what I thought means-end reasoning about causal reality actually looks like. I can see how this could be construed as a violation of some possible discourse norms (like the Recurse Center's "No feigned surprise" rule), but sometimes I find some such norms unduly constraining on the way I naturally talk and express ideas!

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-26T05:39:14.517Z · score: 26 (6 votes) · LW · GW

(Meta: is this still too combative, or am I OK? Unfortunately, I fear there is only so much I know how to hold back on my natural writing style without at least one of either compromising the information content of what I'm trying to say, or destroying my motivation to write anything at all.)

Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I'm instead interpreting as a contrast between transhumanist social reality vs. "normie" social reality.

(This is probably also why I thought it would be helpful to mention pro-Vibrams social pressure: not to exhaustively enumerate all possible social pressures, but to credibly signal that you're trying to make an intellectually substantive point, rather than just cheering for the smart/nonconformist/anti-death ingroup at the expense of the dumb/conformist/death-accommodationist outgroup.)

a belief that aging and death are solvable

But whether aging and death are solvable is an empirical question, right? What if they're not solvable? Then the belief that aging and death are solvable would be incorrect.

I can pretty easily imagine there being an upper bound on humanly-achievable medical technology. Suppose defeating aging would require advanced molecular nanotechnology, but all human civilizations inevitably destroy themselves shortly after reaching that point. (Say, because that same level of nanotech gives you super-fast computers that make it easy to brute-force unaligned AGI, and AI alignment is just too hard.)

and it's terrible that we're not going as fast as we could be.

The concept of "terrible" doesn't exist in causal reality. (How does something being "terrible" pay rent in anticipated experiences?)

I mean that it is something people have strong feelings about, something that they push for in whatever way. They seen grandma getting sicker and sicker, suffering more and more, and they feel outrage

I think people do this. In the OP, you linked to the immortal Scott Alexander's "Who By Very Slow Decay", which contains this passage—

In the cafeteria at lunch, [doctors] will—despite medical confidentiality laws that totally prohibit this—compare stories of the most ridiculous families. "I have a blind 90 year old patient with stage 4 lung cancer with brain mets and no kidney function, and the family is demanding I enroll her in a clinical trial from Sri Lanka." "Oh, that's nothing. I have a patient who can’t walk or speak who’s breathing from a ventilator and has anoxic brain injury, and the family is insisting I try to get him a liver transplant."

What is harassing doctors to demand a liver transplant, if it's not feeling outrage and taking action?

why have we not solved this yet?

In social reality, this is a rhetorical question used to coordinate punishment of those who can be blamed for not solving it yet.

In causal reality, it's a question with a very straightforward literal answer: the human organism is, in fact, subject to the biological process of senescence, and human civilization has not, in fact, developed the incredibly advanced technology that would be needed to circumvent this.

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-26T05:34:40.211Z · score: 8 (4 votes) · LW · GW

I endeavor to obey the moderation guidelines of any posts I comment on.

collaborative truth-seeking

I'm happy at the coincidence that you happened to use this phrase, because it reminded me of an old (May 2017) Facebook post of mine that I had totally forgotten about, but which might be worth re-sharing as a Question here. (And if it's not, then downvote it.) It's written in the same kind of "aggressively Socratic" style that you disliked in the grandparent, but I think that style is serving a specific and important purpose, even if it wouldn't be appropriate in the comments of a post with contrary norm-enforcing moderation guidelines.

Comment by zack_m_davis on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-06-26T05:32:54.766Z · score: 2 (1 votes) · LW · GW

(Publication history note: lightly adapted from a 4 May 2017 Facebook status update. I pulled the text out of the JSON-blob I got from exporting my Facebook data, but I'm not sure how to navigate to the status update itself without the permalink or pressing the Page Down key too many times, so I don't remember whether I got any good answers from my Facebook friends at the time.)

What does the word "collaborative" mean in the phrase "collaborative truthseeking"?

2019-06-26T05:26:42.295Z · score: 27 (7 votes)
Comment by zack_m_davis on Being the (Pareto) Best in the World · 2019-06-25T05:51:21.936Z · score: 9 (6 votes) · LW · GW

Even if only a small fraction of those combinations are useful, there's still a lot of space to stake out a territory. [...] Thanks to the "curse" of dimensionality, these goldmines are not in any danger of exhausting.

A blessing on the supply side is still a curse on the demand side. A lot of empty hyperspace for you to be the closest expert in, just means that when there's a problem at a precise intersection of statistical-gerontological-macroeconomic-chemistry, the nearest expert might be far away.

Maybe think about this in the context of seeking a romantic partner: as you add more independent traits to your wishlist, your pool of potential matches goes down exponentially. (And God help you if some of your desired traits are anticorrelated.) Suddenly being alone in a high-dimensional space feels less comforting!
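(Back-of-the-envelope version, with invented numbers—a million eligible singles, and each desired trait independently present in 20% of them:)

```python
pool = 1_000_000         # eligible singles (invented)
prevalence = 0.2         # each independent trait present in 20% of the pool
for n_traits in (1, 3, 5, 8):
    print(n_traits, int(pool * prevalence ** n_traits))
# 1 → 200000, 3 → 8000, 5 → 320, 8 → 2
```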

Comment by zack_m_davis on Causal Reality vs Social Reality · 2019-06-25T04:40:46.178Z · score: 26 (15 votes) · LW · GW

Why people aren't clamoring in the streets for the end of sickness and death?

What? Why? How would clamoring in the streets causally contribute to the end of sickness and death? Even if we interpret "clamoring in the streets" as a metonym for other forms of mass political action—presumably with the aim of increasing government funding for medical research?—it still just doesn't seem like a very effective strategy compared to more narrowly-targeted interventions that can make direct incremental progress on the problem.

Concrete example: I have a friend who just founded a company to use video of D. magna to more efficiently screen for potential anti-aging drugs. The causal pathway between my friend's work and defeating aging is clear: if the company succeeds at building their water-flea camera rig drug-discovery process, then they might discover promising chemical compounds, some of which (after further research and development) will successfully treat some of the diseases of aging.

Of course, not everyone has the skillset to do biotechnology work! For example, I don't. That means my causal contributions to ending sickness and death will be much more indirect. For example, my work on improving the error messages in the Rust compiler has the causal effect of making it ever-so-slightly easier to write software in Rust, some of which software might be used by, e.g., companies working on drug-discovery processes for finding promising chemical compounds, some of which (after further research and development) will successfully treat some of the diseases of aging.

That's a pretty small and indirect effect, though! To do better, I might try to harness the power of comparative advantage and earn-to-give: instead of unpaid open-source work on Rust, maybe I should work harder at my paid software dayjob, negotiate for a raise, and use that money to fund someone else to do direct work on ending sickness and death. On the other hand, that's assuming we know how to turn marginal money into marginal (good) research, which might not actually be true, either because a specific area is more talent-constrained than funding-constrained, or more generally because most donations in our inadequate civilization end up getting dissipated into bullshit jobs ...

But while we're thinking about how to contribute to ending sickness and death, it's also important to track how actions might accidentally contribute to more sickness and/or death. For example, improving the error messages in the Rust compiler and having the causal effect of making it ever-so-slightly easier to write software in Rust, might have the causal effect of making it ever-so-slightly easier to write an unaligned recursively self-improving artificial intelligence that will destroy all value in our future light cone. Whoops! If it turns out that we live in that possible world, maybe I should do something less destructive with my time, like clamoring in the streets.

You care about comfort, but you also care about what your friends think. You might decide that Vibrams are just so damn comfortable they're worth a bit of teasing.

I think the exposition here would be more compelling if you explicitly mentioned the social pressures in both the pro-Vibrams and anti-Vibrams directions: some people will tease you for having "weird" toe-shoes, but some people will think better of you.

Soylent probably markets to a similar demographic niche as Vibrams. I'm sure some people drink Soylent for the "causal" reason of it being an efficient, practical alternative to cooking, rather than the "social" reason that they were suckered by its brilliant marketing to contrarian nerds as an efficient, practical alternative to cooking. But the ten unopened cases of Soylent sitting behind me as I type this represent an uncomfortable weight of evidence that I, personally, am not in the "causal" group, and I suspect some Vibrams-wearers might be in a similar position.

Yes, there are some people who talk about life extension, but they're just playing at some group game the ways goths are. It's just a club, a rallying point. It's not about something. It's just part of the social reality like everything else, and I see no reason to participate in that. I've got my own game which doesn't involve being so weird, a much better strategy.

The phrase "doesn't involve being so weird" makes me wonder if this is meant as deliberate irony? ("Being weird" is a social-reality concept!) You might want to rewrite this paragraph to clarify your intent.

What evidence do you use to distinguish between people who are playing the "talk about life extension" group game, and people who are actually making progress on making life extension happen in the real, physical universe? (I think this is a very hard problem!)

If you primarily inhabit causal reality (like most people on LessWrong)

The Less Wrong website certainly hosts a lot of insightful blog posts about how to inhabit causal reality. How reliable is the causal pathway between "people read the blog posts" and "those people primarily inhabit causal reality"? That's an empirical question!

Comment by zack_m_davis on Should rationality be a movement? · 2019-06-21T15:39:19.428Z · score: 20 (12 votes) · LW · GW

My response was that even if all of this were true, EA still provided a pool of people from which those who are strategic could draw and recruit from.

This ... doesn't seem to be responding to your interlocutor's argument?

The "anti-movement" argument is that solving alignment will require the development of a new 'mental martial art' of systematically correct reasoning, and that the social forces of growing a community impair our collective sanity and degrade the signal the core "rationalists" were originally trying to send.

Now, you might think that this story is false—that the growth of EA hasn't made "rationality" worse, that we're succeeding in raising the sanity waterline rather than selling out and being corrupted. But if so, you need to, like, argue that?

If I say, "Popularity is destroying our culture", and you say, "No, it isn't," then that's a crisp disagreement that we can potentially have a productive discussion about. If instead you say, "But being popular gives you a bigger pool of potential converts to your culture," that would seem to be missing the point. What culture?

Comment by zack_m_davis on No, it's not The Incentives—it's you · 2019-06-19T05:03:57.147Z · score: 4 (2 votes) · LW · GW

unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I could imagine this being true in some sort of hyper-Malthusian setting where any deviation from the Nash equilibrium gets you immediately killed and replaced with an otherwise-identical agent who will play the Nash equilibrium.

Comment by zack_m_davis on The Univariate Fallacy · 2019-06-18T03:20:49.443Z · score: 4 (2 votes) · LW · GW

Sorry, I hope I didn't suggest I thought that!

I mean, it doesn't matter whether you think it, right? It matters whether it's true. Like, if I were to write a completely useless blog post on account of failing to understand the concept of a change of basis, then someone should tell me, because that would be helping me stop being deceived about the quality of my blogging.

Comment by zack_m_davis on The Univariate Fallacy · 2019-06-16T20:20:00.646Z · score: 2 (1 votes) · LW · GW

replace the first instance of "are statistically independent" with "are statistically independent and identically distributed"

Done, thanks!

talking about discrete distributions here, then linking to Eliezer's discussion of continuous latent variables ("intelligence") without noting the difference

The difference doesn't seem relevant to the narrow point I'm trying to make? I was originally going to use multivariate normal distributions with different means, but then decided to just make up "peaked" discrete distributions in order to keep the arithmetic simple.

Comment by zack_m_davis on The Univariate Fallacy · 2019-06-16T20:09:49.133Z · score: 3 (2 votes) · LW · GW

Projecting onto any 1-dimensional subspace orthogonal to this (there is a unique one through the origin) will thus yield a 'variable' which cleanly separates the two points into the red and blue categories. So in the illustrated example, it looks just like a problem of bad coordinate choice.

Thanks, this is a really important point! Indeed, for freely-reparametrizable abstract points in an abstract vector space, this is just a bad choice of coordinates. The reason this objection doesn't make the post completely useless, is that for some applications (you know, if you're one of those weird people who cares about "applications"), we do want to regard some bases as more "fundamental", if the variables represent real-world measurements.

For example, you might be able to successfully classify two different species of flower using both "stem length" and "petal color" measurements, even if the distributions overlap for either stem length or petal color considered individually. Mathematically, we could view the distributions as not overlapping with respect to some variable that corresponds to some weighted function of stem length and petal color, but that variable seems "artificial", less "interpretable." (See the simulation sketch below.)
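(A quick simulation of the flower example—the means, unit variances, sample sizes, and the particular weighted combination are all invented for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two species; each measurement alone separates the means by only 1.5
# standard deviations, so both marginal distributions overlap substantially.
species_a = rng.normal([5.0, 1.5], 1.0, size=(100_000, 2))  # (stem length, petal color)
species_b = rng.normal([6.5, 3.0], 1.0, size=(100_000, 2))

def error_rate(x, y):
    """Misclassification rate of a midpoint threshold (x's mean below y's)."""
    mid = (x.mean() + y.mean()) / 2
    return ((x > mid).mean() + (y < mid).mean()) / 2

w = np.array([1.0, 1.0]) / np.sqrt(2)  # weighted function of both measurements

print(error_rate(species_a[:, 0], species_b[:, 0]))  # stem length alone: ≈ 0.23
print(error_rate(species_a[:, 1], species_b[:, 1]))  # petal color alone: ≈ 0.23
print(error_rate(species_a @ w, species_b @ w))      # combined variable: ≈ 0.14
```

(With more than two measurements the effect compounds: n independent 1.5σ dimensions combine into a separation of 1.5√n standard deviations along the weighted axis.)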

Comment by zack_m_davis on The Univariate Fallacy · 2019-06-16T19:31:19.056Z · score: 4 (2 votes) · LW · GW

Thanks for the bug report; I edited the post to use LaTeX \vec{x}. (The combining arrow worked for me on Firefox 67.0.1 and was kind-of-ugly-but-definitely-rendered on Chromium 74.0.3729.169, on Xubuntu 16.04.)

It is probably a good idea to use LaTeX to encode such symbols.

I've been doing this thing where I prefer to use "plain" Unicode where possible (where, e.g., the subscript in "x₁" is U+2081 SUBSCRIPT ONE) and only resort to "fancy" (and therefore suspicious) LaTeX when I really need it, but the reported Chrome-on-macOS behavior does slightly alter my perception of "really need it."

Comment by zack_m_davis on The Univariate Fallacy · 2019-06-15T21:47:32.905Z · score: 2 (1 votes) · LW · GW

(I wonder how a "-1" ended up in the canonical URL slug (/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy-1)? Did someone else have a draft of the same name, and the system wants unique slugs??)

The Univariate Fallacy

2019-06-15T21:43:14.315Z · score: 21 (8 votes)

No, it's not The Incentives—it's you

2019-06-11T07:09:16.405Z · score: 89 (28 votes)
Comment by zack_m_davis on Drowning children are rare · 2019-06-09T06:48:20.792Z · score: 15 (5 votes) · LW · GW

I haven't been following as closely how Ben construes 'bad faith', and I haven't taken the opportunity to discover, if he were willing to relay it what his model of bad faith is.

I think the most relevant post by Ben here is "Bad Intent Is a Disposition, Not a Feeling". (Highly recommended!)

Recently I've often found myself wishing for better (widely-understood) terminology for phenomena that it's otherwise tempting to call "bad faith", "intellectual dishonesty", &c. I think it's pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that's worth distinguishing from "innocent" mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, "It is difficult to get a man to understand something when his salary depends upon his not understanding it.")

If our discourse norms require us to "assume good faith", but there's an important sense in which that assumption isn't true (because motivated misunderstandings resist correction in a way that simple mistakes don't), but we can't talk about the ways it isn't true without violating the discourse norm, then that's actually a pretty serious problem for our collective sanity!

Comment by zack_m_davis on Drowning children are rare · 2019-06-08T22:20:03.872Z · score: 21 (5 votes) · LW · GW

So, rationality largely isn't actually about doing thinking clearly [...] it's an aesthetic identity movement around HPMoR as a central node [...] This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of rationality, rationality-as-it-is ought to be replaced with something very, very different.

This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.

Specifically: if you fail to make a hard mental distinction between "rationality"-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called "rationalists" about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance ("Am I crazy? Are they crazy? What's going on?? Auuuuuugh") in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn't.

But ... it shouldn't. Sure, self-identification with the "rationalist" brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that's an empirical question that you can't answer by taking the brand name literally.

I thought the "rationalist" æsthetic-identity-movement's marketing literature expressed this very poetically:

How can you improve your conception of rationality? Not by saying to yourself, "It is my duty to be rational." By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, "The sky is green," and you look up at the sky and see blue. If you think: "It may look like the sky is blue, but rationality is to believe the words of the Great Teacher," you lose a chance to discover your mistake.

Do not ask whether it is "the Way" to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.

Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected "the community" to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander's comments in "The Ideology Is Not the Movement" ("[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture").

This isn't to say that the so-called "rationalist" community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don't see any better community to run away to—at the moment. (Though I'm keeping an eye on the Quillette people.) But if attempts to analyze how we're collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!

(Full disclosure: uh, I guess I would also count as part of the "Vassar crowd" these days??)

Comment by zack_m_davis on Drowning children are rare · 2019-06-07T05:06:45.899Z · score: 6 (3 votes) · LW · GW

These phrases have denotative meanings! They're pretty clear to determine if you aren't willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!

I wonder if it would help to play around with emotive conjugation? Write up the same denotative criticism twice, once using "aggressive" connotations ("hoarding", "wildly exaggerated") and again using "softer" words ("accumulating", "significantly overestimated"), with a postscript that says, "Look, I don't care which of these frames you pick; I'm trying to communicate the literal claims common to both frames."

Comment by zack_m_davis on Does Bayes Beat Goodhart? · 2019-06-03T03:40:05.320Z · score: 5 (3 votes) · LW · GW

(Spelling note: it's apparently supposed to be "Goodhart" with no e.)

Comment by zack_m_davis on Editor Mini-Guide · 2019-06-02T01:01:06.555Z · score: 2 (1 votes) · LW · GW

LaTeX test

Added: subscript test, x₁.

Comment by zack_m_davis on Site Guide: Personal Blogposts vs Frontpage Posts · 2019-06-01T06:15:00.401Z · score: 11 (6 votes) · LW · GW

A possible argument for being deliberately vague (!) in this sort of situation is that telling people in advance exactly what bad things you'll punish helps hypothetical adversaries find bad things that you don't know how to detect.

Comment by zack_m_davis on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-01T02:49:17.884Z · score: 27 (14 votes) · LW · GW

The Codex contains such exemplary essays as: [...] The Categories Were Made For Man, Not Man For The Categories

"... Not Man for the Categories" is really not Scott's best work, and I think it would be better to cite almost literally any other Slate Star Codex post (most of which, I agree, are exemplary).

That post says (redacting an irrelevant object-level example):

I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.

I claim that this is bad epistemology independently of the particular values of X and Y, because we need to draw our conceptual boundaries in a way that "carves reality at the joints" in order to help our brains make efficient probabilistic predictions about reality.
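To put a toy number on "efficient probabilistic predictions" (a made-up setup, not anything from Scott's post): a category label is useful to the extent that conditioning on it reduces your uncertainty about an entity's other features, and a gerrymandered label of the same size buys you almost nothing.

    # Toy "carving at the joints" demo with made-up numbers: compare how well
    # a joint-tracking label vs. an arbitrary same-sized label predicts a feature.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Two underlying kinds: kind 0 has feature ~ N(0, 1); kind 1 has ~ N(4, 1).
    kind = rng.integers(0, 2, size=n)
    feature = rng.normal(loc=4.0 * kind, scale=1.0)

    def residual_error(label):
        # Mean squared error when predicting the feature by its group mean.
        return sum(((feature[label == v] - feature[label == v].mean()) ** 2).sum()
                   for v in (0, 1)) / n

    joint_carving = kind                        # boundary tracks the real clusters
    gerrymandered = rng.integers(0, 2, size=n)  # arbitrary boundary, same sizes

    print(residual_error(joint_carving))  # ≈ 1.0: only within-kind noise remains
    print(residual_error(gerrymandered))  # ≈ 5.0: the label tells you ~nothing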

I furthermore claim that the following disjunction is true:

  • Either the quoted excerpt is a blatant lie on Scott's part because there are rules of rationality governing conceptual boundaries and Scott absolutely knows it, or
  • You have no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way.

Look. I know I've been harping on this a lot lately. I know a lot of people have (understandable!) concerns about what they assume to be my internal psychological motives for spending so much effort harping on this lately.

But the quoted excerpt from "... Not Man for the Categories" is an elementary philosophy mistake. Independently of whatever blameworthy psychological motives I may or may not have for repeatedly pointing out the mistake, and independently of whatever putative harm people might fear as a consequence of correcting this particular mistake, if we're going to be serious about this whole "rationality" project, there needs to be some way for someone to invest a finite amount of effort to correct the mistake and get people to stop praising this stupid "categories can't be false, therefore we can redefine them for putative utilitarian benefits without any epistemic consequences" argument. We had an entire Sequence specifically about this. I can't be the only one who remembers!

Comment by zack_m_davis on "But It Doesn't Matter" · 2019-06-01T02:06:51.641Z · score: 3 (2 votes) · LW · GW

(Publication history note: this post is lightly adapted from a 14 January 2017 Facebook status update, but Facebook isn't a good permanent home for content, for various reasons.)

"But It Doesn't Matter"

2019-06-01T02:06:30.624Z · score: 40 (29 votes)
Comment by zack_m_davis on Drowning children are rare · 2019-05-30T15:00:40.095Z · score: 6 (3 votes) · LW · GW

I think it's possible to use it in a "mindful" way even if most people are doing it wrong? The system reminding you what you read n days ago gives you a chance to connect it to the real world today when you otherwise would have forgotten.

Comment by zack_m_davis on Drowning children are rare · 2019-05-30T04:43:32.731Z · score: 10 (2 votes) · LW · GW

when I tried to remember what it was about, all I could remember [...] I'm similarly worried that a year from now

Make spaced repetition cards?

Comment by zack_m_davis on Comment section from 05/19/2019 · 2019-05-25T05:51:01.462Z · score: 20 (9 votes) · LW · GW

I've noted in at least some of your posts that I don't find your abstractions very compelling without examples, and that I don't much care for the examples I can think of to reify your abstractions.

"Where to Draw the Boundaries?" includes examples about dolphins, geographic and political maps, poison, heaps of sand, and job titles. In the comment section, I gave more examples about Scott Alexander's critique of neoreactionary authors, Müllerian mimickry in snakes, chronic fatigue syndrome, and accent recognition.

I agree that it's reasonable for readers to expect authors to provide examples, which is why I do in fact provide examples. What do you want from me, exactly??

Comment by zack_m_davis on Comment section from 05/19/2019 · 2019-05-25T05:49:09.064Z · score: 12 (3 votes) · LW · GW

Maybe a better way to put it would be that agreement on meta-level principles more reliably forces agreement on simple object-level issues?

I think it's important, valuable, and probably necessary to work out theoretical principles in an artificially simple and often "abstract" context, before you can understand how to correctly apply them to a more complicated situation—and the correct application to the more complicated situation is going to require a longer explanation than the simple case. The longer the explanation, the more chances someone has to get one of the burdensome details wrong, leading to more disagreements.
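To put made-up numbers on that last step: if each burdensome detail is independently gotten right with probability 0.95, the chance of a fully correct explanation decays quickly with length.

    # Made-up numbers: probability that an n-detail explanation is entirely correct.
    p = 0.95
    for n_details in (1, 5, 20):
        print(n_details, "details:", round(p ** n_details, 2))
    # 1 details: 0.95
    # 5 details: 0.77
    # 20 details: 0.36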

Students of physics first master problems about idealized point masses, frictionless planes, perfectly elastic collisions, &c. as a prerequisite for eventually being able to solve more real-world-relevant problems, like how to build a car or something—even if the ambition of real-world automotive engineering was one's motivation for studying (or lecturing about) physics.

Similarly, I think students of epistemology need to first master problems about idealized bleggs and rubes with five binary attributes, before they can handle really complicated issues (e.g., the implications for social norms of humans' ability to recognize each other's sex)—even if the ambition of tackling hard sociology problems was one's motivation for studying (or lecturing about) epistemology.
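For concreteness, a minimal sketch of the kind of idealized exercise I have in mind (made-up parameters: each of the five binary attributes tracks the underlying kind with probability 0.9, and a naive-Bayes reasoner infers the kind from the attributes):

    # Idealized bleggs (kind 0) and rubes (kind 1) with five binary attributes.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 10_000, 5
    kind = rng.integers(0, 2, size=n)
    # Each attribute matches the kind's typical value with probability 0.9.
    attributes = (rng.random((n, k)) < np.where(kind[:, None] == 1, 0.9, 0.1)).astype(int)

    # Naive-Bayes log-odds of "rube" given the attributes (uniform prior).
    log_lr = np.log(0.9 / 0.1)
    log_odds = (attributes * log_lr - (1 - attributes) * log_lr).sum(axis=1)
    prediction = (log_odds > 0).astype(int)
    print((prediction == kind).mean())  # ≈ 0.99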

Imagine being at a physics lecture where one of the attendees kept raising their hand to complain that the speaker was using abstraction to "obfuscate" or "disguise" the real issue of how to build a car. That would be pretty weird, right??

Comment by zack_m_davis on Comment section from 05/19/2019 · 2019-05-23T06:24:14.920Z · score: 17 (4 votes) · LW · GW

one voice with an agenda which, if implemented, would put me in physical danger

Okay, I think I have a right to respond to this.

People being in physical danger is a bad thing. I don't think of myself as having a lot of strong political beliefs, but I'm going to take a definite stand here: I am against people being in physical danger.

If someone were to present me with a persuasive argument that my writing elsewhere is increasing the number of physical-danger observer-moments in the multiverse on net, then I would seriously consider revising or retracting some of it! But I'm not aware of any such argument.

Minimax Search and the Structure of Cognition!

2019-05-20T05:25:35.699Z · score: 15 (6 votes)

Where to Draw the Boundaries?

2019-04-13T21:34:30.129Z · score: 81 (31 votes)

Blegg Mode

2019-03-11T15:04:20.136Z · score: 18 (13 votes)

Change

2017-05-06T21:17:45.731Z · score: 1 (1 votes)

An Intuition on the Bayes-Structural Justification for Free Speech Norms

2017-03-09T03:15:30.674Z · score: 5 (6 votes)

Dreaming of Political Bayescraft

2017-03-06T20:41:16.658Z · score: 1 (1 votes)

Rationality Quotes January 2010

2010-01-07T09:36:05.162Z · score: 3 (6 votes)

News: Improbable Coincidence Slows LHC Repairs

2009-11-06T07:24:31.000Z · score: 7 (8 votes)