Posts

Optimized Propaganda with Bayesian Networks: Comment on "Articulating Lay Theories Through Graphical Models" 2020-06-29T02:45:08.145Z · score: 67 (27 votes)
Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning 2020-06-07T07:52:09.143Z · score: 81 (44 votes)
Comment on "Endogenous Epistemic Factionalization" 2020-05-20T18:04:53.857Z · score: 125 (49 votes)
"Starwink" by Alicorn 2020-05-18T08:17:53.193Z · score: 40 (14 votes)
Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis 2020-05-11T06:00:24.836Z · score: 61 (26 votes)
A Book Review 2020-04-28T17:43:07.729Z · score: 16 (13 votes)
Brief Response to Suspended Reason on Parallels Between Skyrms on Signaling and Yudkowsky on Language and Evidence 2020-04-16T03:44:06.940Z · score: 13 (6 votes)
Why Telling People They Don't Need Masks Backfired 2020-03-18T04:34:09.644Z · score: 29 (14 votes)
The Heckler's Veto Is Also Subject to the Unilateralist's Curse 2020-03-09T08:11:58.886Z · score: 52 (21 votes)
Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability 2020-02-09T00:34:39.680Z · score: 17 (10 votes)
Book Review—The Origins of Unfairness: Social Categories and Cultural Evolution 2020-01-21T06:28:33.854Z · score: 30 (8 votes)
Less Wrong Poetry Corner: Walter Raleigh's "The Lie" 2020-01-04T22:22:56.820Z · score: 21 (13 votes)
Don't Double-Crux With Suicide Rock 2020-01-01T19:02:55.707Z · score: 70 (21 votes)
Speaking Truth to Power Is a Schelling Point 2019-12-30T06:12:38.637Z · score: 53 (15 votes)
Stupidity and Dishonesty Explain Each Other Away 2019-12-28T19:21:52.198Z · score: 36 (16 votes)
Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think 2019-12-27T05:09:22.546Z · score: 95 (34 votes)
Funk-tunul's Legacy; Or, The Legend of the Extortion War 2019-12-24T09:29:51.536Z · score: 13 (20 votes)
Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk 2019-12-21T00:49:02.862Z · score: 65 (25 votes)
A Theory of Pervasive Error 2019-11-26T07:27:12.328Z · score: 21 (7 votes)
Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary 2019-11-22T06:18:59.497Z · score: 73 (23 votes)
Algorithms of Deception! 2019-10-19T18:04:17.975Z · score: 18 (7 votes)
Maybe Lying Doesn't Exist 2019-10-14T07:04:10.032Z · score: 59 (29 votes)
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists 2019-09-24T04:12:07.560Z · score: 219 (75 votes)
Schelling Categories, and Simple Membership Tests 2019-08-26T02:43:53.347Z · score: 52 (19 votes)
Diagnosis: Russell Aphasia 2019-08-06T04:43:30.359Z · score: 47 (13 votes)
Being Wrong Doesn't Mean You're Stupid and Bad (Probably) 2019-06-29T23:58:09.105Z · score: 17 (12 votes)
What does the word "collaborative" mean in the phrase "collaborative truthseeking"? 2019-06-26T05:26:42.295Z · score: 27 (7 votes)
The Univariate Fallacy 2019-06-15T21:43:14.315Z · score: 27 (11 votes)
No, it's not The Incentives—it's you 2019-06-11T07:09:16.405Z · score: 91 (32 votes)
"But It Doesn't Matter" 2019-06-01T02:06:30.624Z · score: 47 (31 votes)
Minimax Search and the Structure of Cognition! 2019-05-20T05:25:35.699Z · score: 15 (6 votes)
Where to Draw the Boundaries? 2019-04-13T21:34:30.129Z · score: 88 (38 votes)
Blegg Mode 2019-03-11T15:04:20.136Z · score: 18 (13 votes)
Change 2017-05-06T21:17:45.731Z · score: 1 (1 votes)
An Intuition on the Bayes-Structural Justification for Free Speech Norms 2017-03-09T03:15:30.674Z · score: 4 (8 votes)
Dreaming of Political Bayescraft 2017-03-06T20:41:16.658Z · score: 9 (3 votes)
Rationality Quotes January 2010 2010-01-07T09:36:05.162Z · score: 3 (6 votes)
News: Improbable Coincidence Slows LHC Repairs 2009-11-06T07:24:31.000Z · score: 7 (8 votes)

Comments

Comment by zack_m_davis on Optimized Propaganda with Bayesian Networks: Comment on "Articulating Lay Theories Through Graphical Models" · 2020-06-30T04:28:41.440Z · score: 3 (2 votes) · LW · GW

Thanks, you are right and the thing I actually typed was wrong. (For the graph A → C ← B, the collider C blocks the path between A and B, but conditioning on the collider un-blocks it.) Fixed.
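
(If anyone wants to see the un-blocking concretely rather than take my word for it, here's a minimal simulation sketch: my own toy illustration, not code from the post.)

```python
# Toy illustration (mine, not the post's): in the graph A -> C <- B, the
# independent causes A and B become correlated once we condition on the
# collider C.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = rng.normal(size=n)                 # A and B are independent causes
b = rng.normal(size=n)
c = a + b + 0.1 * rng.normal(size=n)   # C is their common effect (the collider)

print(np.corrcoef(a, b)[0, 1])         # ~0: the path A -> C <- B is blocked

mask = np.abs(c) < 0.1                 # crudely "condition" on C near a fixed value
print(np.corrcoef(a[mask], b[mask])[0, 1])  # strongly negative: path un-blocked
```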

Comment by zack_m_davis on A reply to Agnes Callard · 2020-06-28T04:46:45.773Z · score: 7 (4 votes) · LW · GW

(Done.)

Comment by zack_m_davis on Atemporal Ethical Obligations · 2020-06-27T03:43:50.618Z · score: 9 (7 votes) · LW · GW

the future will agree Rowling's current position is immoral

This is vague. An exercise: can you quote specific sentences from Rowling's recent essay that you think the future will agree are immoral?

Maybe don't answer that, because we don't care about the object level on this website? (Or, maybe you should answer it if you think avoiding the object-level is potentially a sneaky political move on my part.) But if you try the exercise and it turns out to be harder than you expected, one possible moral is that a lot of what passes for discourse in our Society doesn't even rise to the level of disagreement about well-specified beliefs or policy proposals, but is mostly about coalition-membership and influencing cultural sentiments. Those who scorn Rowling do so not because she has a specific proposal for revising the 2004 Gender Recognition Act that people disagree with, but because she talks in a way that pushes culture in the wrong direction. Everything is a motte-and-bailey: most people, most of the time don't really have "positions" as such!

Comment by zack_m_davis on Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning · 2020-06-22T01:14:18.492Z · score: 2 (1 votes) · LW · GW

So, I actually don't think Less Wrong needs to be nicer! (But I agree that elaborating more was warranted.)

Comment by zack_m_davis on Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning · 2020-06-22T01:11:39.788Z · score: 17 (6 votes) · LW · GW

Thanks for the comment!—and for your patience.

So, the general answer to "Is there anyone who doesn't know this?" is, in fact, "Yes." But I can try to say a little bit more about why I thought this was worth writing.

I do think Less Wrong and /r/rational readers know that words don't have intrinsic definitions. If someone wrote a story that just made the point, "Hey, words don't have intrinsic definitions!", I would probably downvote it.

But I think this piece is actually doing more work and exposing more details than that—I'm actually providing executable source code (!) that sketches how a simple sender–receiver game with a reinforcement-learning rule correlates a not-intrinsically-meaningful signal with the environment such that it can be construed as a meaningful word that could have a definition.
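
(If you don't want to click through to the post's actual code, here's a toy sketch in the same spirit: my own minimal urn-style reinforcement learner for a two-state Lewis signaling game, not the post's source.)

```python
# Toy sketch (mine, not the post's source): a 2-state Lewis signaling game in
# which a sender and receiver learn by simple reinforcement, so that an
# initially meaningless signal comes to track the state of the world.
import random

random.seed(0)
STATES, SIGNALS, ACTS = 2, 2, 2
sender_weights = [[1.0] * SIGNALS for _ in range(STATES)]   # one "urn" per state
receiver_weights = [[1.0] * ACTS for _ in range(SIGNALS)]   # one "urn" per signal

def draw(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(20_000):
    state = random.randrange(STATES)
    signal = draw(sender_weights[state])
    act = draw(receiver_weights[signal])
    if act == state:  # success: reinforce both of the choices that led to it
        sender_weights[state][signal] += 1
        receiver_weights[signal][act] += 1

print(sender_weights)    # each state now overwhelmingly favors one signal
print(receiver_weights)  # each signal now overwhelmingly favors the matching act
```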

By analogy, explaining how the subjective sensation of "free will" might arise from a deterministic system that computes plans (without being able to predict what it will choose in advance of having computed it) is doing more work than the mere observation "Naïve free will can't exist because physics is deterministic".

So, I don't think all this was already obvious to Less Wrong readers. If it was already obvious to you, then you should be commended. However, even if some form of these ideas was already well-known, I'm also a proponent of "writing a thousand roads to Rome": part of how you get and maintain a community where "everybody knows" certain basic material, is by many authors grappling with the ideas and putting their own ever-so-slightly-different pedagogical spin on them. It's fundamentally okay for Yudkowsky's account of free will, and Gary Drescher's account (in Chapter 5 of Good and Real), and my story about writing a chess engine to all exist, even if they're all basically "pointing at the same thing."

Another possible motivation for writing a new presentation of an already well-known idea, is because the new presentation might be better-suited as a prerequisite or "building block" towards more novel work in the future. In this case, some recent Less Wrong discussions have used a "four simulacrum levels" framework (loosely inspired by the work of Jean Baudrillard) to try to model how political forces alter the meaning of language, but I'm pretty unhappy with the "four levels" formulation: the fact that I could never remember the difference between "level 3" and "level 4" even after it was explained several times (Zvi's latest post helped a little), and the contrast between the "linear progression" and "2x2" formulations, make me feel like we're talking about a hodgepodge of different things and haphazardly shoving them into this "four levels" framework, rather than having a clean deconfused concept to do serious thinking with. I'm optimistic about a formal analysis of sender–receiver games (following the work of Skyrms and others) being able to provide this. Now, I haven't done that work yet, and maybe I won't find anything interesting, but laying out the foundations for that potential future work was part of my motivation for this piece.

Comment by zack_m_davis on When is it Wrong to Click on a Cow? · 2020-06-21T17:00:08.373Z · score: 4 (2 votes) · LW · GW

It is always wrong to click on a cow. Clicking on cows is contrary to the moral law.

Comment by zack_m_davis on What is meant by Simulcra Levels? · 2020-06-19T03:03:24.404Z · score: 18 (6 votes) · LW · GW

Because the local discussion of this framework grew out of Jessica Taylor's reading of Wikipedia's reading of continental philosopher Jean Baudrillard's Simulacra and Simulation, about how modern Society has ceased dealing with reality itself, and instead deals with our representations of it—maps that precede the territory, copies with no original. (The irony that no one in this discussion has actually read Baudrillard should not be forgotten!)

Comment by zack_m_davis on Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning · 2020-06-18T04:36:43.221Z · score: 4 (2 votes) · LW · GW

(Thanks for your patience.) If you liked the technical part of this post, then yes! But supplement or substitute Ch. 6, "Deception", with Don Fallis and Peter J. Lewis's "Towards a Formal Analysis of Deceptive Signaling", which explains what Skyrms gets wrong.

Comment by zack_m_davis on That Alien Message · 2020-06-16T06:28:00.189Z · score: 4 (2 votes) · LW · GW

(This was adapted into a longer story by Alicorn.)

Comment by zack_m_davis on Failed Utopia #4-2 · 2020-06-11T05:28:55.802Z · score: 6 (3 votes) · LW · GW

Thanks for commenting! (Strong-upvoted.) It's nice to get new discussion on old posts and comments.

probably applies to some people, somewhere

Hi!

I don't think I'm doing the idea a disservice

How much have you read about the idea from its proponents? ("From its proponents" because, tragically, opponents of an idea can't always be trusted to paraphrase it accurately, rather than attacking a strawman.) If I might recommend just one paper, may I suggest Anne Lawrence's "Autogynephilia and the Typology of Male-to-Female Transsexualism: Concepts and Controversies"?

by dismissing it with a couple of silly comics

Usually, when I dismiss an idea with links, I try to make sure that the links are directly about the idea in question, rather than having a higher inferential distance.

For example, when debating a creationist, I think it would be more productive to link to a page about the evidence for evolution, rather than to link to a comic about the application of Occam's razor to some other issue. To be sure, Occam's razor is relevant to the creation/evolution debate!—but in order to communicate to someone who doesn't already believe that, you (or your link) needs to explain the relevance in detail. The creationist probably thinks intelligent design is "the simplest explanation." In order to rebut them, you can't just say "Occam's razor!", you need to show how they're confused about how evolution works or the right concept of "simplicity".

In the present case, linking to Existential Comics on falsifiability and penis envy doesn't help me understand your point of view, because while I agree that scientific theories need to be falsifiable, I don't agree that the autogynephilia theory is unfalsifiable. An example of a more relevant link might be to Julia Serano's rebuttal? (However, I do not find Serano's rebuttal convincing.)

I don't see how "wanting to see a world without strict gender roles" has anything to do with sexuality

That part is admittedly a bit speculative; as it happens, I'm planning to explain more in a forthcoming post (working title: "Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems") on my secret ("secret") blog, but it's not done yet.

Comment by zack_m_davis on What past highly-upvoted posts are overrated today? · 2020-06-09T22:05:46.569Z · score: 13 (6 votes) · LW · GW

You would downvote them in order to make the sorted-by-karma archives more useful! (See the tragically underrated "Why Artificial Optimism?")

Comment by zack_m_davis on Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning · 2020-06-09T05:55:04.955Z · score: 3 (2 votes) · LW · GW

(More comments on /r/rational and /r/SneerClub.)

Comment by zack_m_davis on Open & Welcome Thread - June 2020 · 2020-06-07T05:05:13.050Z · score: 8 (4 votes) · LW · GW

Comment and post text fields default to "LessWrong Docs [beta]" for me, I assume because I have "Opt into experimental features" checked in my user settings. I wonder if the "Activate Markdown Editor" setting should take precedence?—no one who prefers Markdown over the Draft.js WYSIWYG editor is going to switch because our WYSIWYG editor is just that much better, right? (Why are you guys writing an editor, anyway? Like, it looks fun, but I don't understand why you'd do it other than, "It looks fun!")

Comment by zack_m_davis on [Poll] 'Truth' vs 'Winning' · 2020-06-06T18:10:51.331Z · score: 18 (10 votes) · LW · GW

I'll grant that there's a sense in which instrumental and epistemic rationality could be said to not coincide for humans, but I think they conflict much less often than you seem to be implying, and I think overemphasizing the epistemic/instrumental distinction was a pedagogical mistake in the earlier days of the site.

Forget about humans and think about how to build an idealized agent out of mechanical parts. How do you expect your AI to choose actions that achieve its goals, except by modeling the world, and using the model to compute which actions will have what effects?

From this perspective, the purported counterexamples to the coincidence of instrumental and epistemic rationality seem like pathological edge cases that depend on weird defects in human psychology. Learning how to build an unaligned superintelligence or an atomic bomb isn't dangerous if you just ... choose not to build the dangerous thing, even if you know how. Maybe there are some cases where believing false things helps achieve your goals (particularly in domains where we were designed by evolution to have false beliefs for the function of deceiving others), but trusting false information doesn't increase your chances of using information to make decisions that achieve your goals.
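
(To make the rhetorical question slightly more concrete, here's a deliberately trivial sketch of what I mean by "using the model to compute which actions will have what effects": a made-up toy, obviously not a design for an actual agent. Note that handing the same machinery a false model produces a worse decision, which is the sense in which epistemic and instrumental rationality coincide.)

```python
# Trivial made-up toy (not a real agent design): choose actions by using a
# world-model to predict their consequences and maximizing expected utility.
world_model = {                      # action -> {outcome: probability}
    "plant_crops": {"food": 0.8, "nothing": 0.2},
    "do_nothing": {"food": 0.0, "nothing": 1.0},
}
utility = {"food": 10.0, "nothing": 0.0}

def expected_utility(action, model):
    return sum(p * utility[outcome] for outcome, p in model[action].items())

def choose(model):
    return max(model, key=lambda action: expected_utility(action, model))

print(choose(world_model))   # "plant_crops"

# The same decision procedure, fed a false model of the world, picks the
# worse action: trusting false information doesn't help achieve the goal.
false_model = {
    "plant_crops": {"food": 0.0, "nothing": 1.0},
    "do_nothing": {"food": 0.9, "nothing": 0.1},
}
print(choose(false_model))   # "do_nothing"
```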

Comment by zack_m_davis on Open & Welcome Thread - June 2020 · 2020-06-04T21:50:36.366Z · score: 6 (4 votes) · LW · GW

This seems kind of terrible? I expect authors and readers care more about new posts being published than about the tags being pristine.

Comment by zack_m_davis on Open & Welcome Thread - June 2020 · 2020-06-04T21:24:43.708Z · score: 2 (1 votes) · LW · GW

I was wondering about this, too. (If the implicit Frontpaging queue is "stuck", that gives me an incentive to delay publishing my new post, so that it doesn't have to compete with a big burst of backlogged posts being Frontpaged at the same time.)

Comment by zack_m_davis on Conjuring An Evolution To Serve You · 2020-06-01T06:32:28.628Z · score: 4 (2 votes) · LW · GW

(This post could be read as a predecessor to the Immoral Mazes sequence.)

Comment by zack_m_davis on A Problem With Patternism · 2020-05-20T21:07:07.539Z · score: 5 (3 votes) · LW · GW

the same question Yudkowsky uses in his post on cryonics in the sequences, although I can't find a link at the moment

You may be thinking of "Timeless Identity". Best wishes, the Less Wrong Reference Desk

Comment by zack_m_davis on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-05-19T04:20:41.429Z · score: 6 (3 votes) · LW · GW

I want a language or philosophy of language tag (examples: "37 Ways Words Can Be Wrong", "Fuzzy Boundaries, Real Concepts", some of my published and forthcoming work).

I want a disagreement tag (examples: "The Modesty Argument", "The Rhythm of Disagreement", a forthcoming post).

Comment by zack_m_davis on Raemon's Scratchpad · 2020-04-17T05:19:23.670Z · score: 10 (5 votes) · LW · GW

Looks like the weak 3-votes are gone now!

Comment by zack_m_davis on The Unilateralist’s “Curse” Is Mostly Good · 2020-04-14T03:24:35.473Z · score: 25 (10 votes) · LW · GW

(Previously on Less Wrong: "The Heckler's Veto Is Also Subject to the Unilateralist's Curse")

Comment by zack_m_davis on Takeaways from safety by default interviews · 2020-04-04T22:46:39.912Z · score: 22 (6 votes) · LW · GW

AI researchers are likely to stop and correct broken systems rather than hack around and redeploy them.

Ordinary computer programmers don't do this. (As it is written, "move fast and break things.") What will spur AI developers to greater caution?

Comment by zack_m_davis on Benito's Shortform Feed · 2020-03-28T03:16:51.886Z · score: 3 (2 votes) · LW · GW

Alternatively, "lawful universe" has lower Kolmogorov complexity than "lawful universe plus simulator intervention" and thereore gets exponentially more measure under the universal prior?? (See also "Infinite universes and Corbinian otaku" and "The Finale of the Ultimate Meta Mega Crossover".)

Comment by zack_m_davis on Benito's Shortform Feed · 2020-03-27T15:13:16.484Z · score: 3 (2 votes) · LW · GW

Why appeal to philosophical sophistication rather than lack of motivation? Humans given the power to make ancestor-simulations would create lots of interventionist sims (as is demonstrated by the popularity of games like The Sims), but if the vast hypermajority of ancestor-simulations are run by unaligned AIs doing their analogue of history research, that could "drown out" the tiny minority of interventionist simulations.

Comment by zack_m_davis on Can crimes be discussed literally? · 2020-03-24T17:26:02.839Z · score: 30 (10 votes) · LW · GW

If X are making claims that everyone knows are false, then there's no element of deception

"Everyone knows" is an interesting phrase. If literally everyone knew, what would be the function of making the claim? How do you end up with a system that wouldn't work without false assertions, and yet allegedly "everyone" knows that the assertions are false? It seems more likely that the reason the system wouldn't work without false assertions, is because someone is actually fooled. If the people who do know are motivated to prevent it from becoming common knowledge, "It's not deceptive because everyone knows" would be a tempting rationalization for maintaining the status quo.

Comment by zack_m_davis on When to Donate Masks? · 2020-03-23T01:52:16.689Z · score: 8 (5 votes) · LW · GW

if you donate masks today I expect them to be used much more quickly than if you wait and donate them when things are worse.

But this (strategically timing your donation because you don't expect the recipient to use the gift intelligently if you just straightforwardly gave when you noticed the opportunity) is kind of a horrifying situation to be in, right? If you can see the logic of the argument, why can't hospital administrators see it, too?—at least once it's been pointed out?

Comment by zack_m_davis on SARS-CoV-2 pool-testing algorithm puzzle · 2020-03-22T05:19:13.967Z · score: 5 (3 votes) · LW · GW

This reminds me of that cute information-theory puzzle about finding the ball with the different weight! I'm pretty dumb and bad at math, but I think the way this works is that since each test is a yes-or-no question, we can reduce our uncertainty by at most one bit with each test, and, as in Twenty Questions, we want to choose the question such that we get that bit.

A start on the simplest variant of the problem, where we're assuming the tests are perfect and just trying to minimize the number of tests: the probability of at least one person in the pool having the 'roni in pool size S is going to be 1 − (1 − P)^S, the complement of the probability of them all not having it. We want to choose S such that this quantity is close to 0.5, so S = ln(0.5)/ln(1 − P). (For example, if P = 0.05, then S ≈ 13.51.)

My next thought is that when we do get a positive test with this group size, we should keep halving that group to find out which person is positive—but that would only work with certainty if there were exactly one positive (if the "hit" is in one half of the group, you know it's not in the other), which isn't necessarily the case (we could have gotten "lucky" and got more than one hit in our group that was chosen to have a fifty-fifty shot of having at least one) ...
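
(Here's my back-of-the-envelope code for the pool-size calculation, plus one way to rescue the halving idea when there might be more than one positive: test both halves instead of inferring one from the other. A sketch, not a vetted solution.)

```python
# Back-of-the-envelope sketch (mine, not a vetted solution): pick the pool
# size S so that a pooled test is a near fifty-fifty (one-bit) question, then
# locate positives within a positive pool by repeated halving.
import math

def pool_size(p):
    """Pool size S with P(at least one positive) ~= 0.5, i.e. 1 - (1 - p)**S = 0.5."""
    return math.log(0.5) / math.log(1 - p)

print(pool_size(0.05))   # ~13.51

people = {i: False for i in range(14)}
people[3] = people[11] = True            # two hidden positives

def test(indices):
    """A pooled test: positive iff anyone in the group is positive."""
    return any(people[i] for i in indices)

def find_positives(indices):
    """Binary splitting that handles multiple positives by testing both halves."""
    if not test(indices):
        return []
    if len(indices) == 1:
        return indices
    mid = len(indices) // 2
    return find_positives(indices[:mid]) + find_positives(indices[mid:])

print(find_positives(list(range(14))))   # [3, 11]
```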

Partial credit???

Comment by zack_m_davis on Is the Covid-19 crisis a good time for x-risk outreach? · 2020-03-19T18:12:27.350Z · score: 25 (8 votes) · LW · GW
  1. We should trust the professionals who predict these sorts of things.

What? Why? How do you decide which professionals to trust? (Nick Bostrom is just some guy with a PhD; there are lots of those, and most of them aren't predicting a robot apocalypse. Eliezer Yudkowsky never graduated from high school!)

The reason I'm concerned about existential risk from artificial intelligence, is because the arguments actually make sense. (Human intelligence has had a big impact on the planet, check; there's no particular reason to expect humans to be the most powerful possible intelligence, check; there's no particular reason to expect an arbitrary intelligence to have humane values, check; humans are made out of atoms that can be used for other things, check and mate.)

If you think your audience just isn't smart enough to evaluate arguments, then, gee, I don't know, maybe using a moment of particular receptiveness to plant a seed to get them to open their wallets to the right professionals later is the best you can do? That's a scary possibility; I would feel much safer about the fate of a world that knew how to systematically teach methods of thinking that get the right answer, rather than having to gamble on the people who know how to think about objective risks also being able to win a marketing war.

Comment by zack_m_davis on King and Princess · 2020-03-17T03:24:18.656Z · score: 2 (1 votes) · LW · GW

It's not an especially accurate depiction of royalty

You mean Tangled: The Series lied to me?!

Comment by zack_m_davis on The absurdity of un-referenceable entities · 2020-03-14T20:21:41.352Z · score: 7 (4 votes) · LW · GW

The un-referenceable may, at best, be inferred (although, of course, this statement is absurd in referring to the un-referenceable).

Would you also say that a lot of mathematics is absurd in this sense? For example, almost all real numbers are un-nameable (because there are uncountably many real numbers, but only countably many names you could give a number).
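
(Spelling out the counting argument, in case it's not obvious why "only countably many names" settles it:)

```latex
% Names are finite strings over some countable alphabet \Sigma, so there are
% only countably many of them, whereas the reals are uncountable:
\[
  \#\{\text{names}\}
  \;\le\; \#\Bigl(\textstyle\bigcup_{n \ge 1} \Sigma^{n}\Bigr)
  \;=\; \aleph_0
  \;<\; 2^{\aleph_0}
  \;=\; \#\mathbb{R}.
\]
% The nameable reals thus form a countable set, which has Lebesgue measure
% zero, so "almost all" reals are un-nameable.
```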

Comment by zack_m_davis on orthonormal's Shortform · 2020-03-13T04:41:52.800Z · score: 4 (2 votes) · LW · GW

Some authors use ostensive to mean the same thing as "extensional."

Comment by zack_m_davis on Zoom In: An Introduction to Circuits · 2020-03-11T18:15:09.615Z · score: 11 (6 votes) · LW · GW

As is demonstrated by the Hashlife algorithm, which exploits the redundancies for a massive speedup. That's not possible for things like SHA-256 (by design)!

Comment by zack_m_davis on The Heckler's Veto Is Also Subject to the Unilateralist's Curse · 2020-03-09T18:07:37.987Z · score: 18 (8 votes) · LW · GW

The quoted sentence claims that karma systems are a check against the unilateralist's curse specifically, not infohazards in general, as is made explicit in the final sentence of that paragraph ("Conversely, while a net-upvoted post might still be infohazardous [...]").

I've been envisioning "unilateralist's curse" as referring to situations where the average error in individual agents' estimates of the value of the initiative (what I called E in the post, but Bostrom et al. call the error d and say it's from a cdf F(d)) is zero, and the harm comes from the fact that the variance in error terms makes someone unilaterally act/veto when they shouldn't, in a way that could be corrected by "listening to their peers." If the community as a whole is systematically biased about the value of the initiative, that seems like a different, and harder, problem.

Comment by zack_m_davis on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T08:12:52.843Z · score: 4 (2 votes) · LW · GW

I think you should read Bostrom's actual paper

Thanks for the suggestion! I just re-skimmed the Bostrom et al. paper (it's been a while) and wrote up my thoughts in a top-level post.

that the reference class isn't

Here we face the tragedy of "reference class tennis". When you don't know how much to trust your own reasoning vs. someone else's, you might hope to defer to the historical record for some suitable reference class of analogous disputes. But if you and your interlocutor disagree on which reference class is appropriate, then you just have the same kind of problem again.

Comment by zack_m_davis on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T19:02:37.672Z · score: 44 (9 votes) · LW · GW

the Unilateralists's curse

The underlying statistical phenomenon is just regression to the mean: if people aren't perfect about determining how good something is, then the one who does the thing is likely to have overestimated how good it is.

I agree that people should take this kind of statistical reasoning into account when deciding whether to do things, but it's not at all clear to me that the "Unilateralist's Curse" catchphrase is a good summary of the policy you would get if you applied this reasoning evenhandedly: if people aren't perfect about determining how bad something is, then the one who vetoes the thing is likely to have overestimated how bad it is.

In order for the "Unilateralist's Curse" effect to be more important than the "Unilateralist's Blessing" effect, I think you need additional modeling assumptions to the effect that the payoff function is such that more variance is bad. I don't think this holds for the reference class of "blog posts criticizing institutions"? In a world with more variance in blog posts criticizing institutions, we get more good criticisms and more bad criticisms, which sounds like a good deal to me!
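
(A toy simulation of the symmetry I have in mind, using my own stripped-down setup with zero-mean Gaussian estimation errors rather than Bostrom et al.'s model verbatim:)

```python
# Stripped-down toy (my assumptions, not Bostrom et al.'s model verbatim):
# N agents each get a noisy, zero-mean-error estimate of an initiative's true
# value. The agent who unilaterally acts is the one with the highest estimate
# and tends to have overestimated the value (the "curse"); the agent who
# unilaterally vetoes is the one with the lowest estimate and tends to have
# overestimated the badness (the "blessing"). Same regression to the mean,
# pointed in opposite directions.
import random

random.seed(0)
N, TRIALS, TRUE_VALUE = 5, 100_000, 0.0
actor_bias = vetoer_bias = 0.0
for _ in range(TRIALS):
    estimates = [TRUE_VALUE + random.gauss(0, 1) for _ in range(N)]
    actor_bias += max(estimates) - TRUE_VALUE
    vetoer_bias += min(estimates) - TRUE_VALUE

print(actor_bias / TRIALS)   # ~ +1.16
print(vetoer_bias / TRIALS)  # ~ -1.16
```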

Comment by zack_m_davis on Credibility of the CDC on SARS-CoV-2 · 2020-03-07T22:31:52.577Z · score: 32 (12 votes) · LW · GW

I wrote a big critique outlining why I think it's bad, but I couldn't keep it civil and don't want to spend another hour editing it to be

If you post it anyway (maybe a top-level post for visibility?), I'll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this class of expense to conceal potentially-useful information, like big critiques. (As it is written of the fifth virtue, "Those who wish to fail must first prevent their friends from helping them.")

I'm really not trying to make anyone feel bad

Shouldn't you? If the OP is actually harmful, maybe the authors should feel bad for causing harm! Then the memory of that feeling might stop them from causing analogous harms in analogous future situations. That's what feelings are for, evolutionarily speaking.

Personally, I disapprove of this entire class of appeals-to-consequences (simpler to just say clearly what you have to say, without trying to optimize how other people will feel about it), but if you find "This post makes the community harder to defend, which is bad" compelling, I don't see why you wouldn't also accept "Making the authors feel bad would make the community easier to defend (in expectation), which is good".

Comment by zack_m_davis on Have epistemic conditions always been this bad? · 2020-03-04T16:38:10.200Z · score: 8 (3 votes) · LW · GW

A "no canceling anyone" promise isn't very valuable if most of the threat comes from third parties—if you're afraid to talk to me not because you're afraid of attacks from me, but because you're afraid that the intelligent social web will attack you for guilt-by-association with me. A confidentiality promise is more valuable—but it's also a lot more expensive. (I am now extremely reluctant to offer confidentiality promises, because even though my associates can confidently expect me to not try to use information to hurt them, I need the ability to say what I'm actually thinking when it's relevant and I don't know how to predict relevance in advance; there are just too many unpredictable situations where my future selves would have to choose between breaking a promise and lying by omission. This might be easier for people who construe lying by omission more narrowly than I do.)

Comment by zack_m_davis on Open & Welcome Thread - February 2020 · 2020-02-29T06:35:30.125Z · score: 4 (2 votes) · LW · GW

I haven't been able to find any discussion on LW about this.

I discuss this in "Heads I Win, Tails?—Never Heard of Her" ("Reality itself isn't on anyone's side, but any particular fact, argument, sign, or portent might just so happen to be more easily construed as "supporting" the Blues or the Greens [...]").

Richard Dawkins seemed surprised

I suspect Dawkins was motivatedly playing dumb, or "living in the should-universe". Indignation (e.g., at people motivatedly refusing to follow a simple logical argument because of their political incentives) often manifests itself as expression of incomprehension, but is distinguishable from literal incomprehension (e.g., by asking Dawkins to bet beforehand on what he thinks is going to happen after he Tweets that).

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-03T16:19:24.171Z · score: 9 (5 votes) · LW · GW

Oh, thanks for this explanation (strong-upvoted); you're right that distinguishing likelihoods and posteriors is really important. I also agree that single occasions only make for a very small update on character. (If this sort of thing comes up again, maybe consider explicitly making the likelihood/posterior point up front? It wasn't clear to me that that's what you were getting at with the great-great-great-grandparent.)

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-03T05:58:36.759Z · score: 0 (2 votes) · LW · GW

I agree that saying anything is, technically, Bayesian evidence about their character, but some statements are much more relevant to someone's character than others. When you say someone's response doesn't look like what you'd expect to hear from someone trying to figure out what's true, that's not very different from just saying that you suspect they're not trying to figure out what's true. Why not cut out the indirection? (That was a rhetorical question; the answer is, "Because it's polite.")

Maybe I'm wrong, but this looks to me less like the response I'd expect from someone not making a character assessment, and more like the response I'd expect from someone who's trying to make a character assessment (which could be construed as a social attack, by the sort of people who do that thing) while maintaining plausible deniability that they're not making a character assessment (in order to avoid being socially attacked on grounds of having made a social attack, by the sort of people who do that thing).

Comment by zack_m_davis on Book Review: Human Compatible · 2020-02-02T21:53:38.755Z · score: 0 (2 votes) · LW · GW

I'm wondering if the last paragraph was a mistake on my part—whether I should have picked a different example. The parent seems likely to have played a causal role in catalyzing new discussion on "A Drowning Child Is Hard to Find", but I'm much less interested in litigating the matter of cost-effectiveness numbers (which I know very little about) than I am in the principle that we want to have (or build the capacity to have, if we don't currently have that capacity) systematically truth-tracking intellectual discussions, rather than accepting allegedly-small distortions for instrumental marketing reasons of the form, "This argument isn't quite right, but it's close enough, and the correct version would scare away powerful and influential people from our very important cause." (As it is written of the fifth virtue, "The part of yourself that distorts what you say to others also distorts your own thoughts.")

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-01T21:40:57.780Z · score: 0 (2 votes) · LW · GW

But ... that's at least a probabilistic character assessment, right? Like, if someone exhibits a disposition to behave in ways that are more often done by bad-faith actors than good-faith actors, that likelihood ratio favors the "bad-faith actor" hypothesis, and Bayesian reasoning says you should update yourself incrementally. Right? What am I missing here?

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-01T16:35:19.964Z · score: 4 (2 votes) · LW · GW

You are entitled to your character assessment of Ben (Scott has argued that bias arguments have nowhere to go, while others including Ben contend that modeling motives is necessary), but if you haven't already read the longer series that the present post was distilled from, it might be useful for better understanding where Ben is coming from: parts 1 2 3 4 5 6.

Comment by zack_m_davis on how has this forum changed your life? · 2020-02-01T08:43:37.734Z · score: 39 (11 votes) · LW · GW

I do not recommend paying attention to the forum or "the community" as it exists today.

Instead, read the Sequences! (That is, the two-plus years of almost-daily blogging by Eliezer Yudkowsky, around which this forum and "the community" coalesced back in 'aught-seven to 'aught-nine.) Reading and understanding the core Sequences is genuinely life-changing on account of teaching you, not just to aspire to be "reasonable" as your culture teaches it, but how intelligence works on a conceptual level: how well-designed agents can use cause-and-effect entanglements to correlate their internal state with the outside world to build "maps that reflect the territory"—and then use those maps to compute plans that achieve their goals.

Again, read the Sequences! You won't regret it!

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-01T08:07:04.620Z · score: 17 (6 votes) · LW · GW

But the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty) is a really important message to get people thinking about ethics and how they want to contribute.

I mean, the $5K estimate is at least plausible. (I certainly don't know how to come up with a better estimate than the people at GiveWell, who I have every reason to believe are very smart and hard-working and well-intentioned.)

But I'm a little worried that by not being loud enough with the caveats, the EA movement's "discourse algorithm" (the collective generalization of "cognitive algorithm") might be accidentally running a distributed motte-and-bailey, where the bailey is "You are literally responsible for the death of another human being if you don't donate $5000" and the motte is "The $5000 estimate is plausible, and it's a really important message to get people thinking about ethics and how they want to contribute."

$5K is at least a nontrivial amount of money even for upper-middle–class people in rich countries. It takes more than 12 days at my dayjob for me to acquire that much money—it would be many more days for someone not lucky enough to have a cushy San Francisco software engineer dayjob. When I spend twelve days of my life paying for something for me or my friends, I expect to receive the thing I paid for: if I don't get it, I'm going to seek recourse from the seller. If, when challenged on not delivering the goods, the seller retreats to, "Well, that price was just an estimate, and the estimate was probably true as far as I knew at the time—and besides, it was a really important message to get you thinking about the value of my product," I would be pretty upset!

To be sure, there are significant disanalogies between buying a product and donating to charity, but insofar as those disanalogies lead to charities being much less constrained to actually accomplish the thing they claim to than businesses are (because all criticism can be deflected with, "But we're trying really hard and it's an important message"), that's not a point in favor of encouraging scrupulous idealists to pledge their lives to the top-rated charities rather than trying to optimize the local environment that they can actually get empirical feedback about.

To be clear, the picture I'm painting is an incredibly gloomy one. On the spherical-cow Econ 101 view of the world, altruists should just be able to straightforwardly turn money into utilons. Could our civilization's information-processing institutions really be that broken, that inadequate, for even that not to be true? Really?!

I can't claim to know. Not for certain.

You'll have to think it through for yourself.

Comment by zack_m_davis on Book Review: Human Compatible · 2020-01-31T07:02:15.002Z · score: 35 (12 votes) · LW · GW

(cross-posted from the Slate Star Codex comment section)

So probably [exaggerating near-term non-existential AI risks] is a brilliant rhetorical strategy with no downsides. But it still gives me a visceral "ick" reaction to associate with something that might not be accurate.

Listen to that "ick" reaction, Scott! That's evolution's way of telling you about all the downsides you're not currently seeing!

Specifically, the "If we get a reputation as the people who fall for every panic about AI [...] will we eventually cry wolf one too many times and lose our credibility before crunch time?" argument is about being honest so as to be trusted by others. But another reason to be honest is so that other people can have the benefits of accurate information. If you simply report the evidence and arguments that actually convinced you, then your audience can combine the information you're giving them with everything else they know, and make an informed decision for themselves.

This generalizes far beyond the case of AI. Take the "you can save a life for $3000" claim. How sure are you that that's actually true? If it's not true, that would be a huge problem not just because it's not representative of the weird things EA insiders are thinking about, but because it would be causing people to spend a lot of money on the basis of false information.

Comment by zack_m_davis on Raemon's Scratchpad · 2020-01-30T06:10:55.640Z · score: 4 (2 votes) · LW · GW

Votes that are 3 points also make me think this.

The 3-point votes are an enormous entropy leak: only 13 users have a 3-point weak upvote (only 8-ish of which I'd call currently "active"), and probably comparatively few 3-point votes are strong-upvotes from users with 100–249 karma. (In contrast, about 400 accounts have 2-point weak upvotes, which I think of as "basically everyone.")
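
(Rough arithmetic on the size of the leak, using the estimates above:)

```python
# Rough arithmetic (using the estimates above, which are approximate): the
# anonymity of a vote is about log2 of the number of users who could have
# cast it.
import math

print(math.log2(400))  # ~8.6 bits of anonymity for an ordinary 2-point vote
print(math.log2(13))   # ~3.7 bits if the vote is a 3-point weak upvote
```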

Comment by zack_m_davis on Comment section from 05/19/2019 · 2020-01-29T17:04:04.052Z · score: 8 (4 votes) · LW · GW

Thanks for asking! So, a Straussian reading was actually intended there.

(Sorry, I know this is really obnoxious. My only defense is that, unlike some more cowardly authors, on the occasions when I stoop to esotericism, I actually explain the Straussian reading when questioned.)

In context, I'm trying to defend the principle that we shouldn't derail discussions about philosophy on account of the author's private reason for being interested in that particular area of philosophy having to do with a contentious object-level topic. I first illustrated my point with an Occam's-razor/atheism example, but, as I said, I was worried that that might come off as self-serving: I want my point to be accepted because the principle I'm advancing is a good one, not due to the rhetorical trick of associating my interlocutor with something locally considered low-status, like religion. So I tried to think of another illustration where my stance (in favor of local validity, or "decoupling norms") would be associated with something low-status, and what I came up with was statistics-of-the-normal-distribution/human-biodiversity. Having chosen the illustration on the basis of the object-level topic being disreputable, it felt like effective rhetoric to link to an example and performatively "lean in" to the disrepute with a denunciation ("crank racist pseudoscientist").

In effect, the function of denouncing du Lion was not to denounce du Lion (!), but as a "showpiece" while protecting the principle that we need the unrestricted right to talk about math on this website. Explicitly Glomarizing my views on the merits of HBD rather than simply denouncing would have left an opening for further derailing the conversation on that. This was arguably intellectually dishonest of me, but I felt comfortable doing it because I expected many readers to "get the joke."

Comment by zack_m_davis on Book Review—The Origins of Unfairness: Social Categories and Cultural Evolution · 2020-01-27T06:50:57.693Z · score: 2 (1 votes) · LW · GW

no way to obtain a fair equilibrium or no way to obtain an equilibrium that is both fair and efficient. (How would you do that in my example, without going outside the game entirely

Taller person steps aside on even-numbered days, shorter person steps aside on odd-numbered days?? (If the "calendar cost" of remembering what day it is, is sufficiently small. But it might not be small if stepping-aside is mostly governed by ingrained habit.)

Comment by zack_m_davis on What are beliefs you wouldn't want (or would feel apprehensive about being) public if you had (or have) them? · 2020-01-27T05:24:53.271Z · score: 9 (4 votes) · LW · GW

(I feel bad for how little intellectually-honest engagement you must get, so I guess I'll chip in with some feedback.)

We overwhelmingly give custody of children to the statistically worse parent

Is the implied policy suggestion here to decrease the number of children being raised without married parents (e.g., by making divorce harder, discouraging premarital sex, encouraging abortion if the parents aren't married, &c.), or are you proposing awarding custody disputes to the father more often? Your phrasing ("the statistically worse parent") seems to suggest the latter, but the distribution of single fathers today is obviously not going to be the same as the distribution after a change in custody rules!

(Child care is cross-culturally assumed to be predominantly "women's work" for both evolutionary and cultural-evolutionary reasons: against that background, there's going to be a selection effect whereby men who volunteer to be primary caretakers are going to be disproportionately unusually well-suited to it.)

When your performance in a task is directly correlated to the presence or absence of another, what does that say about your value in that task?

If the presence or absence of the other also contributes to the task performance, then honestly, not much? If kids are better off in two-parent households, that's an argument in favor of two-parent households: if you have a thesis about women and mothers specifically, you need additional arguments for that.