Posts

Why Telling People They Don't Need Masks Backfired 2020-03-18T04:34:09.644Z · score: 29 (14 votes)
The Heckler's Veto Is Also Subject to the Unilateralist's Curse 2020-03-09T08:11:58.886Z · score: 53 (18 votes)
Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability 2020-02-09T00:34:39.680Z · score: 17 (10 votes)
Book Review—The Origins of Unfairness: Social Categories and Cultural Evolution 2020-01-21T06:28:33.854Z · score: 30 (8 votes)
Less Wrong Poetry Corner: Walter Raleigh's "The Lie" 2020-01-04T22:22:56.820Z · score: 21 (13 votes)
Don't Double-Crux With Suicide Rock 2020-01-01T19:02:55.707Z · score: 68 (19 votes)
Speaking Truth to Power Is a Schelling Point 2019-12-30T06:12:38.637Z · score: 52 (14 votes)
Stupidity and Dishonesty Explain Each Other Away 2019-12-28T19:21:52.198Z · score: 35 (15 votes)
Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think 2019-12-27T05:09:22.546Z · score: 94 (33 votes)
Funk-tunul's Legacy; Or, The Legend of the Extortion War 2019-12-24T09:29:51.536Z · score: 12 (19 votes)
Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk 2019-12-21T00:49:02.862Z · score: 64 (24 votes)
A Theory of Pervasive Error 2019-11-26T07:27:12.328Z · score: 21 (7 votes)
Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary 2019-11-22T06:18:59.497Z · score: 73 (23 votes)
Algorithms of Deception! 2019-10-19T18:04:17.975Z · score: 17 (6 votes)
Maybe Lying Doesn't Exist 2019-10-14T07:04:10.032Z · score: 58 (28 votes)
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists 2019-09-24T04:12:07.560Z · score: 215 (71 votes)
Schelling Categories, and Simple Membership Tests 2019-08-26T02:43:53.347Z · score: 52 (19 votes)
Diagnosis: Russell Aphasia 2019-08-06T04:43:30.359Z · score: 47 (13 votes)
Being Wrong Doesn't Mean You're Stupid and Bad (Probably) 2019-06-29T23:58:09.105Z · score: 16 (11 votes)
What does the word "collaborative" mean in the phrase "collaborative truthseeking"? 2019-06-26T05:26:42.295Z · score: 27 (7 votes)
The Univariate Fallacy 2019-06-15T21:43:14.315Z · score: 27 (11 votes)
No, it's not The Incentives—it's you 2019-06-11T07:09:16.405Z · score: 90 (31 votes)
"But It Doesn't Matter" 2019-06-01T02:06:30.624Z · score: 47 (31 votes)
Minimax Search and the Structure of Cognition! 2019-05-20T05:25:35.699Z · score: 15 (6 votes)
Where to Draw the Boundaries? 2019-04-13T21:34:30.129Z · score: 87 (37 votes)
Blegg Mode 2019-03-11T15:04:20.136Z · score: 18 (13 votes)
Change 2017-05-06T21:17:45.731Z · score: 1 (1 votes)
An Intuition on the Bayes-Structural Justification for Free Speech Norms 2017-03-09T03:15:30.674Z · score: 4 (8 votes)
Dreaming of Political Bayescraft 2017-03-06T20:41:16.658Z · score: 9 (3 votes)
Rationality Quotes January 2010 2010-01-07T09:36:05.162Z · score: 3 (6 votes)
News: Improbable Coincidence Slows LHC Repairs 2009-11-06T07:24:31.000Z · score: 7 (8 votes)

Comments

Comment by zack_m_davis on Takeaways from safety by default interviews · 2020-04-04T22:46:39.912Z · score: 17 (5 votes) · LW · GW

AI researchers are likely to stop and correct broken systems rather than hack around and redeploy them.

Ordinary computer programmers don't do this. (As it is written, "move fast and break things.") What will spur AI developers to greater caution?

Comment by zack_m_davis on Benito's Shortform Feed · 2020-03-28T03:16:51.886Z · score: 2 (1 votes) · LW · GW

Alternatively, "lawful universe" has lower Kolmogorov complexity than "lawful universe plus simulator intervention" and thereore gets exponentially more measure under the universal prior?? (See also "Infinite universes and Corbinian otaku" and "The Finale of the Ultimate Meta Mega Crossover".)

Comment by zack_m_davis on Benito's Shortform Feed · 2020-03-27T15:13:16.484Z · score: 3 (2 votes) · LW · GW

Why appeal to philosophical sophistication rather than lack of motivation? Humans given the power to make ancestor-simulations would create lots of interventionist sims (as is demonstrated by the popularity of games like The Sims), but if the vast hypermajority of ancestor-simulations are run by unaligned AIs doing their analogue of history research, that could "drown out" the tiny minority of interventionist simulations.

Comment by zack_m_davis on Can crimes be discussed literally? · 2020-03-24T17:26:02.839Z · score: 30 (10 votes) · LW · GW

If X are making claims that everyone knows are false, then there's no element of deception

"Everyone knows" is an interesting phrase. If literally everyone knew, what would be the function of making the claim? How do you end up with a system that wouldn't work without false assertions, and yet allegedly "everyone" knows that the assertions are false? It seems more likely that the reason the system wouldn't work without false assertions, is because someone is actually fooled. If the people who do know are motivated to prevent it from becoming common knowledge, "It's not deceptive because everyone knows" would be a tempting rationalization for maintaining the status quo.

Comment by zack_m_davis on When to Donate Masks? · 2020-03-23T01:52:16.689Z · score: 8 (5 votes) · LW · GW

if you donate masks today I expect them to be used much more quickly than if you wait and donate them when things are worse.

But this (strategically timing your donation because you don't expect the recipient to use the gift intelligently if you just straightforwardly gave when you noticed the opportunity) is kind of a horrifying situation to be in, right? If you can see the logic of the argument, why can't hospital administrators see it, too?—at least once it's been pointed out?

Comment by zack_m_davis on SARS-CoV-2 pool-testing algorithm puzzle · 2020-03-22T05:19:13.967Z · score: 5 (3 votes) · LW · GW

This reminds me of that cute information-theory puzzle about finding the ball with the different weight! I'm pretty dumb and bad at math, but I think the way this works is that since each test is a yes-or-no question, we can reduce our uncertainty by at most one bit with each test, and, as in Twenty Questions, we want to choose the question such that we get that bit.

A start on the simplest variant of the problem, where we're assuming the tests are perfect and just trying to minimize the number of tests: the probability of at least one person in the pool having the 'roni in pool size S is going to be 1 − (1 − P)^S, the complement of the probability of them all not having it. We want to choose S such that this quantity is close to 0.5, so S = ln(0.5)/ln(1 − P). (For example, if P = 0.05, then S ≈ 13.51.)
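(A minimal sketch of that calculation in Python, assuming infections are independent across pool members at rate p; the function name is mine, purely illustrative:)

```python
import math

def pool_size(p):
    """Pool size S such that P(at least one positive) = 1 - (1 - p)**S is about 0.5."""
    return math.log(0.5) / math.log(1 - p)

print(pool_size(0.05))  # ~13.51, so pools of 13 or 14 people
```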

My next thought is that when we do get a positive test with this group size, we should keep halving that group to find out which person is positive—but that would only work with certainty if there were exactly one positive (if the "hit" is in one half of the group, you know it's not in the other), which isn't necessarily the case (we could have gotten "lucky" and got more than one hit in our group that was chosen to have a fifty-fifty shot of having at least one) ...

Partial credit???

Comment by zack_m_davis on Is the Covid-19 crisis a good time for x-risk outreach? · 2020-03-19T18:12:27.350Z · score: 25 (8 votes) · LW · GW
  1. We should trust the professionals who predict these sorts of things.

What? Why? How do you decide which professionals to trust? (Nick Bostrom is just some guy with a PhD; there are lots of those, and most of them aren't predicting a robot apocalypse. Eliezer Yudkowsky never graduated from high school!)

The reason I'm concerned about existential risk from artificial intelligence is that the arguments actually make sense. (Human intelligence has had a big impact on the planet, check; there's no particular reason to expect humans to be the most powerful possible intelligence, check; there's no particular reason to expect an arbitrary intelligence to have humane values, check; humans are made out of atoms that can be used for other things, check and mate.)

If you think your audience just isn't smart enough to evaluate arguments, then, gee, I don't know, maybe using a moment of particular receptiveness to plant a seed to get them to open their wallets to the right professionals later is the best you can do? That's a scary possibility; I would feel much safer about the fate of a world that knew how to systematically teach methods of thinking that get the right answer, rather than having to gamble on the people who know how to think about objective risks also being able to win a marketing war.

Comment by zack_m_davis on King and Princess · 2020-03-17T03:24:18.656Z · score: 2 (1 votes) · LW · GW

It's not an especially accurate depiction of royalty

You mean Tangled: The Series lied to me?!

Comment by zack_m_davis on The absurdity of un-referenceable entities · 2020-03-14T20:21:41.352Z · score: 7 (4 votes) · LW · GW

The un-referenceable may, at best, be inferred (although, of course, this statement is absurd in referring to the un-referenceable).

Would you also say that a lot of mathematics is absurd in this sense? For example, almost all real numbers are un-nameable (because there are uncountably many real numbers, but only countably many names you could give a number).
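(Spelling out the counting argument: names are finite strings over some countable alphabet $\Sigma$, so

$$|\Sigma^*| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|,$$

and since any countable set of reals has measure zero, almost every real number, in the formal measure-theoretic sense, has no name.)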

Comment by zack_m_davis on orthonormal's Shortform · 2020-03-13T04:41:52.800Z · score: 4 (2 votes) · LW · GW

Some authors use ostensive to mean the same thing as "extensional."

Comment by zack_m_davis on Zoom In: An Introduction to Circuits · 2020-03-11T18:15:09.615Z · score: 8 (4 votes) · LW · GW

As is demonstrated by the Hashlife algorithm, which exploits the redundancies for a massive speedup. That's not possible for things like SHA-256 (by design)!

Comment by zack_m_davis on The Heckler's Veto Is Also Subject to the Unilateralist's Curse · 2020-03-09T18:07:37.987Z · score: 18 (8 votes) · LW · GW

The quoted sentence claims that karma systems are a check against the unilateralist's curse specifically, not infohazards in general, as is made explicit in the final sentence of that paragraph ("Conversely, while a net-upvoted post might still be infohazardous [...]").

I've been envisioning "unilateralist's curse" as referring to situations where the average error in individual agents' estimates of the value of the initiative (what I called E in the post, but Bostrom et al. call the error d and say it's from a cdf F(d)) is zero, and the harm comes from the fact that the variance in error terms makes someone unilaterally act/veto when they shouldn't, in a way that could be corrected by "listening to their peers." If the community as a whole is systematically biased about the value of the initiative, that seems like a different, and harder, problem.

Comment by zack_m_davis on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T08:12:52.843Z · score: 4 (2 votes) · LW · GW

I think you should read Bostrom's actual paper

Thanks for the suggestion! I just re-skimmed the Bostrom et al. paper (it's been a while) and wrote up my thoughts in a top-level post.

that the reference class isn't

Here we face the tragedy of "reference class tennis". When you don't know how much to trust your own reasoning vs. someone else's, you might hope to defer to the historical record for some suitable reference class of analogous disputes. But if you and your interlocutor disagree on which reference class is appropriate, then you just have the same kind of problem again.

Comment by zack_m_davis on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T19:02:37.672Z · score: 44 (9 votes) · LW · GW

the Unilateralist's curse

The underlying statistical phenomenon is just regression to the mean: if people aren't perfect about determining how good something is, then the one who does the thing is likely to have overestimated how good it is.

I agree that people should take this kind of statistical reasoning into account when deciding whether to do things, but it's not at all clear to me that the "Unilateralist's Curse" catchphrase is a good summary of the policy you would get if you applied this reasoning evenhandedly: if people aren't perfect about determining how bad something is, then the one who vetoes the thing is likely to have overestimated how bad it is.

In order for the "Unilateralist's Curse" effect to be more important than the "Unilateralist's Blessing" effect, I think you need additional modeling assumptions to the effect that the payoff function is such that more variance is bad. I don't think this holds for the reference class of "blog posts criticizing institutions"? In a world with more variance in blog posts criticizing institutions, we get more good criticisms and more bad criticisms, which sounds like a good deal to me!

Comment by zack_m_davis on Credibility of the CDC on SARS-CoV-2 · 2020-03-07T22:31:52.577Z · score: 32 (12 votes) · LW · GW

I wrote a big critique outlining why I think it's bad, but I couldn't keep it civil and don't want to spend another hour editing it to be

If you post it anyway (maybe a top-level post for visibility?), I'll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this class of expense to conceal potentially-useful information, like big critiques. (As it is written of the fifth virtue, "Those who wish to fail must first prevent their friends from helping them.")

I'm really not trying to make anyone feel bad

Shouldn't you? If the OP is actually harmful, maybe the authors should feel bad for causing harm! Then the memory of that feeling might stop them from causing analogous harms in analogous future situations. That's what feelings are for, evolutionarily speaking.

Personally, I disapprove of this entire class of appeals-to-consequences (simpler to just say clearly what you have to say, without trying to optimize how other people will feel about it), but if you find "This post makes the community harder to defend, which is bad" compelling, I don't see why you wouldn't also accept "Making the authors feel bad would make the community easier to defend (in expectation), which is good".

Comment by zack_m_davis on Have epistemic conditions always been this bad? · 2020-03-04T16:38:10.200Z · score: 8 (3 votes) · LW · GW

A "no canceling anyone" promise isn't very valuable if most of the threat comes from third parties—if you're afraid to talk to me not because you're afraid of attacks from me, but because you're afraid that the intelligent social web will attack you for guilt-by-association with me. A confidentiality promise is more valuable—but it's also a lot more expensive. (I am now extremely reluctant to offer confidentiality promises, because even though my associates can confidently expect me to not try to use information to hurt them, I need the ability to say what I'm actually thinking when it's relevant and I don't know how to predict relevance in advance; there are just too many unpredictable situations where my future selves would have to choose between breaking a promise and lying by omission. This might be easier for people who construe lying by omission more narrowly than I do.)

Comment by zack_m_davis on Open & Welcome Thread - February 2020 · 2020-02-29T06:35:30.125Z · score: 4 (2 votes) · LW · GW

I haven't been able to find any discussion on LW about this.

I discuss this in "Heads I Win, Tails?—Never Heard of Her" ("Reality itself isn't on anyone's side, but any particular fact, argument, sign, or portent might just so happen to be more easily construed as "supporting" the Blues or the Greens [...]").

Richard Dawkins seemed surprised

I suspect Dawkins was motivatedly playing dumb, or "living in the should-universe". Indignation (e.g., at people motivatedly refusing to follow a simple logical argument because of their political incentives) often manifests itself as expression of incomprehension, but is distinguishable from literal incomprehension (e.g., by asking Dawkins to bet beforehand on what he thinks is going to happen after he Tweets that).

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-03T16:19:24.171Z · score: 9 (5 votes) · LW · GW

Oh, thanks for this explanation (strong-upvoted); you're right that distinguishing likelihoods and posteriors is really important. I also agree that single occasions only make for a very small update on character. (If this sort of thing comes up again, maybe consider explicitly making the likelihood/posterior point up front? It wasn't clear to me that that's what you were getting at with the great-great-great-grandparent.)

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-03T05:58:36.759Z · score: 2 (1 votes) · LW · GW

I agree that saying anything is, technically, Bayesian evidence about their character, but some statements are much more relevant to someone's character than others. When you say someone's response doesn't look like what you'd expect to hear from someone trying to figure out what's true, that's not very different from just saying that you suspect they're not trying to figure out what's true. Why not cut out the indirection? (That was a rhetorical question; the answer is, "Because it's polite.")

Maybe I'm wrong, but this looks to me less like the response I'd expect from someone not making a character assessment, and more like the response I'd expect from someone who's trying to make a character assessment (which could be construed as a social attack, by the sort of people who do that thing) while maintaining plausible deniability about making a character assessment (in order to avoid being socially attacked on grounds of having made a social attack, by the sort of people who do that thing).

Comment by zack_m_davis on Book Review: Human Compatible · 2020-02-02T21:53:38.755Z · score: 0 (2 votes) · LW · GW

I'm wondering if the last paragraph was a mistake on my part—whether I should have picked a different example. The parent seems likely to have played a causal role in catalyzing new discussion on "A Drowning Child Is Hard to Find", but I'm much less interested in litigating the matter of cost-effectiveness numbers (which I know very little about) than I am in the principle that we want to have (or build the capacity to have, if we don't currently have that capacity) systematically truth-tracking intellectual discussions, rather than accepting allegedly-small distortions for instrumental marketing reasons of the form, "This argument isn't quite right, but it's close enough, and the correct version would scare away powerful and influential people from our very important cause." (As it is written of the fifth virtue, "The part of yourself that distorts what you say to others also distorts your own thoughts.")

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-01T21:40:57.780Z · score: 2 (1 votes) · LW · GW

But ... that's at least a probabilistic character assessment, right? Like, if someone exhibits a disposition to behave in ways that are more often done by bad-faith actors than good-faith actors, that likelihood ratio favors the "bad-faith actor" hypothesis, and Bayesian reasoning says you should update yourself incrementally. Right? What am I missing here?
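(A worked toy version of the update I have in mind; the numbers are invented for illustration. Say the prior odds of bad faith are 1:9, and the observed behavior is three times likelier from a bad-faith actor than a good-faith one. Then

$$\frac{1}{9} \times \frac{3}{1} = \frac{3}{9} = 1:3,$$

taking the posterior probability from 10% to 25%: an incremental update, not a conviction.)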

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-01T16:35:19.964Z · score: 4 (2 votes) · LW · GW

You are entitled to your character assessment of Ben (Scott has argued that bias arguments have nowhere to go, while others, including Ben, contend that modeling motives is necessary), but if you haven't already read the longer series that the present post was distilled from, it might be useful for better understanding where Ben is coming from: parts 1 2 3 4 5 6.

Comment by zack_m_davis on how has this forum changed your life? · 2020-02-01T08:43:37.734Z · score: 39 (11 votes) · LW · GW

I do not recommend paying attention to the forum or "the community" as it exists today.

Instead, read the Sequences! (That is, the two-plus years of almost-daily blogging by Eliezer Yudkowsky, around which this forum and "the community" coalesced back in 'aught-seven to 'aught-nine.) Reading and understanding the core Sequences is genuinely life-changing on account of teaching you, not just to aspire to be "reasonable" as your culture teaches it, but how intelligence works on a conceptual level: how well-designed agents can use cause-and-effect entanglements to correlate their internal state with the outside world to build "maps that reflect the territory"—and then use those maps to compute plans that achieve their goals.

Again, read the Sequences! You won't regret it!

Comment by zack_m_davis on REVISED: A drowning child is hard to find · 2020-02-01T08:07:04.620Z · score: 17 (6 votes) · LW · GW

But the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty) is a really important message to get people thinking about ethics and how they want to contribute.

I mean, the $5K estimate is at least plausible. (I certainly don't know how to come up with a better estimate than the people at GiveWell, who I have every reason to believe are very smart and hard-working and well-intentioned.)

But I'm a little worried that by not being loud enough with the caveats, the EA movement's "discourse algorithm" (the collective generalization of "cognitive algorithm") might be accidentally running a distributed motte-and-bailey, where the bailey is "You are literally responsible for the death of another human being if you don't donate $5000" and the motte is "The $5000 estimate is plausible, and it's a really important message to get people thinking about ethics and how they want to contribute."

$5K is at least a nontrivial amount of money even for upper-middle–class people in rich countries. It takes more than 12 days at my dayjob for me to acquire that much money—it would be many more days for someone not lucky enough to have a cushy San Francisco software engineer dayjob. When I spend twelve days of my life paying for something for me or my friends, I expect to receive the thing I paid for: if I don't get it, I'm going to seek recourse from the seller. If, when challenged on not delivering the goods, the seller retreats to, "Well, that price was just an estimate, and the estimate was probably true as far as I knew at the time—and besides, it was a really important message to get you thinking about the value of my product," I would be pretty upset!

To be sure, there are significant disanalogies between buying a product and donating to charity, but insofar as those disanalogies lead to charities being much less constrained to actually accomplish the thing they claim to than businesses are (because all criticism can be deflected with, "But we're trying really hard and it's an important message"), that's not a point in favor of encouraging scrupulous idealists to pledge their lives to the top-rated charities rather than trying to optimize the local environment that they can actually get empirical feedback about.

To be clear, the picture I'm painting is an incredibly gloomy one. On the spherical-cow Econ 101 view of the world, altruists should just be able to straightforwardly turn money into utilons. Could our civilization's information-processing institutions really be that broken, that inadequate, for even that not to be true? Really?!

I can't claim to know. Not for certain.

You'll have to think it through for yourself.

Comment by zack_m_davis on Book Review: Human Compatible · 2020-01-31T07:02:15.002Z · score: 35 (12 votes) · LW · GW

(cross-posted from the Slate Star Codex comment section)

So probably [exaggerating near-term non-existential AI risks] is a brilliant rhetorical strategy with no downsides. But it still gives me a visceral "ick" reaction to associate with something that might not be accurate.

Listen to that "ick" reaction, Scott! That's evolution's way of telling you about all the downsides you're not currently seeing!

Specifically, the "If we get a reputation as the people who fall for every panic about AI [...] will we eventually cry wolf one too many times and lose our credibility before crunch time?" argument is about being honest so as to be trusted by others. But another reason to be honest is so that other people can have the benefits of accurate information. If you simply report the evidence and arguments that actually convinced you, then your audience can combine the information you're giving them with everything else they know, and make an informed decision for themselves.

This generalizes far beyond the case of AI. Take the "you can save a life for $3000" claim. How sure are you that that's actually true? If it's not true, that would be a huge problem not just because it's not representative of the weird things EA insiders are thinking about, but because it would be causing people to spend a lot of money on the basis of false information.

Comment by zack_m_davis on Raemon's Scratchpad · 2020-01-30T06:10:55.640Z · score: 4 (2 votes) · LW · GW

Votes that are 3 points also make me think this.

The 3-point votes are an enormous entropy leak: only 13 users have a 3-point weak upvote (only 8-ish of which I'd call currently "active"), and probably comparatively few 3-point votes are strong-upvotes from users with 100–249 karma. (In contrast, about 400 accounts have 2-point weak upvotes, which I think of as "basically everyone.")
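(To put rough numbers on "enormous", assuming the user counts above: a 2-point vote hides the voter among ~400 accounts, about $\log_2 400 \approx 8.6$ bits of anonymity, while a 3-point vote hides them among ~13, about $\log_2 13 \approx 3.7$ bits. Each 3-point vote therefore leaks roughly

$$\log_2 \frac{400}{13} \approx 4.9$$

extra bits of information about who cast it.)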

Comment by zack_m_davis on Comment section from 05/19/2019 · 2020-01-29T17:04:04.052Z · score: 8 (4 votes) · LW · GW

Thanks for asking! So, a Straussian reading was actually intended there.

(Sorry, I know this is really obnoxious. My only defense is that, unlike some more cowardly authors, on the occasions when I stoop to esotericism, I actually explain the Straussian reading when questioned.)

In context, I'm trying to defend the principle that we shouldn't derail discussions about philosophy on account of the author's private reason for being interested in that particular area of philosophy having to do with a contentious object-level topic. I first illustrated my point with an Occam's-razor/atheism example, but, as I said, I was worried that that might come off as self-serving: I want my point to be accepted because the principle I'm advancing is a good one, not due to the rhetorical trick of associating my interlocutor with something locally considered low-status, like religion. So I tried to think of another illustration where my stance (in favor of local validity, or "decoupling norms") would be associated with something low-status, and what I came up with was statistics-of-the-normal-distribution/human-biodiversity. Having chosen the illustration on the basis of the object-level topic being disreputable, it felt like effective rhetoric to link to an example and performatively "lean in" to the disrepute with a denunciation ("crank racist pseudoscientist").

In effect, the function of denouncing du Lion was not to denounce du Lion (!), but as a "showpiece" while protecting the principle that we need the unrestricted right to talk about math on this website. Explicitly Glomarizing my views on the merits of HBD rather than simply denouncing would have left an opening for further derailing the conversation on that. This was arguably intellectually dishonest of me, but I felt comfortable doing it because I expected many readers to "get the joke."

Comment by zack_m_davis on Book Review—The Origins of Unfairness: Social Categories and Cultural Evolution · 2020-01-27T06:50:57.693Z · score: 2 (1 votes) · LW · GW

no way to obtain a fair equilibrium or no way to obtain an equilibrium that is both fair and efficient. (How would you do that in my example, without going outside the game entirely?)

Taller person steps aside on even-numbered days, shorter person steps aside on odd-numbered days?? (If the "calendar cost" of remembering what day it is, is sufficiently small. But it might not be small if stepping-aside is mostly governed by ingrained habit.)

Comment by zack_m_davis on What are beliefs you wouldn't want (or would feel apprehensive about being) public if you had (or have) them? · 2020-01-27T05:24:53.271Z · score: 9 (4 votes) · LW · GW

(I feel bad for how little intellectually-honest engagement you must get, so I guess I'll chip in with some feedback.)

We overwhelmingly give custody of children to the statistically worse parent

Is the implied policy suggestion here to decrease the number of children being raised without married parents (e.g., by making divorce harder, discouraging premarital sex, encouraging abortion if the parents aren't married, &c.), or are you proposing awarding custody disputes to the father more often? Your phrasing ("the statistically worse parent") seems to suggest the latter, but the distribution of single fathers today is obviously not going to be the same as the distribution after a change in custody rules!

(Child care is cross-culturally assumed to be predominantly "women's work" for both evolutionary and cultural-evolutionary reasons: against that background, there's going to be a selection effect whereby men who volunteer to be primary caretakers are going to be disproportionately unusually well-suited to it.)

When your performance in a task is directly correlated to the presence or absence of another, what does that say about your value in that task?

If the presence or absence of the other also contributes to the task performance, then honestly, not much? If kids are better off in two-parent households, that's an argument in favor of two-parent households: if you have a thesis about women and mothers specifically, you need additional arguments for that.

Comment by zack_m_davis on Matt Goldenberg's Short Form Feed · 2020-01-26T23:26:26.699Z · score: 2 (1 votes) · LW · GW

Hopefully, it will be easier in the Bay Area than it would be otherwise.

Speaking as a Bay Area native,[1] I would not use the word "hopefully" here!

(One would hope to find or create a subgroup, but it would be nicer if it were possible to do this somewhere with less-insane housing prices and ambient culture. Hoping that it needs to be done here on account of just having moved here would be the sunk cost fallacy.)


  1. Raised in Walnut Creek, presently in Berkeley. ↩︎

Comment by zack_m_davis on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2020-01-26T04:53:32.319Z · score: 14 (4 votes) · LW · GW

(Thanks for your patience.)

This is part of what you mean when you say the report-drafting scientist is "not a bad person"—they've followed the letter of the moral law as best they can [...] your judgment ("I guess they're not a bad person") is the judgment that morality encourages you to give

So, from my perspective as an author (which, you know, could be wrong), that line was mostly a strategic political concession: there's this persistent problem where when you try to talk about harms from people being complicit with systems of deception (not even to do anything about it, but just to talk about the problem), the discussion immediately gets derailed on, "What?! Are you saying I'm a bad person!? How dare you!" ... which is a much less interesting topic.

The first line of defense against this kind of derailing is to be very clear about what is being claimed (which is just good intellectual practice that you should be doing anyway): "By systems of deception, I mean processes that systematically result in less accurate beliefs—the English word 'deception' is often used with moralizing connotations, but I'm talking about a technical concept that I can implement as literal executable Python programs. Similarly, while I don't yet have an elegant reduction of the underlying game theory corresponding to the word 'complicity' ..."

The second line of defense is to throw the potential-derailer a bone in the form of an exculpatory disclaimer: "I'm not trying to blame anyone, I'm just saying that ..." Even if (all other things being equal) you would prefer to socially punish complicity with systems of deception, by precommitting to relinquish the option to punish, you can buy a better chance of actually having a real discussion about the problem. (Making the precommitment credible is tough, though.)

Ironically, this is an instance of the same problem it's trying to combat ("distorting communication to appease authority" and "distorting communication in order to appease people who are afraid you're trying to scapegoat them on the pretext of them distorting communication to appease authority" are both instances of "distorting communication because The Incentives"), but hopefully a less severe one, whose severity is further reduced by explaining that I'm doing it in the comments.

You can also think of the "I'm not blaming you, but seriously, this is harmful" maneuver as an interaction between levels: an axiological attempt to push for a higher moral standard in a given community, while acknowledging that the community does not yet uphold the higher standard (analogous to a moral attempt to institute tougher laws, while acknowledging that the sin in question is not a crime under current law).

noticing small lies committed by accident or under stress.

Lies committed "by accident"? What, like unconsciously? (Maybe the part of your brain that generated this sentence doesn't disagree with Jessica about the meaning of the word lie as much as the part of your brain that argues about intensional definitions??)

Comment by zack_m_davis on 2018 Review: Voting Results! · 2020-01-24T22:26:09.019Z · score: 8 (4 votes) · LW · GW

I'd rather get an answer to my initial comment - why it makes sense to you/them

Sure. I explained my personal enthusiasm for the Review in a November comment.

Comment by zack_m_davis on 2018 Review: Voting Results! · 2020-01-24T21:14:24.606Z · score: 6 (3 votes) · LW · GW

What, in your view, is the main issue? Other than printing/distribution costs, the only other problem that springs to mind is the opportunity cost of the labor of whoever does the design/typesetting, but I don't think either of us is in a good position to assess that. What bad thing happens if the people who run a website also want to print a few paper books?

Comment by zack_m_davis on 2018 Review: Voting Results! · 2020-01-24T18:30:39.527Z · score: 16 (7 votes) · LW · GW

Print-on-demand books aren't necessarily very expensive: I've made board books for my friend's son in print runs of one or two for like thirty bucks per copy. If the team has some spare cash and someone wants to do the typesetting, a tiny print run of 100 copies could make sense as "cool in-group souvenir", even if it wouldn't make sense as commercial publishing.

Comment by zack_m_davis on Book Review—The Origins of Unfairness: Social Categories and Cultural Evolution · 2020-01-23T07:29:25.187Z · score: 5 (3 votes) · LW · GW

Apologies—my blog distillation of "what I learned" is glossing over a lot of stuff that the actual book covers properly: the difference between models where agents only meet the other type vs. also their own type is discussed in §3.3.2–3, and "taller person leads, shorter person follows" is an example of what O'Connor calls "gradient markers" in §2.3.2.

As far as dancing goes, I think it's kind of like how we give cute mnemonic names like "Hawk–Dove" to payoff matrices of a particular form that don't quite make sense as a literal story about literal hawks and literal doves, but evolutionary game theory in general really is useful for understanding the behavior of animals (including birds).

Comment by zack_m_davis on How Doomed are Large Organizations? · 2020-01-22T04:17:29.114Z · score: 9 (5 votes) · LW · GW

#2 could be a "baptists and bootleggers" effect: ideological activists (the "baptists") want to change Society; mazey organizations (the "bootleggers") can offer the activists "shallow" concessions that (you can tell if you scrutinize closely enough, but almost no one does) don't actually end up changing Society much, but do shut out less-mazey competitors who can't afford to make the concessions.

Comment by zack_m_davis on Preliminary thoughts on moral weight · 2020-01-13T03:12:24.596Z · score: 14 (8 votes) · LW · GW

This kind of thinking actively drives me and many others I know away from LW/EA/Rationality

And that kind of thinking (appeal to the consequence of repelling this-and-such kind of person away from some alleged "community") has been actively driving me away. I wonder if there's some way to get people to stop ontologizing "the community" and thereby reduce the perceived need to fight for control of the "LW"/"EA"/"rationalist" brand names? (I need to figure out how to stop ontologizing, because I'm exhausted from fighting.) Insofar as "rationality" is a thing, it's something that Luke-like optimization processes and Zvi-like optimization processes are trying to approximate, not something they're trying to fight over.

Comment by zack_m_davis on Realism about rationality · 2020-01-12T19:31:23.741Z · score: 4 (2 votes) · LW · GW

I struggle to name a way that evolution affects an everyday person (ignoring irrelevant things like atheism-religion debates).

Evolutionary psychology?

Comment by zack_m_davis on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2020-01-06T08:19:38.850Z · score: 29 (5 votes) · LW · GW

when you talk to them in private they get everything 100% right.

I'm happy for them, but I thought the point of having taxpayer-funded academic departments was so that people who aren't insider experts can have accurate information with which to inform decisions? Getting the right answer in private can only help those you talk to in private.

I also don't think we live in the world where everyone has infinite amounts of slack to burn endorsing taboo ideas and nothing can possibly go wrong.

Can you think of any ways something could possibly go wrong if our collective map of how humans work fails to reflect the territory?

(I drafted a vicious and hilarious comment about one thing that could go wrong, but I fear that site culture demands that I withhold it.)

Comment by zack_m_davis on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T06:41:25.356Z · score: 2 (1 votes) · LW · GW

Oh, sorry, I wasn't trying to offer a legal opinion; I was just trying to convey worldview-material while riffing off your characterization of "defrauding everyone about the El Dorado thing."

Comment by zack_m_davis on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T02:32:44.950Z · score: 21 (3 votes) · LW · GW

Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x?

For some multiplier, yes. (I don't know what the multiplier is.) If potentates would murder me on the spot unless I deny that they live acting by others' action, and affirm that they are loved even if they don't give and are strong independently of a faction, then I will say those things in order to not be murdered on the spot.

I guess I need to clarify something: I tend to talk about this stuff in the language of virtues and principles rather than the language of consequentialism, not because I think the language of virtues and principles is literally true as AI theory, but because humans can't use consequentialism for this kind of thing. Some part of your brain is performing some computation that, if it works, to the extent that it works, is mirroring Bayesian decision theory. But that doesn't help the part of you that can talk, which can be reached by the part of me that can talk.

"Speak the truth, even if your voice trembles" isn't a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has "Speak the truth, even if your voice trembles" as a slogan might—just might be able to do science or better—to get the goddamned right answer even when the local analogue of the Pope doesn't like it. I falsifiably predict that a culture that has "Use Bayesian decision theory to decide whether or not to speak the truth" as its slogan won't be able to do science—Platonically, the math has to exist, but letting humans appeal to Platonic math whenever they want is just too convenient of an excuse.

Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x? I falsifiably predict that your answer is "Yes." Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)

(If so, are you sure that's not why you don't think the truth is more offensive than you currently think it is?)

Great question! No, I'm not sure. But if my current view is less wrong than the mainstream, I expect to do good by talking about it, even if there exists an even better theory that I wouldn't be brave enough to talk about.

Immaterial souls are stabbed all the time in the sense that their opinions are discredited.

Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).

Comment by zack_m_davis on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-05T03:15:36.675Z · score: 37 (8 votes) · LW · GW

I think the Vassarian–Taylorist conflict–mistake synthesis moral is that in order to perform its function, the English court system needs to be able to punish Raleigh for "fraud" on the basis of his actions relative to what he knew or could have reasonably been expected to know, even while Raleigh is subjectively the hero of his own story and a sympathetic psychologist could eloquently and truthfully explain how easy it was for him to talk himself into a biased narrative.

Where mistake theorists treat politics as "science, engineering, or medicine" and conflict theorists treat politics as war, this view treats politics as evolutionary game theory: the unfolding over time of a population of many dumb, small agents executing strategies, forming coalitions, occasionally switching strategies to imitate those that are more successful in the local environment, &c. The synthesis view is mistake-theoretic insofar as the little agents are understood to be playing far from optimally and could do much better if they were smarter, but conflict-theoretic insofar as the games being played have large zero-sum components and you mostly can't take the things the little agents say literally. The "mistakes" aren't random and not easily fixable with more information (in contrast to how if I said 57 was prime and you said "But 3 × 19", I would immediately say "Oops"), but rather arise from the strategies being executed: it's not a coincidence that Raleigh talked himself into a narrative where he was a lone angel who would discover limitless gold.

Agents select beliefs either because they are true (and therefore useful for navigating the world) or because they successfully deceive other agents into mis-navigating the world in a way that benefits the belief-holder. "Be more charitable to other people" isn't necessarily great advice in general, because while sometimes other agents have useful true information to offer (Raleigh's The Discovery of Guiana "includes some material of a factual nature"), it's hard to distinguish from misinformation that was optimized to benefit the agents who propagate it (Discovery of Guiana also says you should invest in Raleigh's second expedition).

Mistake theorists think conflict theorists are making a mistake; conflict theorists think mistake theorists are the enemy. Evolutionary game theorists think that conflict theorists are executing strategies adapted to an environment predominated by zero-sum games, and that mistake theorists are executing strategies adapted to an environment containing cooperative games (where the existence of a mechanism for externally enforcing agreements, like a court system, aligns incentives and thereby makes it easier to propagate true information).
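(A toy sketch of the "population of small agents executing strategies" picture, under illustrative assumptions: discrete-time replicator dynamics on the standard Hawk–Dove payoffs, with resource value v and conflict cost c of my choosing:)

```python
def replicator_step(p_hawk, v=2.0, c=3.0, dt=0.1):
    """One step of replicator dynamics for Hawk-Dove.

    Payoffs: Hawk vs. Hawk = (v - c) / 2, Hawk vs. Dove = v,
             Dove vs. Hawk = 0,           Dove vs. Dove = v / 2.
    """
    f_hawk = p_hawk * (v - c) / 2 + (1 - p_hawk) * v
    f_dove = (1 - p_hawk) * v / 2
    f_avg = p_hawk * f_hawk + (1 - p_hawk) * f_dove
    return p_hawk + dt * p_hawk * (f_hawk - f_avg)

p = 0.1
for _ in range(500):
    p = replicator_step(p)
print(p)  # converges to v/c = 0.667: a stable mix of strategies
```

(No agent in this model ever evaluates an argument; the long-run mix of strategies is fixed by the payoffs alone, which is why you mostly can't take the things the little agents say literally.)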

Comment by zack_m_davis on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T21:05:17.167Z · score: 4 (2 votes) · LW · GW

When designing norms, we should take into account an asymmetry between reading and writing: each comment is only written once, but read many times. Each norm imposed on writers to not be unduly annoying constrains the information flow of the forum much more than each norm imposed on readers to not be unduly annoyed.

Comment by zack_m_davis on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T17:05:40.492Z · score: 16 (7 votes) · LW · GW

Why should Said be the one to change, though? Maybe relatively subtle tweaks to your reading style could make a big difference.

A "surprised bafflement" tone is often seen as a social attack because it's perceived as implying, "You should know this already, therefore I'm surprised that you don't, therefore I should have higher status than you." But that's not the only possible narrative. What happens if you reframe your reaction as, "He's surprised, but surprise is the measure of a poor hypothesis—the fact that he's so cluelessly self-centered as to not be able to predict what other people know means that I should have higher status"?

Comment by zack_m_davis on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T02:48:08.538Z · score: 33 (11 votes) · LW · GW

What I meant by the word "our" was "the broader context culture-at-large," not Less Wrong or my own personal home culture or anything like that. Apologies, that could've been clearer.

No, I got that, I was just using the opportunity to riff off your "In My Culture" piece[1] while defending Said, who is a super valuable commenter who I think is being treated pretty unfairly in this 133-comment-and-counting meta trainwreck!

Sure, sometimes he's insistent on pressing for rigor in a way that could seem "nitpicky" or "dense" to readers who, like me, are more likely to just shrug and say, "Meh, I think I mostly get the gist of what the author is trying to say" rather than homing in on a particular word or phrase and writing a comment asking for clarification.

But that's valuable. I am glad that a website nominally devoted to mastering the hidden Bayesian structure of cognition to the degree of precision required to write a recursively self-improving superintelligence to rule over our entire future lightcone has people whose innate need for rigor is more demanding than my sense of "Meh, I think I mostly get the gist"!


  1. This is actually the second time in four months. Sorry, it writes itself! ↩︎

Comment by zack_m_davis on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T00:57:34.022Z · score: 2 (1 votes) · LW · GW

I also apologize for breaking a lot of the comment-permalinks in that thread

It looks like the post comment-counter is also broken. (The header for this post says "4 comments".)

Comment by zack_m_davis on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T21:54:29.524Z · score: 39 (11 votes) · LW · GW

I note that we, as a culture, have reified a term for this, which is "sealioning."

Perhaps in your culture. In my culture, use of the term "sealioning" is primarily understood as an expression of anti-intellectualism (framing requests for dialogue as aggression).

In my culture, while the need to say "I don't expect engaging with you to be productive, therefore I must decline this and all future requests for dialogue from you" is not unheard of, it is seen as a sad and unusual occasion—definitely not something meriting a short codeword with connotations of contempt.

Comment by zack_m_davis on Don't Double-Crux With Suicide Rock · 2020-01-02T00:04:38.795Z · score: 14 (5 votes) · LW · GW

Everything I'm saying is definitely symmetric across persons, even if, as an author, I prefer to phrase it in the second person. (A previous post included a clarifying parenthetical to this effect at the end, but this one did not.)

That is, if someone who trusted your rationality noticed that you seemed visibly unmoved by their strongest arguments, they might think that the lack of agreement implies that they should update towards your position, but another possibility is that their trust has been misplaced! If they find themselves living in a world of painted rocks where you are one of the rocks, then it may come to pass that protecting the sanctity of their map would require them to master the technique of lonely dissent.

You could argue that my author's artistic preference to phrase things in the second person is misleading, but I'm not sure what to do about that while still accomplishing everything else I'm trying to do with my writing: my reply to Wei Dai and a Reddit user's commentary on another previous post seem relevant.

Comment by zack_m_davis on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T22:27:13.936Z · score: 5 (3 votes) · LW · GW

I agree with your main point that authors are not obligated to respond to comments, but—

I don't know whose judgement you would trust on this, but if I drag Eliezer into this thread, and have him say decisively that the norms of LessWrong should not put an obligation on authors to respond to every question, and to be presumed wrong or ignorant in the absence of a response, would that change your mind on this?

Why would this kind of appeal-to-authority change his mind? (That was a rhetorical question; I wouldn't expect you to reply to this comment unless you really wanted to.) Said thinks his position is justified by normatively correct general principles. If he's wrong, he's wrong because of some counterargument that normative general principles don't actually work like that, not because of Eliezer Yudkowsky's say-so.

Comment by zack_m_davis on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T22:08:34.228Z · score: 7 (3 votes) · LW · GW

There is always an obligation by any author to respond to anyone's comment along these lines. [...] What is the point of posting here, if you're not going to engage with commenters?

Can you clarify what you mean by "along these lines"? Not all comments or commenters are equally worth engaging with (in terms of some idealized "insight per unit effort" metric).

I think I agree that simple questions like "What do you mean by this-and-such word?" are usually not that expensive to answer, but there are times when I write off a comment or commenter as not worth my time, and it can be annoying when someone is being unduly demanding even after a "reasonable" attempt to clarify has been made.