Algorithms of Deception! 2019-10-19T18:04:17.975Z · score: 17 (6 votes)
Maybe Lying Doesn't Exist 2019-10-14T07:04:10.032Z · score: 58 (26 votes)
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists 2019-09-24T04:12:07.560Z · score: 199 (65 votes)
Schelling Categories, and Simple Membership Tests 2019-08-26T02:43:53.347Z · score: 52 (19 votes)
Diagnosis: Russell Aphasia 2019-08-06T04:43:30.359Z · score: 47 (13 votes)
Being Wrong Doesn't Mean You're Stupid and Bad (Probably) 2019-06-29T23:58:09.105Z · score: 16 (11 votes)
What does the word "collaborative" mean in the phrase "collaborative truthseeking"? 2019-06-26T05:26:42.295Z · score: 27 (7 votes)
The Univariate Fallacy 2019-06-15T21:43:14.315Z · score: 27 (11 votes)
No, it's not The Incentives—it's you 2019-06-11T07:09:16.405Z · score: 91 (29 votes)
"But It Doesn't Matter" 2019-06-01T02:06:30.624Z · score: 47 (31 votes)
Minimax Search and the Structure of Cognition! 2019-05-20T05:25:35.699Z · score: 15 (6 votes)
Where to Draw the Boundaries? 2019-04-13T21:34:30.129Z · score: 80 (33 votes)
Blegg Mode 2019-03-11T15:04:20.136Z · score: 18 (13 votes)
Change 2017-05-06T21:17:45.731Z · score: 1 (1 votes)
An Intuition on the Bayes-Structural Justification for Free Speech Norms 2017-03-09T03:15:30.674Z · score: 4 (8 votes)
Dreaming of Political Bayescraft 2017-03-06T20:41:16.658Z · score: 9 (3 votes)
Rationality Quotes January 2010 2010-01-07T09:36:05.162Z · score: 3 (6 votes)
News: Improbable Coincidence Slows LHC Repairs 2009-11-06T07:24:31.000Z · score: 7 (8 votes)


Comment by zack_m_davis on The Credit Assignment Problem · 2019-11-08T22:35:07.327Z · score: 26 (7 votes) · LW · GW

I can't for the life of me remember what this is called

Shapley value

(Best wishes, Less Wrong Reference Desk)

Comment by zack_m_davis on Daniel Kokotajlo's Shortform · 2019-11-03T19:52:03.399Z · score: 3 (2 votes) · LW · GW

Related: "Is Clickbait Destroying Our General Intelligence?"

Comment by zack_m_davis on [Question] When Do Unlikely Events Should Be Questioned? · 2019-11-03T17:36:16.919Z · score: 2 (1 votes) · LW · GW

the probability of observing something with a low probability is very different from the probability of observing specifically that low probability event

Right. For example, suppose you have a biased coin that comes up Heads 80% of the time, and you flip it 100 times. The single most likely sequence of flips is "all Heads." (Consider that you should bet heads on any particular flip.) But it would be incredibly shocking to actually observe 100 Headses in a row (probability 0.8¹⁰⁰ ≈ 2.037 · 10⁻¹⁰). Other sequences have less probability per individual sequence, but there are vastly more of them: there's only one way to get "all Heads", but there are 100 possible ways to get "99 Headses and 1 Tails" (the Tails could be the 1st flip, or the 2nd, or ...), 4,950 ways to get "98 Headses and 2 Tailses", and so on. It turns out that you're almost certain to observe a sequence with about 20 Tailses—you can think of this as being where the "number of ways this reference class of outcomes could be realized" factor balances out the "improbability of an individual outcome" factor. For more of the theory here, see Chapter 4 of Information Theory, Inference, and Learning Algorithms.
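The arithmetic above can be checked directly with a quick sketch in Python (`prob_k_tails` is just an illustrative name):

```python
from math import comb

p_heads = 0.8
n = 100

# Probability of observing *some* sequence with exactly k Tails:
# C(n, k) ways to place the Tails, each specific sequence having
# probability 0.8^(n-k) * 0.2^k.
def prob_k_tails(k):
    return comb(n, k) * (p_heads ** (n - k)) * ((1 - p_heads) ** k)

all_heads = p_heads ** n  # most likely *single* sequence, ~2.04e-10
mode = max(range(n + 1), key=prob_k_tails)
print(mode)  # 20: the typical number of Tails
# Nearly all the probability mass sits within a few Tails of 20:
print(sum(prob_k_tails(k) for k in range(10, 31)))
```

The "number of ways" factor `comb(n, k)` grows much faster near k = 20 than the per-sequence probability shrinks, which is why the all-Heads sequence, despite being the single most likely one, is almost never what you see.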

Comment by zack_m_davis on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-03T16:22:06.339Z · score: 2 (1 votes) · LW · GW

I mean, yes, but we still usually want to talk about collections of humans (like a "corporation" or "the economy") producing highly optimized outputs, like pencils, even if no one human knows everything that must be known to make a pencil. If someone publishes bad science about the chemistry of graphite, which results in the people in charge of designing a pencil manufacturing line making a decision based on false beliefs about the chemistry of graphite, that makes the pencils worse, even if the humans never achieve unanimity and you don't want to use the language of "agency" to talk about this process.

Comment by zack_m_davis on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-03T08:43:11.024Z · score: 5 (2 votes) · LW · GW

Thanks for asking. The reason artificial general intelligence is an existential risk is because agentic systems that construct predictive models of their environment can use those models to compute what actions will best achieve their goals (and most possible goals kill everyone when optimized hard enough because people are made of atoms that can be used for other things).

The "compute what actions will best achieve goals" trick doesn't work when the models aren't accurate! This continues to be the case when the agentic system is made out of humans. So if our scientific institutions systematically produce less-than-optimally-informative output due to misaligned incentives, that's a factor that makes the "human civilization" AI dumber, and therefore less good at not accidentally killing itself.

Comment by zack_m_davis on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T17:09:31.305Z · score: 30 (10 votes) · LW · GW

Sorry, let me clarify: I was specifically reacting to the OP's characterization of "throw in the towel while making it look like they were claiming victory." Now, if that characterization is wrong, then my comment becomes either irrelevant (if you construe it as a conditional whose antecedent turned out to be false: "If DeepMind decided to throw in the towel while making it look like ..., then is that good news or bad news") or itself misleading (if you construe it as me affirming and propagating the misapprehension that DeepMind is propagating misapprehensions—and if you think I'm guilty of that, then you should probably downvote me and the OP so that the Less Wrong karma system isn't complicit with the propagation of misapprehensions).

I agree that the "Grandmaster-level"/"ranked above 99.8% of active players" claims are accurate. But I also think it's desirable for intellectuals to aspire to high standards of intent to inform, for which accuracy of claims is necessary but not sufficient, due to the perils of selective reporting.

Imagine that, if you spoke to the researchers in confidence (or after getting them drunk), they would agree with the OP's commentary that "AlphaStar doesn't really do the 'strategy' part of real-time strategy [...] because there's no representation of causal thinking." (This is a hypothetical situation to illustrate the thing I'm trying to say about transparency norms; maybe there is some crushing counterargument to the OP that I'm not aware of because I'm not a specialist in this area.) If that were the case, why not put that in the blog post in similarly blunt language, if it's information that readers would consider relevant? If the answer to that question is, "That would be contrary to the incentives; why would anyone 'diss' their own research like that?" ... well, the background situation that makes that reply seem normative is what I'm trying to point at with the "information warfare" metaphor: it's harder to figure out what's going on with AI in a world in which the relevant actors are rewarded and selected for reporting impressive-seeming capability results subject to the constraint of not making any false statements, than a world in which actors are directly optimizing for making people more informed about what's going on with AI.

Comment by zack_m_davis on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T05:09:54.671Z · score: 7 (13 votes) · LW · GW

I think that DeepMind realized they'd need another breakthrough to do what they did to Go, and decided to throw in the towel while making it look like they were claiming victory.

Is this mildly good news on the existential risk front (because the state of the field isn't actually as advanced as it looks), or extremely bad news (because we live in a world of all-pervasive information warfare where no one can figure out what's going on because even the reports of people whose job it is to understand what's going on are distorted by marketing pressures)?

Comment by zack_m_davis on Is there a definitive intro to punishing non-punishers? · 2019-11-01T02:56:43.042Z · score: 4 (2 votes) · LW · GW

A different Cosmides-and-Tooby (and Michael E. Price) take:

Unfortunately, these results do not make the evolution of adaptations for collective action any less mysterious. Because punishing a free rider would generally have entailed some nontrivial cost, each potential punisher has an incentive to defect—that is, to avoid this cost by not punishing acts of free riding. Thus, the provision of punishment is itself a public good: Each individual has an incentive to free ride on the punishment activities of others. Hence, second-order free riders should be fitter (or better off) than punishers. Without a way of solving this second-order free rider problem, cooperation should unravel, with nonparticipation and nonpunishment the equilibrium outcome. Even worse, this problem reappears at each new level, revealing an infinite regress problem: Punishment needs to be visited on free riders on the original public good, and on those who do not punish free riders, and on those who do not punish those who do not punish free riders, and so on.

Comment by zack_m_davis on Maybe Lying Doesn't Exist · 2019-10-30T15:04:26.400Z · score: 3 (2 votes) · LW · GW

I will endeavor to make my intuitions more rigorous and write up the results in a future post.

Comment by zack_m_davis on [Site Update] Subscriptions, Bookmarks, & Pingbacks · 2019-10-30T04:34:06.903Z · score: 9 (5 votes) · LW · GW

Three cheers for pingbacks! Hip, hip

Comment by zack_m_davis on Maybe Lying Doesn't Exist · 2019-10-28T04:27:14.962Z · score: 12 (4 votes) · LW · GW

(Thanks for the questioning!—and your patience.)

In order to compute what actions will have the best consequences, you need to have accurate beliefs—otherwise, how do you know what the best consequences are?

There's a sense in which the theory of "Use our methods of epistemic rationality to build predictively accurate models, then use the models to decide what actions will have the best consequences" is going to be meaningfully simpler than the theory of "Just do whatever has the best consequences, including the consequences of the thinking that you do in order to compute this."

The original timeless decision theory manuscript distinguishes a class of "decision-determined problems", where the payoff depends on the agent's decision, but not the algorithm that the agent uses to arrive at that decision: Omega isn't allowed to punish you for not making decisions according to the algorithm "Choose the option that comes first alphabetically." This seems like a useful class of problems to be able to focus on? Having to take into account the side-effects of using a particular categorization, seems like a form of being punished for using a particular algorithm.

I concede that, ultimately, the simple "Cartesian" theory that disregards the consequences of thinking can't be the true, complete theory of intelligence, because ultimately, the map is part of the territory. I think the embedded agency people are working on this?—I'm afraid I'm not up-to-date on the details. But when I object to people making appeals to consequences, the thing I'm objecting to is never people trying to do a sophisticated embedded-agency thing; I'm objecting to people trying to get away with choosing to be biased.

you think that most people are too irrational to correctly weigh these kinds of considerations against each on a case by case basis, and there's no way to train them to be more rational about this. Is that true

Actually, yes.

and if so why do you think that?

Long story. How about some game theory instead?

Consider some agents cooperating in a shared epistemic project—drawing a map, or defining a language, or programming an AI—some system that will perform better if it does a better job of corresponding with (some relevant aspects of) reality. Every agent has the opportunity to make the shared map less accurate in exchange for some selfish consequence. But if all of the agents do that, then the shared map will be full of lies. Appeals to consequences tend to diverge (because everyone has her own idiosyncratic favored consequence); "just make the map be accurate" is a natural focal point (because the truth is generally useful to everyone).
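As a toy illustration of why divergent appeals to consequences corrupt a shared map (purely hypothetical numbers; `shared_map_error` is an invented name), consider agents who jointly estimate a quantity by averaging their reports:

```python
import random

random.seed(0)

# Toy model: ten agents jointly estimate a shared quantity by averaging
# their reports. Each agent may distort her report by a selfish bias
# (her idiosyncratic "favored consequence").
def shared_map_error(biases, truth=10.0, noise=0.1, trials=1000):
    total_err = 0.0
    for _ in range(trials):
        reports = [truth + random.gauss(0, noise) + b for b in biases]
        total_err += abs(sum(reports) / len(reports) - truth)
    return total_err / trials

honest = shared_map_error([0.0] * 10)  # everyone reports the truth
biased = shared_map_error([1.0, -2.0, 0.5, 1.5, -1.0,
                           2.0, -0.5, 1.0, -1.5, 2.0])  # divergent agendas
print(honest < biased)  # True: the honest map is more accurate
```

Even though the idiosyncratic biases here partially cancel, the shared estimate still degrades; "just report the truth" is the one policy every agent can verify the others are following.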

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-26T06:59:39.549Z · score: 13 (3 votes) · LW · GW

You don't need to denounce someone that's demonstrably wrong, you just point out how they're wrong.

I think you're misunderstanding the implications of the heresy dynamic. It's true that people who want to maintain their good standing within the dominant ideology—in the Cathedral, we could say, since you seem to be a Moldbug fan—can't honestly engage with the heretic's claims. That doesn't imply that the heretic's claims are correct—they just have to be not so trivially wrong as to permit a demonstration of their wrongness that doesn't require the work of intellectually honest engagement (which the pious cannot permit themselves).

If a Bad Man says that 2+2=5, then good people can demonstrate the arithmetic error without pounding the table and denouncing him as a Bad Man. If a Bad Man claims that P equals NP, then good people who want the Bad Man gone but wouldn't be caught dead actually checking the proof, are reduced to pounding the table—but that doesn't mean the proof is correct! Reversed stupidity is not intelligence.

What exactly do people think is the endgame of denunciation?

Evading punishment of non-punishers. Good people who don't shun Bad Men might fall under suspicion of being Bad Men themselves.

I had hoped that people would be more rational and less pissed off, but you win some you lose some.

I know the feeling.

The evolutionary need for sexual dimorphism will disappear, evolution will take care of the rest.

Um. You may be underestimating the timescale on which evolution works? (The evolution of sexual dimorphism is even slower!)

I specifically said I offered no solution in that post.

That's a start, but if you're interested in writing advice, I would recommend trying a lot harder to signal that you really understand the is/ought distinction. (You're doing badly enough at this that I'm not convinced you do.) You've been pointing to some real patterns, but when your top-line summary is "Women's agency [...] is contrary to a society's progress and stability" ... that's not going to play in Berkeley. (And for all of their/our other failings, a lot of people in Berkeley are very smart and have read the same blogs as you, and more—even if they're strategic about when to show it.)

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-26T01:59:17.617Z · score: 6 (3 votes) · LW · GW

Thanks. (e) is very important: that's a large part of why my special-purpose pen name ended up being a mere "differential visibility" pseudonym (for a threat-model where the first page of my real-name Google results matters because of casual searches by future employers) rather than an Actually Secret pseudonym. (There are other threat models that demand more Actual Secrecy, but I'm not defending against those because I'm not that much of a worthless coward.)

I currently don't have a problem with (d), but I agree that it's probably true in general (and I'm just lucky to have such awesome friends).

I think people underestimate the extent to which (c) is a contingent self-fulfilling prophecy rather than a fixed fact of nature. You can read the implied social attack in (a) as an attempt to push against the current equilibrium.

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-25T23:16:56.621Z · score: 20 (7 votes) · LW · GW

We really need better "vocabulary tech" to talk about the natural category that includes both actually-realized physical violence, and credible threats thereof. When a man with a gun says "Your money or your life" and you say "Take my money", you may not want to call that "violence", but something has happened that made you hand over your wallet, and we may want to consider it the same kind of something if the man actually shoots. Reactionary thinkers who praise "stability" and radicals who decry "structural violence" may actually be trying to point at the same thing. We would say counterfactual rather than structural—the balanced arrangement of credible threats and Schelling points by which "how things are" is held in place.

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-24T05:22:02.984Z · score: 17 (7 votes) · LW · GW

Invite-only private email list that publishes highlights to a pseudonymous blog with no comment section.

You might ask, why aren't people already doing this? I think the answer is going to be some weighted combination of (a) they're worthless cowards, and (b) the set of things you can't say, and the distortionary effect of recursive lies, just aren't that large, such that they don't perceive the need to bother.

There are reasons I might be biased to put too much weight on (a). Sorry.

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-23T22:47:59.595Z · score: 26 (11 votes) · LW · GW

But the fact that you feel compelled to say that says something worrying about the state of our Society, right? It should really just go without saying—to anyone who actually thinks about the matter for a minute—that when someone on a no-barriers-to-entry free-to-sign-up internet forum asks for examples of unpopular opinions, then someone is going to post a terrible opinion that most other commenters will strongly disagree with (because it's terrible). If, empirically, it doesn't go without saying, that would seem to suggest that people feel the need to make the forum as a whole accountable to mob punishment mechanisms that are less discerning than anyone who actually thinks about the matter for a minute. But I continue to worry that that level of ambient social pressure is really bad for our collective epistemology, even if the particular opinion that we feel obligated to condemn in some particular case is, in fact, worthy of being condemned.

Like, without defending the text of the grandparent (Anderson pretty obviously has a normative agenda to push; my earlier comment was probably too charitable), the same sorts of general skills of thinking that we need to solve AI alignment, should also be able to cope with empirical hypotheses of the form, "These-and-such psychological sex differences in humans (with effect size Cohen's d equalling blah) have such-and-these sociological consequences."

Probably that discussion shouldn't take place on Less Wrong proper (too far off-topic), but if there is to be such a thing as an art of rationality, the smart serious version of the discussion—the version that refutes misogynistically-motivated idiocy while simultaneously explaining whatever relevant structure-in-the-world some misogynistic idiots are nevertheless capable of perceiving—needs to happen somewhere. If none of the smart serious people can do it because we're terrified that the media (or Twitter, or /r/SneerClub) can't tell the difference between us and Stuart Anderson, then we're dead. I just don't think that level of cowardice is compatible with the amount of intellectual flexibility that we need to save the world.

The immortal Scott Alexander wrote, "you can't have a mind that questions the stars but never thinks to question the Bible." Similarly, I don't think you can have a mind that designs a recursively self-improving aligned superintelligence (!!) that has to rely on no-platforming tactics rather than calmly, carefully, objectively describing in detail the specific ways in which the speaker's cognitive algorithms are failing to maximize the probability they assign to the actual outcome.

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-23T18:06:40.324Z · score: 4 (3 votes) · LW · GW

Okay, that makes sense.

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-23T18:00:36.863Z · score: 9 (5 votes) · LW · GW

Not that clearly? I agree that Anderson is using vague, morally-charged (what constitutes "progress"?), and hyperbolic ("everything in society"!?) language, but the comment still has empirical content: if someone told you about an alien civilization in which "blerples do the bulk of the work when it comes to maintaining society, but splapbops' agency causes there to be nothing to pay the blerples with", that testimony would probably change your implied probability distribution over anticipated observations of the civilization (even if you didn't know what blerples and splapbops were).

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-23T17:37:56.079Z · score: 9 (5 votes) · LW · GW

the costs of replacing it with a less-bad example seem fairly minimal.

Can you elaborate? I think the costs (in the form of damaging the integrity of the inquiry) are quite high. If you're going to crowdsource a list of unpopular beliefs, and carry out that job honestly, then the list is inevitably going to contain a lot of morally objectionable ideas. After all, being morally objectionable is a good reason for an idea to be unpopular! (I suppose the holders of such ideas might argue that the causal relationship between unpopularity and perception-of-immorality runs in the other direction, but we don't care what they think.)

Now, I also enjoy our apolitical site culture, which I think reflects an effective separation of concerns: here, we talk about Bayesian epistemology. When we want to apply our epistemology skills to contentious object-level topics that are likely to generate "more heat than light", we take it to someone else's website. (I recommend /r/TheMotte.) That separation is a good reason to explicitly ban specific topics or hypotheses as being outside of the site's charter. But if we do that, then we can't compile a list of unpopular beliefs without lying about the results. Blatant censorship is the best kind!

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-23T16:58:38.493Z · score: 29 (13 votes) · LW · GW

What did you think was going to happen when you asked people for unpopular opinions?!

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-23T15:43:59.939Z · score: 12 (6 votes) · LW · GW

Um, you might also want to ask people to PM you, in case some people have contrarian beliefs that they don't want to report in the public comment section?

Comment by zack_m_davis on What are some unpopular (non-normative) opinions that you hold? · 2019-10-23T15:28:28.254Z · score: 14 (4 votes) · LW · GW

It's sad because the OP is specifically asking for contrarian opinions! In that specific context, the grandparent is an on-topic contribution (even if it would be strong-downvote-worthy had it appeared in most other possible contexts on this website).

Comment by zack_m_davis on Maybe Lying Doesn't Exist · 2019-10-19T02:13:44.180Z · score: 14 (4 votes) · LW · GW

so whether we decide to call something a "danger" seemingly must depend entirely or mostly on the consequences of doing so

I'm not claiming that the theory can tell us exactly how dangerous something has to be before we call it a "danger." (Nor how many grains of sand make a "heap".) This, indeed, seems necessarily subjective.

I'm claiming that whether we call something a "danger" should not take into account considerations like, "We shouldn't consider this a 'danger', because if we did, then people would feel afraid, and their fear is suffering to be minimized according to the global utilitarian calculus."

That kind of utilitarianism might (or might not) be a good reason to not tell people about the danger, but it's not a good reason to change the definition of "danger" itself. Why? Because from the perspective of "language as AI design", that would be wireheading. You can't actually make people safer in reality by destroying the language we would use to represent danger.

Is that clear, or should I write a full post about this?

Comment by zack_m_davis on No Really, Why Aren't Rationalists Winning? · 2019-10-17T06:13:13.995Z · score: 6 (3 votes) · LW · GW

a whole lot of people seem to have some reason for pretending not to be able to tell ...

Right—they call it the "principle of charity."

Comment by zack_m_davis on Maybe Lying Doesn't Exist · 2019-10-15T01:53:08.046Z · score: 2 (1 votes) · LW · GW

I agree that the complete theory needs to take coordination problems into account, but I think it's a much smaller effect than you seem to? See "Schelling Categories, and Simple Membership Tests" for what I think this looks like. (I also analyzed a topical example on my secret ("secret") blog.)

Comment by zack_m_davis on Maybe Lying Doesn't Exist · 2019-10-15T01:49:19.569Z · score: 5 (3 votes) · LW · GW

Oh, that's a good point! Maybe read that paragraph as a vote for "relatively less word-choice-policing on the current margin in my memetic vicinity"? (The slip into the first person ("I want to avoid tone-policing," not "tone-policing is objectively irrational") was intentional.)

Comment by zack_m_davis on For progress to be by accumulation and not by random walk, read great books · 2019-10-13T16:26:54.840Z · score: 3 (2 votes) · LW · GW

I'm not sure the OP pays that much attention to Less Wrong these days? The mods could do it if they wanted (or write a broken-link checker??).

Comment by zack_m_davis on For progress to be by accumulation and not by random walk, read great books · 2019-10-12T04:45:23.694Z · score: 6 (3 votes) · LW · GW


Comment by zack_m_davis on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2019-10-07T17:18:27.886Z · score: 5 (3 votes) · LW · GW

So now I'm left wondering, if not here, then where? Where could rational-adjacent people sanely interact with feminists and sociologists and others in 'challenging' fields

Perhaps /r/TheMotte?! (Backstory.)

Comment by zack_m_davis on How do I view my replies? · 2019-10-05T17:06:22.731Z · score: 2 (1 votes) · LW · GW

On the "Edit User Settings" page (in the menu when you click on your username in the top right), there are "Notifications for Comments on My Posts" and "Notifications For Replies to My Comments" checkboxes—one would hope that, as long as those are checked, the bell should give you notifications when people reply?

Comment by zack_m_davis on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-10-04T05:19:45.357Z · score: 8 (4 votes) · LW · GW

in part so that he could write about politics, which we've traditionally avoided

I want to state that I agree with this model.

(I sometimes think that I might be well-positioned to fill the market niche that Scott occupied in 2014, but no longer can due to his being extortable ("As I became more careful in my own writings [...]") in a way that I have been trained not to be. But I would need to learn to write faster.)

Comment by zack_m_davis on Eli's shortform feed · 2019-09-28T16:36:06.449Z · score: 13 (4 votes) · LW · GW

People rarely change their mind when they feel like you have trapped them in some inconsistency [...] In general (but not universally) it is more productive to adopt a collaborative attitude of sincerely trying to help a person articulate, clarify, and substantiate [bolding mine—ZMD]

"People" in general rarely change their mind when they feel like you have trapped them in some inconsistency, but people using the double-crux method in the first place are going to be aspiring rationalists, right? Trapping someone in an inconsistency (if it's a real inconsistency and not a false perception of one) is collaborative: the thing they were thinking was flawed, and you helped them see the flaw! That's a good thing! (As it is written of the fifth virtue, "Do not believe you do others a favor if you accept their arguments; the favor is to you.")

Obviously, I agree that people should try to understand their interlocutors. (If you performatively try to find fault in something you don't understand, then apparent "faults" you find are likely to be your own misunderstandings rather than actual faults.) But if someone spots an actual inconsistency in my ideas, I want them to tell me right away. Performing the behavior of trying to substantiate something that cannot, in fact, be substantiated (because it contains an inconsistency) is a waste of everyone's time!

In general (but not universally) it is more productive to adopt a collaborative attitude

Can you say more about what you think the exceptions to the general-but-not-universal rule are? (Um, specifically.)

Comment by zack_m_davis on Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists · 2019-09-28T16:01:20.391Z · score: 3 (2 votes) · LW · GW

This sounds fun, but unfortunately, I don't think I have time to commit to anything!—I have a lot more (prose) writing to do today and tomorrow.

(I also try to avoid JavaScript, to the extent that even my new browser game, U.S.S. Uncommon Priors Require Origin Disputes (source, demo) is mostly written in Rust (compiled to WebAssembly), with just enough JavaScript glue to listen to keystrokes and paint the canvas.)

Comment by zack_m_davis on Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists · 2019-09-25T15:33:44.638Z · score: 7 (5 votes) · LW · GW

You have three green slots, three gray slots, and three blue slots. You put three counters each on each of the green and gray slots, and one counter each on each of the blue slots. The frequencies of counters per slot is [3, 3, 3, 3, 3, 3, 1, 1, 1]. The total number of counters you put down is 3*6 + 3 = 18 + 3 = 21. To turn the frequencies into a probability distribution, you divide everything by 21, to get [1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/21, 1/21, 1/21]. Then the entropy is −(6 · (1/7)log₂(1/7) + 3 · (1/21)log₂(1/21)) = (6/7)log₂7 + (1/7)log₂21, which is approximately 3.03 bits. Right? (Thanks for checking—it would be really embarrassing if I got this wrong. I might edit the post later to include more steps.)
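The same computation takes only a few lines to verify (a sketch; the variable names are mine):

```python
from math import log2

# Counters per slot: three each on the six green/gray slots,
# one each on the three blue slots.
counts = [3, 3, 3, 3, 3, 3, 1, 1, 1]
total = sum(counts)                  # 21
probs = [c / total for c in counts]  # six 1/7's, three 1/21's

entropy = -sum(p * log2(p) for p in probs)
print(round(entropy, 2))  # 3.03
```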

Comment by zack_m_davis on Reframing Impact · 2019-09-22T03:35:22.125Z · score: 4 (2 votes) · LW · GW

(I was briefly confused by the "Think about what Frank brings us for each distance" "slide" because it doesn't include the pinkest marble: I saw the second-pinkest marble (on the largest dotted circle) thinking that it was meant to be the pinkest (because it's rightmost on the "Search radius" legend) and was like, "Wait, why is the pinkest marble closer than the terrorist in this slide when it was farther away in the previous slide?")

Comment by zack_m_davis on G Gordon Worley III's Shortform · 2019-09-12T15:06:30.513Z · score: 13 (4 votes) · LW · GW

as clearly noted in my original objection

Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters, and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)

there is absolutely a time and a place for this

Exactly—and this is the place for people to report on their models of reality, which includes their models of other people's minds as a special case.

Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness! But in my rationalist culture, while standing in the Citadel of Truth, we're not allowed to care whether a map is marginalizing or dismissive; we're only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the "rationalist" brand name, then my culture is at war with them.)

My whole objection is that Gordon wasn't bothering to

Great! Thank you for criticizing people who don't justify their beliefs with adequate evidence and arguments. That's really useful for everyone reading!

(I believe as a cover for not being able to).

In context, it seems worth noting that this is a claim about Gordon's mind, and your only evidence for it is absence-of-evidence (you think that if he had more justification, he would be better at showing it). I have no problem with this (as we know, absence of evidence is evidence of absence), but it seems in tension with some of your other claims?

Comment by zack_m_davis on G Gordon Worley III's Shortform · 2019-09-12T02:13:09.913Z · score: 13 (8 votes) · LW · GW

leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move."

Nesov scooped me on the obvious objection, but as long as we're creating common knowledge, can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what's going on in those other people's heads when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).

the existence of bisexuals

As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of "Don't decide that you know other people's minds better than they do" performs better or worse than other inference procedures.

  1. J. Michael Bailey, "What Is Sexual Orientation and Do Women Have One?", section titled "Sexual Arousal Patterns vs. the Kinsey Scale: The Case of Male Bisexuality" ↩︎

Comment by zack_m_davis on Matthew Barnett's Shortform · 2019-09-08T20:16:19.735Z · score: 10 (5 votes) · LW · GW

People who don't understand the concept of "This person may have changed their mind in the intervening years", aren't worth impressing. I can imagine scenarios where your economic and social circumstances are so precarious that the incentives leave you with no choice but to let your speech and your thought be ruled by unthinking mob social-punishment mechanisms. But you should at least check whether you actually live in that world before surrendering.

Comment by zack_m_davis on gilch's Shortform · 2019-08-26T03:51:35.462Z · score: 3 (2 votes) · LW · GW

What are the notable differences between Hissp and Hy? (Hyperlink to "Hy" in previous sentence is just for convenience and the benefit of readers; as you know, we're both former contributors.)

Comment by zack_m_davis on Schelling Categories, and Simple Membership Tests · 2019-08-26T02:45:59.937Z · score: 6 (3 votes) · LW · GW

Okay, Said, I used LaTeX for the numerical subscripts this time. I hope you're happy! (No, really—I actually mean it.)

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-21T02:16:56.536Z · score: 4 (2 votes) · LW · GW

P.S. (to sister comment), I'm going to be traveling through the 25th and probably won't check this website, in case that information helps us break out of this loop of saying "Let's stop the implicitly-emotionally-charged back-and-forth in the comments here," and then continuing to do so anyway. (I didn't get anything done at my dayjob today, which is an indicator of me also suffering from the "Highly tense conversations are super stressful and expensive" problem.)

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-20T06:25:50.104Z · score: 4 (2 votes) · LW · GW

but I'm guessing your wording was just convenient shorthand rather than a disagreement with the above


As I said, even in the Judge example, Carol has to understand Alice's claims.

Yes, trivially; Jessica and I both agree with this.

Jessica's Judge example still feels like a nonsequitor [sic] that doesn't have much to do with what I was talking about.

Indeed, it may not have been relevant to the specific thing you were trying to say. However, be that as it may, I claim that the judge example is relevant to one of the broader topics of conversation: specifically, "what norms and/or principles should Less Wrong aspire to." The Less Wrong karma and curation systems are functionally a kind of Judge, insofar as ideas that get upvoted and curated "win" (get more attention, praise, general acceptance in the rationalist community, &c.).

If Alice's tendency to lie, obfuscate, rationalize, play dumb, report dishonestly, filter evidence, &c. isn't an immutable feature of her character, but depends on what the Judge's behavior incentivizes (at least to some degree), then it really matters what kind of Judge you have.

We want Less Wrong specifically, and the rationalist community more generally, to be a place where clarity wins, guided by the beauty of our weapons. If we don't have that—if we live in a world where lies and bullshit outcompete truth, not just in the broader Society, but even in the rationalist community—then we're dead. (Because you can't solve AI alignment with lies and bullshit.)

As a moderator and high-karma user of Less Wrong, you, Raymond Arnold, are a Judge. Your strong-upvote is worth 10 karma; you have the power to Curate a post; you have the power to tell Alice to shape up or ship out. You are the incentives. This is a huge and important responsibility, your Honor—one that has the potential to influence 10¹⁴ lives per second. It's true that truthtelling is only useful insofar as it generates understanding in other people. But that observation, in itself, doesn't tell you how to exercise your huge and important responsibility.

If Jessica says, "Proponents of short AI timelines are lying, but not necessarily consciously lying; I mostly mean covert deception hidden from conscious attention," and Alice says, "Huh? I can't understand you if you're going to use words in nonstandard ways," then you have choices to make, and your choices have causal effects.

If you downvote Jessica because you think she's drawing the category boundaries of "lying" too widely in a way that makes the word less useful, that has causal effects: fewer people will read Jessica's post; maybe Jessica will decide to change her rhetorical strategy, or maybe she'll quit the site in disgust.

If you downvote Alice for pretending to be stupid when Jessica explicitly explained what she meant by the word "lying" in this context, then that has causal effects, too: maybe Alice will try harder to understand what Jessica meant, or maybe Alice will quit the site in disgust.

I can't tell you how to wield your power, your Honor. (I mean, I can, but no one listens to me, because I don't have power.) But I want you to notice that you have it.

If they're so motivatedly-unreasonable that they won't listen at all, the problem may be so hard that maybe you should go to some other place where more reasonable people live and try there instead. (Or, if you're Eliezer in 2009, maybe you recurse a bit and write the Sequences for 2 years so that you gain access to more reasonable people).

I agree that "retreat" and "exert an extraordinary level of interpretive labor" are two possible strategies for dealing with unreasonable people. (Personally, I'm a huge fan of the "exert arbitrarily large amounts of interpretive labor" strategy, even though Ben has (correctly) observed that it leaves me incredibly vulnerable to certain forms of trolling.)

The question is, are there any other strategies?

The reason "retreat" isn't sufficient, is because sometimes you might be competing with unreasonable people for resources (e.g., money, land, status, control of the "rationalist" and Less Wrong brand names, &c.). Is there some way to make the unreasonable people have to retreat, rather than the reasonable people?

I don't have an answer to this. But it seems like an important thing to develop vocabulary for thinking about, even if that means playing in hard mode.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-18T22:54:58.077Z · score: 25 (6 votes) · LW · GW

(You said you didn't want more back-and-forth in the comments, but this is just an attempt to answer your taboo request, not prompt more discussion; no reply is expected.)

We say that clarity wins when contributing to accurate shared models—communicating "clearly"—is a dominant strategy: agents that tell the truth, the whole truth, and nothing but the truth do better (earn more money, leave more descendants, create more paperclips, &c.) than agents that lie, obfuscate, rationalize, play dumb, report dishonestly, filter evidence, &c.

Creating an environment where "clarity wins" (in this sense) looks like a very hard problem, but it's not hard to see that some things don't work. Jessica's example of a judged debate where points are only awarded for arguments that the opponent acknowledges is an environment where agents who want to win the debate have an incentive to play dumb—or be dumb—never acknowledging when their opponent made a good argument (even if the opponent in fact made a good argument). In this scenario, being clear (or at least, clear to the "reasonable person", if not your debate opponent) doesn't help you win.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T20:02:48.215Z · score: 4 (2 votes) · LW · GW

The behavior I think I endorse most is trying to avoid continuing the conversation in a comment thread at all

OK. Looking forward to future posts.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T19:43:34.342Z · score: 12 (6 votes) · LW · GW

The reason it's still tempting to use "deception" is because I'm focusing on the effects on listeners rather than the self-deceived speaker. If Winston says, "Oceania has always been at war at Eastasia" and I believe him, there's a sense in which we want to say that I "have been deceived" (even if it's not really Winston's fault, thus the passive voice).

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T18:27:04.946Z · score: 14 (4 votes) · LW · GW

It might also be part of the problem that people are being motivated or deceptive. [...] the evidence for the latter is AFAICT more like "base rates".

When we talked 28 June, it definitely seemed to me like you believed in the existence of self-censorship due to social pressure. Are you not counting that as motivated or deceptive, or have I misunderstood you very badly?

Note on the word "deceptive": I need some word to talk about the concept of "saying something that has the causal effect of listeners making less accurate predictions about reality, when the speaker possessed the knowledge to not do so, and attempts to correct the error will be resisted." (The part about resistance to correction is important for distinguishing "deception"-in-this-sense from simple mistakes: if I erroneously claim that 57 is prime and someone points out that it's not, I'll immediately say, "Oops, you're right," rather than digging my heels in.)

I'm sympathetic to the criticism that lying isn't the right word for this; so far my best alternatives are "deceptive" and "misleading." If someone thinks those are still too inappropriately judgey-blamey, I'm eager to hear alternatives, or to use a neologism for the purposes of a particular conversation, but ultimately, I need a word for the thing.

If an Outer Party member in the world of George Orwell's 1984 says, "Oceania has always been at war with Eastasia," even though they clearly remember events from last week, when Oceania was at war with Eurasia instead, I don't want to call that deep model divergence, coming from a different ontology, or weighing complicated tradeoffs between paradigms. Or at least, there's more to the story than that. The divergence between this person's deep model and mine isn't just a random accident such that I should humbly accept that the Outside View says they're as likely to be right as me. Uncommon priors require origin disputes, but in this case, I have a pretty strong candidate for an origin dispute that has something to do with the Outer Party member being terrified of the Ministry of Love. And I think that what goes for subjects of a totalitarian state who fear being tortured and murdered, also goes in a much subtler form for upper-middle class people in the Bay Area who fear not getting invited to parties.

Obviously, this isn't license to indiscriminately say, "You're just saying that because you're afraid of not getting invited to parties!" to any idea you dislike. (After all, I, too, prefer to get invited to parties.) But it is reason to be interested in modeling this class of distortion on people's beliefs.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T03:00:17.476Z · score: 2 (1 votes) · LW · GW

I think Benquo has often mixed the third thing in with the first thing (and sort of skipped over the second thing?), which I consider actively harmful to the epistemic health of the discourse.

Question: do you mean this as a strictly denotative claim (Benquo is, as a matter of objective fact, mixing the things, which is, as a matter of fact, actively harmful to the discourse, with no blame whatsoever implied), or are you accusing Benquo of wrongdoing?

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T02:58:34.475Z · score: 16 (8 votes) · LW · GW

That makes sense. I, personally, am interested in developing new terminology for talking about not-necessarily-conscious-and-yet-systematically-deceptive cognitive algorithms, where Ben and Jessica think that "lie"/"fraud"/&c. are fine and correct.

Comment by zack_m_davis on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-14T22:56:48.025Z · score: 22 (15 votes) · LW · GW

I define clarity in terms of what gets understood, rather than what gets said.

Defining clarity in terms of what gets understood results in obfuscation winning automatically, by effectively giving veto power to motivated misunderstandings. (As Upton Sinclair put it, "It is difficult to get a man to understand something when his salary depends upon his not understanding it," or as Eliezer Yudkowsky put it more recently, "politically motivated incomprehension makes people dumber than cassette tape recorders.")

If we may be permitted to borrow some concepts from law (while being wary of unwanted transfer of punitive intuitions), we may want concepts of willful blindness, or clarity to the "reasonable person".

politics backpropagates into truthseeking, causes people to view truthseeking norms as a political weapon.

Imagine that this had already happened. How would you go about starting to fix it, other than by trying to describe the problem as clearly as possible (that is, "invent[ing] truthseeking-politics-on-the-fly")?

Comment by zack_m_davis on Could we solve this email mess if we all moved to paid emails? · 2019-08-13T03:59:24.877Z · score: 4 (2 votes) · LW · GW

3 karma in 21 votes (including a self-strong-upvote)

Actually, jeez, the great-grandparent doesn't deserve that self-strong-upvote; let me revise that to a no-self-vote.