Posts

Blatant lies are the best kind! 2019-07-03T20:45:56.948Z · score: 24 (14 votes)
Reason isn't magic 2019-06-18T04:04:58.390Z · score: 115 (39 votes)
Drowning children are rare 2019-05-28T19:27:12.548Z · score: 15 (46 votes)
A War of Ants and Grasshoppers 2019-05-22T05:57:37.236Z · score: 17 (5 votes)
Towards optimal play as Villager in a mixed game 2019-05-07T05:29:50.826Z · score: 40 (12 votes)
Hierarchy and wings 2019-05-06T18:39:43.607Z · score: 26 (11 votes)
Blame games 2019-05-06T02:38:12.868Z · score: 43 (9 votes)
Should Effective Altruism be at war with North Korea? 2019-05-05T01:50:15.218Z · score: 15 (13 votes)
Totalitarian ethical systems 2019-05-03T19:35:28.800Z · score: 36 (12 votes)
Authoritarian Empiricism 2019-05-03T19:34:18.549Z · score: 40 (13 votes)
Excerpts from a larger discussion about simulacra 2019-04-10T21:27:40.700Z · score: 43 (15 votes)
Blackmailers are privateers in the war on hypocrisy 2019-03-14T08:13:12.824Z · score: 24 (17 votes)
Moral differences in mediocristan 2018-09-26T20:39:25.017Z · score: 21 (8 votes)
Against the barbell strategy 2018-09-20T15:19:08.185Z · score: 21 (20 votes)
Interpretive Labor 2018-09-05T18:36:49.566Z · score: 28 (16 votes)
Zetetic explanation 2018-08-27T00:12:14.076Z · score: 79 (43 votes)
Model-building and scapegoating 2018-07-27T16:02:46.333Z · score: 23 (7 votes)
Culture, interpretive labor, and tidying one's room 2018-07-26T20:59:52.227Z · score: 29 (13 votes)
There is a war. 2018-05-24T06:44:36.197Z · score: 52 (24 votes)
Talents 2018-05-18T20:30:01.179Z · score: 47 (12 votes)
Oops Prize update 2018-04-20T09:10:00.873Z · score: 42 (9 votes)
Humans need places 2018-04-19T19:50:01.931Z · score: 113 (28 votes)
Kidneys, trade, sacredness, and space travel 2018-03-01T05:20:01.457Z · score: 51 (13 votes)
What strange and ancient things might we find beneath the ice? 2018-01-15T10:10:01.010Z · score: 32 (12 votes)
Explicit content 2017-12-02T00:00:00.946Z · score: 14 (8 votes)
Cash transfers are not necessarily wealth transfers 2017-12-01T10:10:01.038Z · score: 111 (43 votes)
Nightmare of the Perfectly Principled 2017-11-02T09:10:00.979Z · score: 32 (8 votes)
Poets are intelligence assets 2017-10-25T03:30:01.029Z · score: 26 (9 votes)
Seeding a productive culture: a working hypothesis 2017-10-18T09:10:00.882Z · score: 28 (9 votes)
Defense against discourse 2017-10-17T09:10:01.023Z · score: 64 (21 votes)
On the construction of beacons 2017-10-16T09:10:00.866Z · score: 58 (18 votes)
Sabbath hard and go home 2017-09-27T07:49:40.482Z · score: 87 (49 votes)
Why I am not a Quaker (even though it often seems as though I should be) 2017-09-26T07:00:28.116Z · score: 61 (31 votes)
Bad intent is a disposition, not a feeling 2017-05-01T01:28:58.345Z · score: 12 (15 votes)
Actors and scribes, words and deeds 2017-04-26T05:12:29.199Z · score: 17 (10 votes)
Effective altruism is self-recommending 2017-04-21T18:37:49.111Z · score: 71 (52 votes)
An OpenAI board seat is surprisingly expensive 2017-04-19T09:05:04.032Z · score: 5 (6 votes)
OpenAI makes humanity less safe 2017-04-03T19:07:51.773Z · score: 18 (20 votes)
Against responsibility 2017-03-31T21:12:12.718Z · score: 13 (12 votes)
Dominance, care, and social touch 2017-03-29T17:53:20.967Z · score: 3 (4 votes)
The D-Squared Digest One Minute MBA – Avoiding Projects Pursued By Morons 101 2017-03-19T18:48:55.856Z · score: 1 (2 votes)
Threat erosion 2017-03-15T23:32:30.000Z · score: 1 (2 votes)
Sufficiently sincere confirmation bias is indistinguishable from science 2017-03-15T13:19:05.357Z · score: 19 (19 votes)
Bindings and assurances 2017-03-13T17:06:53.672Z · score: 1 (2 votes)
Humble Charlie 2017-02-27T19:04:37.578Z · score: 2 (3 votes)
Against neglectedness considerations 2017-02-24T21:41:52.144Z · score: 1 (2 votes)
GiveWell and the problem of partial funding 2017-02-14T10:48:38.452Z · score: 2 (3 votes)
The humility argument for honesty 2017-02-05T17:26:41.469Z · score: 4 (5 votes)
Honesty and perjury 2017-01-17T08:08:54.873Z · score: 4 (5 votes)
[LINK] EA Has A Lying Problem 2017-01-11T22:31:01.597Z · score: 13 (13 votes)

Comments

Comment by Benquo on [deleted post] 2019-08-21T17:55:05.090Z
I don't feel like "the epistemics are failing" is the coarse-grained description I'd use, I think there's more details about which bits are going on and why (and which bits actually seem to be going quite excellently!), but I wanted to agree with feeling sad about this particular bit.

I expect it would be quite useful both here and more generally for you to expand on your model of this.

Comment by Benquo on [deleted post] 2019-08-21T16:43:14.202Z

I'd put you in a cluster with Lahwran and Said Achmiz on this, if that helps clarify the gestalt I'm trying to identify. By contrast, I'd say that the cluster Benito's pointing to - which I'd say mainly publicly includes me, Jessicata, Zvi Mowshowitz, and Romeo, though obviously also Michael Vassar - is organized around the idea that if you honestly apply loose gestalt thinking, it can very efficiently and accurately identify structures in the world - all of which can ultimately be reconciled - but that this kind of honesty necessarily involves reprogramming ourselves to not be such huge tools, and most people, well, haven't done that, so they end up having to pick sides between perception and logic.

Comment by Benquo on [deleted post] 2019-08-21T16:29:13.099Z

My impression is that for the majority of my audience, my efforts to show how everything adds up to normality are redundant, and mostly they're going by a vague feeling. Overall, it seems to me that there are people trying to do the kind of translational work Davis is asking for, but the community is not, as a whole, applying the sort of discernment that would demand such work. Whether or not this is the problem Davis is trying to identify, I'm worried enough about it that LessWrong has been getting less and less interesting to me as a community to engage with. You're probably by far the person most worth reaching who isn't already in my "faction," such as it is, and Davis is one of the few others who seem to be trying to make sense of things at all.

Comment by Benquo on [deleted post] 2019-08-21T16:25:55.150Z

The rhetorical structure of this post seems to imply a substantially different model of who your audience is, and what sort of appeals will work, than the model explicitly described. Since the question of which arguments will work on your intended audience is actually the whole point of your post, I think you should do an internal double crux on this issue, the results of which I expect will change your entire strategic sense of what's going on in the Rationalist community, the value of schisms, etc. I'm happy to spend plenty of time in live conversation on this if you're interested and think seeking that sort of mutual understanding might be worth your time.

Explicitly, it seems like you're saying that the way in which woo ideas are being brought into the community discourse is objectionable, because people are simply adopting new fragmentary beliefs that on the surface blatantly contradict many of the other things they believe, without doing the careful work of translation, synthesis, and rigorous grounding of concepts. You're arguing that we should be alarmed by this trend, and engage in substantial retrenching. This is fundamentally an appeal to rational monism.

But rhetorically, you begin by offering a simple list of the names of things admitted into the conversation, and implicitly ask the reader to agree that this is objectionable before talking about method at all (and you don't go into enough detail on the type of skepticism and rigor you're suggesting for me to have a sense of whether I even agree.) The implied model here is that for most of your readers appeals to reason are futile, and you can at best get them to reject some ideas as foreign.

I think that the second model - the one you used to decide how to write the post - is a better representation of the current state of the Rationalist community than the first one. I don't see much value in preserving or restoring the "integrity" of such a community (constituting in practice the cluster of people vaguely attracted to the Sequences, HPMoR, EA, the related in-person Bay Area community, CFAR, and the MIRI narrative). I see a lot of value in a version of this post clearly targeted to the remnant better-described by the first model. It would be nice if we could communicate about this in a way that didn't hurt the feelings of the MOPs too badly, since they never wanted to hurt anyone.

Comment by benquo on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-08-20T21:46:57.600Z · score: 4 (3 votes) · LW · GW
it's not obvious to me that humans are friendly if you scale them up

It seems like any AI built by multiple humans coordinating is going to reflect the optimization target of the coordination process building it, so we had better figure out how to make this so.

Comment by benquo on Power Buys You Distance From The Crime · 2019-08-18T21:04:56.949Z · score: 9 (2 votes) · LW · GW

How is that relevant to the OP?

Comment by Benquo on [deleted post] 2019-08-14T22:21:21.443Z

I think my basic worry is that if there's not an active culture-setting drive against concern-trolling, then participating on this site will mean constant social pressure against this sort of thing. That means that if I try to do things like empathize with likely readers, take into account feedback, etc., I'll either gradually become less clear in the direction this kind of concern trolling wants, or oppositionally pick fights to counteract that, or stop paying attention to LessWrong, or put up crude mental defenses that make me a little more obtuse in the direction of Said. Or some combination of those. I don't think any of those are great options.

No one here since Eliezer seems to have had both the social power and the willingness to impose new - not quite standards, but social incentive gradients. The mod team has the power, I think.

Thanks for clarifying that you're firmly in favor of at least tolerating this kind of speech. That is somewhat reassuring. But the culture is also determined by which things the mods are willing to ban for being wrong for the culture, and the implicit, connotative messages in the way you talk about conflicts as they come up. The generator of this kind of behavior is what I'm trying to have an argument with, as it seems to me to be basically embracing the drift towards pressure against the kind of clarity-creation that creates discomfort for people connected to conventional power and money. I recognize that asking for an active push in the other direction is a difficult request, but LessWrong's mission is a difficult mission!

Comment by Benquo on [deleted post] 2019-08-14T22:11:44.583Z

You got it right, 1 was the suggested change I was most disappointed by, as the weakening of rhetorical force also took away a substantive claim that I actually meant to be making: that GiveWell wasn't actually doing utilitarian-consequentialist reasoning about opportunity cost, but was instead displaying a sort of stereotyped accumulation behavior. (I began Effective Altruism is Self-Recommending with a cute story about a toddler to try to gesture at a similar "this is stereotyped behavior, not necessarily a clever conscious scheme," but made other choices that interfered with that.)

4 turned out to be important too, since (as a quote and link I later added make clear) "unfairness" literally was a stated motivation for GiveWell - but Zack didn't know that at the time, and the draft didn't make that clear, so it was reasonable to suggest the change.

The other changes basically seemed innocuous.

Comment by benquo on Power Buys You Distance From The Crime · 2019-08-14T17:08:52.148Z · score: 6 (3 votes) · LW · GW

Who, specifically, is the enemy here, and what, specifically, is the evil thing they want?

It seems to me as though you’re describing motives as evil which I’d consider pretty relatable, so as far as I can tell, you’re calling me an enemy with evil motives. Are people like me (and Elizabeth’s cousin, and Elizabeth herself, both of whom are featured in examples) a special exception whom it’s nonsuspect to call evil, or is there some other reason why this is less suspect than the OP?

Comment by benquo on Power Buys You Distance From The Crime · 2019-08-12T06:42:48.464Z · score: 19 (8 votes) · LW · GW

If your concern is that this is evidence that the OP is wrong (since it has conflict-theoretic components, which are mindkillers), it seems important to establish that there are important false object-level claims, not just things that make such mistakes likely. If you can't do that, maybe change your mind about how much conflict theory introduces mistakes?

If you're just arguing that laying out such models is likely to have bad consequences for readers, this is an important risk to track, but it's also changing the subject from the question of whether the OP's models do a good job explaining the data.

Comment by benquo on Benito's Shortform Feed · 2019-08-07T17:33:45.506Z · score: 7 (3 votes) · LW · GW

I'm much more interested in finding out what your model is after having tried to take those considerations into account, than I am in a point-by-point response.

Comment by benquo on Power Buys You Distance From The Crime · 2019-08-07T17:27:01.315Z · score: 8 (4 votes) · LW · GW
  1. Talk is cheap, especially when claiming not to hold opinions widely considered blameworthy.
  2. Buchanan's academic career (and therefore ability to get our attention) can easily depend on racists' appetite for convenient arguments regardless of his personal preferences.
Comment by benquo on Power Buys You Distance From The Crime · 2019-08-07T17:23:04.963Z · score: 7 (4 votes) · LW · GW

This is an excellent analytical account of the underlying dynamics. It also VERY strongly resembles the series of blame-deflections described in Part II Chapter VII of Atlas Shrugged (the train-in-the-tunnel part), where this sort of information suppression ultimately backfires on the nominal beneficiary.

Comment by benquo on Power Buys You Distance From The Crime · 2019-08-05T02:09:27.434Z · score: 4 (2 votes) · LW · GW

I don't quite draw the line at denotative vs enactive speech - command languages which are not themselves contested would fit into neither "conflict theory" nor "mistake theory."

"War is the continuation of politics by other means" is a very different statement than its converse, that politics is a kind of war. Clausewitz is talking about states with specific, coherent policy goals, achieving those goals through military force, in a context where there's comparatively little pretext of a shared discourse. This is very different from the kind of situation described in Rao where a war is being fought in the domain of ostensibly "civilian" signal processing.

Comment by benquo on Power Buys You Distance From The Crime · 2019-08-04T17:26:01.929Z · score: 4 (2 votes) · LW · GW

Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and is determining what consequences should happen to different agents, e.g. in some cases “who we should be angry at” if that’s the best available implementation.

"Conflict theory" is specifically about the meaning of speech acts. This not the general question of conflicting interests. The question of conflict vs mistake theory is fundamentally, what are we doing when we talk? Are we fighting over the exact location of a contested border, or trying to refine our compression of information to better empower us to reason about things we care about?

Comment by benquo on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-04T17:03:11.469Z · score: 2 (1 votes) · LW · GW

For me, a conflict theorist is someone who thinks the main driver of disagreement is self-interest rather than honest mistakes.

I don't see how to reconcile this with:

Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.

Comment by benquo on Making Exceptions to General Rules · 2019-07-23T05:27:27.921Z · score: 21 (5 votes) · LW · GW

I think this schema could benefit from a distinction between rules for internal and external consumption. For external consumption there's some benefit to having (implied) policies for exceptions that are responsive to likely costs, both the internal costs of acting according to the rule in an emergency, and the external cost of having one's expectations violated.

But for internal consumption, it makes more sense to, as Said Achmiz points out, just change the rule to a better one that gets the right answer in this case (and all the prior ones). I think people are confused by this in large part because they learned, from authoritarian systems, to rule themselves as though they were setting rules for external consumption or accountability, instead of reasoning about and doing what they want.

This leads to a weird tension where the same person is sternly setting rules for themselves (and threatening to shame themselves for rule violations), and trying to wiggle out of those same rules as though they were demands by a malevolent authority.

Comment by benquo on Dialogue on Appeals to Consequences · 2019-07-22T01:02:13.897Z · score: 4 (2 votes) · LW · GW

This is maybe half or more of what Robin Hanson wrote about back when it was still all on overcomingbias.com

Comment by benquo on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T22:15:30.892Z · score: 2 (1 votes) · LW · GW

Link seems broken

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-21T20:57:14.381Z · score: 2 (1 votes) · LW · GW

The "different masters" thing is a special case of the problem of accepting feedback (i.e. learning from approval/disapproval or reward/punishment) from approval functions in conflict with each other or your goals. Multiple humans trying to do the same or compatible things with you aren't "different masters" in this sense, since the same logical-decision-theoretic perspective (with some noise) is instantiated on both.

But also, there's all sorts of gathering data from others' judgment that doesn't fit the accountability/commitment paradigm.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-21T20:48:54.978Z · score: 14 (3 votes) · LW · GW

Here are a couple of specific ways I expect that lumping these cases together will cause bad decisions (Raemon's comment helped me articulate this):

  • If you don't notice that some character types are stable under pressure, you'll overestimate the short-run power of incentives relative to selection effects. You can in fact ask people to act against their "incentives" (examples: the Replication Project, whistleblowers like Daniel Ellsberg and Edward Snowden, people who return lost wallets) when conditions permit such people to exist, though this only lasts intergenerationally if longer-run incentives point the other way. What you can't do is expect people in positions where there's strong selection for certain traits to exhibit incompatible traits. (Related: Less Competition, More Meritocracy?, No, it's not The Incentives—it's you)
  • If you assume learning and responding to pressure are the same, you'll put yourself in a position where you're forced to do things way more than is optimal. In hindsight, instead of going for a master's degree in math and statistics, I wish I'd (a) thought about what math I was curious about and tried to take the least-effort path to learning it, and (b) if I had motivation problems, examined whether I even wanted to learn the math, instead of outsourcing motivation to the academic system and my desire (more precisely, anxiety) to pass tests. I massively underinvested in play and other low-pressure, high-intrinsic-motivation exploration of things I was just ... interested in. (Related: The Costs of Reliability, The Order of the Soul).
Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-21T20:20:51.989Z · score: 2 (1 votes) · LW · GW

I feel like all of this mixes together info sources and incentives, so it feels a bit wrong to say I agree, but also feels a bit wrong to say I disagree.

Comment by benquo on Raemon's Scratchpad · 2019-07-21T19:43:42.494Z · score: 6 (3 votes) · LW · GW

I think Critch is basically correct here; it makes more sense to model distractions or stress due to internal conflict as accumulating in some contexts, rather than willpower as a single quantity being depleted.

Comment by benquo on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T08:27:41.934Z · score: 18 (7 votes) · LW · GW

It seems like you're imagining a context that isn't particularly conducive to making intellectual progress. Otherwise, why would it be the case that John feels the need to regularly argue for veganism? If it's not obvious to the others that John's not worth engaging with, they should double-crux and be done with it. The "needs" framing feels like a tell that talking, in this context, is mainly about showing that you have broadcast rights, rather than about informing others.

The main case I can imagine where a truth-tracking group should be rationing attention like this, is an emergency where there's a time-sensitive question that needs to be answered, and things without an immediate bearing on it need to be suppressed for the duration.

Comment by benquo on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T07:18:38.516Z · score: 4 (2 votes) · LW · GW

My steelman of this position is something like, “I favored focusing on instrumental rationality because it seemed, well, useful. At the time I figured that this was just a different subject than epistemic rationality, & focusing on it would at worst mean less progress improving the accuracy of our beliefs. But in hindsight this involved allowing epistemics to get worse for the sake of more instrumental success. I’ve now updated towards that having been a bad tradeoff.”

How close is that?

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-21T01:11:38.186Z · score: 2 (1 votes) · LW · GW

I don't think I find that objectionable; it just didn't seem particularly interesting as a claim. It's as old as "you can only serve one master," god vs mammon, etc etc - you can't do well at accountability to mutually incompatible standards. I think it depends a lot on the type and scope of accountability, though.

If the takeaway were what mattered about the post, why include all the other stuff?

Comment by Benquo on [deleted post] 2019-07-20T00:18:42.147Z

Interestingly the readiest example I have at hand comes from Zack Davis. Over email, he suggested four sample edits to Drowning Children are Rare, claiming that this would say approximately the same thing with a much gentler tone. He suggested changing this:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives. Either scenario clearly implies that these estimates are severely distorted [...]

To this:

Either charities like the Gates Foundation and Good Ventures are accumulating funds that could be used to prevent millions of deaths, or the low cost-per-life-saved numbers are significantly overestimated. My former employer GiveWell in particular is notable here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried about "crowding out" other donors. Either scenario clearly implies that these estimates are systematically mistaken [...]

Some of these changes seemed fine to me (and subsequent edits reflect this), but one of them really does leave out quite a lot, and that kind of suggestion seems pretty typical of the kind of pressure I'm perceiving. I wonder if you can tell which one I mean and how you'd characterize the difference. If not, I'm happy to try explaining, but I figure I should at least check whether the inferential gap here is smaller than I thought.

Comment by Benquo on [deleted post] 2019-07-19T07:33:17.800Z

I don't think I can honestly accept mere explicit endorsements of a high-level opinion here, because as far as I can tell, such endorsements are often accompanied by behavior, or other endorsements, that seem (to me) to contradict them. I guess that's why the examples have attracted so much attention - I have more of an expectation that they correspond to the intuitions people will make decisions with, and those are what I want to be arguing with.

Having written that, it occurs to me that I too could do a better job giving examples that illustrate the core considerations I'm trying to draw attention to, so I'll make an effort to do that in the future.

Comment by Benquo on [deleted post] 2019-07-19T07:29:56.170Z

It might help for me to also try to make a positive statement of what I think is at stake here.

I agree with the underlying point that side-channel communication around things like approval is real and common, and it's important to be able to track and criticize such communication.

What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed primarily as moves to attack some person or institution, pushing them into the outgroup. Frequently, people suggest to me an "equivalent" wording with a softer tone, which in fact omits important substantive criticisms I mean to make, while claiming to understand what's at issue.

Comment by Benquo on [deleted post] 2019-07-18T19:28:48.326Z
My attention has been on which parts of speech it is legitimate to call out.

Do you think anyone in this conversation has an opinion on this beyond "literally any kind of speech is legitimate to call out as objectionable, when it is in fact objectionable"? If so, what?

I thought we were arguing about which speech is in fact objectionable, not which speech it's okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we've been talking past each other.

Comment by Benquo on [deleted post] 2019-07-18T16:15:09.084Z
It feels like you keep repeating the 101 arguments and I want to say "I get them, I really get them, you're boring me" -- can you instead engage with why I think we can't use "but I'm saying true things" as free license to say anything in any way whatsoever? That this doesn't get you a space where people discuss truth freely.

I think some of the problem here is that important parts of the way you framed this stuff seemed as though you really didn't get it - by the Gricean maxim of relevance - even if you verbally affirmed it. Your framing didn't distinguish between "don't say things through the side channels of your speech" and "don't criticize other participants." You provided a set of examples that skipped over the only difficult case entirely. The only example you gave of criticizing the motives of a potential party to the conversation was gratuitous insults.

(The conversational move I want to recommend to you here is something like, "You keep saying X. It sort of seems like you think that I believe not-X. I'd rather you directly characterized what you think I'm getting wrong, and why, instead of arguing on the assumption that I believe something silly." If you don't explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it's generally rude to "put words in other people's mouths" and people get unhelpfully defensive about that pretty reliably, so it's natural to try to let you save face by skipping over the unpleasantness there.)

I think there's also a big disagreement about how frequently someone's motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you're thinking of that as something like the "nuclear option," which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.

Then there's also a problem where it's a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack's "What? Why?" seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it's a lot of extra work to turn that sort of tone into content reliably, and most people - including most people on this forum - don't know how to do it. It's fine to ask for extra work, but it's objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.

Comment by benquo on Doublecrux is for Building Products · 2019-07-18T04:14:40.574Z · score: 16 (4 votes) · LW · GW

I think this is missing one of the most important benefits of things like double crux: the potential for strong updates outside the domain of the initial agreement, for both parties. See also Benito's A Sketch of Good Communication.

Comment by Benquo on [deleted post] 2019-07-18T04:08:03.786Z

I mean the section titled "2. Secondary information perceived in your message is upsetting."

Comment by benquo on Doublecrux is for Building Products · 2019-07-18T04:03:32.494Z · score: 9 (4 votes) · LW · GW

"Double crux is for building products" is true mostly because of the more general fact that epistemic rationality is for shared production relationships.

Comment by Benquo on [deleted post] 2019-07-17T20:51:20.569Z

I didn't think I was disagreeing with you - I meant to refer to the process of publicly explicitly awarding points to offset the implied reputational damage.

Comment by Benquo on [deleted post] 2019-07-17T15:18:46.374Z
But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying "I see you trying to do a thing! I think it's harmful and you should stop." and you saying "oops!" should net you points without me having to say "POINTS!"

Huh. I think part of what's bothering me here is that I'm reading requests to award points (on the assumption that otherwise people will assign credit perversely) as declaring intent to punish me if I publicly change my mind in a way that's not savvy to this game, insofar as implying that perverse norms are an unchangeable fait accompli strengthens those norms.

Comment by Benquo on [deleted post] 2019-07-17T15:15:19.638Z

Overall, the feeling I get from this post is one of being asked to make a move in a game I don't fully understand (or am being lied* to about, to persuade me that my understanding of it is too cynical). I'm asked to believe that I'm not giving up anything important by making this move, but people seem to care a lot about it in a way that casts some suspicion on that claim.

The last time I agreed to a major request that felt like this, I ended up spending an extra year I shouldn't have in a deeply toxic situation.

Here's my impression of what's going on. Most people socialize by playing a zero-sum coalitional game with a blame component. To conceal the adversarial nature of what's going on - to confuse the victims and bystanders - coalition membership isn't explicit or discrete. Instead, approval and disapproval mechanisms are used to single out people for exclusion and expropriation. Getting away with insulting someone is a way of singling them out in this way. Any criticism that challenges someone's social narrative is going to have similar effects in this game, even if it's not intended mainly as a move in the game.

I'm being asked, by someone who cares about their stakes in that game, to make sure to declare that I think the person I'm criticizing should still be in the coalition, rather than an expropriation target. I also see requests that others actually conceal information that's valuable to me in the games I am trying to play, to accommodate this coalitional game. For instance, your request that Zack pretend he thought you were being more reasonable than he actually thought.

I am actually not trying to play this game. I don't consent to this game. I don't want to give it more power. I don't think the person should be in the coalition because I don't like or trust the coalition and think that the game it's playing is harmful. This is not how I find or keep my friends. I've seen this kind of behavior pervert the course of justice. I find it deeply sketchy to be asked to make a move in this game on LessWrong in a way that keeps the game covert. I fled the Bay Area in part because I got sick and tired of this game.

*At the very least, in the sense that people are dismissing my model because it sounds bad and they don't feel like bad people, without checking whether it's, well, true. The technical term for this might be bullshit but there's no nonawkward passive verb form of that.

Comment by Benquo on [deleted post] 2019-07-17T14:50:50.159Z
When I consider things like "making the map less accurate in order to get some gain", I don't think "oh, that might be worth it, epistemic rationality isn't everything", I think "Jesus Christ you're killing everyone and ensuring we're stuck in the dark ages forever".

To me it feels more like the prospect of being physically stretched out of shape, broken, mutilated, deformed.

Comment by Benquo on [deleted post] 2019-07-17T14:38:48.259Z

Your treatment of cause 2 seems like it's actually asking people to lie or conceal crucial information. Obviously Version 2 is unhelpful, but the implication that we have to choose between version 1 or 2 when talking about the people involved - that judging people is just about directing positive or negative affect towards them - is misleading. I thought this was supposed to be a rationality forum. Talking about the patterns that generate mistakes seems like a core activity for a rationality forum.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-17T14:17:19.625Z · score: 2 (1 votes) · LW · GW

It seems to me that integrity of thought is actually quite a lot easier if it constrains the kind of anticipations that authentically and intuitively affect actions. Actions can still diverge from beliefs if someone with integrity of thought gets distracted enough to drop into a stereotyped habit (e.g. if I'm a bit checked out while driving and end up at a location I'm used to going to instead of the one I need to be at) or is motivated to deceive (e.g. corvids that think carefully about how to hide their food from other corvids).

The kind of belief-action split we're used to seeing, I think, involves a school-broken sort of "believing" that's integrated with the structures that are needed to give coherent answers on tests, but severed from thinking about one's actual environment and interests.

The most important thing I did for my health in the last few years was healing this split.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-17T07:32:26.715Z · score: 4 (2 votes) · LW · GW

It seems to me that the way humans acquire language pretty strongly suggests that (2) is true. (1) seems probably false, depending on what you mean by incentives, though.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-17T07:24:22.584Z · score: 11 (3 votes) · LW · GW

To give a concrete example, I expect math prodigies to have the easiest time solving any given math problem, but even so, I don't expect that a system that punishes the students who don't complete their assignments correctly will serve the math prodigies well. This, even if under other, totally different circumstances it's completely appropriate to compel performance of arbitrary assignments through the threat of punishment.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-17T07:13:55.592Z · score: 48 (10 votes) · LW · GW

Thanks for checking - I'm trying to say something pretty different.

It seems like the frame of the OP is lumping together the kind of consistency that comes from using the native architecture to model the deep structure of reality (see also Geometers, Scribes, and the structure of intelligence), and the kind of consistency that comes from trying to perform a guaranteed level of service for an outside party (see also Unreal's idea of Dependability), and an important special case of the latter is rule-following as a form of submission or blame-avoidance. These are very different mental structures, respond very differently to incentives, and learn very different things from criticism. (Nightmare of the Perfectly Principled is my most direct attempt to point to this distinction.)

People who are trying to submit or avoid blame will try to alleviate the pressure of criticism with minimal effort, in ways that aren't connected to their other beliefs. On the other hand, people with structured models will sometimes leapfrog past the critic, or jump in another direction entirely, as Benito pointed out in A Sketch of Good Communication.

If we don't distinguish between these cases, then attempts to reason about the "optimal" attitude towards integrity or accountability will end up a lumpy, unsatisfactory linear compromise between the following policy goals:

  • Helping people with structurally integrated models notice tensions in their models that they can learn from.
  • Distinguishing people with structurally integrated models from those who (at least in the relevant domain) are mostly just trying not to stick out as wrong, so we can stop listening to the second group.
  • Establishing and enforcing the norms needed to coordinate actions among equals (e.g. shared expectations about promises).
  • Compelling a complicated performance from inferiors, or avoiding punishment by superiors trying to compel a complicated performance from you.
  • Converting people without structurally integrated models into people with structurally integrated models (or vice versa).

Depending on what problem you're trying to solve, habryka's statement that "if someone changes their stated principles in an unpredictable fashion every day (or every hour), then I think most of the benefits of openly stating your principles disappear" can be almost exactly backwards.

If your principles predictably change based on your circumstances, that's reasonably likely to be a kind of adversarial optimization similar to A/B testing of communication. They don't mean their literal content, at least.

But there's plenty of point in principles consistent with learning new things fast. In that case, change represents noise, which is costly, but much less costly than messaging optimized for extraction. And of course changing principles doesn't need to imply a change in behavior to match - your new principles can and should take into account the fact that people may have committed resources based on your old stated principles.

In summary, my objection is that habryka seems to be thinking of beliefs as a special case of promises, while I think that if we're trying to succeed based on epistemic rationality, we should be modeling promises as a special case of beliefs. For more detail on that, see Bindings and Assurances.

Comment by benquo on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-17T04:46:54.543Z · score: 13 (4 votes) · LW · GW

Another thing I'd add - putting this in its own comment to help avoid any one thread blowing up in complexity:

The orientation-towards-clarity problem is at the very least strongly analogous to, and most likely actually an important special case of, the AI alignment problem.

Friendliness is strictly easier with groups of humans, since the orthogonality thesis is false for humans - if you abuse us out of our natural values you end up with stupider humans and groups. This is reason for hope about FAI relative to UFAI, but also a pretty strong reason to prioritize developing a usable decision theory and epistemology for humans over using our crappy currently-available decision theory to direct resources in the short run towards groups trying to solve the problem in full generality.

If AGI is ever built, it will almost certainly be built - directly or indirectly - by a group of humans, and if that group is procedurally Unfriendly (as opposed to just foreign), there's no reason to expect the process to correct to FAI. For this reason, friendly group intelligence is probably necessary for solving the general problem of FAI.

Comment by benquo on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-17T04:33:16.549Z · score: 24 (6 votes) · LW · GW

This sounds really, really close. Thanks for putting in the work to produce this summary!

I think my objection to the 5 Words post fits a pattern where I've had difficulty expressing a class of objection. The literal content of the post wasn't the main problem. The main problem was the emphasis of the post, in conjunction with your other beliefs and behavior.

It seemed like the hidden second half of the core claim was "and therefore we should coordinate around simpler slogans," and not the obvious alternative conclusion "and therefore we should scale up more carefully, with an uncompromising emphasis on some aspects of quality control." (See On the Construction of Beacons for the relevant argument.)

It seemed to me like there was some motivated ambiguity on this point. The emphasis seemed to consistently recommend public behavior that was about mobilization rather than discourse, and back-channel discussions among well-connected people (including me) that felt like they were more about establishing compatibility than making intellectual progress. This, even though it seems like you explicitly agree with me that our current social coordination mechanisms are massively inadequate, in a way that (to me obviously) implies that they can't possibly solve FAI.

I felt like if I pointed this kind of thing out too explicitly, I'd just get scolded for being uncharitable. I didn't expect, however, that this scolding would be accompanied by an explanation of what specific, anticipation-constraining, alternative belief you held. I've been getting better at pointing out this pattern (e.g. my recent response to habryka) instead of just shutting down due to a preverbal recognition of it. It's very hard to write a comment like this one clearly and without extraneous material, especially of a point-scoring or whining nature. (If it were easy I'd see more people writing things like this.)

Comment by benquo on Benito's Shortform Feed · 2019-07-17T04:00:00.916Z · score: 12 (4 votes) · LW · GW

The definitional boundaries of "abuser," as Scott notes, are in large part about coordinating around whom to censure. The definition is pragmatic rather than objective.*

If the motive for the definition of "lies" is similar, then a proposal to define only conscious deception as lying is therefore a proposal to censure people who defend themselves against coercion while privately maintaining coherent beliefs, but not those who defend themselves against coercion by simply failing to maintain coherent beliefs in the first place. (For more on this, see Nightmare of the Perfectly Principled.) This amounts to waging war against the mind.

Of course, as a matter of actual fact we don't strongly censure all cases of conscious deception. In some cases (e.g. "white lies") we punish those who fail to lie, and those who call out the lie. I'm also pretty sure we don't actually distinguish between conscious deception and e.g. reflexively saying an expedient thing, when it's abundantly clear that one knows very well that the expedient thing to say is false, as Jessica pointed out here.

*It's not clear to me that this is a good kind of concept to have, even for "abuser." It seems to systematically force responses to harmful behavior to bifurcate into "this is normal and fine" and "this person must be expelled from the tribe," with little room for judgments like "this seems like an important thing for future partners to be warned about, but not relevant in other contexts." This bifurcation makes me less willing to disclose adverse info about people publicly - there are prominent members of the Bay Area Rationalist community doing deeply shitty, harmful things that I actually don't feel okay talking about beyond close friends because I expect people like Scott to try to enforce splitting behavior.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-16T23:47:59.736Z · score: 8 (6 votes) · LW · GW

I don't understand the relevance of your responses to my stated model. I'd like it if you tried to explain why your responses are relevant, in a way that characterizes what you think I'm saying more explicitly.

My other most recent comment tries to show what your perspective looks like to me, and what I think it's missing.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-16T19:54:37.296Z · score: 12 (6 votes) · LW · GW

This exchange has given me the feeling of pushing on a string, so instead of pretending that I feel like engaging on the object level will be productive, I'm going to try to explain why I don't feel that way.

It seems to me like you're trying to find an angle where our disagreement disappears. This is useful for papering over disagreements or pushing them off, which can be valuable when that reallocates attention from zero-sum conflict to shared production or trade relations. But that's not the sort of thing I'd hope for on a rationalist forum. What I'd expect there is something more like double-cruxing, trying to find the angle at which our core disagreement becomes most visible and salient.

Sentences like this seem like a strong tell to me:

I do think that a more continuous model is accurate here, though I share at least a bit of your sense (or at least what I perceive to be your sense) of there being some discrete shift between the two different modes of thinking.

While "I think you're partly wrong, but also partly right" is a position I often hold about someone I'm arguing with, it doesn't clarify things any more than "let's agree to disagree." It can set the frame for a specific effort to articulate what exactly I think is wrong under what circumstances. What I would have hoped to see from you would have been more like:

  • If you don't see why I care about pointing out this distinction, you could just ask me why you should care.
  • If you think you know why I care but disagree, you could explain what you think I'm missing.
  • If you're unsure whether you have a good sense of the disagreement, you could try explaining how you think our points of view differ.
Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-16T13:57:13.068Z · score: 6 (3 votes) · LW · GW

This seems like a proposal to use the same kinds of postural adjustments on a group that includes anatomically complete human beings, and lumps of clay. Even if there's a continuum between the two, if what you want to produce is the former, adjustments that work for the latter are going to be a bad idea.

If someone's inconsistencies are due to an internal confusion about what's true, that's a different situation requiring a different kind of response from the situation in which those inconsistencies are due to occasionally lying when they have an incentive to avoid disclosing their true belief structure. Both are different from one in which there simply isn't an approximately coherent belief structure to be represented.

Comment by benquo on Integrity and accountability are core parts of rationality · 2019-07-16T03:54:44.689Z · score: 9 (3 votes) · LW · GW

I'm not saying that people "should try" to use their beliefs to model and act in reality.

I'm saying that some people's minds are set up such that stated beliefs are by default reports about a set of structurally integrated (and therefore logically consistent) constraints on their anticipations. Others' minds seem to be concerned with making socially desirable assertions, where apparent consistency is a desideratum. The first group is going to have no trouble at all "acting in accordance with [their] stated beliefs about the world" so long as they didn't lie when they stated their beliefs, and the sort of accountability you're talking about seems a bit silly. The second group is going to have a great deal of trouble, and accountability will at best cause them to perform consistency when others are watching, not to take initiative based on their beliefs. (Cf. Guess culture screens for trying to cooperate.)