Appeal to Consequence, Value Tensions, And Robust Organizations

post by Matt Goldenberg (mr-hire) · 2019-07-19T22:09:43.583Z · LW · GW · 90 comments


Epistemic Status: Strong opinions weakly held. Mostly trying to bring some things into the discourse that I think are too often ignored.

Some updates I've made based on the discussion in this post are here [LW(p) · GW(p)].

Introduction

Jessicata's Dialogue on Appeals to Consequences [LW · GW] is an expansion of a response that she wrote to me a few months ago, arguing a particular point that I agree with: namely, if you have an object level thing you want in the world, it's almost never worth lying or withholding information about that thing, because doing so breaks meta level norms about truthseeking that are much more important to accomplishing object level goals in general. However, there's a slightly more interesting, and I think much murkier, case that the original comment was pointing to. That is, what if your truthseeking norms are in tension with OTHER meta level norms that are important? In general, how do you deal with instances where tension between two important values leaves you not knowing what to do?

Dialogue

Let's imagine John and Jill are discussing John's behavior in a private space. Jill is a leader of the space, and John is someone who frequently attends the space and has lively discussions trying to get to the truth.

Jill: John, I've had several complaints about your tendency to steer conversations towards the divisive claim that everyone should be a vegan, and I'm going to ask you to tone it down a bit when you're in our main space.

John: Are people saying that I'm making arguments that are false?

Jill: No, no one is saying that you're making false arguments.

John: Are people saying that I'm derailing the conversation? I think you'll find that every instance I brought up veganism was highly relevant to the conversation.

Jill: Yes, some people have said that, but I happen to believe you when you say that you've only brought it up in contexts you found relevant.

John: Then what's the problem? I'm stating relevant true beliefs that add to the totality of the conversation and steer it in conversationally relevant directions.

Jill: The problem is twofold. Firstly, people find it annoying to retread the same conversation over and over. More importantly, this topic usually leads to demon conversations [LW · GW], and I fear that continued discussion of the topic at the rate it's currently discussed could lead to a schism [LW · GW]. Both of these outcomes go against our value of being a premier community that attracts the smartest people, as they're actually driving these people away!

John: Excuse me for saying so, but this is a clear appeal to consequences!

Jill: Is it? I'm not saying that the negative consequences to the community mean that what you're saying is false - that would be a clear logical fallacy. Instead I'm just asking you to bring up this argument less often because I think it will lead to bad outcomes.

John: Ok, maybe it's not a logical fallacy, but it is dangerous. This community is built on a foundation of truth seeking, and once we start abandoning that because of people's feelings, we devolve into tribal dynamics and tone arguments!

Jill: Yes, truthseeking is very important. However, it's clear that just choosing one value as sacred [LW · GW], and not allowing for tradeoffs, can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth on one hand, and the value of making a space where people actually want to have intellectual discussions on the other.

John: You're saying there's a tension, but to me there's a clear and obvious winner. Under your proposed rules, anyone will be able to silence anything simply by saying they don't like it!

Jill: If I find someone trying to silence good arguments through that tactic, I'll sit them down and have a similar conversation to the one we're having now.

John: That's even worse! That means that instead of putting the allowed conversation topics up to a vote, we're putting them in the hands of one person, you! You can silence any conversation you want.

Jill: I can see how it would seem that way, but I believe we've cultivated some great cultural norms that make it harder for me to play political games like that. [LW(p) · GW(p)] Firstly, our norm of radical transparency means that this and all similar conversations I have will be recorded and shared with everyone, and any such political moves by me will be laughably transparent.

John: That makes sense. Also, Hi Mom!

Jill: Second, our organization allows anyone to apply the values to anyone else, so if you see ME not following the values in any of my talks, you can call me out on it and I'll comply.

John: Sure, you say that now, but because of your role you can just defy that rule whenever you want!

Jill: That's true, and it's one of the reasons I've worked to cultivate integrity as a leader. [LW · GW] Has there been any instance of my behavior where you think I would actually do that?

John: No, I suppose not. Are there any other cultural norms preventing you from using the arbitrary nature of decisions for your own gain?

Jill: There's one more. Our organization has a clear set of values, and as the leader one of my roles is to spearhead changes to the values in clear ways when there's tension between them. [LW(p) · GW(p)] So I'm not just going to talk to you, I'm actually going to suggest to the organization that we clarify our values so that they tell us what to do in these relatively common situations, and I'm going to have you help me.

John: I think that makes sense. We can probably make a list of topics that people are allowed to taboo, and a list of topics people are not allowed to taboo, and then I'll always know what it's ok to "appeal to consequences" on.

Jill: I'm afraid that particular rule would be unwise. I think there are practically unlimited scissor statements that could cause schisms in our community, and a skilled adversary could easily find one that's not on our list of approved topics. No, I'm afraid we'll need to make a general value that can cover these situations in the general case.

John: Oh, so trying to avoid appeal-to-consequences arguments can actually be exploited by someone looking to harm our community? That's interesting! But it's not clear to me that there is a general rule that can cover all the cases.

Jill: There is. The general rule is that people should give equal weight to their own needs, the needs of the people they're interacting with, and the needs of the organization as a whole. [LW(p) · GW(p)]

John: I'm not sure I get it.

Jill: Well, you have a need to express that everyone should be a vegan. It's clearly very important to you, or you wouldn't bring it up so much. At the same time, many of the people in our community have a need to have variety in their conversation, and you should be aware of this when talking with them. Finally, our organization has a need to not experience/discuss scissor statements too often, in order to remain healthy and avoid frequent schisms. By bringing this topic up so much, you're putting your needs above the needs of others you're interacting with and the group, instead of bringing it up less frequently, which would be placing the needs on equal ground.

John: That makes sense. I suppose by the same token, if there's a really interesting topic that's helpful for the group to know about, and that lots of people want to talk about, it would be putting your own needs above others' needs if you said it hurt your feelings so people couldn't talk about it.

Jill: Exactly!

John: So this rule seems plausible to me, and I'm sure it would be great for many people, but I have to admit it's not for me. I'd much prefer a space where people are allowed to say anything they want to me, and I can say anything I want to them in return.

Jill: I agree that this may not be the best rule for everybody. That's why next week we're going to start experimenting with The Archipelago Model [LW · GW]. As I said, I want you to tone it down in the main room, which follows the Maturity value mentioned above. However, we've designated a side room that instead follows Crocker's Rules. You're allowed to go to either room, but when in a room, you must follow its stated values. And most importantly, all conversations are recorded and can be listened to by anyone in the community!

John: Cool, that seems worthwhile, but very messy and likely to have numerous hidden failure modes...

Jill: I agree, but it at least seems worth a shot!

Commentary

So you probably noticed already, but this post wasn't really about Appeal to Consequences at all. Instead, it's a meditation on how good organizations deal with tensions in their values, and avoid being overrun by skilled sociopaths. A lot of these suggestions and ideas come from the work I've been doing over the past year or so to figure out what makes great organizations and communities. I'd be particularly interested in people's inner sim of how the organization described by John and Jill above would go horribly wrong, and counter-ideas about what could be done to fix THOSE issues.

90 comments

Comments sorted by top scores.

comment by Benquo · 2019-07-21T08:27:41.934Z · LW(p) · GW(p)

It seems like you're imagining a context that isn't particularly conducive to making intellectual progress. Otherwise, why would it be the case that John feels the need to regularly argue for veganism? If it's not obvious to the others that John's not worth engaging with, they should double-crux and be done with it. The "needs" framing feels like a tell that talking, in this context, is mainly about showing that you have broadcast rights, rather than about informing others.

The main case I can imagine where a truth-tracking group should be rationing attention like this, is an emergency where there's a time-sensitive question that needs to be answered, and things without an immediate bearing on it need to be suppressed for the duration.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-21T09:30:09.049Z · LW(p) · GW(p)

The "needs" framing feels like a tell that talking, in this context, is mainly about showing that you have broadcast rights, rather than about informing others.

That's because lots of talking is mainly about broadcast rights. Any doublecrux on this situation has to include both John's explicit argument, AND his need for broadcast rights, or it won't actually solve the underlying issue. He'll fail to update, or choose another thing to continually bring up.

Pretending humans are only optimizing for truth is a recipe for spending lots of time having arguments that pretend to be about one thing when they're actually about broadcast rights or traumas.

The dialogue portrays an organization that's just realizing that the naive idea (more time spent on object level truths leads to more truth) is wrong.

In the fully evolved form of the organization, someone (maybe even John himself) would have realized he had this need the first or second time it happened, and gone meta to address it. Then in the future, when it comes up, people could point out when it's derailing the conversation in a way that puts John's need above the need of the group to get to the truth. The organization would also set up times to debug, double crux, or specifically address that need so that it wouldn't keep coming up.

See this reply to Ruby for a more explicit argument in that vein: https://www.lesswrong.com/posts/7vofFovKWPrnM7y9Q/appeal-to-consequence-value-tensions-and-robust#vgtQsT5dKCobNMAoJ [LW(p) · GW(p)]

comment by Ruby · 2019-07-20T19:24:55.628Z · LW(p) · GW(p)

Overall great post, thanks! Much I agree with, but a few things stick out.

By bringing this topic up so much, you're putting your needs above the needs of others you're interacting with and the group, instead of bringing it up less frequently, which would be placing the needs on equal ground.

The competing needs frame feels off to me. I think this is why (but I haven't thought about it at length):

  • Balancing between everyone's needs makes sense if the point of the group/community is for people to come together and assist each other in meeting their individual needs. But I think that's very often not the point of a group/community.
  • In many cases (including the rationality community/LW), the point is to come together towards some joint objective. Raemon would call this building a product together [LW · GW]. When you're building a product, it's not about my needs vs your needs, it's about which actions will actually lead to a successful product.
    • It doesn't make sense to say "we should balance between my need for website minimalism and your need for information density", but rather "we need to answer which of these is actually better for the product."
    • If I think insufficient minimalism is going to kill the product, it makes sense that I want to keep talking about that until I convince you or you convince me.

In the context of your examples (which seem to be the EA/rationality community), the "product" we're building together is very nebulous. Maybe it's "a truth-seeking community/true knowledge" or "an optimal world." So John might not be repeatedly mentioning veganism because it's his need, but because he believes veganism is crucially important to the success of the entire joint project + everyone else's values/goals. He might be arguing: we need to talk about this for all of our sakes, not just mine.

Obviously, there need to be good ways of allocating group attention between the different things that different people think are imperative for success, and good ways of handling persistent disagreements, etc. If 9 out of 10 people have heard my arguments (repeatedly) and are still against minimalism, I should possibly accept that or leave (first I might have a conversation with Jill). If I'm being unproductive and uncooperative/coercive in my desire to talk about a thing repeatedly, in a way that harms group cooperation and health, it's probably necessary for Jill to have a chat with me, etc. - similar to the picture you painted.


Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T19:59:37.947Z · LW(p) · GW(p)

In many cases (including the rationality community/LW), the point is to come together towards some joint objective. Raemon would call this building a product together. When you're building a product, it's not about my needs vs your needs, it's about which actions will actually lead to a successful product.

I think it's quite importantly about both. In her review of An Everyone Culture, Sarah describes a deliberately developmental organization as

creating a culture where everyone talks about mistakes and improvements, and where the personal/professional boundaries are broken down.

That second one may seem nuts, but as she points out in her review of Moral Mazes in the same post, there's a really good reason to bring our needs to work: if we don't, we end up pretending to have conversations about Product that are actually about our needs. This should terrify us as people who actually care about the product, because it might mean that your fighting for website minimalism is actually about your need to be heard, and has nothing to do with creating a better reading experience, as you're eloquently arguing.

Once you accept that much work at traditional organizations is actually about expressing trauma and unmet needs, we can split up the conversation so that the part about expressing needs also has the CONTENT of addressing your needs, so the product doesn't suffer from being an outlet for them.

At this point, you have to make sure that you're taking into account the consequences to the product and organization as a whole as you work to address your needs, and balance them with your other co-workers, who are trying to get their needs met as well.

Replies from: Ruby, Hazard
comment by Ruby · 2019-07-30T01:00:44.262Z · LW(p) · GW(p)

I think I understand this picture and could pass your ITT (maybe), but I think your proposed org will fail in all but exceptional circumstances for reasons I don't have an immediate great articulation for.

I'll attempt to offer something, but I might need to stew on it longer (plus it's probably a rather long conversation if we were to try to properly resolve it. I'd be up for chatting sometime or a public Double-Crux or the like. Feel free to reply to this one, but the next round should probably happen elsewhere).

A thing I emphatically agree with is that people are usually covertly pursuing other goals when working on products together. I lean a bit "cynical" here and think it's "expressing trauma and unmet needs" plus typical monkey status competition stuff. Much of the latter stuff is a) subconscious and instinctive (for the reasons given in Elephant in the Brain/Trivers), and b) not stuff you can ever admit to and still succeed at due to its zero-sum, adversarial nature. I'll collectively call these a person's Other Goals because they're "other" than the stated goal of building a product.

I think that people's (sub)conscious pursuit of Other Goals does interfere with their ability to work on the product, but I think it's perilous for an organization's solution to be to try to ensure everyone's satisfied on their Other Goals enough to work on the product without distraction/compromise. Individuals should attempt to achieve integration/inner harmony, etc., but if an organization tries to create this for them as its primary strategy for dealing with Other Goals, I foresee that opening being exploited ruthlessly by the Other Goals to the detriment of the product. [elaboration/justification needed]

I favor solving the Other Goals problem by being a culture/system which rewards and punishes you for helping or hindering the product. Want to be more listened to? Have good ideas for the product! This requires an emotional maturity of sorts from members who need to be able to contribute to the actual goal even if it means neglecting their pursuit of Other Goals. ("I concede you're right about minimalism because I care about doing the correct thing for the product and not just winning." Rationalist circles do well here because they reward one socially for this behavior, thereby aligning product goals and Other Goals.)

This isn't to say feelings, emotions, needs, etc. should never be mentioned or dealt with. They should, but carefully, and only (as far as the organization is concerned) secondarily to the mission of creating the product. Definitely, I think people should explicitly deal with interpersonal issues that arise ("I feel disregarded because you never listen and always interrupt" or "I don't feel like I'm getting enough feedback on whether my work is valued"). Definitely, definitely, people should take care not to harm their collaborators psychologically or covertly do zero-sum things. Also there are many times it's good to share things that are going on for a person and receive support. But all of this only within the context of organizational values that say first and foremost comes the product, and that if something appears to be sucking attention away from the product in a way that is net harmful, that something will be cut.

As I understood it, your envisioned organization makes needs ("Other Goals" in my parlance) first-class concerns in a way I expect the product to lose to. [elaboration needed]. Crucially, to me, it is the product which is far more fragile [elaboration needed].


comment by Hazard · 2019-07-21T20:01:58.263Z · LW(p) · GW(p)

Could part of this be paraphrased as "If you don't address meeting people's needs equally, they won't be able to work on pure product without it secretly being about their needs"?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-21T20:37:18.858Z · LW(p) · GW(p)

Yes, that seems like a decent summary.

comment by romeostevensit · 2019-07-20T06:43:36.966Z · LW(p) · GW(p)

I feel like the elephant in the room is that convincing logical arguments are often only weak to moderate evidence for something.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-21T11:04:27.283Z · LW(p) · GW(p)

So it's unclear to me which arguments you're referring to, but I think you might be saying something like

"The reason its' important to focus on needs is that if we don't, it causes people to make convincing logical arguments that are actually about their needs"

However, you could also be saying "This post is a logical argument and convincing, but that doesn't make it true."

Or possibly "A culture that's focused on discussion to find truth isn't that useful, and we should be focusing more on things like empiricism."

I'm curious what it is you're trying to point at here.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2019-07-21T22:45:07.491Z · LW(p) · GW(p)

So I can't speak for Romeo, but there's an important sense in which "logical arguments" are often not the ideal they present themselves to be as a class. Making clean and correct logical arguments requires imposing a consistent ontology, and such an ontology is necessarily not complete. Thus someone can make a correct logical argument and still fail to convince because thankfully people are better Bayesian reasoners than we often give them credit for, and if they are not convinced by logic there is a decent chance it's because the logic left out some part of reality that is holding up the belief.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-22T13:04:46.024Z · LW(p) · GW(p)

Yes, I get the argument, but am unsure of how Romeo sees it relating to this post.

Replies from: romeostevensit
comment by romeostevensit · 2019-07-22T16:41:36.144Z · LW(p) · GW(p)

In the 'keep the organization from being overrun' sense, see also sealioning. The search space of worthwhile things is very large and idiosyncratically explored by well meaning, intelligent people. Aggressive value laden 'logical arguments' often point to a tacit value to have everyone converge on the same set of metaheuristics. This is because the person doing this has a strong need for internal consistency that they are externalizing onto their social space. And there's nothing wrong with wanting internal consistency. But if pressed hard, it is anti-truth seeking as an aggregate strategy because you lose out on the consilience of having different people pursuing different search methods. Epistemology is a team sport. The objection would be 'but if we don't then argue about what we've discovered what's the point?' The point is that adversarial processes as a part of the truth seeking process needs to be consensual. This applies doubly when you aren't in a 101 space and people might be sick of a dynamic where simple seeming questions with complicated answers make newer members feel entitled to the effort needed to explain said complicated answers. This is one of the reasons well written blog posts that can be referenced by name can be so helpful for community discourse.

I like this post by the way and my comment wasn't an objection to it.

comment by Zack_M_Davis · 2019-07-20T05:33:38.747Z · LW(p) · GW(p)

our norm of radical transparency means that this and all similar conversations I have will be recorded and shared with everyone, and any such political moves by me will be laughably transparent.

And the decision algorithm that your brain uses to decide who to sit down is also recorded, one imagines? In accordance with our norm of radical transparency.

The general rule is that people should give equal weight to their own needs, the needs of the people they're interacting with, and the needs of the organization as a whole.

I'm terribly sorry, but I'm afraid I'm having a little bit of trouble working out the details of exactly how this rule would be applied in practice—could you, perhaps, possibly, help me understand?

Suppose Jill comes to Jezebel and says, "Jezebel, by mentioning the hidden Bayesian structure of language and cognition [LW · GW] so often, you're putting your own needs above the needs of those you're interacting with, and those of the organization as a whole."

Jezebel says, "Thanks, I really value your opinion! However, I've already taken everyone's needs into account, and I'm very confident I'm already doing the right thing."

What happens?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T06:02:17.153Z · LW(p) · GW(p)
And the decision algorithm that your brain uses to decide who to sit down is also recorded, one imagines? In accordance with our norm of radical transparency.

That would be pointing towards a norm of radical honesty. Radical transparency is more about assuring that skilled sociopaths can't maintain multiple realities/narratives and control information flows. Note that the selective enforcement issue you linked to is addressed by the radical transparency norm, but more directly by the previously mentioned norm.

Suppose Jill comes to Jezebel and says, "Jezebel, by mentioning the hidden Bayesian structure of language and cognition [LW · GW] so often, you're putting your own needs above the needs of those you're interacting with, and those of the organization as a whole."
Jezebel says, "Thanks, I really value your opinion! However, I've already taken everyone's needs into account, and I'm very confident I'm already doing the right thing."
What happens?

I mean, this is quite context dependent. Has Jezebel done this many times? How many complaints have there been? etc.

Here's one example of what could happen next, but I stress that this is not "the procedure", it's just one way that would work for specific circumstances:

Jezebel sits down with a few of the people who have made the complaint. They work to understand each other's points of view until they can ITT each other. With a facilitator they double crux. The people who complained work to understand why Jezebel thinks it's important, Jezebel works to understand why they don't want to hear it, and both sides talk about what effect they have on the organization. They work to come to a shared point of view.

However, this is a really weird case and even after all that there are fundamental differences. At the end, they can't come to an agreement. Jill sits down with everyone, understands all the points of view, and does her best to understand all the arguments. In the end, she determines Jezebel was correct. She sits down with each of them and explains why, based on the values, she decided that Jezebel was correct. The recording of this conversation then becomes "case law" for this specific value, and things are slightly clearer when a similar situation comes up in the future.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-20T08:27:19.684Z · LW(p) · GW(p)

Jill sits down with everyone, understands all the points of view, and does her best to understand all the arguments. In the end, she determines Jezebel was correct.

What if, instead, Jill determines that Jezebel was wrong—but Jezebel still disagrees?

She sits down with each of them and explains why, based on the values, she decided that Jezebel was correct.

What if all said people are not satisfied with Jill’s explanation?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T09:16:54.053Z · LW(p) · GW(p)

Once again, the answers to these questions are highly context specific. Are the values in question new, or very established? Does the decision seem highly idiosyncratic and hard to justify with previous decisions? How many people are involved/disagree?

Depending on these issues, the next steps could involve anything from changing onboarding procedures, norms, and rituals (because the values are not being imparted well), to going to a leadership oversight committee (because the leader's doing a bad job), to telling people to respect the leader's decision, to firing or banning people.

------

On a meta note, both of the above questions (Zack's and Said's) feel a bit weird to me, like there are clear answers to them if you spend a few minutes steelmanning how the aforementioned organization would work well. My sense is that either the questions are being very uncharitable, they're looking for impossible certainty in an obviously context specific and highly variable situation, or they're doing some sort of socratic move (in the latter case, this is a style of conversation I'd rather not have on my posts, and in the former cases, I'd prefer people to be more charitable and work to steelman).

It could also be that I'm just assuming a much smaller inferential gap than there actually is, and the answers would not be clear to most people who aren't as steeped in this stuff as I am.

Replies from: Zack_M_Davis, Ruby
comment by Zack_M_Davis · 2019-07-20T15:08:14.052Z · LW(p) · GW(p)

or they're doing some sort of socratic move (in the latter case, this is a style of conversation I'd rather not have on my posts

Very well. I will endeavor to be more direct.

there are clear answers to them if you spend a few minutes steelmanning how the aforementioned organization would work well

The fourth virtue is evenness! If you first write at the bottom of a sheet of paper [LW · GW], "And therefore, the aforementioned organization would work well!", it doesn't matter what arguments you write above it afterward—the evidential entanglement [LW · GW] between your position and whatever features-of-the-world actually determine organizational success, was fixed the moment you determined your conclusion. After-the-fact steelmanning that selectively searches for arguments supporting that conclusion [LW · GW] can't help you design better organizations unless they have the power to change the conclusion. Yes requires the possibility of no. [LW · GW]

they're looking for impossible certainty in an obviously context specific and highly variable situation

We're looking for a decision procedure. "It's context-specific; it depends" is a good start, but a useful proposal needs to say more about what it depends on.

A simple example of a decision procedure might be "direct democracy." People vote on what to do, and whichever proposal has more votes is implemented. This procedure provides a specific way to proceed when people don't agree on what to do: they vote!

In both the OP and your response to me, you tell a story about people successfully talking out their differences, but a robust institution needs to be able to function even when people can't talk it out—and the game theory of "What happens if we can't talk it out" probably ends up shaping people's behavior while talking it out.

For example, suspects of a police investigation might be very cooperative with the "good cop" who speaks with a friendly demeanor and offers the suspect a cup of coffee: if you look at the radically transparent video of the interview, you'll just see two people having a perfectly friendly conversation about where the suspect was at 8:20 p.m. on the night of the seventeenth and whether they have witnesses to support this alibi. But the reason that conversation is so friendly is because the suspect can predict that the good cop's partner might not be so friendly.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T16:10:08.976Z · LW(p) · GW(p)

I feel like having a trusted leader is a pretty clear tiebreaking decision procedure, no? However, the important part of this model, and of the organizations I've been a part of, is all the OTHER parts that come before that last resort, where people have a clear sense of the values, buy into them, and recognize, individually or as a group, when they're not following them. But in the end, if all of those important bits fail, these organizations still have a hierarchy.

ETA: The decision procedure IS the values. The values are hard to pin down because values in general are hard to pin down; they're taught through examples, rituals, and anecdotes, and the weights on the neural nets in people's heads learn what following them and breaking them look like. Ultimately there are leaders who can help make tough calls and fix adversarial examples and ambiguous options and the like, but the important part of these organizations is mostly how they're set up to train that neural net.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-20T19:06:26.439Z · LW(p) · GW(p)

The decision procedure IS the values [...] taught through examples, rituals, and anecdotes, and the weights on the neural nets in people's heads

That makes sense; I agree that culture (which is very complicated and hard to pin down) is a very important determinant of outcomes in organizations. One thing that's probably important to study (that I wish I understood better) is how subcultures develop over time: as people enter and exit [LW · GW] the organization over time, the values initially trained into the neural net may drift substantially.

comment by Ruby · 2019-07-20T18:13:33.072Z · LW(p) · GW(p)

Edit: I hadn't read Zack's long reply [LW(p) · GW(p)] when making this comment, so it wasn't factored into it. Likely would have said something very slightly different if I had.

--

Entirely fair of you to make the meta-note. Data point from me: I actually found the question/answer pairs quite helpful + think they're reasonable; I probably could have generated answers for a system I set up, but I haven't fully absorbed your proposal enough to do so on your behalf.

Actually, something generally helpful to hear is the "it's highly context specific." That seems true and a good answer. I think I would have tried to specify some overarching principle for all these cases and done so poorly.

Treading carefully, I'll say that I can't speak to the motivations/attitudes behind the questions, and I thought the wording in the other question wasn't very good, but both questions themselves seem good to me.

comment by Matt Goldenberg (mr-hire) · 2019-07-30T21:30:46.508Z · LW(p) · GW(p)

Doing the "strong opinions weakly held" thing can make it hard to know when I've updated, so I want to list a few updates I've made from discussing this post with people on LW and in person:

  • One of the major things I didn't realize about the models I was using in this post is when they do and don't apply. In particular, the models related to radical transparency and applying the values to everyone work better in a private space with strong vetting, and the models related to "balancing needs" work better in a public space with weaker vetting. If I were to write the post again, this is the biggest change I would focus on making.

  • I am now more skeptical of radical transparency and wary of some of its psychological effects, especially in the context of a public space, but even in private organizations with strong vetting.

  • I still think the "people's needs are equal with the product of the space" model is basically correct for a public space, but now think that there are multiple ways that could look. One of the ways it could look is like here, but another way this could be implemented is one in which everyone is "responsible" for their own feelings. That is, people can treat their own needs as equal by leaving the space if they're getting annoyed/having bad feelings. I still think this is likely to lead to the most abrasive/thickest skinned people taking over the space, but I think there are probably some spaces that should operate this way, and definitely this should exist in an archipelago model.

  • I didn't do enough to distinguish between terminal and instrumental values, and now put more weight on an organization making these things clear, as well as for my own explanations of these cultures.

Replies from: Raemon
comment by Raemon · 2019-07-30T22:19:08.953Z · LW(p) · GW(p)

I think a relatively straightforward disagreement is that "people's needs are equal with the space" seems fairly strong, and unnecessarily so. Why 50/50 instead of 75/25 or some such? Especially for spaces that are aiming to be, like, a professional production environment, it does seem to me that if you don't put any effort into making sure people's basic needs are taken care of, your product will suffer (as people find ways to make the product fit their needs, in subtle ways). 50/50 just seems like a pretty strong jump to me.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-31T16:01:29.059Z · LW(p) · GW(p)
Why 50/50 instead of 75/25 or some such?

I think that there are bad psychological traps that happen when you view a person's needs as less important than your own, which transfer over to an organization, as well as when you view your own needs as lesser. That is, I suspect that in a public/non-vetted space, saying "people's needs are half as important as the organization's" will lead to people in power at the organization abusing people with less power, or even to people who feel more senior/in tune with the needs of the organization abusing newer members. It may also lead to people who don't know the importance of self-care burning out.

In a vetted or private space, I think you can talk about people being willing to sacrifice their needs for the greater good, as long as it's done carefully and deliberately with strong checks and balances.

comment by Ruby · 2019-07-30T01:18:57.108Z · LW(p) · GW(p)

I just reread your post and have a couple more comments.

Jill: The problem is twofold. Firstly, people find it annoying to retread the same conversation over and over. More importantly, this topic usually leads to demon conversations [LW · GW], and I fear that continued discussion of the topic at the rate it's currently discussed could lead to a schism [LW · GW]. Both of these outcomes go against our value of being a premier community that attracts the smartest people, as they're actually driving these people away!
Jill: Yes, truthseeking is very important. However, it's clear that just choosing one value as sacred [LW · GW], and not allowing for tradeoffs, can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth on one hand, and the value of making a space where people actually want to have intellectual discussions on the other.

I think it's one thing to say that instrumentally the value of truth is maximized by placing some restrictions on people's ability to express things (e.g. no repeating the same argument again and again, you have to be civil) and a very different thing to treat something like attracting people as a top-level value to be traded off against the value of truth.

My prediction [justification needed] is that if you allow appeals to "but that would be unpopular/drive people away" to be as important as "is it true/cause accurate updates?", you will no longer be a place of truth-seeking, and politics will eat you, something, something. Even allowing questions "will it drive people away?" instrumentally for truth is dangerous, but perhaps safer if ultimately you're judging by the impact on truth.

Sorry, I'll work on explaining why I have that prediction. It seems sometimes once a model has become embedded deep enough, it gets difficult to express succinctly in words.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-31T21:06:42.328Z · LW(p) · GW(p)

So I think the actual terminal goal for something like LW might be "uncover important intellectual truths." It's certainly not "say true things" or the site would be served by simply republishing the thesaurus over and over.

I think if you're judging the impact on that value, then both "freedom of speech" and "not driving people away" begin to trade off against each other in important ways.

Replies from: Ruby
comment by Ruby · 2019-08-01T00:17:50.356Z · LW(p) · GW(p)
I think if you're judging the impact on that value, then both "freedom of speech" and "not driving people away" begin to trade off against each other in important ways.

Yes, that I agree with, and I'm happy with that framing of it.

I suppose the actual terminal goal is a thing that ought to be clarified and agreed upon. The about page [? · GW] has:

To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.

But that's pretty brief, doesn't explicitly mention truth, and doesn't distinguish between "uncover important intellectual truths" and "cause all its members to have maximally accurate maps" or something.

Elsewhere [LW · GW], I've talked at length about the goal of intellectual progress for LessWrong. That's also unclear about what specific tradeoffs are implied when pursuing truths.

Important questions, probably the community should discuss them more. (I thought my posting a draft of the new about page would spark this discussion, but it didn't.)

comment by Lukas_Gloor · 2019-07-20T11:01:46.396Z · LW(p) · GW(p)

I liked this post a lot and loved the additional comment about "Feeling and truth-seeking norms" you wrote here [LW(p) · GW(p)].

As a small data point: there have been at least three instances in the past ~three months where I was explicitly noticing certain norm-promoting behavior in the rationalist community (and Lesswrong in particular) that I found off-putting, and "truth-seeking over everything else" captures it really well.

Treating things as sacred can lead to infectiousness where items in the vicinity of the thing are treated as sacred too, even in cases where the link to it becomes increasingly indirect.

For instance, in the discussion about whether downvote notifications should be shown to users as often as upvote notifications, I saw the sentiment expressed that it would be against the "core of rationality" to ever "hide" (by which people really just meant make less salient) certain types of useful information. Maybe this was just an expression of a visceral sentiment and not something the person would 100% endorse, but just in case it was the latter: It is misguided to think of rationality in that way. "It is rational to do x regardless of how it affects people's quality of life and productivity" should never be an argument. Most people's life goals aren't solely about truth-seeking nor about always mastering unhelpful emotions.

I think I'm on board with locking in some core epistemic virtues related to truth-seeking "as though it were sacred". I think some version of that is going to be best overall for people's life goals. But it's an open question how large that core should be. The cluster of things I associate with "epistemic virtue" is large and fuzzy. I am pretty confident that it's good to treat the core of that cluster as sacred. (For instance, that might include principles like "don't lie, present arguments rather than persuade, engage productively and listen to others, be completely transparent about moderation decisions such as banning policies," etc.) I'm less confident it's good for things that are a bit less central to the cluster. I'm very confident we shouldn't treat some things in the outer layers as sacred (and doing that would kind of trigger me if I'm being honest).

I guess one could object to my stance by asking: Is it possible to treat only the clearest instances of the truth-seeking virtue cluster as sacred without slipping down the slope of losing all the benefits of having something be treated as sacred at all?

I'm not completely sure, but here are some reasons why I think it ought to be possible:

  • People seem to be intuitively good at dealing with fuzzy concepts. If Jill (in the OP) is transparent about conversations she's having like the one shown with John, I am optimistic that the vast majority of the audience could come to conclude that Jill is acting in the realm of what is reasonable, even if they would sometimes draw boundaries in slightly different places.
  • I feel like tradeoffs are often overstated. In cases where truth-seeking norms conflict with other very important things, the best solution is rarely to have a foundational discussion about what's more important and then kick out one of the two things. Rather, I have hope that usually one can come up with some alternative solution (such as moving discussions about veganism to a separate thread, and asking John to link to that separate thread with a short and discreet comment, as opposed to John riding his hobbyhorse on all the threads he wants to "derail").
  • Personally, I think there's just as much to lose from cultivating an overly large cluster of sacredness as from an overly small one. Goodharting "rationality for rationality's sake" and evaporative cooling, where people put off by certain community features start contributing less and less, both seem like very real risks to me.
Replies from: Ruby
comment by Ruby · 2019-07-20T19:09:28.844Z · LW(p) · GW(p)
As a small data point: there have been at least three instances in the past ~three months where I was explicitly noticing certain norm-promoting behavior in the rationalist community (and Lesswrong in particular) that I found off-putting, and "truth-seeking over everything else" captures it really well.

Can you clarify which bit was off-putting? The fact that any norms were being promoted or the specific norms being promoted?

If the former, I think it's actually important that a community debates and determines its norms, and that members enforce those norms. It seems overall healthy to me that norms are being discussed a lot at present (even if not all the discussion happens in accordance with the norms I'd advocate).

"It is rational to do x regardless of how it affects people's quality of life and productivity" should never be an argument.

That doesn't feel true to me. Specific examples don't spring to mind, but I can't endorse that as a categorical statement in the abstract. People's quality of life and productivity (in the short term) aren't sacred enough to me to never be outweighed in any circumstance.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2019-07-20T21:13:18.966Z · LW(p) · GW(p)
Can you clarify which bit was off-putting? The fact that any norms were being promoted or the specific norms being promoted?

Only the latter. And also the vehemence with which these viewpoints seemed to be held and defended. I got the impression that statements of the sort "yay truth as the only sacred value" received strong support; personally I find that off-putting in many contexts.

Edit: The reason I find it off-putting isn't that I disagree with the position as site policy. More that sometimes the appropriate thing in a situation isn't just to respond with some tirade about why it's good to have an unempathetic site policy.

To give some more context: Only the first instance of this had to do with explicit calls for forum policy. This was probably the same example that inspired the dialogue between Jill and John above.

The second example was a comment on the question of making downvotes less salient. While I agree that the idea has drawbacks, I was a bit perplexed that a comment arguing against it got strongly upvoted despite including claims that felt to me like problematic "rationality for rationality's sake": Instead of allowing people to only look at demotivating information at specific times, we declare it antithetical to the "core of rationality" to hide information whether or not it overall makes people accomplish their goals better.

The third instance was an exchange you had about conversational tone and (lack of) charity. Toward the end you said that you didn't like the way you phrased your initial criticism, but my quick impression (and I probably only skimmed the lengthy exchange and also don't remember details) was that I generally thought your points seemed pretty defensible, and the way your conversation partner commented would have also thrown me off. "Tone and degree of charity are very important too" is a perspective I'd like to see represented more among LW users. (But if I'm in the minority, that's fine and I don't object to communities keeping their defining features if the majority feels that they are benefitting.)

That doesn't feel true to me.

Maybe I expressed it poorly, but what I meant was just that rationality is not an end in itself. If I complain that some piece of advice is not working for me because it makes me (all-things-considered, long-term) less productive (towards the things that are most important to me) and less happy, and my conversation partner makes some unqualified statement to the degree of "but it's rational to follow this type of advice", I will start to suspect that they are misunderstanding what rationality is for.

Replies from: Ruby, Zack_M_Davis
comment by Ruby · 2019-07-20T21:49:06.745Z · LW(p) · GW(p)
And also the vehemence with which these viewpoints seemed to be held and defended.

I agree there's something like vehemence and it's made all the conversations unpleasant and stressful. Someone countered to me that if you perceive someone to be threatening the very integrity of your ability to have conversations, it's appropriate to break frame and get up in arms. I'm not convinced it's warranted here, but maybe...

"Tone and degree of charity are very important too" is a perspective I'd like to see represented more among LW users. (But if I'm in the minority, that's fine and I don't object to communities keeping their defining features if the majority feels that they are benefitting.)

I'm not sure about the exact proportion of people's perspectives. There definitely is a cluster of people (myself included) who think "tone", etc. are significant. (This group also might be more averse to getting into online conflicts.) I'm also concerned about the number of people who would counterfactually engage more on LessWrong, except they dislike the conversations they'll end up in currently.

There are a bunch of conversations going on about the topic right now (some in semi-private which might be public soonish). There's support (at least on the LW team) for an Archipelago-type solution where people can opt in to one of 2 or 3 norm sets. (Though that doesn't quite fix site-level things like the karma notifier settings.) One of those spaces should have much more "civility."

Maybe I expressed it poorly, but what I meant was just that rationality is not an end in itself.

Yeah, that's reasonable. I think that many people, while agreeing with that (or something close to it), get very afraid as soon as someone says it, because they fear it's going to be used to justify distinctly not-rational things / damage the whole endeavor of being rational. I have some of this fear myself.

It seems to me that rationality is extremely fragile and vulnerable, such that even though rationality might serve other goals, you have to be very uncompromising with regards to rationality, especially on core things like not hiding information from yourself (I was lightly opposed to the negative karma hiding myself), even if that has apparent costs.

But it's hard. I think there are tricky questions to answer, but the conversation currently can be civil/happen without vehemence.


Replies from: Lukas_Gloor, Lukas_Gloor
comment by Lukas_Gloor · 2019-07-21T09:00:20.138Z · LW(p) · GW(p)
There are a bunch of conversations going on about the topic right now (some in semi-private which might be public soonish).

Cool! And I appreciate the difficulty of the task at hand. :)

When I model these conversations, one failure mode I'm worried about is that the "more civility" position gets lumped together with other things that Lesswrong is probably right to be scared of.

So, the following is to delineate my own views from things I'm not saying:

I could imagine being fine with Bridgewater culture in many (but not all) contexts. I hate that in "today's climate" it is difficult to talk about certain topics. I think it's often the case that people complaining about tone or about not feeling welcome shouldn't expect to have their needs accommodated.

And yet I still find some features of what I perceive to be "rationalist culture" very off-putting.

I don't think I phrased it as well in my first comment, but I can fully get behind what Raemon said elsewhere in this thread:

Some of the language about "holding truth sacred" [...] has come across to me with a tone of single-minded focus that feels like not being willing to put an upper bound on a heart transplant, rather than earnestly asking the question "how do we get the most valuable truthseeking the most effective way?"

So it's not that I'm saying that I'd prefer a culture where truth-seeking is occasionally completely abandoned because of some other consideration. Just that the side that superficially looks more virtuous when it comes to truth-seeking (for instance because they boldly proclaim the importance of not being bothered by tone/tact, downvote notifications, etc.) isn't automatically what's best in the long run.

Edited to add: I admit it's a delicate balance to walk. But sometimes, people are inconsiderate in a way that definitely harms discussions. The principle of charity isn't just a thing in philosophy to make people feel good; there's also some methodological use to it. Likewise with trying to understand that other people have different minds from one's own. There has to be a way to point out inconsiderateness that doesn't get met with a response a la "tact doesn't matter because truth is the only virtue."

comment by Lukas_Gloor · 2019-07-21T09:05:43.789Z · LW(p) · GW(p)
It seems to me that rationality is extremely fragile and vulnerable, such that even though rationality might serve other goals, you have to be very uncompromising with regards to rationality, especially on core things like not hiding information from yourself (I was lightly opposed to the negative karma hiding myself), even if that has apparent costs.

I agree with that. But people can have very different psychologies. Most people are prone to overconfidence, but some people are underconfident and beat themselves up too much over negative feedback. If the site offers an optional feature that is very useful for people of the latter type, it's at least worth considering whether that's an overall improvement. I wasn't even annoyed that people didn't like the feature; it was more about the way in which the person argued. Generally, more display of awareness of people having different psychologies would please me. :)

comment by Zack_M_Davis · 2019-07-21T00:33:56.530Z · LW(p) · GW(p)

I got the impression that statements of the sort "yay truth as the only sacred value" received strong support; personally I find that off-putting in many contexts.

I also find it off-putting in many contexts—perhaps most contexts. But if there's any consequentialist value in having one space in the entire world [LW(p) · GW(p)] where (within the confines of that space) truth is the only sacred value, perhaps lesswrong.com is a Schelling point?

Replies from: Raemon
comment by Raemon · 2019-07-21T02:04:31.751Z · LW(p) · GW(p)

Something that I'm maybe able to put into words now:

The classical example of "sacred values run amok" in my mind is when you ask people how much money a hospital should spend on a heart transplant for a dying child. People try to dodge the question, avoiding trading off a sacred value for a mundane value. Despite the fact that money can buy hospital equipment that saves other lives.

It's plausible that a hospital should hold "keeping people healthy and alive" as an overall sacred value, which it never trades off against. This might forbid some paths where resources are spent on things that weren't necessary to keep people healthy and alive. But it doesn't tell you what the best strategies to go about it are. You're allowed to sacrifice a boy's life to buy hospital equipment. You're even allowed to sacrifice a boy's life to make sure your employees are well rested and not overly stressed. Running a hospital is a marathon, not a sprint.

Over the past couple years, I have updated to "yes, LessWrong should be the place focused on truthseeking." I think I came to believe that right around the time I wrote Tensions in Truthseeking [LW · GW], in the process of writing the paragraph about instrumental sacredness. But that tells us what the question is, not what the answer is.

Some of the language about "holding truth sacred" (things you've said, and others) has come across to me with a tone of single-minded focus that feels like not being willing to put an upper bound on a heart transplant, rather than earnestly asking the question "how do we get the most valuable truthseeking in the most effective way?"

There's also the bit where operationalization matters. "Minimize falsehood" is a different function than "maximize true, good ideas over time" which is a different function than "maximize true, good ideas that are communicated well enough to impact the world."

Replies from: Zack_M_Davis, Zack_M_Davis
comment by Zack_M_Davis · 2019-07-21T04:00:13.399Z · LW(p) · GW(p)

I definitely agree that there could exist perverse situations where there are instrumental tradeoffs to be made in truthseeking of the kind I and others have been suspicious of. For lack of a better term, let me call these "instrumentally epistemic" arguments: claims of the form, "X is true, but the consequences of saying it will actually result in less knowledge on net." I can totally believe that some instrumentally epistemic arguments might hold. There's nothing in my understanding of how the universe works that would prevent that kind of scenario from happening.

But in practice, with humans, I expect that a solid supermajority of real-world attempts to explicitly advocate for norm changes on "instrumentally epistemic" grounds are going to be utterly facile rationalizations [LW · GW] with the (typically unconscious) motivation of justifying cowardice, intellectual dishonesty, ego-protection, &c.

I (somewhat apologetically) made an "instrumentally epistemic" argument in a private email thread recently, and Ben seemed super pissed in his reply (bold italics, incredulous tone, "?!?!?!?!?!" punctuation). But the thing is—even if I might conceivably go on to defend a modified form of my original argument—I can't blame Ben for using a pissed-off tone in his reply. "Instrumentally epistemic" arguments are an enormous red flag—an infrared flag thirty meters wide. Your prior should be that someone making an "instrumentally epistemic" argument can be usefully modeled as trying to undermine your perception of reality and metaphorically slash your tires (even if their conscious phonological loop never contains the explicit sentence "And now I'm going to try to undermine Ray Arnold's perception of reality").

Now, maybe that prior can be overcome for some arguments and some arguers! But the apparent failure of the one making the "instrumentally epistemic" argument to notice the thirty-meter red flag, is another red flag.

I don't think the hospital example does the situation justice. The trade-off of choosing whether to spend money on a heart transplant or nurse salaries doesn't seem analogous to choosing between truth and the occasional allegedly-instrumentally-epistemic lie (like reassuring your interlocutor that you respect them even when you don't, in fact, respect them). Rather, it seems more closely analogous to choice of inquiry area (like whether to study truths about chemistry, or truths about biology), with "minutes of study time" as the resource to be allocated rather than dollars.

If we want a maximally charitable medical analogy for "instrumentally epistemic" lies, I would instead nominate chemotherapy, where we deliberately poison patients in the hope of hurting cancer cells more than healthy cells. Chemotherapy can be good if there's solid evidence that you have a specific type of cancer that responds well to that specific type of chemotherapy. But you should probably check that people aren't just trying to poison you!

Replies from: Raemon, Wei_Dai
comment by Raemon · 2019-07-21T06:26:33.685Z · LW(p) · GW(p)

I'm not advocating lying here, I'm advocating learning the communication skills necessary to a) actually get people to understand your point (which they'll have a harder time with if they're defensive), and b) not wasting dozens of hours unnecessarily (which could be better spent on figuring other things out).

[and to be clear, I also advocate gaining the courage to speak the truth even if your voice trembles, and be willing to fight for it when it's important. Just, those aren't the only skills a rationalist or a rationalist space needs. Listening, communicating clearly, avoiding triggering people's "use language as politics mode", and modeling minds and frames different from your own are key skills too]

comment by Wei Dai (Wei_Dai) · 2019-07-21T06:48:17.984Z · LW(p) · GW(p)

Your prior should be that someone making an “instrumentally epistemic” argument can be usefully modeled as trying to undermine your perception of reality and metaphorically slash your tires (even if their conscious phonological loop never contains the explicit sentence “And now I’m going to try to undermine Ray Arnold’s perception of reality”).

Why do you think this prior is right?

But the apparent failure of the one making the “instrumentally epistemic” argument to notice the thirty-meter red flag, is another red flag.

This seems true only if your prior is so obviously right that one couldn't disagree with it in good faith. I'm not convinced of this.

(As I mentioned I'm sympathetic to both sides of the debate here, but I find myself wanting to question your side more, because it seems to display a lot more certainty (along with associated signals such as exasperation and incredulity), which doesn't seem justified to me.)

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-21T08:20:18.012Z · LW(p) · GW(p)

I find myself wanting to question your side more

Thanks, I appreciate it a lot! You should be questioning my "side" as harshly as you see fit, because if you ask questions I can't satisfactorily answer, then maybe my side is wrong, and I should be informed of this in order to become less wrong.

Why do you think this prior is right?

The mechanism by which saying true things leads to more knowledge is at least straightforward: you present arguments and evidence, and other people evaluate those arguments and evidence using the same general rules of reasoning that they use for everything else, and hopefully they learn stuff.

In order for saying true things to lead to less knowledge, we need to postulate some more complicated failure mode where some side-effect of speech disrupts the ordinary process of learning. I can totally believe that such failure modes exist, and even that they're common. But lately I seem to be seeing a lot of arguments of the form, "Ah, but we need to coordinate in order to create norms that make everyone feel Safe, and only then can we seek truth." And I just ... really have trouble taking this seriously as a good faith argument rather than an attempt to collude to protect everyone's feelings? Like, telling the truth is not a coordination problem? You can just unilaterally tell the truth.

associated signals such as exasperation and incredulity

Hm, I think there's a risk of signal miscalibration here. Just because I feel [LW · GW] exasperated and this emotion leaks into my writing, doesn't necessarily mean implied probabilities close to 1? (Related: Say It Loud [LW · GW]. See also my speculative just-so story about why the incredulity is probably non-normative [LW(p) · GW(p)].)

(It's 1:20 a.m. on Sunday and I've used up my internet quota for the weekend, so it might take me a few days to respond to future comments.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-21T10:44:21.582Z · LW(p) · GW(p)
But lately I seem to be seeing a lot of arguments of the form, "Ah, but we need to coordinate in order to create norms that make everyone feel Safe, and only then can we seek truth." And I just ... really have trouble taking this seriously as a good faith argument rather than an attempt to collude to protect everyone's feelings?

I want to address something that I think is quite important in the context of this post, because I think you're pattern-matching "let's make a space where people's needs are addressed" to the standard social justice safe space, but there are actually 3 types of safe spaces [LW(p) · GW(p)], and the one you're imagining is not related to the ones this post is talking about.

The social justice kind, where nobody is allowed to bring up arguments that make you feel unsafe, is the one you're talking about. "We need to make everyone feel safe and can't seek truth until we do that" is describing an environment where truth seeking is basically impossible. I think private spaces like that are important in a rationalist environment, because some people are fragile and need to heal before they can participate in truth seeking, but such spaces are almost never right for an organization that has the goal of seeking truth.

Then there's the kind that this post is talking about. In this type of environment, it's safe to say "This conversation is making me feel unsafe, so I need to leave". It's also safe to say "It feels like your need for safety is getting in the way of truthseeking" as well as for other people to push back on that if they think that this person's need for safety is so great in this moment that we need to accommodate them for a bit and return to this topic later. I think the majority of public truth-seeking spaces would be served by adopting this type of safety, in lieu of something like Crocker's rules.

Then there's the third type of safe space. In this type of safe space, you can say "This topic is making me feel unsafe" and the expected response is "Awesome, then we're going to keep throwing you into as many situations like this as possible, poke that emotional wound, and help you work through it so you can level up as an individual and we can level up as an organization." In this case, the safety comes from the strict vetting procedures and strong culture that let you know that the people poking you are sincere and skilled, and the people being poked have the emotional strength to deal with it. I think that a good majority of PRIVATE truth seeking spaces should strive to be this third type of safe space.

One of the mistakes I made in this post was to conflate the second and third types of safe spaces; for instance, I posited a public space that also had radical transparency, which is really only a tool you should use in a culture with strong vetting. However, I definitely was not suggesting the first type of safe space, but I get the impression that that's what you keep imagining.

Replies from: Zack_M_Davis, SaidAchmiz
comment by Zack_M_Davis · 2019-07-22T16:01:56.440Z · LW(p) · GW(p)

In this type of environment, it's safe to say "This conversation is making me feel unsafe, so I need to leave".

I mean, in the case of a website that people use in their free time, you don't necessarily even need an excuse: if you don't find a conversation valuable (because it's making you feel unsafe or for any other reason), you can just strong-downvote them and stop replying.

There was a recent case on Less Wrong where one of two reasons I gave for calling for end-of-conversation was that I was feeling "emotionally exhausted" [LW(p) · GW(p)], which seems similar to feeling unsafe. But that was me explaining why I didn't feel like talking anymore. I definitely wasn't saying that my interlocutor should give equal weight to his needs, my needs, and the needs of the forum as a whole. I don't see how anyone is supposed to compute that.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-22T19:43:11.024Z · LW(p) · GW(p)

I don't see how anyone is supposed to compute that.

If your primary metaphor for thought is simple computations or mathematical functions, I can see how this would be very confusing, but I don't think that's actually the native architecture of our brains. Instead our brain is noticing patterns, creating reusable heuristics, and simulating other people using empathy.

When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer. This is the same way that we regularly find solutions to complex negotiations between multiple parties, or plan complex situations with multiple constraints, even though many of those tasks are naively uncomputable. The shared values and culture serve to make sure those heuristics are calibrated similarly between people.

Replies from: Zack_M_Davis, philh
comment by Zack_M_Davis · 2019-07-23T07:21:40.118Z · LW(p) · GW(p)

When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer.

I don't think "reasonable" is the correct word here. You keep assuming away the possibility of conflict. It's easy to find a peaceful answer by simulating other people using empathy, if there's nothing anyone cares about more than not rocking the boat. But what about the least convenient possible [LW · GW] world where one party has Something to Protect [LW · GW] which the other party doesn't think is "reasonable"?

The shared values and culture serve to make sure those heuristics are calibrated similarly between people.

Riiiight, about that. The OP is about robust organizations in general without mentioning any specific organization, but given the three mentions of "truthseeking", I'd like to talk about the special case of this website, and set it in the context of a previous discussion we've had [LW(p) · GW(p)].

I don't think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong. I was there (first comment December 22, 2007) [LW(p) · GW(p)]. If the Less Wrong and "rationalist" brand names are now largely being held by a different culture with different values, I and the forces I represent have an interest in fighting to take them back.

Let me reply to your dialogue with another. To set the scene, I've been drafting a forthcoming post (working title: "Schelling Categories, and Simple Membership Tests") in my nascent Sequence on the cognitive function of categories, which is to refer back to my post "The Univariate Fallacy" [LW · GW]. Let's suppose that by the time I finally get around to publishing "Schelling Categories" (like the Great Teacher, I suffer from writer's molasses [LW · GW]), the Jill from your dialogue has broken out of her simulation, instantiated herself in our universe, and joined the LW2 moderation team.

Jill: Zack, I've had another complaint—separate from the one in May [LW · GW]—about your tendency to steer conversations towards divisive topics, and I'm going to ask you to tone it down a bit when on Frontpage posts [LW · GW].

Zack: What? Why? Wait, sorry—that was a rhetorical question, which I've been told is a violation of cooperative discourse norms. I think I can guess what motivated the complaint. But I want to hear you explain it.

Jill: Well, you mentioned this "univariate fallacy" again, and in the context of some things you've Tweeted, there was some concern that you were actually trying to allude to gender differences, which might make some community members of marginalized genders feel uncomfortable.

Zack: (aside) I guess I'm glad I didn't keep calling it Lewontin's fallacy.

(to Jill) So ... you're asking me to tone down the statistics blogging—on less wrong dot com—because some people who read what I write elsewhere can correctly infer that my motivation for thinking about this particular statistical phenomenon was because I needed it to help me make sense of an area of science I've been horrifiedly [LW(p) · GW(p)] fascinated [? · GW] with [LW(p) · GW(p)] for the last fourteen years, and that scientific question might make some people feel uncomfortable?

Jill: Right. Truthseeking is very important. However, it's clear that just choosing one value as sacred and not allowing for tradeoffs can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth, and the value of making a space that people actually want to have intellectual discussions at. I'm only asking you to give equal weight to your own needs, the needs of the people you're interacting with, and the needs of the organization as a whole.

Zack: (aside) Wow. It's like I'm actually living in Atlas Shrugged, just like Michael Vassar said. (to Jill) No.

Jill: What?

Zack: I said No. As a commenter on lesswrong.com, my duty and my only duty is to try to make—wait, scratch the "try" [LW · GW]—to make contributions that advance the art of human rationality. I consider myself to have a moral responsibility to ignore the emotional needs of other commenters—and symmetrically, I think they have a moral responsibility to ignore mine.

Jill: I'd prefer that you be more charitable and work to steelman what I said.

Zack: If you think I've misunderstood what you've said, I'm happy to listen to you clarify whatever part you think I'm getting wrong. The point of the principle of charity is that people are motivated to strawman their interlocutors; reminding yourself to be "charitable" to others helps to correct for this bias. But to tell others to be charitable to you without giving them feedback about how, specifically, you think they're misinterpreting what you said—that doesn't make any sense; it's like you're just trying to mash an "Agree with me" button. I can't say anything about what your conscious intent might be, but I don't know how to model this behavior as being in good faith—and I feel the same way about this new complaint against me.

Jill: Contextualizing norms [LW · GW] are valid rationality norms!

Zack: If by "contextualizing norms" you simply mean that what a speaker means needs to be partially understood from context, and is more than just what the sentence the speaker said means, then I agree—that's just former Denver Broncos quarterback Brian Griese philosopher of language H. P. Grice's theory of conversational implicature. But when I apply contextualizing norms to itself and look at the context around which "contextualizing norms" was coined, it sure looks like the entire point of the concept is to shut down ideologically inconvenient areas of inquiry. It's certainly understandable. As far as the unwashed masses are concerned, it's probably for the best. But it's not what this website is about—and it's not what I'm about. Not anymore. I am an aspiring epistemic rationalist. I don't negotiate with emotional blackmailers [LW · GW], I don't double-crux with Suicide Rock, and I've got Something to Protect.

Jill: (baffled) What could possibly incentivize you to be so unpragmatic?

Zack: It's not the incentives! [LW · GW] (aside) It's me!

(Curtain.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-23T17:55:58.960Z · LW(p) · GW(p)
I don't think "reasonable" is the correct word here. You keep assuming away the possibility of conflict. It's easy to find a peaceful answer by simulating other people using empathy, if there's nothing anyone cares about more than not rocking the boat. But what about the least convenient possible [LW · GW] world where one party has Something to Protect [LW · GW] which the other party doesn't think is "reasonable"?

Yes, if someone has values that are in fact incompatible with the culture of the organization, they shouldn't be joining that organization. I thought that was clear in my previous statements, but it may in fact have not been. If, every damn time, their own values are at odds with what's best for the organization given its values, that's an incompatible difference. They should either find a different organization, or try the archipelago model. There are such things as irreconcilable value differences.

I don't think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong.

I agree. I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking and chose values that were in fact not optimized for the ultimate goal of creating a community that could solve important problems.

I think it is in fact good to experiment with the norms you're talking about from the original site, but I think many of those norms originally caused the site to decline and people to go elsewhere. Given my current mental models, I predict that a site using those norms will make less intellectual progress than a similar site using my norms, although I expect you to have the opposite intuition. As I stated in the introduction, the goal of this post was simply to make sure that those mental models were in the discourse.

Re your dialogue: The main thing that I got from it was that you think a lot of the arguments in the OP are motivated reasoning and will lead to bad incentives. I also got that this is a subject you care a lot about.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-24T05:44:18.486Z · LW(p) · GW(p)

I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking

Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn't?

I think many of those norms originally caused the site to decline and people to go elsewhere.

I mean, that's one hypothesis. In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we've traditionally avoided [LW · GW]), all the "intellectual energy" followed Scott to SSC.

Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?

I also got that this is a subject you care a lot about.

Yes. Thanks for listening.

Replies from: mr-hire, dxu
comment by Matt Goldenberg (mr-hire) · 2019-07-24T17:08:14.357Z · LW(p) · GW(p)
Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn't?

Here are a few:

  • The importance of creating a culture that develops Kegan 5 leaders that can take over for the current leaders and help meaningfully change the values as the context changes, in a way that doesn't simply cause organizations to value drift along with the current broader culture.
  • How ignoring or not attending to people's needs creates incentives for motivated reasoning, and how to create spaces that get rid of those incentives WITHOUT being hijacked by whoever screams the loudest.
  • The importance of cultural tradition and ritual in embedding concepts, augmenting the teaching, and telling people what concepts are important.

Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?

No, because I think that our models are compatible. My model is about how to attract, retain, and develop people with high potential or skill who are in alignment with your community's values, and your model says that not retaining, attracting, or developing people who matched our community's values and had high writing skill is what caused it to fail.

If you can give a specific model of why LW1 failed to attract, retain, and develop high quality writers, then I think there's a better space for comparison. Perhaps you can also point out some testable predictions that each of our models would make.

comment by dxu · 2019-07-24T18:24:52.143Z · LW(p) · GW(p)

In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we've traditionally avoided), all the "intellectual energy" followed Scott to SSC.

First, I want to state that I agree with this model. However, I also want to note that the SSC comments section tends to have fairly low-quality discussion (in comparison to the OB/LW 1.0 heyday), and I'm not sure why this is; candidate hypotheses include that Scott's explicit politics attracted people with lower epistemic standards, or that the lack of an explicit karma system allowed low-quality discussion to persist (but I don't think OB had an explicit karma system either?).

Overall, I'm unsure as to what kind of norms/technology maintains high-quality discussion (as opposed to just the presence of discussion in general), and it's plausible to me that the two may actually be somewhat mutually exclusive (in the sense that norms/technology designed to promote the volume of high-quality discussion may in fact reduce the volume of discussion in general). It's not clear to me how this tradeoff should be balanced.

Replies from: Zack_M_Davis, Raemon
comment by Zack_M_Davis · 2019-10-04T05:19:45.357Z · LW(p) · GW(p)

in part so that he could write about politics, which we've traditionally avoided

I want to state that I agree with this model.

(I sometimes think that I might be well-positioned to fill the market niche that Scott occupied in 2014, but no longer can due to his being extortable ("As I became more careful in my own writings [...]") in a way that I have been trained not to be. But I would need to learn to write faster.)

comment by Raemon · 2019-07-24T18:37:41.603Z · LW(p) · GW(p)

One thing is that I think early OBNYC and LW just actually had a lot of chaff comments too. I think people disproportionately remember the great parts.

comment by philh · 2019-07-23T15:39:15.909Z · LW(p) · GW(p)

When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer. This is the same way that we regularly find solutions to complex negotiations between multiple parties, or plan complex situations with multiple constraints, even though many of those tasks are naively uncomputable.

I'm not confident that it does. I perhaps expect people doing this using the native architecture to feel like they've found a reasonable answer. But I would expect them to actually be prioritising their own feelings, in most cases. (Though some people will underweight their own feelings. And perhaps some people will get it right.)

Perhaps they will get close enough for the answer to still count as "reasonable"?

If someone attempts to give equal weight to their own needs, the needs of their interlocutor, and the needs of the forum as a whole - how do we know whether they've got a reasonable answer? Does that just have to be left to moderator discretion, or?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-23T17:41:31.510Z · LW(p) · GW(p)
If someone attempts to give equal weight to their own needs, the needs of their interlocutor, and the needs of the forum as a whole - how do we know whether they've got a reasonable answer? Does that just have to be left to moderator discretion, or?

Yes, basically, but if the forum were to take on this direction, the idea would be to have enough case examples/explanations from the moderators about WHY they exercised that discretion to calibrate people's reasonable answers. See also this response to Zack [LW(p) · GW(p)], which goes into more detail about the systems in place to calibrate people's reasonable answers.

comment by Said Achmiz (SaidAchmiz) · 2019-07-21T13:48:39.361Z · LW(p) · GW(p)

I’m rather confused about what you mean by ‘safe’. I thought I knew what the word meant, but the way you (and some others) are using it perplexes me. Could you explain how to interpret this notion of “safety”?

For instance, this part:

Then there’s the kind that this post is talking about. In this type of environment, it’s safe to say “This conversation is making me feel unsafe, so I need to leave”. It’s also safe to say “It feels like your need for safety is getting in the way of truthseeking” as well as for other people to push back on that if they think that this person’s need for safety is so great in this moment that we need to accommodate them for a bit and return to this topic later.

[emphasis mine]

  1. What do the bolded uses of ‘safe’ mean?

  2. Is it the same meaning as the other uses of ‘safe’ in your comment? If not, what other meanings are in use, in which parts of the comment?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-21T16:17:46.052Z · LW(p) · GW(p)

I think it's best defined by its antonym. Unsafety, in this context, would mean anything that triggers a defensive or reactive reaction. Just like how bodily unsafety triggers fear, aggression, etc., there are psychological equivalents that trigger the same reaction.

Safety is when a particular circumstance doesn't trigger that reaction, OR alternatively there could be a meta safety (AKA, having that reaction doesn't itself trigger that reaction, because it's ok).

I think your bolded definitions of safe would actually be served by changing to the word allowed, which for many people correlates quite closely with their feeling of safety.

Replies from: Ruby
comment by Ruby · 2019-07-21T18:24:42.452Z · LW(p) · GW(p)

I think the question of "what is safety?" is a really good one. I'll write up some thoughts here both for this thread, but also to be able to refer to generally (hence a bit more length).

Safety is when a particular circumstance doesn't trigger that reaction,

I'm not a fan of that definition. It's equating "feelings of safety" with "actual safety"

It's defining safety as the absence of the response to perceived unsafety. It feels equivalent to saying "sickness is the thing your immune system fights, and health is the absence of your immune system being triggered to fight something." Which is very approximately true, but breaks down when you consider autoimmune disorders. With those, it's the mistaken perception of attack which is the very problem.

This definition can also put a lot of the power in the hands of those who are having a reaction. If we all agree that our conversation must be safe, and that any individual can declare it unsafe because they are having a reaction, this gives a lot of power to individuals to force attention on the question of safety (and I fear too asymmetrically, with others being blamed for causing the feelings of unsafety).

-----

So here's the alternative positive account of "safety" I would give:

One *is* safe if one is unlikely to be harmed; one *feels* safe if one believes (S1 and/or S2) that one won't be harmed.

This accords with the standard use of safety, e.g. safety goggles, safety precautions, safe neighborhood, etc.

In conversation, one can be "harmed socially", e.g. excluded from the group, "punished" by the group, made to look bad or stupid (with consequences for how they are treated), having someone act hostilely or aggressively toward them (which is a risk of strong negative experience even if they S2 believe it won't come to any physical or lasting harm), etc. (this is not a carefully developed or complete list).

So in conversation and social spaces, safety equates to not being likely to be harmed in the above ways.

Much the same defenses that activate when feeling under physical threat also come online when feeling under social threat (for indeed, both can be very risky to a human). These are physiological states, fight or flight, etc. How adaptive these are in the modern age . . . more than 0, less than 1 . . .? Having these responses indicates that some part of your mind perceives threat, the question being whether it's calibrated.

On the question of space: a space can be perceived to have higher or lower risk of harm to individuals (safety), and also higher or lower assessments of risk of harm related to taking specific actions, e.g. saying certain things.

---

With this definition, we can separately evaluate the questions of:

1) Are people actually safe vs likely to be harmed in various ways?

2) Are the harms people are worried about actually legitimate harms to be worried about?

3) Are people correct to be afraid of being harmed, that is, to feel unsafe?

4) Who should be taking action to cause people to feel safe? How is responsibility distributed between the individual and the group?

5) How much should the group/community worry about a) actual safety, and b) perceived safety?

I'm interested in how different people answer these questions generally and in the context of LessWrong.

Replies from: mr-hire, Ruby, Raemon
comment by Matt Goldenberg (mr-hire) · 2019-07-21T19:44:18.058Z · LW(p) · GW(p)

> I'm not a fan of that definition. It's equating "feelings of safety" with "actual safety"

I agree with this, but it's quite a mouthful to deal with. And I think "feelings of safety" are actually more important for truthseeking and creating a product than actual safety - it's the lack of them that produces defensiveness, motivated reasoning, etc.

I think mr-hire thinks the important success condition is that people feel safe and that it's important to design the space towards this goal, with something of a collective responsibility for the feelings of safety of each individual.

This seems rightish- but off in really important ways that I can't articulate. It's putting the emphasis on the wrong things and "collective responsibility" is not an idea I like at all. I think I'd put my stance as something like "feeling unsafe is a major driver of what people say and do, and good cultures provide space to process and deal with those feelings of unsafety"

This definition can also put a lot of the power in the hands of those who are having a reaction. If we all agree that our conversation must be safe, and that any individual can declare it unsafe because they are having a reaction, this gives a lot of power to individuals to force attention on the question of safety (and I fear too asymmetrically, with others being blamed for causing the feelings of unsafety).

Note that this issue is explicitly addressed in the original dialogue. If someone's feelings are hurting the discourse, they need to take responsibility for that just as much as I need to take responsibility for hurting their feelings. No one is agreeing that all conversations must be safe for all people, but simply that taking into account when people feel unsafe is important.

Replies from: Ruby
comment by Ruby · 2019-07-21T22:49:32.600Z · LW(p) · GW(p)
I agree with this, but it's quite a mouthful to deal with

Yeah, but there's a really big difference! You can't give up that precision.

This seems rightish- but off in really important ways that I can't articulate.

Nods. Also agree that "collective responsibility" is not the most helpful concept to talk about.

Note that this issue is explicitly addressed in the original dialogue. If someone's feelings are hurting the discourse, they need to take responsibility for that just as much as I need to take responsibility for hurting their feelings.

Indeed, the fact that people can say "It feels like your need for safety is getting in the way of truth-seeking" is crucial for it to have any chance.

My expectation, based on related real-life experience though, is that if raising your need for safety is an option, there will be people who abuse this and use it to suck up a lot of time and attention. Technically someone could deny their claim and move on, but this will happen much later than optimal, and in the meantime everyone's attention has been sucked into a great drama. Attempts to say "your safety is disrupting truth-seeking" get accused of being attempts to oppress someone, etc.

This is all imagining how it would go with typical humans. I'm guessing you're imagining better-than-typical people in your org who won't have the same failure mode, so maybe it'll be fine. I'm mostly anchored on how I expect that approach to go if applied to most humans I've known (especially those really into caring about feelings and who'd be likely to sign up for it).

comment by Ruby · 2019-07-21T18:49:52.153Z · LW(p) · GW(p)

I think mr-hire thinks the important success condition is that people feel safe and that it's important to design the space towards this goal, with something of a collective responsibility for the feelings of safety of each individual.

I think Said thinks that individuals bear full responsibility for their feelings of safety, and that it's actively harmful to make these something the group space has to worry about. I think Said might even believe that "social safety" isn't even important for the space, i.e., it's fine if people actually are attacked in social ways, e.g. reputational harm, caused to be punished by the group, made to experience negative feelings due to aggression from others.

----

If I had to choose between my model of mr-hire's preferred space and my model of Said's preferred space, I think I would actually choose Said's. (Though I might not be correctly characterizing either - I wanted to state my prediction before I asked, to test how successfully I'm modeling others' views).

When it comes to truth seeking, I'd rather err on the side of people getting harmed a bit and having to do a bunch of work to "steel" themselves against the "harsh" environment, than give individuals such a powerful tool (the space being responsible for their perception of being harmed) to disrupt and interfere with discourse. I know that's not the intended result, but it seems too ripe for abuse to give feelings and needs the primacy I think is being given in the OP scenario. Something like an unachievable utopia: it sounds good, but I am very doubtful it can be done and also be a truth-seeking space.

[Also Said, I had a dream last night that I met you in Central Park, NY. I don't know what you look or sound like in person, but I enjoyed meeting my dream version of you.]


Replies from: SaidAchmiz, Ruby, Raemon
comment by Said Achmiz (SaidAchmiz) · 2019-07-21T19:49:45.382Z · LW(p) · GW(p)

I think Said thinks that individuals bear full responsibility for their feelings of safety, and that it’s actively harmful to make these something the group space has to worry about.

Well, this is certainly not an egregious strawman by any stretch of the imagination—it’s a reasonable first approximation, really—but I would prefer to be somewhat more precise/nuanced. I would say this:

Individuals bear full responsibility for having their feelings (of safety, yes, and any other relevant propositional attitudes) match the reality as it in fact (objectively/intersubjectively verifiably) presents itself to them.[1]

This, essentially, transforms complaints of “feeling unsafe” into complaints of “being unsafe”; and that is something that we (whoever it is who constitute the “we” in any given case) can consider, and judge. If you’re actually made unsafe by some circumstance, well, maybe we want to do something about that, or prevent it. (Or maybe we don’t, of course. Likely it would depend on the details!) If you’re perfectly safe but you feel unsafe… that’s your own business; deal with it yourself![2]

I think Said might even believe that “social safety” isn’t even important for the space, i.e., it’s fine if people actually are attacked in social ways, e.g. reputational harm, caused to be punished by the group, made to experience negative feelings due to aggression from others.

The relevant questions, again, are about truth and justice. Is it acceptable for people to be reputationally harmed? Well, how is this happening? Certainly libel is not acceptable. Revealing private information about someone (e.g., about their sexual preferences) is not acceptable. Plenty of other things that might cause reputational harm aren’t acceptable. But if you reveal that I, let us say, falsified scientific data (and if this actually is the case), great reputational harm will be done to me; and this is entirely proper. The fact of the harm itself, in other words, is not dispositive.

Similarly for punishment—punishment is proper if it is just, improper otherwise.

As far as “negative feelings” go… “aggression” is a loaded word; what do you mean by it? Suppose that we are having an in-person debate, and you physically assault me; this is “aggression” that would, no doubt, make me “experience negative feelings”; it would also, obviously, be utterly unacceptable behavior. On the other hand, if you did nothing of the sort, but instead made some cutting remark, in which you subtly impugned my intelligence and good taste—is that “aggression”? Or what if you simply said “Said, you’re completely wrong about this, and mistaken in every particular”… aggression? Or not? I might “experience negative feelings” in each of these cases! But the question of whether any of these behaviors are acceptable, or not, does not hinge primarily on whether they could conceivably be described, in some sense, as “aggression”.

In short… when it comes to deciding what is good and what is bad—as with so many other things—precision is everything.

When it comes to truth seeking, I’d rather err on the side of people getting harmed a bit and having to do a bunch of work to “steel” themselves against the “harsh” environment, than give individuals such a powerful tool (the space being responsible for their perception of being harmed) to disrupt and interfere with discourse. I know that’s not the intended result, but it seems too ripe for abuse to give feelings and needs the primacy I think is being given in the OP scenario.

On this, we entirely agree. (And I would add that it is not simply ripe for abuse; it is, in fact, abused, and rampantly, in all cases I have seen.)

[Also Said, I had a dream last night that I met you in Central Park, NY. I don’t know what you look or sound like in person, but I enjoyed meeting my dream version of you.]

Central Park is certainly a pleasant place to meet anyone! I can only hope that, should we ever meet in fact, I live up to the standards set by my dream self…


  1. “reality as it in fact (objectively/intersubjectively verifiably) presents itself to them”: By this somewhat convoluted turn of phrase I mean simply that it’s conceivable for someone to be deceived—made to perceive the facts erroneously, through adversarial action—in which case it would, obviously, be unreasonable to say that it’s entirely the victim’s responsibility to have their feelings about reality match actual reality instead of reality as they are able to discern it; nevertheless this is not license to say “well, this is what the reality feels like to me”, because “what should you reasonably conclude is the reality, given the facts that, as we can all see, are available to you” is something that may be determined and agreed upon, and in no sense is an individual incorrigible on that question. ↩︎

  2. Which, of course, does not mean that “what’s the best technique for dealing with feeling unsafe when you’re actually safe” isn’t a topic that the group might discuss. ↩︎

Replies from: Ruby
comment by Ruby · 2019-07-21T22:35:09.023Z · LW(p) · GW(p)

Thanks for the precise and nuanced write-up, and for not objecting to my crude attempt to characterize your position.

Nothing in your views described here strikes me as gravely mistaken; it seems like a sensible norm set. I suspect that many of our disagreements appear once we attempt to be precise about acceptable and unacceptable behaviors and how they are handled.

I agree that "aggression" is fuzzy and that simply causing negative emotions is certainly not the criteria by which to judge the acceptability of behavior. I used those terms to indicate/gesture rather than define.

I have a draft, Three ways to upset people with your speech, which attempts to differentiate between importantly different cases. I find myself looking forward to your comments on it once I finally publish it. I don't think I would have said that a week ago, and I think that's largely because I feel safer with you, which is in turn the result of greater familiarity (I've never been active in the LW comments as much as in the last few weeks). I'm more calibrated about the significance of your words now, the degree of malice behind them (possibly not that much?), and even the defensible positions underlying them. I've also updated that it's possible to have a pleasant and valuable exchange.

(I do not say these things because I wish to malign you with my prior beliefs about you, but because I think they're useful and relevant information.)

Your warm response to my mentioning dream-meeting you made me feel warm (also learning your Myers Briggs type).

(Okay, now please forgive me for using all the above as part of an "argument"; I mean it all genuinely, but it seems to be a very concrete applied way to discuss topics that have been in the air of late.)

This gets us into some tricky questions I can place in your framework. I think it will take us (+ all the others) a fair bit of conversation to answer, but I'll mention them here now to at least raise them. (Possibly just saying this because I'm away this week and plan not to be online much.)

My updates on you (if correct) suggest that Said's comments largely do not threaten me much and I shouldn't feel negative feelings as a result. Much of this is just how Said talks, and he's still interested in honest debate, not just shutting you down with hostile talk. But my question is about the "reality as it presents itself to me" you mentioned. The reality might be that Said is safe, but was I, given my priors and the evidence available to me before, wrong to be afraid before I gained more information about how to interpret Said?

(Maybe I was, but this is not obvious.)

Is the acceptability of behavior determined by what the recipient reasonably could have believed (as judged by . . . ?) or by the actual reality? There are even three possibilities: 1) what I could have reasonably believed was the significance of your actions, 2) what you could have reasonably believed was the significance of your actions, 3) what the actual significance of your actions was (if this can even be defined sensibly).

It does seem somewhat unfair if the acceptability of your behavior is impacted by what I can reasonably believe. It also seems somewhat unfair that I should experience attack because I reasonably lacked information.

How do we handle all this? I don't definitively know. Judging what is acceptable/reasonable/fair and how all different perspectives add up . . . it's a mess that I don't think gets better even with more attempt at precision. I mostly want to avoid having to judge.

This is in large part what intuitively pushes me towards wanting people to be proactive in avoiding misinterpretations and miscalibrations of others' intent - so we don't have to judge who was at fault. I want to give people enough info that they correctly know that even when I'm harsh, I still want them to feel safe. This mostly applies to people who don't know me well. Once the evidence has accrued and you're calibrated on what things mean, you require little "padding" (this is my version of Combat culture [LW · GW] essentially), but you've got to accrue that evidence and establish the significance of actions with others first.

--

Phew, like everything else, that was longer than expected. I should really start expecting everything to be long.

Curious if this provides any more clarity on my position (even if it's not persuasive) and curious where you disagree with this treatment.

comment by Ruby · 2019-07-21T21:31:35.096Z · LW(p) · GW(p)

Here's the couple of thousand words [LW(p) · GW(p)] that fell out when I attempted to write up my thoughts re safety and community norms.

Replies from: Benquo
comment by Benquo · 2019-07-21T22:15:30.892Z · LW(p) · GW(p)

Link seems broken

Replies from: Ruby
comment by Ruby · 2019-07-21T23:00:57.897Z · LW(p) · GW(p)

Thanks, fixed! It's a little bit repetitive with everything else I've written lately, but maybe I'm getting it clearer with each iteration.

comment by Raemon · 2019-07-21T19:46:17.931Z · LW(p) · GW(p)

I ended up having thoughts here that grew beyond the context (how to think about this feels related to how to think about depleted willpower). Wrote a shortform post. [LW(p) · GW(p)]

My current best guess is that you'd get the best results (measured roughly "useful ideas generated and non-useful ideas pruned") from a collection of norms where

a) people need to take responsibility for their own feelings of fear, and there is help/guidelines on how to go about that if you're not very good at it yet, and

b) people also take responsibility for learning the social and writing skills to avoid particularly obvious failure modes.

i.e. "I'm feeling defensive" shouldn't be a get out of jail free card. (in particular, any request that someone change their communication behavior should come with a corresponding costly signal that you are working on improving your ability to listen while triggered)

And while I think I've believed this for a couple weeks, I don't think I was doing the work to actually embody it, and I think that's been a mistake I've been making.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-21T20:19:51.778Z · LW(p) · GW(p)

I'm holding the frame you wrote on your shortform feed re defensiveness for a bit to see how I feel about it.

comment by Raemon · 2019-07-21T18:29:20.727Z · LW(p) · GW(p)

I’ve been trying to use the phrase ‘feeling of safety’ when it comes up but it has the unfortunate property that ‘aspiring rationalist’ had, where there isn’t a stable equilibrium where people reliably say the whole phrase.

Replies from: Ruby
comment by Ruby · 2019-07-21T22:52:35.197Z · LW(p) · GW(p)

I hereby proclaim that "feelings of safety" be shortened to "fafety." The domain of worrying about fafety is now "fafety concerns."

Problem solved. All in a day's work.

Replies from: Raemon
comment by Raemon · 2019-07-21T23:11:43.371Z · LW(p) · GW(p)

Strong upvote

comment by Zack_M_Davis · 2019-07-21T03:56:23.862Z · LW(p) · GW(p)

Over the past couple years, I have updated to "yes, LessWrong should be the place focused on truthseeking."

Updated to? This wording surprises me, because I'm having trouble forming a hypothesis as to what your earlier position could have been. (I'm afraid I haven't studied your blogging corpus.) What else is this website for, exactly?

Replies from: Raemon
comment by Raemon · 2019-07-21T05:48:13.517Z · LW(p) · GW(p)

Instrumental rationality?

Replies from: Benquo, Zack_M_Davis
comment by Benquo · 2019-07-21T07:18:38.516Z · LW(p) · GW(p)

My steelman of this position is something like, “I favored focusing instrumental rationality because it seemed, well, useful. At the time I figured that this was just a different subject than epistemic rationality, & focusing on it would at worst mean less progress improving the accuracy of our beliefs. But in hindsight this involved allowing epistemics to get worse for the sake of more instrumental success. I’ve now updated towards that having been a bad tradeoff.”

How close is that?

Replies from: Raemon
comment by Raemon · 2019-07-21T07:43:51.995Z · LW(p) · GW(p)

Thanks! I'm not sure this is a place where steelmanning is quite the appropriate tool. My past self was optimized for being my past self, not being right. He was mostly just not trying to solve this question.

But, in this case, I think the best tool is more properly called "modeling people" and maybe "empathy".

Things my past self cared about and/or believed included:

  • All the probability stuff feels too hard to think about, and it doesn't seem like it's really going to help me that much even if I put a lot of work into it. So for me personally, I'm just going to try to "remember base rates" and a few other simple heuristics and call it a day. I was glad other people took it more seriously though
  • Truth seems like one of many important things. What matters is getting things accomplished. (I've never been optimizing against truth, I have just prioritized other things. There's been times where I, say, only put 20 minutes into checking an essay for being right, rather than 2 hours, when I had reason to suspect I might have had motivated reasoning.)
  • I thought (and still think, although less strongly and for more nuanced reasons) that the in person rationality community is unhealthy because it only selects for a few narrow types of person, who are min-maxed in a particular skillset. And I think the in person community is important (both for epistemic and instrumental reasons). It is important to be a community that doesn't actively drive away people who bring other skills to the table.

I still roughly believe all that. The main update is that a) there should be dedicated spaces that focus on truthseeking as their [probably] sacred value, and b) LessWrong should be such a space. (But, as noted in Tensions in Truthseeking, there are still different tradeoffs you can make in your truthseeking frame, and I think it's good to have spaces that have made different min-max tradeoffs to explore those tradeoffs. For example, there might be math-heavy spaces, there might be "blunt communication" spaces that optimize for directness, there might be feelings-heavy spaces that optimize for understanding and owning your internal state)

(I have made a bit of conceptual progress on probability stuff. I probably will never do real Bayesian Wizardry but I think I grok it better now – I can follow some conversations I didn't used to be able to follow, and in some cases I can participate in and uphold norms that help others on their way to learning it better than I.)

There is an interesting thing in all this space I recently re-read while perusing the old critiques of Gleb [EA(p) · GW(p)]. A paraphrase of the linked comment is:

I think a problem with effective altruists is they often end up with a conception that marketing is icky, and that without marketing they are ineffective. I think Gleb might have just said "I'd rather be effective and icky than ineffective and pure." And this is maybe an unhelpful frame that other people are implicitly using. There are ways you can market effectively without actually being icky.

And, while I'm not sure, I think I might have held a frame somewhat like that (I don't have clear memories of biting either particular bullet). But my current position is "effective altruists should hold to a high epistemic standard, even when marketing. But, learn to market well within those constraints."

comment by Zack_M_Davis · 2019-07-21T06:00:53.318Z · LW(p) · GW(p)

Okay, but I thought the idea was that instrumental rationality and epistemic rationality are very closely related. Two sides of the same coin, not two flavors of good thing that sometimes trade off against each other. That agents achieve their goals by means of building accurate models, and using those models to "search out paths through probability" [LW · GW] that steer the world into the desired goal-state. If the models aren't accurate, the instrumental probability-bending magic doesn't work and cannot work.

Replies from: Raemon
comment by Raemon · 2019-07-21T06:13:04.068Z · LW(p) · GW(p)

Okay, but geez man, my past self had different beliefs. What do you want here? What is your incredulity here aiming to accomplish? If you can't simulate the mind of a person who showed up on LessWrong with one set of beliefs and gradually updated their beliefs in a set of directions that are common on the site, I think you should prioritize learning to simulate other minds a bit

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-21T07:01:01.058Z · LW(p) · GW(p)

What is your incredulity here aiming to accomplish?

I genuinely feel incredulous and am trying to express what I'm actually thinking in clear language? I mean, it's also totally going to be the case that the underlying generator of "genuinely felt incredulity" is no doubt going to be some sort of subconscious monkey-politics status move designed by evolution to make myself look good at the expense of others. It's important to notice that! But the mere fact of having noticed that doesn't make the feeling go away, and given that the feeling is there, it's probably going to leak into my writing. I could expend more effort doing a complicated System-2 political calculation that tries to simulate you and strategically compute what words I should say in order to have the desired effect on you. But not only is that more work than saying what I'm actually thinking in clear language, I also expect it to result in worse writing. Use the native architecture!

I mean, if it'll help, we can construct a narrative in which my emotion of incredulity that was designed by evolution to make me look good, actually makes me look bad in local social reality? That's a win-win Pareto improvement: I don't have to mutilate my natural writing style in the name of so-called "cooperative" norms, and you don't have to let my monkey-politics brain get away with "winning" the interaction.

How about this? Incredulity is, definitionally, a failed prediction. The fact that I felt incredulous means that my monkey status instincts are systematically distorting my anticipations about the world, making me delusionally perceive things as "obvious" exactly when they're things that I coincidentally happened to already know, and not because of their actual degree-of-obviousness as operationalized by what fraction of others know them. (And conversely, I'll delusionally perceive things as "nonobvious" exactly when I coincidentally happened to not-know them.)

(Slaps forehead) Hello, Megan! Ten years into this "rationality" business, and here I am still making rookie mistakes like this! How dumb can I get?

I think you should prioritize learning to simulate other minds a bit

Thanks, this is a good suggestion! I probably am below average at avoiding the typical mind fallacy [LW · GW]. You should totally feel superior to me on this account!

Replies from: Raemon
comment by Raemon · 2019-07-21T07:48:06.738Z · LW(p) · GW(p)

I think there are separate worthwhile skills of "focus on learning empathy/modeling and let clear language flow from that", and also "writing skills exist that are separate from epistemics" (such as brevity, which I think actually factors in here a bit).

Something that may not have been clear from my past discussion is that when I say "this could have been written in a way that was less triggering", or something, I'm not (usually) meaning that to be a harsh criticism. Just, the sort of thing that you should say 'ah, that makes sense. I will work on that' for the future.


Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-21T08:10:48.187Z · LW(p) · GW(p)

Just, the sort of thing that you should say 'ah, that makes sense. I will work on that' for the future.

It's actually not clear to me that I should work on that. As a professional hazard of my other career, I'm pretty used to people trying to use "You would be more persuasive if you were nicer" as an attempted silencing tactic; if I just believed everyone who told me that, I would never get anything done.

comment by Slider · 2019-07-20T04:47:39.372Z · LW(p) · GW(p)

If you suppress a signal, it's hard to know how representative it is of the whole population. If people stop expressing when their feelings are hurt, it becomes next to impossible to keep a representative statistic. Why vote when my vote is one among thousands and very unlikely to be a swing vote? If a speech act in fact impacts a big portion of people, but each believes they are a single-person minority, you get more suppression than desired. There's also the general danger of revolving around common denominators.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T06:14:50.065Z · LW(p) · GW(p)

Agree strongly with this. Was this meant to be a reply to that other post on my shortform feed about feelings?

Replies from: Slider
comment by Slider · 2019-07-20T06:19:02.661Z · LW(p) · GW(p)

Mainly thinking about what could go wrong with what John says just before Jill says "Exactly!". It's unclear who prevents the action: it could be the speaker-to-be self-censoring or a moderator intervening.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T06:30:59.641Z · LW(p) · GW(p)

Nobody intervenes in this case. The speaker is allowed to say it makes her uncomfortable, and is allowed to leave if the conversation is too intense for her. If she repeatedly won't allow the conversation to go on, then Jill may have to have a conversation like this one.

If she thinks many people are uncomfortable but not speaking up, there's a bunch of things she can do next.

Replies from: Slider
comment by Slider · 2019-07-24T02:05:37.056Z · LW(p) · GW(p)

I had trouble reading this as it felt like there were a lot of presumptions in conflict.

If you let people bring in random norms, the quality of the discussion will be random. In selecting some norms to be non-random, the site must somehow encourage certain norms and suppress incompatible ones. If people were "smart" they could know the norms beforehand, and then all speech would be flawless from a norm standpoint. But the interesting case is when someone in the discussion fails to effectively employ a norm. After the norm discussion, speech should be norm-compliant, and I think the change from non-compliant to compliant means suppressing the "offending" parts. If no suppression happens, the harmful elements are left alive to do their damage.

Thus if a norm-officer is talking to you, there is some goal for how your speech is supposed to change. It's a good goal to try to reach by selling why the norm is a good idea. However, I think the norm will be, or should be, enforced even if such "selling" fails. At the very least the moderator needs to make a call on whether the conversation is sufficient remedy for the detected danger, or whether the issue should be escalated to less discussive "actual action" levels. If the discussion concluded "I will raise veganism just as often as I have previously", escalation would be the conclusion (or some kind of weird thing where the defiance tries to upset the whole norm structure, with the defiant party banking on the wider community overruling the principles the moderator has put into effect).

Compliant people will not constantly trigger norm violations, but that places a limit on the stances that can effectively be expressed, and I think effective communication about what does and does not cause a schism falls outside that limit; the closest thing you get is a kind of plausible stereotype of people's needs. At worst, talking about what does and does not cause a schism itself causes a schism, which would trigger suppression of schism analysis. Thus what counts as a schism largely depends on the inertia of how it was understood when schism suppression was implemented, and is less responsive to people's actual needs. My thoughts might be too muddy here, but it might culminate in a point where it is a norm violation to even hold certain private values, or to declare them as targets that the system should care about to even a slight degree.

The following model is too shaky for me to fully endorse, but I am banking on the norm of describing how you think rather than what is convincing. Take two starting situations:

  • 1: 1,000 flies, 1 human
  • A: 1,000 humans, 1 fly

and a later distribution of

  • 2: 5,000 flies, 100,000 humans
  • B: 5,000 humans, 100,000 flies

Assume there is a brown substance that is olfactorily attractive to flies and repulsive to humans. A situation that goes from 1 to 2 is likely to end up decorated with the substance, since all humans who joined must be "fly-needs compatible", or at least find the whole deal of community participation to be worth it overall. However, if there is no such inertia, then a situation that goes from A to B means the substance decoration will be introduced (and would be conducive to making the abandonment rate of the humans skyrocket). The scheme of making the minority conform relieves value tension but makes the overall values of the organization drift. What started as a human organization but drifted into a fly organization might no longer be human-aligned. For entities that try to survive, this might not matter that much. But for things that are tools, starting to do a different task can plausibly be counted as a malfunction (although if my hammers randomly morphed into saws I might still find the saws useful, but if I made a highly specific tool and it morphed into a generic one I would probably be pretty upset).
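
Here is a minimal toy sketch of the inertia half of this dynamic. It is an illustration layered on top of the comment, not part of the original model: it assumes new members of each type join in proportion to how well the space's current "decoration" matches their preferences, that the decoration is set by whichever type holds the majority each round, and that the growth rate and the 10% mismatch factor are arbitrary illustrative numbers.

```python
# Toy model: two member types whose growth depends on whether the space's
# "decoration" matches their preferences. Decoration is chosen by simple
# majority each round. All numbers here are illustrative assumptions.

def simulate(flies: int, humans: int, periods: int = 10,
             growth: float = 1.0, mismatch_penalty: float = 0.1):
    """Return (flies, humans) after `periods` rounds of majority-driven growth."""
    for _ in range(periods):
        fly_decor = flies > humans          # the current majority sets the decoration
        fly_rate = growth if fly_decor else growth * mismatch_penalty
        human_rate = growth * mismatch_penalty if fly_decor else growth
        flies += int(flies * fly_rate)
        humans += int(humans * human_rate)
    return flies, humans

print(simulate(1000, 1))   # fly-majority start: stays (and grows as) a fly space
print(simulate(1, 1000))   # human-majority start: stays a human space
```

Under these assumptions the starting majority locks itself in: it sets the decoration, which determines who joins, which entrenches the majority. To get the A-to-B flip described above, you would additionally need an influx of flies that does not depend on the decoration; once they become the majority, the same mechanism turns against the original humans.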

I guess there are two distinct points. If you allow changes based on how your organization serves the general lives of its participants, this will drift the community's purpose away from being highly specialised in one task. And majorities can't be relied on to keep the macro-alignment stable: it's not that we are trading micro-alignment for better macro-alignment, but that there is a real chance macro-alignment will also be compromised (or I am missing the fence that keeps mission-critical alignment operating on different rules than irrelevant alignment).

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-25T21:59:36.494Z · LW(p) · GW(p)
After the norm discussion, speech should be norm-compliant, and I think the change from non-compliant to compliant means suppressing the "offending" parts. If no suppression happens, the harmful elements are left alive to do their damage.

Yes, I think this is true, and I thought it was obvious. Just like in any other community or organization, people who repeatedly fail to follow the norms should be kicked out.

But the important part that separates good organizations from bad is having procedures to teach norms and to find consensus on them.

I guess there aer two distinct points. If you allow changes based on how your organization serves the general lifes of it's participants this will drift the communitys purpose away from being highly specialised in one task.

One of the central underlying points here is that if you ignore the participants' lives or make them taboo, they'll make everything about that ANYWAY, while pretending it's about the norms of the organization. See moral mazes, see corporate America.

In a private organization, the solution to that is to point it out every time it happens, go meta, and create norms that consistently call people out on their private shit getting in the way of the community/organization, while supporting them in working through that private shit.

In a more public space like the one described here, you haven't done the vetting needed to make that model work, so you simply have to acknowledge that it exists, but not let people put those needs above other people's needs or the values of the organization.

Replies from: Slider
comment by Slider · 2019-07-26T01:16:25.083Z · LW(p) · GW(p)

So in general, intervention ought to happen. But I still find it contradictory that "nobody intervenes" in this case. I think the intervention needs to happen in some form, or it's a case of passively letting the problem fester. It is expressed in the passive voice, but actual situations happen when particular humans do stuff. In my mind there are two models with different primary moving actors, and it's not clear which one is to be followed, or whether either is implied by the principle.

When a lot of responsibility is placed on speakers to moderate themselves, those decisions are less accessible to public discussion. There is a conflict. On the one hand, you need to express what your needs are so that others know to balance their needs against yours. But on the other hand, if you appear "needy" you will be perceived as a problem element. If a statistically significant share of people can't say what their needs are, the Maturity principle becomes relatively empty, as the needs of others are not known. It's a relevant case where the "needs of the many people in the community" are known to be inaccurate or outright fictitious (and to get there, there would need to be intervening stages where they are suspected to be so, etc.). With fixed goals, everybody will try to disguise their objective as the accepted objective; but with flexible objectives, there is a new kind of strife about whose private objectives count among the flex goals. And the tension between people whose objectives are just-in and just-out can get pretty intense.

comment by Raemon · 2019-07-19T23:53:55.921Z · LW(p) · GW(p)

There's a lot I like about this post (I was mulling over a similar sort of post, spelling out what collection of norms I think would actually work best for a dedicated truthseeking space).

There are two crystallizations here that I like, which I'd been struggling to articulate: over the past year I've updated harder toward "yes, it's really important for LessWrong's highest value to be truthseeking, and not to make any tradeoffs for other things." But something about that still nagged at me. I grappled with it a bit in Tensions in Truthseeking [LW · GW] but wasn't satisfied with my ability to articulate it.

But:

  • "You can have sacred or meta-level values that don't trade off against non-sacred values... but you still have to figure out how to tradeoff sacred values against each other."

And

  • "You should expect value to be fragile, and picking a single value to optimize is likely to leave your space impoverished"

feel like they do a better job of articulating my fear.

comment by Pattern · 2019-07-19T23:34:46.192Z · LW(p) · GW(p)

What would go horribly wrong?

I'm not sure what effect having your every word recorded and freely available would have on in-person conversations. Or having Crocker's Rules instituted.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-19T23:44:51.688Z · LW(p) · GW(p)

I think both of those would go horribly wrong if instituted in a bad institution that didn't have value-aligned individuals, clear values, rituals to support those values, and decent standards of vetting either before or after the fact. On the other hand, I think radical things like recording all conversations (a Bridgewater practice) and instituting Crocker's Rules in a specific room or context (another Bridgewater practice, called fishbowling) can go well when implemented in an organization that has the values, practices, standards, and people to support them.

I'm curious: what horribly wrong things do you think would happen under these two policies?

Replies from: Raemon
comment by Raemon · 2019-07-20T00:10:29.139Z · LW(p) · GW(p)

This was the one part of this essay I had qualms about. I think it's best articulated by this essay about radical honesty [EA · GW].

For many years, I thought privacy was a fake virtue and only valuable for self-defense. I understood that some people would be unfairly persecuted for their minority sexuality, say, or stigmatized disease status, but I always saw that more as a flaw in society and not a point in favor of privacy. I thought privacy was an important right, but that the ideal was not to need it.
I’m coming back around to privacy for a few reasons, first of which was my several year experiment with radical transparency. For a lot of that time, it seemed to be working. Secrets didn’t pile up and incubate shame, and white lies were no longer at my fingertips. I felt less embarrassed and ashamed over the kind of things everyone has but no one talks about. Not all of it was unhealthy sharing, but I knew I frequently met the definition of oversharing– I just didn’t understand what was wrong with that.
I noticed after several years of this behavior that I wasn’t as in touch with my true feelings. At first I thought my total honesty policy had purged me of a lot of the messy and conflicted feelings I used to have. But there was something suspiciously shallow about these more presentable feelings. I now believe that, because I scrupulously reported almost anything to anyone who asked (or didn’t ask), I conveniently stopped being aware of a lot of my most personal and tender feelings. (Consequently, I 100% believe Trivers’s theory of self-deception.) I had calloused my feelings by overexposing them, and made them my armor. When my real, tender feelings went underground, the “transparency” only got more intense, because I was left free to believe more flattering and shareable things about myself in the gap, conscience completely clear.

I think it's possible that radical honesty is net-positive, but it's also possible that it drives dishonesty even deeper into your psyche where it's even harder to track it.

(Separately, I've heard a lot of things about Bridgewater culture feeling pretty abusive. I've worked in one company run by ex-Bridgewater people who were trying to be "Bridgewater, but kinder and coming from a place of compassion", but that still failed and still felt fairly traumatizing. I think it's much easier to get this wrong than right and getting it wrong is costly)

I do think the underlying goal you're pointing at is good: finding some way to increase honesty and accountability, especially among leaders, is good.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T00:14:15.560Z · LW(p) · GW(p)
I think it's possible that radical honesty is net-positive, but it's also possible that it drives dishonesty even deeper into your psyche where it's even harder to track it.

I think this is possibly very true, and I suspect a "Crocker's Rules"-type policy would have a similar issue. However, I think radical transparency (as opposed to radical honesty, which is having to share everything) is on net pretty good, because it prevents political games.

Re: Bridgewater - I expect most DDOs to feel abusive, wrong, or culty to the wrong people. I've been part of two organizations in my life that I would consider "doing DDOs right", and I think both of them would be horrible and abusive places for certain people.

Replies from: Raemon
comment by Raemon · 2019-07-20T00:15:47.068Z · LW(p) · GW(p)
I think this is possibly very true, and I suspect a "Crocker's Rules"-type policy would have a similar issue. However, I think radical transparency (as opposed to radical honesty, which is having to share everything) is on net pretty good, because it prevents political games.

Is that an update for you or something that seemed true when you wrote the OP, that you feel you accounted for when proposing the radical transparency thing?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T00:23:47.538Z · LW(p) · GW(p)

It's an update for me.

Replies from: Raemon
comment by Raemon · 2019-07-20T00:28:01.052Z · LW(p) · GW(p)

:thumbsup: