
Comments sorted by top scores.

comment by Benquo · 2019-07-19T07:29:56.170Z · LW(p) · GW(p)

It might help for me to also try to make a positive statement of what I think is at stake here.

I agree with the underlying point that side-channel communication around things like approval is real and common, and it's important to be able to track and criticize such communication.

What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed primarily as moves to attack some person or institution, pushing them into the outgroup. Frequently, people suggest to me an "equivalent" wording with a softer tone, which in fact omits important substantive criticisms I mean to make, while claiming to understand what's at issue.

Replies from: Raemon, Ruby
comment by Raemon · 2019-07-19T17:29:43.175Z · LW(p) · GW(p)
> What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed primarily as moves to attack some person or institution, pushing them into the outgroup.

My core claim is: "right now, this isn't possible without a) it being heard by many people as an attack, or b) people having to worry that other people will see it as an attack, even if they don't."

It seems like you see this as something like "there's a precious thing that might be destroyed," while I see it as "a precious thing does not exist and must be created, and the circumstances in which it can exist are fragile." It might have existed in the very early days of LessWrong. But the landscape now is very different than it was then. With billions of dollars available and at stake, what worked then can't be the same thing as what works now.

[In public, that is. In private, things are much easier. It's *also* the case that private channels enable collusion – that was an update I've made over the course of the conversation.]

And, while I believe that you earnestly believe that the quoted paragraph is important, your individual statements often look too optimized-as-an-obfuscated-attack for me to trust that they are not. I assign substantial probability to a lot of your motives being basically traditional coalition politics that you are just in denial about, with a complicated narrative to support them. If that's not true, I realize it must be extremely infuriating to be treated that way. But the nature of the social landscape makes it a bad policy for me to take you at your word in many cases.

Wishing the game didn't exist doesn't make the game not exist. We could all agree to stop playing at once, but a) we'd need to credibly believe we were all actually going to stop playing at once, b) have enforcement mechanisms to make sure it continues not being played, c) have a way to ensure newcomers are also not playing.

And I think that's all possibly achievable, incrementally. I think "how to achieve that" is a super important question. But attempting to not-play the game without putting in that effort looks to me basically like putting a sign that says "cold" on a broken refrigerator and expecting your food to stay fresh.

...

I spent a few minutes trying to generate cruxes. Getting to "real" cruxes here feels fairly hard and will probably take me a couple hours. (I think this conversation is close to the point where I'd really prefer us to each switch to the role of "Pass each other's ITTs, and figure out what would make ourselves change our mind" rather than "figure out how to explain why we're right." This may require more model-sharing and trust-building first, dunno)

But I think the closest proximate crux is: I would trust Ben's world-model a lot more if I saw a lot more discussion of how the game theory plays out over multiple steps. I'm not that confident that my interpretation of the game theory and social landscape are right. But I can't recall any explorations of it, and I think it should be at least 50% of the discussion here.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-19T18:06:31.324Z · LW(p) · GW(p)

> But the landscape now is very different than it was then. With billions of dollars available and at stake, what worked then can’t be the same thing as what works now.

Is this a claim that people are almost certainly going to be protecting their reputations (and also beliefs related to their reputations) in anti-epistemic ways when large amounts of money are at stake, in a way they wouldn't if they were just members of a philosophy club who didn't think much money was at stake?

This claim seems true to me. We might actually have a lot of agreement. And this matches my impression that EA/rationality have shifted from "that which can be destroyed by the truth should be" norms towards "protect feelings" norms as they have grown and want to play nicely with power players while maintaining their own power.

If we agree on this point, the remaining disagreement is likely about the game theory of breaking the bad equilibrium as a small group, as you're saying it is.

(Also, thanks for bringing up money/power considerations where they're relevant; this makes the discussion much less obfuscated and much more likely to reach cruxes.)

[Note, my impression is that the precious thing already exists among a small number of people, who are trying to maintain and grow the precious thing and are running into opposition, and enough such opposition can cause the precious thing to go away, and the precious thing is currently being maintained largely through willingness to forcefully push through opposition. Note also, if the precious thing used to exist (among people with strong stated willingness to maintain it) and now doesn't, that indicates that forces against this precious thing are strong, and have to be opposed to maintain the precious thing.]

Replies from: Raemon
comment by Raemon · 2019-07-19T20:14:21.096Z · LW(p) · GW(p)

An important thing I said earlier in another thread was that I saw roughly two choices for how to do the precious thing, which is something like:

  • If you want to do the precious thing in public (in particular when billions of dollars are at stake, although also when narrative and community buy-in are at stake), it requires a lot of special effort, and is costly
  • You can totally do the precious thing in small private, and it's much easier

And I think a big chunk of the disagreement comes from the fact that small private groups are also a way for powerful groups to collude, be duplicitous, and do other things in that space.

[There's a separate issue, which is that researchers might feel more productive, locally, in private. But failure to write up their ideas publicly means other people can't build on them, which is globally worse. So you also want some pressure on research groups to publish more]

So the problem-framing as I currently see it is:

  • What are the least costly ways you can have plainspoken truth in public, without destroying (or resulting in someone else destroying) the shared public space? Or, what collection of public truthseeking norms outputs the most useful true things per unit of effort in a sustainable fashion?
  • What are ways that we can capture the benefits of private spaces (sometimes recruiting new people into the private spaces), while having systems/norms/counterfactual-threats in place to prevent collusion and duplicity, and encourage more frequent publishing of research?

And the overall strategy I currently expect to work best (but with weak confidence, haven't thought it through) is:

  • Change the default of private conversations from 'stay private forever' to 'by default, start in private, but with an assumption that the conversation will usually go public unless there's a good reason not to, with participants having veto* power if they think it's important not to go public.'
    • An alternate take on "the conversation goes public" is "the participants write up a distillation of the conversation that's more optimized for people to learn what happened, which both participants endorse." (i.e. while I'm fine with all my words in this private thread being shared, I think trying to read the entire conversation might be more confusing than it needs to be. It might not be worth anyone's time to write up a distillation, but if someone felt like it I think that'd be preferable all else being equal)
  • Have this formally counterbalanced by "if people seem to be abusing their veto power for collusion or duplicitous purposes, have counterfactual threats to publicly harm each other's reputation (possibly betraying the veto-process*), which hopefully doesn't happen, but the threat of it happening keeps people honest."

*Importantly, a formal part of the veto system is that if people get angry enough, or decide it's important enough, they can just ignore your veto. If the game is rigged, the correct thing to do is kick over the gameboard. But, everyone has a shared understanding that a gameboard is better than no gameboard, so instead, people are incentivized to not rig the game (or, if the game is currently rigged, work together to de-rig it)

Because everyone agrees that these are the rules of the metagame, betraying the confidence of the private space is seen as a valid action (i.e. if people didn't agree that these were the meta-rules, I'd consider betraying someone's confidence to be a deeply bad sign about a person's trustworthiness. But if people do agree to the meta-rules, then if someone betrays a veto it's a sign that you should maybe be hesitant to collaborate with that person, but not as strong a sign about their overall trustworthiness)

Replies from: jessica.liu.taylor, Raemon
comment by jessicata (jessica.liu.taylor) · 2019-07-20T03:08:27.089Z · LW(p) · GW(p)

I'm first going to summarize what I think you think:

  • $Billions are at stake.
  • People/organizations are giving public narratives about what they're doing, including ones that affect the $billions.
  • People/organizations also have narratives that function for maintaining a well-functioning, cohesive community.
  • People criticize these narratives sometimes. These criticisms have consequences.
  • Consequences include: People feel the need to defend themselves. People might lose funding for themselves or their organization. People might fall out of some "ingroup" that is having the important discussions. People might form coalitions that tear apart the community. The overall trust level in the community, including willingness to take the sensible actions that would be implied by the community narrative, goes down.
  • That doesn't mean criticism of such narratives is always bad. Sometimes, it can be done well.
  • Criticisms are important to make if the criticism is really clear and important (e.g. the criticism of ACE). Then, people can take appropriate action, and it's clear what to do. (See strong and clear evidence)
  • Criticisms are potentially destructive when they don't settle the matter. These can end up reducing cohesion/trust, splitting the community, tarnishing reputations of people who didn't actually do something wrong, etc.
  • These non-matter-settling criticisms can still be important to make. But, they should be done with sensitivity to the political dynamics involved.
  • People making public criticisms willy-nilly would lead to a bunch of bad effects (already mentioned). There are standards for what makes a good criticism, where "it's true/well-argued" is not the only standard. (Other standards are: is it clear, is it empathetic, did the critic try other channels first, etc)
  • It's still important to get to the truth, including truths about adversarial patterns. We should be doing this by thinking about what norms get at these truths with minimum harm caused along the way.

Here's a summary of what I think (written before I summarized what you thought):

  • The fact that $billions are at stake makes reaching the truth in public discussions strictly more important than for a philosophy club. (After all, these public discussions are affecting the background facts that private discussions, including ones that distribute large amounts of money, assume)
  • The fact that $billions are at stake increases the likelihood of obfuscatory action compared to in a philosophy club.
  • The "level one" thing to do is to keep using philosophy club norms, like old-LessWrong. Give reasons for thinking what you think. Don't make appeals to consequences or shut people up for saying inconvenient things; argue at the object level. Don't insult people. If you're too sensitive to hear the truth, that's for the most part your problem, with some exceptions (e.g. some personal insults). Mostly don't argue about whether the other people are biased/adversarial, and instead make good object-level arguments (this could be stated somewhat misleadingly as "assume good faith"). Have public debates, possibly with moderators.
  • A problem with "level one" norms is that they rarely talk about obfuscatory action. "Assume good faith", taken literally, implies obfuscation isn't happening, which is false given the circumstances (including monetary incentives). Philosophy club norms have some security flaws.
  • The "level two" thing to do is to extend philosophy club norms to handle discussion of adversarial action. Courts don't assume good faith; it would be transparently ridiculous to do so.
  • Courts blame and disproportionately punish people. We don't need to do this here; we need the truth to be revealed one way or another. Disproportionate punishments make people really defensive and obfuscatory, understandably. (Law fought fraud, and fraud won)
  • So, "level two" should develop language for talking about obfuscatory/destructive patterns of social action that doesn't disproportionately punish people just for getting caught up in them. (Note, there are some "karmic" consequences for getting caught up in these dynamics, like having the organization be less effective and getting a reputation for being bad at resisting social pressure, but these are very different from the disproportionate punishments typical of the legal system, which punish disproportionately on the assumption that most crime isn't caught)
  • I perceive a backslide from "level one" norms, towards more diplomatic norms [LW · GW], where certain things are considered "rude" to say and are "attacking people", even if they'd be accepted in philosophy club. I think this is about maintaining power illegitimately.

Here are more points that I thought of after summarizing your position:

  • I actually agree that individuals should be using their discernment about how and when to be making criticisms, given the political situation.
  • I worry that saying certain ways of making criticisms are good/bad results in people getting silenced/blamed even when they're saying true things, which is really bad.
  • So I'm tempted to argue that the norms for public discussion should be approximately "that which can be destroyed by the truth should be", with some level of privacy and politeness norms, the kind you'd have in a combination of a philosophy club and a court.
  • That said, there's still a complicated question of "how do you make criticisms well". I think advice on this is important. I think the correct advice usually looks more like advice to whistleblowers than advice for diplomacy.

Note, my opinion of your opinions, and my opinions, are expressed in pretty different ontologies. What are the cruxes?

Suppose future-me tells me that I'm pretty wrong, and actually I'm going about doing criticisms the wrong way, and advocating bad norms for criticism, relative to you. Here are the explanations I come up with:

  • "Scissor statements" are actually a huge risk. Make sure to prove the thing pretty definitively, or there will be a bunch of community splits that make discussion and cooperation harder. Yes, this means people are getting deceived in the meantime, and you can't stop that without causing worse bad consequences. Yes, this means group epistemology is really bad (resembling mob behavior), but you should try upgrading that a different way.
  • You're using language that implies court norms, but courts disproportionately punish people. This language is going to increase obfuscatory behavior way more than it's worth, and possibly result in disproportionate punishments. You should try really, really hard to develop different language. (Yes, this means some sacrifice in how clear things can be and how much momentum your reform movement can sustain)
  • People saying critical things about each other in public (including not-very-blamey things like "I think there's a distortionary dynamic you're getting caught up in") looks really bad in a way that deterministically makes powerful people, including just about everyone with money, stop listening to you or giving you money. Even if you get a true discourse going, the community's reputation will be tarnished by the justice process that led to that, in a way that locks the community out of power indefinitely. That's probably not worth it, you should try another approach that lets people save face.
  • Actually, you don't need to be doing public writing/criticism very much at all, people are perfectly willing to listen to you in private, you just have to use this strategy that you're not already using.

These are all pretty cruxy; none of them seem likely (though they're all plausible), and if I were convinced of any of them, I'd change my other beliefs and my overall approach.

There are a lot of subtleties here. I'm up for having in-person conversations if you think that would help (recorded / written up or not).

Replies from: Raemon, Ruby
comment by Raemon · 2019-07-20T04:59:20.074Z · LW(p) · GW(p)

This is an awesome comment on many dimensions, thanks. I both agree with your summary of my position, and I think your cruxes are pretty similar to my cruxes.

There are a few additional considerations of mine which I'll list, followed by attempting to tease out some deeper cruxes of mine about "what facts would have to be true for me to want to backpropagate the level of fear it seems like you feel into my aesthetic judgment." [This is a particular metaframe I'm currently exploring]

[Edit: turned out to be more than a few straightforward assumptions, and I haven't gotten to the aesthetic or ontology cruxes yet]

Additional considerations from my own beliefs:

  • I define clarity in terms of what gets understood, rather than what gets said. So, using words with non-standard connotations, without doing a lot of up-front work to redefine your terms, seems to me to be reducing clarity, and/or mixing clarity, rather than improving it.
    • I think it's especially worthwhile to develop non-court language, for public discourse, if your intent is not to be punitive – repurposing court language for non-punitive action is particularly confusing. The first definition for "fraud" that comes up on Google is "wrongful or criminal deception intended to result in financial or personal gain". The connotation I associate it with is "the kind of lying you pay fines or go to jail for or get identified as a criminal for".
  • By default, language-processing is a mixture of truthseeking and politicking. The more political a conversation feels, the harder it will be for people to remain in truthseeking mode. I see the primary goal of a rationalist/truthseeking space to be to ensure people remain in truthseeking mode. I don't think this is completely necessary but I do think it makes the space much more effective (in terms of time spent getting points across).
  • I think it's very important for language re: how-to-do-politics-while-truthseeking to be created separately from any live politics – otherwise, one of the first things that'll happen is the language getting coopted and distorted by the political process. People are right/just to fear you developing political language if you appear to be actively engaged in a live political conflict.
  • Fact that is (quite plausibly) my true rejection – Highly tense conversations in which I get defensive are among the most stressful things I experience, and they cripple my ability to sleep well while they're going on. This is a high enough cost that if I had to bear it all the time, I would probably just tune such conversations out.
    • This is a selfish perspective, and I should perhaps be quite suspicious of the rest of my arguments in light of it. But it's not obviously wrong to me in the first place – having stressful weeks of sleep wrecked is really bad. When I imagine a world where people are criticizing me all the time [in particular when they're misunderstanding my frame, see below about deep model differences], it's not at all obvious that the net benefit I or the community gets from people getting to express their criticism more easily outweighs the cost in productivity (which would, among other things, be spent on other truthseeking pursuits). When I imagine this multiplied across all orgs, it's not very surprising or unreasonable-seeming for people to have learned to tune out criticism.
  • Single Most Important Belief that I endorse – I think trying to develop a language for truthseeking-politics (or politics-adjacent stuff) could potentially permanently destroy the ability for a given space to do politics sanely. It's possible to do it right, but also very easy to fuck up, and instead of properly transmitting truthseeking-into-politics, politics backpropagates into truthseeking, causing people to view truthseeking norms as a political weapon. I think this is basically what happened with the American Right Wing and their view of science (and I think things like the March for Science are harmful because they exacerbate Science as Politics).
    • In the same way that it's bad to tell a lie, to accomplish some locally good thing (because the damage you do to the ecosystem is far worse than whatever locally good thing you accomplished), I think it is bad to try to invent truthseeking-politics-on-the-fly without explaining well what you are doing while also making claims that people are (rightly) worried will cost them millions of dollars. Whatever local truth you're outputting is much less valuable than the risks you are playing with re: the public commons of "ability to ever discuss politics sanely."
    • I really wish we had developed good tools to discuss politics sanely before we got access to billions of dollars. That was an understandable mistake (I didn't think about it until just this second), but it probably cost us deeply. Given that we didn't, I think creating good norms requires much more costly signaling of good faith (on everyone's part) than it might have needed. [this paragraph is all weak confidence since I just thought of it but feels pretty true to me]
  • People have deep models, in which certain things seem obvious to them that are not obvious to others. I think I drastically disagree with you about what your prior should be that "Bob has a non-motivated deep model (or, not any more motivated than average) that you don't understand", rather than "Bob's opinion or his model is different/frightening because he is motivated, deceptive and/or non-truth-tracking."
    • My impression is that everyone with a deep, weird model that I've encountered was overly biased in favor of their deep model (including you and Ben), but this seems sufficiently explained by "when you focus all your attention on one particular facet of reality, that facet looms much larger in your thinking, and other facets loom less large", with some amount of "their personality or circumstance biased them towards their model" (but, not to a degree that seems particularly weird or alarming).
      • Seeing "true reality" involves learning lots of deep models into narrow domains and then letting them settle.
    • [For context/frame, remember that it took Eliezer 2 years of blogging every day to get everyone up to speed on how to think in his frame. That's roughly the order-of-magnitude of effort that seems like you should expect to expend to explain a counterintuitive worldview to people]
    • In particular, a lot of the things that seem alarming to you (like GiveWell's use of numbers that seem wrong) are pretty well (but not completely) explained by "it's actually very counterintuitive to have the opinions you do about what reasonable numbers are." I have updated more towards your view on the matter, but a) it took me a couple years, b) it still doesn't seem very obvious to me. Drowning-Children-are-Rare is a plausible hypothesis but doesn't seem so overdetermined that anyone who thinks otherwise must be deeply motivated or deceptive.
    • I'm not saying this applies across the board. I can think of several people in EA or rationalist space who seem motivated in important ways. My sense of deep models specifically comes from the combination of "the deep model is presented to me when I inquire about it, and makes sense", and "they have given enough costly signals of trustworthiness that I'm willing to give them the benefit of the doubt."
  • I have updated over the past couple years on how bad "PR management" and diplomacy are for your ability to think, and I appreciate the cost a bit more, but it still seems less than the penalties you get for truthseeking when people feel unsafe.
  • I have (low confidence) models that seem fairly different from Ben (and I assume your) model of what exactly early LessWrong was like, and what happened to it. This is complicated and I think beyond scope for this comment.
  • Unknown Unknowns, and model-uncertainty. I'm not actually that worried about scissor-attacks, and I'm not sure how confident I am about many of the previous models. But they are all worrisome enough that I think caution is warranted.

"Regular" Cruxes

Many of the above bullet-points are cruxy and suggest natural crux-reframes. I'm going to go into some detail for a few:

  • I could imagine learning that my priors on "deep model divergence" vs "nope, they're just really deceptive" are wrong. I don't actually have all that many data points to have longterm confidence here. It's just that so far, most of the smoking guns that have been presented to me didn't seem very definitive.
    • The concrete observations that would shift this are "at least one of the people that I have trusted turns out to have a smoking gun that makes me think their deep model was highly motivated" [I will try to think privately about what concrete examples of this might be, to avoid a thing where I confabulate justifications in realtime.]
  • It might be a lot easier than I think to create a public truthseeking space that remains sane in the face of money and politics. Relatedly, I might be overly worried about the risk of destroying longterm ability to talk-about-politics-sanely.
    • If I saw an existing community that operated on a public forum and onboarded new people all the time, which had the norms you are advocating, and interviewing various people involved seemed to suggest it was working sanely, I'd update. I'm not sure if there are easier bits of evidence to find.
  • The costs that come from diplomacy might be higher than the costs of defensiveness.
    • Habryka has described experiences where diplomacy/PR-concerns seemed bad-for-his-soul in various ways. [not 100% sure this is quite the right characterization but seems about right]. I think so far I haven't really been "playing on hard mode" in this domain, and I think there's a decent chance that I will be over the next few years. I could imagine updating about how badly diplomacy cripples thought after having that experience, and for it to turn out to be greater than defensiveness.
  • I might be the only person that suffers from sleep loss or other stress-side-effects as badly as I do.

These were the easier ones. I'm trying to think through the "ontology doublecrux" thing and think about what sorts of things would change my ontology. That may be another while.

Replies from: Raemon
comment by Raemon · 2019-07-20T05:01:26.271Z · LW(p) · GW(p)

(And yes I'd be interested in talking more in person if you're up for it)

comment by Ruby · 2019-07-20T03:25:35.677Z · LW(p) · GW(p)

This comment was very helpful, I'll be spending some time thinking over this. Thanks.

comment by Raemon · 2019-07-19T20:17:38.863Z · LW(p) · GW(p)

This is all confounded by people having very different deep models and worldviews that look alien and scary to each other, which is hard to distinguish from duplicity and/or harmfulness.

comment by Ruby · 2019-07-19T22:45:38.424Z · LW(p) · GW(p)
> What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction.

It feels important to me to protect this as well. I haven't thought about this topic in depth though; I might fall into the camp of people who think there are "equivalent" tones which are better but which you think are not equivalent. It's hard to say in the abstract.

Replies from: Benquo
comment by Benquo · 2019-07-20T00:18:42.147Z · LW(p) · GW(p)

Interestingly the readiest example I have at hand comes from Zack Davis. Over email, he suggested four sample edits to Drowning Children are Rare, claiming that this would say approximately the same thing with a much gentler tone. He suggested changing this:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives. Either scenario clearly implies that these estimates are severely distorted [...]

To this:

Either charities like the Gates Foundation and Good Ventures are accumulating funds that could be used to prevent millions of deaths, or the low cost-per-life-saved numbers are significantly overestimated. My former employer GiveWell in particular is notable here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried about "crowding out" other donors. Either scenario clearly implies that these estimates are systematically mistaken [...]

Some of these changes seemed fine to me (and subsequent edits reflect this), but one of them really does leave out quite a lot, and that kind of suggestion seems pretty typical of the kind of pressure I'm perceiving. I wonder if you can tell which one I mean and how you'd characterize the difference. If not, I'm happy to try explaining, but I figure I should at least check whether the inferential gap here is smaller than I thought.

Replies from: Raemon, Ruby
comment by Raemon · 2019-07-20T01:57:36.768Z · LW(p) · GW(p)

Registering a prediction that your objection was to shifting "problem here" to "notable here"

comment by Ruby · 2019-07-20T01:53:21.453Z · LW(p) · GW(p)

This example is really helpful for understanding your concerns, thanks.

I agree, those are meaningfully and significantly different. Let's see if I'm perceiving what you are:

Disclaimer: I didn't read Drowning Children, but I do know of your concern that EA is self-recommending, with organizations collecting resources and doing nothing with them.

1. Hoarding has the meaning of holding on to something for no other reason than holding on to it. To say they are hoarding money is to say they're holding onto the money just because they want to control it.

    • Accumulating funds could be innocuous. It makes me think of someone "saving up" before a planned expenditure, e.g. CFAR "accumulated funds" so they could buy a venue.

2. Exaggeration has the meaning to me of misstating the degree of something for personal benefit, and in this context definitely connotes it being an intentional/motivated misstatement.

    • Overestimated is a plausibly honest mistake one could make. Estimating can be hard-- of course it's suspicious when someone's estimates systematically deviate in a direction advantageous to them.

3. Notable vs Problem here. Since you're talking about an overall problem, each version still implies that you think Givewell is guilty of the behavior described, one just being slightly more direct.

4. Unfair way to save lives - if this is really what they said, then that's an outlandish statement. If people are dying, why are you worried about fairness?? What does it even mean for a way of saving lives to be fair/unfair? Unfair to who? The people dying?

    • "Crowding out" other donors - I'm not sure what this means exactly. If the end result is those charities get fully funded and the people get saved and maybe even more money gets donated than otherwise, it could plausibly be a good reason.

5. Severely distorted vs systematically mistaken - the first has more overt connotations of an agent intentionally changing the estimates, while it's possible something could be systematic without being so deliberate. This is similar to exaggerate vs overestimate. They feel almost, though not quite, equivalent in terms of the accusation made.

In terms of significant meaning-altering, I think it goes 1, 4, 2, 5, 3.

I'm guessing the one you're most concerned about is 1, but maybe 4.

Replies from: Benquo, Ruby
comment by Benquo · 2019-08-14T22:11:44.583Z · LW(p) · GW(p)

You got it right, 1 was the suggested change I was most disappointed by, as the weakening of rhetorical force also took away a substantive claim that I actually meant to be making: that GiveWell wasn't actually doing utilitarian-consequentialist reasoning about opportunity cost, but was instead displaying a sort of stereotyped accumulation behavior. (I began Effective Altruism is Self-Recommending with a cute story about a toddler to try to gesture at a similar "this is stereotyped behavior, not necessarily a clever conscious scheme," but made other choices that interfered with that.)

4 turned out to be important too, since (as I later added a quote and link referencing) "unfairness" literally was a stated motivation for GiveWell - but Zack didn't know that at the time, and the draft didn't make that clear, so it was reasonable to suggest the change.

The other changes basically seemed innocuous.

comment by Ruby · 2019-07-20T02:00:46.427Z · LW(p) · GW(p)

For the record, I would advocate against any enforced norm that says you couldn't use the first version. I would argue against anyone who thought you shouldn't be able to say either of those in some public forum generally or on LessWrong specifically. I would update negatively on GiveWell or anyone else who tried to complain about your statements because you used the first version and not something like the second.

I understand the fear, if not terror, at the idea of someone claiming you shouldn't be able to do that. I expect I'd feel some of that myself if I thought someone was advocating it.


I can also see how some of my statements, especially in conversation with Zack, might have implied I had a different position here. I believe I do have an underlying coherent and consistent frame which discriminates between the cases and explains my different reactions, but I suspect it will take time and care to convey successfully.

I do think we (you + others in this convo + me) have some real disagreements though. I can say that I want to defend your ability to make those statements, but you might see some of my other positions as more dangerous to your ability to speak here than you think I realize/acknowledge. That could be, and I want to understand why things I'm worried about might be outweighed by these other things.

Replies from: Benquo
comment by Benquo · 2019-08-14T22:21:21.443Z · LW(p) · GW(p)

I think my basic worry is that if there's not an active culture-setting drive against concern-trolling, then participating on this site will mean constant social pressure against this sort of thing. That means that if I try to do things like empathize with likely readers, take into account feedback, etc., I'll either gradually become less clear in the direction this kind of concern trolling wants, or oppositionally pick fights to counteract that, or stop paying attention to LessWrong, or put up crude mental defenses that make me a little more obtuse in the direction of Said. Or some combination of those. I don't think any of those are great options.

No one here since Eliezer seems to have had both the social power and the willingness to impose new - not quite standards, but social incentive gradients. The mod team has the power, I think.

Thanks for clarifying that you're firmly in favor of at least tolerating this kind of speech. That is somewhat reassuring. But the culture is also determined by which things the mods are willing to ban for being wrong for the culture, and the implicit, connotative messages in the way you talk about conflicts as they come up. The generator of this kind of behavior is what I'm trying to have an argument with, as it seems to me to be basically embracing the drift towards pressure against the kind of clarity-creation that creates discomfort for people connected to conventional power and money. I recognize that asking for an active push in the other direction is a difficult request, but LessWrong's mission is a difficult mission!

Replies from: Ruby
comment by Ruby · 2019-08-15T03:57:45.203Z · LW(p) · GW(p)

^ acknowledged, though I am curious what specific behaviors you have in mind by concern-trolling and whether you can point to any examples on LessWrong.

Reflecting on the conversations in thread, I'm thinking/remembering that my attention and your (plus others) attention were on different things: if I'm understanding correctly, most of your attention has been on discussions with a political element (money and power) [1], yet I have been focused on pretty much (in my mind) apolitical discussions which have little to do with money or power.

I would venture (though I am not sure) that the norms and moderation requirements/desiderata for those contexts are different and can be dealt with differently. That is, when someone makes a fact post about exercise or productivity, or someone writes about something to do with their personal psychology, or even someone is conjecturing about society in general -- these cases are all very different from when bad behavior is being pointed out, e.g. in Drowning Children.

I haven't thought much about the latter case; it feels like such posts, while important, are an extreme minority on LessWrong. One in a hundred. The other ninety-nine are not very political at all, unless raw AI safety technical stuff is actually political. I feel much less concerned that there are social pressures pushing to censor views on those topics. I am more concerned that people overall have productive conversations they find on net enjoyable and worthwhile, and this leads me to want to state that it is, all else equal, virtuous to be more "pleasant and considerate" in one's discussions; and all else equal, one ought to invest to keep the tone of discussions collaborative/cooperative/not-at-war, etc.

And the question is: maybe I can't actually think about these putatively "apolitical" discussions separately from discussions of more political significance. Maybe whatever norms/virtues we set for the former will dictate how conversations about the latter are allowed to proceed. We have to think about the policies for all types of discussions all at once. I could imagine that being true, though it's not clear to me that it definitely is.

I'm curious what you think.

[1] At one point in the thread you said I'd missed the most important case, and I think this was relative to your focus.

comment by habryka (habryka4) · 2019-07-19T04:06:44.942Z · LW(p) · GW(p)

Lots of meta-level thoughts. I again apologize for not participating on the object-level. My life currently feels extremely busy and I've been stressing out about a bunch of other stuff that has prevented me from engaging with a lot of this.

Because the opinions of people are the best pointer I currently have, without spending a lot of time writing things myself, here is roughly where I stand in terms of what I agree and disagree with. I would usually prefer to just explain my own position, but because time is short and I am the closest to a central decision-maker that LessWrong has, it seems particularly important for people to have a model of where I am coming from.

I think Ruby said a bunch of correct things, in particular in the original proposed norms document. I think it was quite a good choice to make a concrete set of norms, and felt like the discussion on that was pretty useful.

I think this is the core part of Ruby's post that I had the strongest reaction to:

> I applaud the efforts of all those trying to do good, those who donate their time, attention, and money towards making this world better. Unfortunately, good intentions don’t always cause good outcomes. I regret to say that after careful investigation I believe cause X is in fact harmful (as I will elaborate), and that those who have supported it should place their efforts elsewhere. It is important that we pay attention to factors A and B . . .

As both Ray and Zvi pointed out, I think this as the norm would be quite exceptionally bad and I would hate to have to participate in a forum in which this is the norm (and the fact that this kinda feels like the norm on the EA Forum is quite bad and makes me a lot less interested in participating. See this comment [EA(p) · GW(p)] as an example of this norm being enforced, which I strongly downvoted). I think this as the norm can make some sense if you are in a context that's kind of like the UN, where if you accidentally annoy someone war might happen, and the cultural barriers + language barriers prevent almost any form of long-term trust from forming, but it is a very bad choice in most environments, including LW.

I do also think Version 2 is a bad idea, though I do think it's less of a bad idea than the first one, mostly because of the reasons that Benquo outlined in Blatant Lies are the Best Kind! [LW · GW]. The second version feels like it's at least something I can call out as wrong. The first version feels like something where, in order to call it out, I would have to construct a whole ontology of incentives on the fly, which would inevitably cause me to lose the interest of the audience, and feels much harder to deal with.

I think the disagreement between Version 1 and Version 2 is maybe what all the rest of the discussion is about, but I don't really know. I had some sense of people talking past each other in ways that weren't super useful.

I feel like the correct plan at this point in the situation is to make guesses at potential norms and rules we could have and curiously inspect what goes wrong if we decide to follow them. I think we are pretty far from having a concrete set of norms we can implement more widely on LessWrong, but feel optimistic about our ability to make progress on things.

In the spirit of pointing out incentives, I will say straightforwardly that I am somewhat afraid of both Benquo and Jessica in this conversation and so feel a lot of non-endorsed feelings of wanting to appease you and not get into conflict with you. I don't intend to act on those feelings, but it's definitely framing this discussion a bunch for me and I expect will cause me to sometimes say things I don't endorse.

Replies from: Ruby
comment by Ruby · 2019-07-19T05:57:01.847Z · LW(p) · GW(p)

The three Versions have been subject of greater attention than I expected, I suppose examples are fairly memorable and easy to point at. For clarity, their main purpose was not to exemplify norms, but merely to demonstrate the grammatical concept of "core message" vs "extra information" with positively- and negatively-valenced examples. This is why they were so over the top-- I wanted it to be unmistakable to even the most "tone-deaf" reader that the imagined author was saying a lot more than the putative propositional object-level thesis.

By which I mean to say, none of the three Versions encapsulate the norms I currently estimate are best, and in my mind, the discussion hasn't been about arguing for one over the others, e.g. 1 vs 2. After all, they're not norms to be argued for (though they help in describing norms).

However, clarifying my position with reference to the three Versions:

  • Most of the time, on LW, I'd want communication to look like Version 3 where nobody's putting anything much positive/negative in the side channels which isn't also about the core message. Instead thoughts are stated clearly and plainly without decoration.
  • [I edited this one after reflection.] I think that the extremely politics-y Version 1 is not something I want to see much in my ideal world, and not much on my ideal LessWrong. It's not something I'd push people to do. Not like that. That said, some topics are very fraught and I think it's better to discuss them with clear honest signalling of intent that is as rich as Version 1 than not at all or in a way that is highly destructive.
    • I can read Version 1 as being said by someone very politics-y, but also by someone being very compassionate. I could see that approach being key to enable important and productive discussions that couldn't happen otherwise, e.g. where political tensions are already tight.
  • I do think there is a "steel" Version 1 which is commonly virtuous but probably supererogatory. It's not something I think I'd want to enforce in any way, but I might encourage it. By this I mean something like: it is virtuous for an individual to put some effort into appearing non-X when they don't actually mean to be X and being perceived as X is bad. X could be hostile/dismissive/demeaning/judgmental/threatening/etc.
    • I suspect even a much milder version of Version 1 would seem bad to many here. Going forward, I want to dig into what "steel Version 1" would look like, whether it's in fact virtuous, and, even assuming it is, whether it's good to encourage it, plus when even full-blown Version 1 is actually a good idea.
    • [Further edit] Admittedly, I also have a strong prior that it's a mistake most of the time to be hostile/dismissive/demeaning/judgmental/threatening/etc. on LessWrong and would encourage people to update towards it probably not being right to relate that way to others.
Replies from: Benquo
comment by Benquo · 2019-07-19T07:33:17.800Z · LW(p) · GW(p)

I don't think I can honestly accept mere explicit endorsements of a high-level opinion here, because as far as I can tell, such endorsements are often accompanied by behavior, or other endorsements, that seem (to me) to contradict them. I guess that's why the examples have attracted so much attention - I have more of an expectation that they correspond to the intuitions people will make decisions with, and those are what I want to be arguing with.

Having written that, it occurs to me that I too could do a better job giving examples that illustrate the core considerations I'm trying to draw attention to, so I'll make an effort to do that in the future.

comment by Benquo · 2019-07-17T15:15:19.638Z · LW(p) · GW(p)

Overall, the feeling I get from this post is one of being asked to make a move in a game I don't fully understand (or am being lied* to in order to persuade me that my understanding of it is too cynical). I'm asked to believe that I'm not giving up anything important by making this move, but people seem to care a lot about it in a way that casts some suspicion on that claim.

The last time I agreed to a major request that felt like this, I ended up spending an extra year I shouldn't have in a deeply toxic situation.

Here's my impression of what's going on. Most people socialize by playing a zero-sum coalitional game with a blame component. To conceal the adversarial nature of what's going on - to confuse the victims and bystanders - coalition membership isn't explicit or discrete. Instead, approval and disapproval mechanisms are used to single out people for exclusion and expropriation. Getting away with insulting someone is a way of singling them out in this way. Any criticism that challenges someone's social narrative is going to have similar effects in this game, even if it's not intended mainly as a move in the game.

I'm being asked, by someone who cares about their stakes in that game, to make sure to declare that I think the person I'm criticizing should still be in the coalition, rather than an expropriation target. I also see requests that others actually conceal information that's valuable to me in the games I am trying to play, to accommodate this coalitional game. For instance, your request that Zack pretend [LW(p) · GW(p)] he thought you were being more reasonable than he actually thought.

I am actually not trying to play this game. I don't consent to this game. I don't want to give it more power. I don't think the person should be in the coalition because I don't like or trust the coalition and think that the game it's playing is harmful. This is not how I find or keep my friends. I've seen this kind of behavior pervert the course of justice. I find it deeply sketchy to be asked to make a move in this game on LessWrong in a way that keeps the game covert. I fled the Bay Area in part because I got sick and tired of this game.

*At the very least, in the sense that people are dismissing my model because it sounds bad and they don't feel like bad people, without checking whether it's, well, true. The technical term for this might be bullshit but there's no nonawkward passive verb form of that.

Replies from: Ruby
comment by Ruby · 2019-07-18T01:55:07.517Z · LW(p) · GW(p)

I've read your comments. I'm thinking through them. I might not have a response in the next few days, but I have read them.

comment by Benquo · 2019-07-17T14:38:48.259Z · LW(p) · GW(p)

Your treatment of cause 2 seems like it's actually asking people to lie or conceal crucial information. Obviously Version 2 is unhelpful, but the implication that we have to choose between version 1 or 2 when talking about the people involved - that judging people is just about directing positive or negative affect towards them - is misleading. I thought this was supposed to be a rationality forum. Talking about the patterns that generate mistakes seems like a core activity for a rationality forum.

Replies from: Ruby
comment by Ruby · 2019-07-18T01:54:31.005Z · LW(p) · GW(p)
> cause 2

Did you mean Version 2?

Replies from: Benquo
comment by Benquo · 2019-07-18T04:08:03.786Z · LW(p) · GW(p)

I mean the section titled "2. Secondary information perceived in your message is upsetting."

comment by Ruby · 2019-07-15T19:04:00.765Z · LW(p) · GW(p)

Also relevant here (especially to Version 3) is my shortform post on Communal Buckets [LW(p) · GW(p)].

> But I can see a way in which being wrong/making mistakes (and being called out for this) is upsetting even if you personally aren't making a bucket error. The issue is that you might fear that other people have the two variables collapsed into one. Even if you might realize that making a mistake doesn't inherently make you a bad person, you're afraid that other people are now going to think you are a bad person because they are making that bucket error.
> The issue isn't your own buckets, it's that you have a model of the shared "communal buckets" and how other people are going to interpret whatever just occurred. What if the community/social reality only has a single bucket here?

Even if you have no intention to attack someone, have complete respect for them, just want to share what you think is true, etc., your public criticism could be correctly perceived as damaging to them. For reasons like that, I think it's well worth it to expend a little effort dispelling the likely incorrect* interpretation of the significance of your speech acts.

*Something I haven't touched on is self-deception/motivated cognition behind speech acts, where criticisms are actually tinged with political motives even if the speaker doesn't recognize them. Putting in effort to ensure you're not sending any such signals ("intentionally" or accidentally) is something of a guard against your subconscious, elephant-in-brain motives to criticize-as-attack under the guise of criticize-as-helpful.

More succinctly: you might actually be in Version 2 (actually "intentionally" sending hostile signals) even when you believe you aren't, and putting in some effort to be nice/considerate/reconciliatory is a way to protect against that.

comment by Ruby · 2019-07-15T18:31:57.630Z · LW(p) · GW(p)

mr-hire has a recent related post [LW(p) · GW(p)] on the topic here. He brings different arguments from mine for approximately the same conclusion. (I haven't thought about his S1/motivated reasoning line of reasoning enough to fully back it, but it seems quite plausible.)


comment by Ruby · 2019-07-15T18:16:12.842Z · LW(p) · GW(p)

Comments from the Google Doc.

Zvi:

Version 3 is usually best. I shouldn't have to be raising the status of people just because I have information that lowers it, and am sharing that information. Version 1 is pretty sickening to me. If I said that, I'd probably be lying. I have to pay lip service to everyone involved being good people because they're doing nominally good-labeled things, in order to point out those things aren't actually good? Why?

Ruby:

Take Jim's desire to speak out against the harms of advocating (poor) vegan diets. I'd like him to be able to say that and I'd like it to go well, with people either changing their minds or changing their behavior.
I think the default is that people feel attacked and it all goes downhill from there. This is not good, this is not how I want it to be, but it seems the pathway towards "people can say critical things about each other and that's fine" probably has to pass through "people are critical but try hard to show that they really don't mean to attack."
I definitely don't want you to lie. I just hope there's something you could truthfully say that shows that you're not looking to cast these people out or damage them. If you are (and others are towards you), then either no one can ever speak criticism (current situation, mostly) or you get lots of political conflicts. Neither of those seems to get towards the maximum number of people figuring out the maximum number of true things.

Ray:

Partly in response to Zvi's comment elsethread:
1) I think Version 1 as written comes across as very politics-y-in-a-bad-way. But this is mostly a fact about the current simulacrum 3/4 world of "what moves are acceptable."
2) I think it's very important people not be obligated (or even nudged) to write "I applaud X" if they don't (in their heart-of-hearts) applaud X.
But, separate from that, I think people who don't appreciate X for their effort (in an environment like the rationalsphere) are usually making a mistake, in a similar way to how pacifists who say "we should disband the military" are making a mistake. There is a weird missing mood here.
I think fully articulating my viewpoint here is a blogpost or 10, so probably not going to try in this comment section. But the tl;dr is something like "it's just real damn hard to get anything done. Yes, lots of things turn out to be net negative, but I currently lean towards 'it's still better to err somewhat towards rewarding people who tried to do something real.'"
This is not a claim about what the conversation norms or nudges should be, but it's a claim about what you'd observe in a world where everyone is roughly doing the right thing.
Replies from: Ruby
comment by Ruby · 2019-07-15T18:37:35.330Z · LW(p) · GW(p)
> I shouldn't have to be raising the status of people just because I have information that lowers it, and am sharing that information.

Zvi, what's the nature of this "should"? Where does its power come from? I feel unsure of the normative/meta-ethical framework you're invoking.

Relatedly, what's the overall context and objective for you when you're sharing information which you think lowers other people's status? People are doing something you think is bad, you want to say so. Why? What's the objective/desired outcome? I think it's the answer to these questions which shape how one should speak.

I'm also interested in your response to Ray's comment.

Replies from: Zvi, Zvi, Zvi, Zvi, Zvi
comment by Zvi · 2019-07-15T22:04:52.267Z · LW(p) · GW(p)

(5) Splitting for threading.

Wow, this got longer than I expected. Hopefully it is an opportunity to grok the perspective I'm coming from a lot better, which is why I'm trying a bunch of different approaches. I do hope this helps, and helps you appreciate why a lot of the stuff going on lately has been so worrying to some of us.

Anyway, I still have to give a response to Ray's comment, so here goes.

Agree with his (1) that it comes across as politics-in-a-bad-way, but disagree that this is due to the simulacrum level, except insofar as the simulacrum level causes us to demand sickeningly political statements. I think it's because that answer is sickeningly political! It's saying "First, let me pay tribute to those who assume the title of Doer of Good or Participant in Nonprofit, whose status we can never lower and must only raise. Truly they are the worthy ones among us who always hold the best of intentions. Now, my lords, may I petition the King to notice that your Doers of Good seem to be slaughtering people out there in the name of the faith and kingdom, and perhaps ask politely, in light of the following evidence that they're slaughtering all these people, that you consider having them do less of that?"

I mean, that's not fair. But it's also not all that unfair, either.

(2) we strongly agree.

Pacifists who say "we should disband the military" may or may not be making the mistake of not appreciating the military - they may appreciate it but also think it has big downsides or is no longer needed. And while I currently think the answer is "a lot," I don't know to what extent the military should be appreciated.

As for appreciation of people's efforts, I appreciate the core fact of effort of any kind, towards anything at all, as something we don't have enough of, and which is generally good. But if that effort is an effort towards things I dislike, especially things that are in bad faith, then it would be weird to say I appreciated that particular effort. There are times I very much don't appreciate it. And I think that some major causes and central actions in our sphere are in fact doing harm, and those engaged in them are engaging in them in bad faith and have largely abandoned the founding principles of the sphere. I won't name them in print, but might in conversation.

So I don't think there's a missing mood, exactly. But even if there was, and I did appreciate that, there is something about just about everyone I appreciate, and things about them I don't, and I don't see why I'm reiterating things 'everybody knows' are praiseworthy, as praiseworthy, as a sacred incantation before I am permitted to petition the King with information.

That doesn't mean that I wouldn't reward people who tried to do something real, with good intentions, more often than I would be inclined not to. Original proposal #1 is sickeningly political. Original proposal #2 is also sickeningly political. Original proposal #3 will almost always be better than both of them. That does not preclude it being wise to often do something between #1 and #3 (#1 gives maybe 60% of its space to genuflections, #2 gives maybe 70% of its space to insults, #3 gives 0% to either, and I think my default would be more like 10% to genuflections if I thought intentions were mostly good?).

But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying "I see you trying to do a thing! I think it's harmful and you should stop." and you saying "oops!" should net you points without me having to say "POINTS!"

Replies from: Benquo
comment by Benquo · 2019-07-17T15:18:46.374Z · LW(p) · GW(p)
But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying "I see you trying to do a thing! I think it's harmful and you should stop." and you saying "oops!" should net you points without me having to say "POINTS!"

Huh. I think part of what's bothering me here is that I'm reading requests to award points (on the assumption that otherwise people will assign credit perversely) as declaring intent to punish me if I publicly change my mind in a way that's not savvy to this game, insofar as implying that perverse norms are an unchangeable fait accompli strengthens those norms.

Replies from: Zvi
comment by Zvi · 2019-07-17T17:36:21.589Z · LW(p) · GW(p)

Ah. That's my bad for conflating my mental concept of "POINTS!" (a reference mostly to the former At Midnight show, which I've generalized) with points in the form of Karma points. I think of generic 'points' as the vague mental accounting people do with respect to others by default. When I say I shouldn't have to say 'points' I meant that I shouldn't have to say words, but I certainly also meant I shouldn't have to literally give you actual points!

And yeah, the whole metaphor is already a sign that things are not where we'd like them to be.

Replies from: Benquo
comment by Benquo · 2019-07-17T20:51:20.569Z · LW(p) · GW(p)

I didn't think I was disagreeing with you - I meant to refer to the process of publicly explicitly awarding points to offset the implied reputational damage.

Replies from: Zvi
comment by Zvi · 2019-07-17T21:37:22.702Z · LW(p) · GW(p)

Ah again, thanks for clarifying that.

comment by Zvi · 2019-07-15T20:51:46.858Z · LW(p) · GW(p)

(1) Glad you asked! Appreciate the effort to create clarity.

Let's start off with the recursive explanation, as it were, and then I'll give the straightforward ones.

I say that because I actually do appreciate the effort, and I actually do want to avoid lowering your status for asking, or making you feel punished for asking. It's a great question to be asking if you don't understand, or are unsure if you understand or not, and you want to know. If you're confused about this, and especially if others are as well, it's important to clear it up.

Thus, I choose to expend effort to line these things up the way I want them lined up, in a way that I believe reflects reality and creates good incentives. Because the information that you asked should raise your status, not lower your status. It should cause people, including you, to do a Bayesian update that you are praiseworthy, not blameworthy. Whereas I worry, in context, that you or others would do the opposite if I answered in a way that implied I thought it was a stupid question, or was exasperated by having to answer, and so on.

On the other hand, if I believed that you damn well knew the answer, even unconsciously, and were asking in order to place upon me the burden of proof via creation of a robust ethical framework justifying not caring primarily about people's social reactions rather than creation of clarity, lest I cede that I and others have the moral burden of maintaining the status relations others desire as their primary motivation when sharing information. Or if I thought the point was to point out that I was using "should", which many claim is a word that indicates entitlement or sloppy thinking and an attempt to bully, and thus one should ignore the information content in favor of this error. Or if in general I did not think this question was asked in good faith?

Then I might or might not want to answer the question and give the information, and I might or might not think it worthwhile to point out the mechanisms I was observing behind the question, but I certainly would not want to prevent others from observing your question and its context, and performing a proper Bayesian update on you and what your status and level of blame/praise should be, according to their observations.

(And no, really, I am glad you asked and appreciate the effort, in this case. But: I desire to be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to be glad you asked, and I desire to not be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to not be glad you asked. Let me not become attached to beliefs I may not want. And I desire to tell you true things. Etc. Amen.)


comment by Zvi · 2019-07-15T21:20:28.214Z · LW(p) · GW(p)

(4) Splitting for threading.

Pure answer / summary.

The nature of this "should" is that status evaluations are not why I am sharing the information. Nor are they my responsibility, nor would it be wise to make them my responsibility as the price of sharing information. And given I am sharing true and relevant information, any updates are likely to be accurate.

The meta-ethical framework I'm using is almost always a combination of Timeless Decision Theory and virtue ethics. Since you asked.

I believe it is virtuous, and good decision theory, to share true and relevant information, to try to create clarity. I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and make those doing so worry about being accused of violating such burdens. I do believe it is not virtuous or good decision theory to, while doing so, structure one's information in order to score political points, so don't do that. But it's also not virtuous or good decision theory to carefully always avoid changing the points noted on the scoreboard, regardless of events.

The power of this "should" is that I'm denying the legitimacy of coercing me into doing something in order to maintain someone else's desire for social frame control. If you want to force me to do that in order to tell you true things in a neutral way, the burden is on you to tell me why "should" attaches here, and why doing so would lead to good outcomes, be virtuous and/or be good decision theory.

The reason I want to point out that people are doing something I think is bad? Varies. Usually it is so we can know this and properly react to this information. Perhaps we can convince those people to stop, or deal with the consequences of those actions, or what not. Or the people doing it can know this and perhaps consider whether they should stop. Or we want to update our norms.

But the questions here in that last paragraph seem to imply that I should shape my information sharing primarily based on what I expect the social reaction to my statements to be, rather than that I should share my information in order to improve people's maps and create clarity. That's rhetoric, not discourse, no?

Replies from: Ruby, Ruby
comment by Ruby · 2019-07-16T19:44:53.232Z · LW(p) · GW(p)

(2 out of 2)

Trying to communicate my impression of things:

As an agent I want to say that you are responsible (in a causal sense) for the consequences of your actions, including your speech acts. If you have preferences about the state of the world and care about how your actions shape it, then you ought to care about the consequences of all your actions. You can't argue with the universe and say "it's not fair that my actions caused result X, that shouldn't be my responsibility!"

You might say that there are cases where not caring (in a direct way) about some particular class of actions has better consequences than worrying about them, but I think you have to make an active argument that ignoring something actually is better. You can also move into a social reality where "responsibility" is no longer about causal effects and is instead about culpability. Causally, I may be responsible for you being upset even if we decide that morally/socially I am not responsible for preventing that upsetness or fixing it.

I want to discuss how we should set moral/social responsibility given the actual causal situation in the world. I think I see the conclusions you feel are true, but I feel like I need to fill in the reasoning for why you think this is the virtuous/TDT-appropriate way to assign social responsibility.

So what is the situation?

1a) We humans are not truth-seekers devoid of all other concerns and goals. We are embodied, vulnerable, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps. There are trade-offs here, like how I won't cut off my arm to learn any old true fact.

1b) Speech acts between humans (exceedingly social as we are) have many consequences. Those consequences happen regardless of whether you want or care about them or not. These broader consequences will affect things in general but also our ability to create accurate maps. That's simply unavoidable.

2) Do you have opt-in?

Starting out as an individual you might set out with the goal of improving the accuracy of people's beliefs. How you speak is going to have consequences for them (some under their control, some not). If they never asked you to improve their beliefs, you can't say "those effects aren't my responsibility!"; responsibility here is a social/moral concept that doesn't apply, because they never accepted your system which absolves you of the raw consequences of what you're doing. In the absence of buying into a system, the consequences are all there are. If you care about the state of the world, you need to care about them. You can't coerce the universe (or other people) into behaving how you think is fair.

Of course, you can set up a society which builds a layer on top of the raw consequences of actions and sets who gets to do what in response to them. We can have rules such as "if you damage my car, you have to pay for it". The causal part is that when I hit your car, it gets damaged. The social responsibility part is where we coordinate to enforce you pay for it. We can have another rule saying that if you painted your car with invisible ink and I couldn't see it, then I don't have to pay for the damage of accidentally hitting it.

So what kind of social responsibilities should we set up for our society, e.g. LessWrong? I don't think it's completely obvious which norms/rules/responsibilities will result in the best outcomes (not that we've agreed on exactly which outcomes matter). But I think everything I say here applies even if all you care about is truth and clarity.

I see the intuitive sense of a system where we absolve people of the consequences of saying things which they believe are true and relevant and cause accurate updates. You say what you think is true, thereby contributing to the intellectual commons, and you don't have to worry about the incidental consequences - that'd just get in the way. If I'm part of this society, I know that if I'm upset by something someone says, that's on me to handle (social responsibility), notwithstanding them sharing in the causal responsibility. (Tell me if I'm missing something.)

I think that just won't work very well, especially for LessWrong.

1. You don't have full opt-in. First, we don't have official, site-wide agreement that people are not socially/morally responsible for the non-truth parts of speech. We also don't have any strong initiation procedures that ensure people fully understand this aspect of the culture and knowingly consent to it. Absent that, I think it's fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.

Further, LessWrong is a public website which can be read by anyone - including people who haven't opted into your system saying it's okay to upset, ridicule, accuse them, etc., so long as you're speaking what you think is true. You can claim they're wrong for not doing so (maybe they are), but you can't claim your speech won't have the consequences that it does on them and that they won't react to them. I, personally, with the goals that I have, think I ought to be mindful of these broader effects. I'm fairly consequentialist here.

One could claim that it's correct to cause people to have correct updates notwithstanding other consequences even when they didn't ask for it. To me, that's actually hostile and violent. If I didn't consent to you telling me true things in an upsetting or status lowering way, then it's entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that's not the right way to go about promoting truth and clarity.

2. Even among people who want to opt in to a "we absolve each other of the non-truth consequences of our speech" system, I don't think it works well, because I think most people are rather poor at this. I expect it to fail because defensiveness is real and hard to turn off, and it does get in the way of thinking clearly and truth-seeking. Aspirationally we should get beyond it, but I don't think that's so much the case that we should legislate it to be the case.

3. (This is the strongest objection I have.)

Ordinary society has "politeness" norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it's reasonable to get upset). These norms are not so different from the norms against theft and physical violence. The politeness norms are fuzzier, but we remarkably seem to agree on them for the most part and it works pretty well.

When you propose absolving people of the non-truth consequences of their speech, you are disbanding the politeness norms which ordinarily prevent people from harming each other verbally. There are many ways to harm: upsetting, lowering status, insulting, trolling, calling evil or bad, etc. Most of these are symmetric weapons too which don't rely on truth.

I assert that if you "deregulate" the side-channels of speech and absolve people of the consequences of their actions, then you are going to get bad behavior. Humans are reprobate political animals (including us upstanding LW folk); if you make attack vectors available, they will get used. 1) Because ordinary people will lapse into using them too, 2) because genuinely bad actors will come about and abuse the protection you've given them.

If I allow you to "not worry about the consequences of your speech", I'm offering protection to bad actors to have a field day (or field life) as they bully, harass, or simply troll under the protection of "only the truth-content matters."

It is a crux for me that such an unregulated environment where people are consciously, subconsciously, and semi-consciously attacking/harming each other is not better for truth and clarity than one where there is some degree of politeness/civility/consideration expected.


Replies from: Zvi, jessica.liu.taylor, jessica.liu.taylor, jessica.liu.taylor, Zvi
comment by Zvi · 2019-07-17T13:14:20.560Z · LW(p) · GW(p)

Echo Jessica's comments (we disagree in general about politeness but her comments here seem fully accurate to me).

I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn't normally mention such things, but in context I expect you would want to know this.

Knowing that this is the logic behind your position, if this was the logic behind moderation at Less Wrong and that moderation had teeth (as in, I couldn't just effectively ignore it and/or everyone else was following such principles), I would abandon the website as a lost cause. You can't think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have 'negative consequences', then... we're done, right?

We all agree that if someone is bullying, harassing or trolling as their purpose and using 'speaking truth' as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.

The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is... well, I notice I am confused if that isn't a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words. Perhaps something more like this:

It should be presumed that saying true things in order to improve people's models, and to get people to take actions better aligned with their goals and avoid doing things based on false expectations of what results those actions would have, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should be further presumed that reinforcing the principle/virtue that one speaks the truth even if one's voice trembles, and without first charting out in detail all the potential consequences unless there is some obvious reason for big worry, which is a notably rare exception (please don't respond with 'what if you knew how to build an unsafe AGI or a biological weapon' or something), is also very important. That this goes double and more for those of us who are participating in a forum dedicated to this pursuit, while in that forum.

On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions with the resources they extract were good. If you disagree, let's talk about that. But also often not that. Often it's just, side effects and unintended consequences are a thing, and sometimes things don't benefit from particular additional truth.

That's life. Sometimes those consequences are bad, and I do not completely subscribe to "that which can be destroyed by the truth should be" because I think that the class of things that could be so destroyed is... rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing are bad. And sometimes that means you shouldn't say it (e.g. the blueprint to a biological weapon). Sometimes those consequences are just, this thing is boring and off-topic and would waste people's time, so don't do that! Or it would give a false impression even though the statement is true, so again, don't do that. In both cases, additional words may be a good idea to prevent this.

Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can't find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.

But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it's cheap to do so, especially close-by harm. But hurting people's ability to say X in general, or this X in particular, and be heard, is big harm.

If it's not particularly efficient to prevent Z, though, and Y>Z, I shouldn't have to then prevent Z.

I shouldn't be legally liable for Z, in the sense that I can be punished for Z. I also shouldn't be punished for Z in all cases where someone else thinks Z>Y.

Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.

Or if I damn well knew or should have known Z happens and Z>>Y, and then... maybe? Sometimes? It gets weird. Full legal theories get complex.

If someone lies, and that lie is going to cause people to give money to a charity, and I point out that person is lying, and they say sure they were lying but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, I don't want to stand anywhere near where that's the case.

Also important here is that we were talking about an example where the 'bad effect' was an update that caused people to lower the status of a person or group. Which one could claim in turn has additional bad effects. But this isn't an obviously bad effect! It's a by-default good effect to do this. If resources were being extracted under false pretenses, it's good to prevent that, even if the resources were being spent on [good thing]. If you don't think that, again, I'm confused why this website is interesting to you, please explain.

I also can't escape the general feeling that there's a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we've established what we all are, and 'now we're talking price.' Except, no.

The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.

If I need to do another long-form exchange like this, I think we'd need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.

Replies from: Ruby
comment by Ruby · 2019-07-18T02:48:41.941Z · LW(p) · GW(p)
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn't normally mention such things, but in context I expect you would want to know this.

I am glad you shared it and I'm sorry for the underlying reality you're reporting on. I didn't and don't want to cause you stress or feelings of threat, nor win by rhetoric. I attempted to write my beliefs exactly as I believe them*, but if you'd like to describe the elements you didn't like, I'll try hard to avoid them going forward.

(*I did feel frustrated that it seemed to me you didn't really answer my question about where your normativity comes from and how it results in your stated conclusion, instead reasserting the conclusion and insisting that the burden of proof fell on me. That frustration/annoyance might have infected my tone in ways you picked up on - I can somewhat see it when reviewing my comment. I'm sorry if I caused distress in that way.)

It might be more productive to switch to a higher-bandwidth channel going forwards. I thought this written format would have the benefits of leaving a ready record we could maybe share afterwards and also sometimes it's easy to communicate more complicated ideas; but maybe these benefits are outweighed.

I do want to make progress in this discussion and want to persist until it's clear we can make no further progress. I think it's a damn important topic and I care about figuring out which norms actually are best here. My mind is not solidly and finally made up; rather, I am confident there are dynamics and considerations I have missed that could alter my feelings on this topic. I want to understand your (plural) position not just so I can convince you of mine, but maybe because yours is right. I also want to feel you've understood the considerations salient to me and have offered your best rejection of them (rather than your rejection of a misunderstanding of them), which means I'd like to know you can pass my ITT. We might not reach agreement at the end, but I'd at least like it if we can pass each other's ITTs.

-----------------------------------------------------------------------------

I think it's better if I abstain from responding in full until we both feel good about proceeding (here or via phone calls, etc.) and have maybe agreed to what product we're trying to build [LW · GW] with this discussion, to borrow Ray's terminology.

The couple of things I do want to respond to now are:

We all agree that if someone is bullying, harassing or trolling as their purpose and using 'speaking truth' as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.

I definitely did not know that we all agreed to that, it's quite helpful to have heard it.

The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is... well, I notice I am confused if that isn't a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words.

1. I haven't read your writings on Blackmail (or anyone else's beyond one or two posts, and of those I can't remember the content). There was a lot to read in that debate and I'm slightly averse to contentious topics; I figured I'd come back to the discussions later after they'd died down and if it seemed a priority. In short, nothing I've written above is derived from your stated positions in Blackmail. I'll go read it now since it seems it might provide clarity on your thinking.

2. I wonder if you've misinterpreted what I meant. In case this helps, I didn't mean to say that I think any party in this discussion believes that if you're saying true things, then it's okay to be doing anything else with your speech ("complete absolution of responsibility"). I meant to say that if you don't have some means of preventing people from abusing your policies, then that will happen even if you think it shouldn't. Something like: moderators can punish people for bullying, etc. The hard question is figuring out what those means should be and ensuring they don't backfire even worse. That's the part where it gets fuzzy and difficult to me.

Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can't find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it's cheap to do so, especially close-by harm. But hurting people's ability to say X in general, or this X in particular, and be heard, is big harm.
If it's not particularly efficient to prevent Z, though, and Y>Z, I shouldn't have to then prevent Z.
I shouldn't be legally liable for Z, in the sense that I can be punished for Z. I also shouldn't be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then... maybe? Sometimes? It gets weird. Full legal theories get complex.

This section makes me think we have more agreement than I thought before, though definitely not complete. I suspect that one thing which would help would be to discuss concrete examples rather than the principles in the abstract.

comment by jessicata (jessica.liu.taylor) · 2019-07-17T08:07:07.498Z · LW(p) · GW(p)

We are embodied, vulnerable, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps.

Related to Ben's comment chain here [LW(p) · GW(p)], there's a significant difference between minds that think of "accuracy of maps" as a good that is traded off against other goods (such as avoiding conflict), and minds that think of "accuracy of maps" as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they're conceptualized pretty differently)

That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?

When I consider things like "making the map less accurate in order to get some gain", I don't think "oh, that might be worth it, epistemic rationality isn't everything", I think "Jesus Christ you're killing everyone and ensuring we're stuck in the dark ages forever". That's, like, only a slight exaggeration. If the maps are wrong, and the wrongness is treated as a feature rather than a bug (such that it's normative to protect the wrongness from the truth), then we're in the dark ages indefinitely, and won't get life extension / FAI / benevolent world order / other nice things / etc. (This doesn't entail an obligation to make the map more accurate at any cost, or even to never profit by corrupting the maps; it's more like a strong prediction that it's extremely bad to stop the mapmaking system from self-correcting, such as by baking protection-from-the-truth into norms in a space devoted in large part to epistemic rationality.)

Replies from: Ruby, Benquo, Ruby
comment by Ruby · 2019-07-18T15:54:35.161Z · LW(p) · GW(p)
Related to Ben's comment chain here [LW(p) · GW(p)], there's a significant difference between minds that think of "accuracy of maps" as a good that is traded off against other goods (such as avoiding conflict), and minds that think of "accuracy of maps" as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they're conceptualized pretty differently)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?

I agree with all of that and I want to assure you I feel the same way you do. (Of course, assurances are cheap.) And, while I am also weighing my non-truth goals in my considerations, I assert that the positions I'm advocating do not trade-off against truth, but actually maximize it. I think your views about what norms will maximize map accuracy are naive.

Truth is sacred to me. If someone offered to relieve me of all my false beliefs, I would take it in half a heartbeat; I don't care if it risked destroying me. So maybe I'm misguided in what will lead to truth, but I'm not as ready to trade away truth as it seems you think I am. If you got curious rather than exasperated, you might see I have a not-ridiculous perspective.

None of the above interferes with my belief that in the pursuit of truth, there are better and worse ways of saying things. Others seem to completely collapse the distinction between how and what. If you think I am saying there should be restrictions on what you should be able to say, you're not listening to me.

It feels like you keep repeating the 101 arguments and I want to say "I get them, I really get them, you're boring me" -- can you instead engage with why I think we can't use "but I'm saying true things" as free license to say anything in any way whatsoever? That this doesn't get you a space where people discuss truth freely.

I grow weary of how my position "don't say things through the side channels of your speech" gets rounded down to "there are things you can't say." I tried to be really, really clear that I wasn't saying that. In my proposal doc I said "extremely strong protection for being able to say directly things you think are true." The thing I said you shouldn't do is smuggle your attacks in "covertly." If you want to say "Organization X is evil", good, you should probably say it. But I'm saying that you should make that your substantive point, don't smuggle it in with connotations and rhetoric*. Be direct. I also said that if you don't mean to say they're evil and don't want to declare war, then it's supererogatory to invest in making sure no one has that misinterpretation. If you actually want to declare war on people, fine, just so long as you mean it.

I'm not saying you can't say people are bad and are doing bad; I'm saying if you have any desire to continue to collaborate with them - or hope you can redeem them - then you might want to include that in your messages. Say that you at least think they're redeemable. If that's not true, I'm not asking you to say it falsely. If your only goal is to destroy, fine. I'm not sure it's the correct strategy, but I'm not certain it isn't.

Replies from: Zvi, jessica.liu.taylor, Benquo
comment by Zvi · 2019-07-18T18:48:37.302Z · LW(p) · GW(p)

I'm out on additional long form here in written form (as opposed to phone/Skype/Hangout) but I want to highlight this:

It feels like you keep repeating the 101 arguments and I want to say "I get them, I really get them, you're boring me" -- can you instead engage with why I think we can't use "but I'm saying true things" as free license to say anything in any way whatsoever? That this doesn't get you a space where people discuss truth freely.

I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven't engaged with the fact that this might be false. It's quite frustrating.

I also note that there seems to be something like "impolite actions are often actions that are designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the things I'm punishing are probably bad actors, because who else would be impolite?" Which is Parable of the Lightning stuff.

(If you want more detail on my position, I endorse Jessica's Dialogue on Appeals to Consequences [LW · GW].)

comment by jessicata (jessica.liu.taylor) · 2019-07-18T17:50:50.912Z · LW(p) · GW(p)

Echoing Ben, my concern here is that you are saying things that, if taken at face value, imply more broad responsibilities/restrictions than "don't insult people in side channels". (I might even be in favor of such a restriction if it's clearly defined and consistently enforced)

Here's an instance:

Absent that, I think it’s fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.

This didn't specify "just side-channel consequences." Ordinary society blames people for non-side-channel consequences, too.

Here's another:

One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me true things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.

This doesn't seem to be just about side channels. It seems to be an assertion that forcing informational updates on people is violent if it's upsetting or status lowering ("forcing what you think is true on other people"). (Note, there's ambiguity here regarding "in an upsetting or status lowering way", which could be referring to side channels; but, "forcing what you think is true on other people" has no references to side channels)

Here's another:

Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset).

This isn't just about side channels. There are certain things it's impolite to say directly (for a really clear illustration of this, see the movie The Invention of Lying; Zack linked to some clips in this comment [LW(p) · GW(p)]). And, people are often upset by direct, frank speech.

You're saying that I'm being uncharitable by assuming you mean to restrict things other than side-channel insults. And, indeed, in the original document, you distinguished between "upsetting people through direct content" and "upsetting people through side channels". But, it seems that the things you are saying in the comment I replied to are saying people are responsible for upsetting people in a more general way.

The problem is that I don't know how to construct a coherent worldview that generates both "I'm only trying to restrict side-channel insults" and "causing people to have correct updates notwithstanding status-lowering consequences is violent." I think I made a mistake in taking the grandparent comment at face value instead of comparing it with the original document and noting the apparent inconsistency.

Replies from: Ruby
comment by Ruby · 2019-07-19T02:29:26.103Z · LW(p) · GW(p)

This comment is helpful, I see now where my communication wasn't great. You're right that there's some contradiction between my earlier statements and that comment, I apologize for that confusion and any wasted thought/emotion it caused.

I'm wary that I can't convey my entire position well in a few paragraphs, and that longer text isn't helping that much either, but I'll try to add some clarity before giving up on this text thread.

1. As far as group norms and moderation go, my position is as stated in the original doc I shared.

2. Beyond that doc, I have further thoughts about how individuals should reason and behave when it comes to truth-seeking, but those views aren't ones I'm trying to enforce on others (merely persuade them of). These thoughts became relevant because I thought Zvi was making mistakes in how he was thinking about the overall picture. I admittedly wasn't adequately clear about the distinction between these views and the ones I'd actually promote/enforce as group norms.

3. I do think there is something violent about pushing truths onto other people without their consent and in ways they perceive as harmful. ("Violent" is maybe an overly evocative word, perhaps "hostile" is more directly descriptive of what I mean.) But:

  • Foremost, I say this descriptively and as words of caution.
  • I think there are many, many times when it is appropriate to be hostile; those causing harm sometimes need to be called out even when they'd really rather you didn't.
    • I think certain acts are hostile, sometimes you should be hostile, but also you should be aware of what you're doing and make a conscious choice. Hostility is hard to undo and therefore worth a good deal of caution.
    • I think there are many worthy targets of hostility in the broader world, but probably not that many on LessWrong itself.
    • I would be extremely reluctant to ban any hostile communications on LessWrong regardless of whether their targets are on LessWrong or in the external world.
  • Acts which are baseline hostile stop being hostile once people have consented to them. Martial arts are a thing, BDSM is a thing. Hitting people isn't assault in those contexts due to the consent. If you have consent from people (e.g. they agreed to abide by certain group norms), then sharing upsetting truths is the kind of thing which stops being hostile.
    • For the reasons I shared above, I think that it's hard to get people on LessWrong to fully agree and abide by these voluntary norms that contravene ordinary norms. I think we should still try (especially re: explicitly upsetting statements and criticisms), as I describe in my norms proposal doc.
      • Because we won't achieve full opt-in on our norms (plus our content is visible to new people and the broader internet), I think it is advisable for an individual to think through the most effective ways to communicate and not merely appeal to norms which say they can't get in trouble for something. That a behavior isn't forbidden doesn't mean it's optimal.
        • I'm realizing there are a lot of things you might imagine I mean by this. I mean very specific things I won't elaborate on here, but these are things I believe will have the best effects for accurate maps and one's goals generally. To me, there is no tradeoff being made here.

4. I don't think all impoliteness should be punished. I do think it should be legitimate to claim that someone is teasing/bullying/insulting/making you feel uncomfortable via indirect channels and then either a) be allowed to walk away, or b) have a hopefully trustworthy moderator arbitrate your claim. I think that if you don't allow for that, you'll attract a lot of bad behavior. It seems that no one actually disagrees with that . . . so I think the question is just where we draw the line. I think the mistake made in this thread is not to be discussing concrete scenarios which get to the real disagreement.

5. Miscommunication is really easy. This applies both to the substantive content, but also to inferences people make about other people's attitudes and intent. One of my primary arguments for "niceness" is that if you actually respect someone/like them/want to cooperate with them, then it's a good idea to invest in making sure they don't incorrectly update away from that. I'm not saying it's zero effort, but I think it's better than having people incorrectly infer that you think they're terrible when you don't think that. (This flows downhill into what they assume your motives are too and ends up shaping entire interactions and relationships.)

6. As per the above point, I'm not encouraging anyone to say things they don't believe or feel (I am not advocating lip service) just to "get along". That said, I do think that it's very easy to decide that other people are incorrigibly acting in bad faith, that you can't cooperate with them, and that you should just try to shut them down as effectively as possible. I think people likely have a bad prior here. I think I've had a bad prior in many cases.

Hmm. As always, that's about 3x as many words as I hoped it would be. Ray has said the length of a comment indicates "I hate you this much." There's no hate in this comment. I still think it's worth talking, trying to cooperate, figuring out how to actually communicate (what mediums, what formats, etc.)


comment by Benquo · 2019-07-18T16:15:09.084Z · LW(p) · GW(p)
It feels like you keep repeating the 101 arguments and I want to say "I get them, I really get them, you're boring me" -- can you instead engage with why I think we can't use "but I'm saying true things" as free license to say anything in any way whatsoever? That this doesn't get you a space where people discuss truth freely.

I think some of the problem here is that important parts of the way you framed this stuff seemed as though you really didn't get it - by the Gricean maxim of relevance - even if you verbally affirmed it. Your framing didn't distinguish between "don't say things through the side channels of your speech" and "don't criticize other participants." You provided a set of examples that skipped over the only difficult case entirely. The only example you gave of criticizing the motives of a potential party to the conversation was gratuitous insults.

(The conversational move I want to recommend to you here is something like, "You keep saying X. It sort of seems like you think that I believe not-X. I'd rather you directly characterized what you think I'm getting wrong, and why, instead of arguing on the assumption that I believe something silly." If you don't explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it's generally rude to "put words in other people's mouths" and people get unhelpfully defensive about that pretty reliably, so it's natural to try to let you save face by skipping over the unpleasantness there.)

I think there's also a big disagreement about how frequently someone's motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you're thinking of that as something like the "nuclear option," which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.

Then there's also a problem where it's a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack's "What? Why?" seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it's a lot of extra work to turn that sort of tone into content reliably, and most people - including most people on this forum - don't know how to do it. It's fine to ask for extra work, but it's objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.

Replies from: Ruby, Ruby
comment by Ruby · 2019-07-20T08:08:19.503Z · LW(p) · GW(p)

[Attempt to engage with your comment substantively]

(The conversational move I want to recommend to you here is something like, "You keep saying X. It sort of seems like you think that I believe not-X. I'd rather you directly characterized what you think I'm getting wrong, and why, instead of arguing on the assumption that I believe something silly." If you don't explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it's generally rude to "put words in other people's mouths" and people get unhelpfully defensive about that pretty reliably, so it's natural to try to let you save face by skipping over the unpleasantness there.)

Yeah, I think that's a good recommendation and it's helpful to hear it. I think it's really excellent if someone says "I think you're saying X which seems silly to me, can you clarify what you really mean?" In Double-Cruxes, that is ideal and my inner sim says it goes down well with everyone I'm used to talking with. Though it seems quite plausible others don't share that, and I should be more proactive + know that I need to be careful in how I go about doing this move. Here I felt very offended/insulted by the view that seemed to be confidently assigned to me, which I let mindkill me. :(

I think there's also a big disagreement about how frequently someone's motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this.

I'm not sure how to measure, but my confidence interval feels wide on this. I think there probably isn't any big disagreement between us here.

It seems like you're thinking of that as something like the "nuclear option," which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.

If this means "talking about someone's motivations for saying things", I agree with you that it is very important for a rationality space to be able to do that. I don't see it as a nuclear option, not by far. I'd hope that often people would respond very well to it: "You know what? You're right, and I'm really glad you mentioned it. :)"

I have more thoughts on my exchange with Zack, though I'd want to discuss them if it really made sense to, and carefully. I think we have some real disagreements about it.

comment by Ruby · 2019-07-18T17:23:26.043Z · LW(p) · GW(p)

This response makes me think we've been paying attention to different parts of the picture. I haven't been focused on the "can you criticize other participants and their motives" part of the picture (to me the answer is yes, but I'm going to be paying attention to your motives). My attention has been on which parts of speech it is legitimate to call out.

My examples were of ways side channels can be used to append additional information to a message. I gave an example of this being done "positively" (admittedly over the top), "negatively", and "not at all". Those examples weren't about illustrating all legitimate and illegitimate behavior - only that concerning side channels. (And like, if you want to impugn someone's motives in a side channel - maybe that's okay, so long as they're allowed to point it out and disengage from interacting with you because of it, even if they only suspect your motives.)

I think there's also a big disagreement about how frequently someone's motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you're thinking of that as something like the "nuclear option," which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.

I pretty much haven't been thinking about the question of "criticizing motives" being okay or not throughout this conversation. It seemed besides the point-- because I assumed that, in essence, was okay and I thought my statements indicated I believed that.

I'd venture that if this was the concern, why not ask me directly "how and when do you think it's okay to criticize motives?" before assuming I needed a moral lecturin'. It also seems like a bad inference to say it seemed "I really didn't get it" because I didn't address something head-on the way you were thinking about it. Again, maybe that wasn't the point I was addressing. The response also didn't make this clear. It wasn't "it's really important to be able to criticize people" (I would have said "yes, it is"), instead it was "how dare you trade off truth for other things." <- not that specific.

On the subject of motives though, a major concern of mine is that half the time (or more) when people are being "unpleasant" in their communication, it's not born of truth-seeking motive, it's because it's a way to play human political games. To exert power, to win. My concern is that given the prevalence of that motive, it'd be bad to render people defenseless and say "you can never call people out for how they're speaking to you," that you must play this game where others are trying to make you look dumb, etc., and that it would be bad of you to object to this. I think it's virtuous (though not mandatory) to show people that you're not playing political games if they're not interested in that.

You want to be able to call people out on bad motives for their reasoning/conclusions.

I want to be able to call people out on how they act towards others when I suspect their motives for being aggressive/demeaning/condescending. (Or more, I want people to be able to object and disengage if they wish. I want moderators to be able to step in when it's egregious, but this is already the case.)

Then there's also a problem where it's a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack's "What? Why?" seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it's a lot of extra work to turn that sort of tone into content reliably, and most people - including most people on this forum - don't know how to do it. It's fine to ask for extra work, but it's objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.

I think I am incredulous that 1) it is that much work, and 2) that the burden doesn't actually fall to others to do it. But I won't argue for those positions now. Seems like a long debate, even if it's important to get to.

I'm not sure why you think I was implying it was costless (I don't think I'd ever argue it was costless). I asked him not to do it when talking to me, saying that I wasn't up for it. He said he didn't know how, so I tried to demonstrate (not claiming this would be costless for him to do), merely showing what I was seeking-- showing that the changes seemed small. I did assume that anyone so skillful at communicating in one particular way could also see how to not communicate in that particular way, but I can see how maybe one can get stuck only knowing how to use one style.

Replies from: Benquo
comment by Benquo · 2019-07-18T19:28:48.326Z · LW(p) · GW(p)
My attention has been on which parts of speech it is legitimate to call out.

Do you think anyone in this conversation has an opinion on this beyond "literally any kind of speech is legitimate to call out as objectionable, when it is in fact objectionable"? If so, what?

I thought we were arguing about which speech is in fact objectionable, not which speech it's okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we've been talking past each other.

Replies from: Ruby
comment by Ruby · 2019-07-20T07:47:49.878Z · LW(p) · GW(p)
I thought we were arguing about which speech is in fact objectionable, not which speech it's okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we've been talking past each other.

I feel like multiple questions have been discussed in the thread, but in my mind none of them were about which speech is in fact objectionable. That could well explain the talking past each other.

comment by Benquo · 2019-07-17T14:50:50.159Z · LW(p) · GW(p)
When I consider things like "making the map less accurate in order to get some gain", I don't think "oh, that might be worth it, epistemic rationality isn't everything", I think "Jesus Christ you're killing everyone and ensuring we're stuck in the dark ages forever".

To me it feels more like the prospect of being physically stretched out of shape, broken, mutilated, deformed.

comment by Ruby · 2019-07-20T07:43:07.105Z · LW(p) · GW(p)

Responding more calmly to this (I am sorry, it's clear I still have some work to do on managing my emotions):

I agree with all of this 100%. Sorry for not stating that plainly.

When I consider things like "making the map less accurate in order to get some gain" . . .

I feel the same, but I don't consider the positions I've been advocating to be making such a sacrifice. I'm open to the possibility that I'm wrong about the consequences of my proposals and that they do equate to that, but currently they're actually my best guess as to what gets you the most truth/accuracy/clarity overall.

I think that people's experience and social relations are crucial [justification/clarification needed]. Short-term diversion of resources to these things, and even some restraint on what one communicates, will long-term create environments of greater truth-seeking and collaboration-- and not doing this can lead to their destruction/stilted existence. These feelings are built on many accumulated observations, experiences, and models. I have a number of strong fears about what happens if these things are neglected. I can say more at some point if you (or anyone else) would like to hear them.

I grant there are costs and risks to the above approach. Oli's been persuasive to me in fleshing these out. It's possible you have more observations/experiences/models of the costs and risks which make them much more salient and scary to you. Could be you're right, and I've mostly been in low-stakes, sheltered environments, and my views, if adopted, would ensure we're stuck in the dark ages. Could be you're wrong, and your views, if acted on, would have the same effect. With what's at stake (all the nice things), I definitely want to believe what is true here.

comment by jessicata (jessica.liu.taylor) · 2019-07-17T08:26:35.897Z · LW(p) · GW(p)

Most of these are symmetric weapons too which don’t rely on truth.

The whole point of pro-truth norms is that only statements that are likely to be true get intersubjectively accepted, though...

This makes me think that you're not actually tracking the symmetry/asymmetry properties of different actions under different norm-sets.

comment by jessicata (jessica.liu.taylor) · 2019-07-17T07:44:55.003Z · LW(p) · GW(p)

"You're responsible for all consequences of your speech" might work as a decision criterion for yourself, but it doesn't work as a social norm. See this comment [LW(p) · GW(p)], and this post [LW · GW].

In other words, consequentialism doesn't work as a norm-set, it at best works as a decision rule for choosing among different norm-sets, or as a decision rule for agents already embedded in a social system.

Politeness isn't really about consequences directly; there are norms about what you're supposed to say or not say, which don't directly refer to the consequences of what you say (e.g. it's still rude to say certain things even if, in fact, no one gets harmed as a result, or the overall consequences are positive). These are implementable as norms, unlike "you are responsible for all consequences of your speech". (Of course, consideration of consequences is important in designing the politeness norms)

[EDIT: I expanded this into a post here [LW · GW]]

comment by Zvi · 2019-07-17T13:18:29.553Z · LW(p) · GW(p)

Short version:

I don't think the above is a reasonable statement of my position.

The above doesn't think of true statements made here mostly in terms of truth-seeking; it thinks of words as mostly a form of social game-playing aimed at causing particular world effects, as methods of attack requiring "regulation."

I don't think that the perspective the above takes is compatible with a LessWrong that accomplishes its mission, or a place I'd want to be.

comment by Ruby · 2019-07-16T18:45:53.004Z · LW(p) · GW(p)

(1 out of 2)

Thanks for taking the time to write up all your thoughts.

The nature of this should is that status evaluations are not why I am sharing the information.

I object to "status evaluations" being the stand-in term for all the "side-effects" of sharing information. I think we're talking about a lot more here-- consequences is a better, more inclusive term that I'd prefer. "Status evaluations" trivializes what we're talking about in the same way I think "tone" diminishes the sheer scope of how information-dense the non-core aspects of speech are.

If I am reading you right, you are effectively saying that one shouldn't have to bear responsibility for the consequences of one's speech over and above ensuring that what one is saying is accurate. If what you are saying is accurate and is only causing accurate updates, you shouldn't have to worry about what effects it will have (because worrying about that gets in the way of sharing true and relevant information, and of creating clarity).

The power of this "should" is that I'm denying the legitimacy of coercing me into doing something in order to maintain someone else's desire for social frame control.

In my mind, this discussion isn't about whether you (the truth-speaker) should be coerced by some outside regulating force. I want to discuss what you (and I) should judge for ourselves is the correct approach to saying things. If you and all your fellow seekers of clarity are getting together to create a new community of clarity-seekers, what are the correct norms? If you are trying to accomplish things with your speech, how best to go about it?

I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and make those doing so worry about being accused of violating such burdens.

You haven't explicitly stated the decision theory/selection of virtues which leads to this conclusion, but I think I can infer it. Let me know if I'm missing something or getting it wrong. 1) If you create any friction around doing something, it will reduce how much it happens. 2) Particularly in this case, if you allow reasons to silence truth, people will actively use them to stifle truths they don't like - as we do see in practice. Overall: truth-seeking is something precious to be guarded, something that needs to be protected from our own rationalizations and the rationalizations/defensiveness of others. Any rules, regulations, or norms which restrict what you say are actually quite dangerous.

I think the above position is true, but it's ignoring key considerations which make the picture more complicated. I'll put my own position/response in the next comment for threading.

Replies from: jessica.liu.taylor, Zvi, Zvi
comment by jessicata (jessica.liu.taylor) · 2019-07-17T07:27:05.487Z · LW(p) · GW(p)

In my mind, this discussion isn't about whether you (the truth-speaker) should be coerced by some outside regulating force.

Norms are outside regulating forces, though. (Otherwise, they would just be heuristics)

Replies from: Ruby
comment by Ruby · 2019-07-20T07:19:45.806Z · LW(p) · GW(p)

This might have gotten lost in the convo and likely I should have mentioned it again, but I advocated for the behavior under discussion to be supererogatory/ a virtue [1]: not something to be enforced, but still something individuals ought to do of their own volition. Hence "I want to talk about why you freely should want to do this" and not "why I should be allowed to make you do this."

Even when talking about norms, though, my instinct is to first clarify what's normative/virtuous for individuals. I expect disagreements there to be cruxes for disagreements about groups. I guess that's because I expect one's beliefs about what's good for individuals to do and about what's good for groups to do to arise from the same underlying models of what makes actions generally good.

(Otherwise, they would just be heuristics)

Huh, that's a word choice I wouldn't have considered. I'd usually say "norms apply to groups" and "there's such a thing as ideal/virtuous/optimal behavior for individuals relative to their values/goals." I guess it's actually hard to determine what is ideal/virtuous/optimal, and so you only have heuristics? And virtues really are heuristics. This doesn't feel like a key point, but let me know if you think there's an important difference I'm missing.

____________________

[1] I admit that there are dangers even in just having something as a virtue/encouraged behavior, and that your point expressed in this comment [LW(p) · GW(p)] to Ray is a legitimate concern.

I worry that saying certain ways of making criticisms are good/bad results in people getting silenced/blamed even when they're saying true things, which is really bad.

I think that's a very real risk and really bad when it happens. I think there are large costs in the other direction too. I'd be interested in thinking through together the costs/benefits of saying vs. not saying that certain ways of saying things are better. I think marginal thought/discussion could cause me to update on where the final balance lies here.

comment by Zvi · 2019-07-17T12:20:13.418Z · LW(p) · GW(p)

Before I read (2), I want to note that a universal idea that one is responsible for all the consequences of one's accurate speech - in an inevitably Asymmetric Justice / CIE fashion - seems like it is effectively a way to ban truth-seeking entirely, and perhaps all speech of any kind. And the fact that there might be other consequences of true speech that one may not like and might want to avoid does not make it unreasonable to point out that the subclass of such consequences in play in these examples seems much less worth worrying about avoiding. But yes, Kant saying you should tell the truth to an axe murderer seems highly questionable, and all that.

And I echo Jessica's point that it's not reasonable to say that all of this is voluntary within the frame you're offering, if the response to not doing it is to not be welcome, or to be socially punished. Regardless of what standards one chooses.

comment by Zvi · 2019-07-17T12:13:32.275Z · LW(p) · GW(p)

I think that is a far from complete description of my decision theory and selection of virtues here. Those are two important considerations, and this points in the right direction for the rest, but there are lots of others too. Margin too small to contain full description.

At some point I hope to write a virtue ethics sequence, but it's super hard to describe in written form, and every time I think about it I assume that, even if I do get it across, people who speak better philosopher than I do will technically pick anything I say to pieces and all that, and I get an ugh field around the whole operation, and assume it won't really work at getting people to reconsider. Alas.

comment by Zvi · 2019-07-15T21:05:41.663Z · LW(p) · GW(p)

(3) (Splitting for threading)

Sharing true information, or doing anything at all, will cause people to update.

Some of those updates will cause some probabilities to become less accurate.
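(A toy illustration of that second line, with purely made-up numbers, of how a correct Bayesian update on a true observation can still move a particular probability further from the truth:)

```python
# Toy numbers, purely illustrative.
# Ground truth: the urn really is 70% blue (hypothesis H1).
# The observer starts out leaning toward the truth, draws one ball,
# and it happens to be red -- a completely true observation.

def posterior_h1(prior_h1, p_obs_given_h1, p_obs_given_h2):
    """Bayes' rule for two exhaustive hypotheses H1 and H2."""
    num = prior_h1 * p_obs_given_h1
    return num / (num + (1 - prior_h1) * p_obs_given_h2)

prior = 0.7               # credence in "mostly blue", the true hypothesis
p_red_if_mostly_blue = 0.3
p_red_if_mostly_red = 0.7

print(posterior_h1(prior, p_red_if_mostly_blue, p_red_if_mostly_red))  # 0.5
# The update was correct Bayesian procedure on true information,
# yet this particular probability ended up further from the truth.
```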

Is it therefore my responsibility to prevent this, before I am permitted to share true information? Before I do anything? Am I responsible, in an Asymmetric Justice fashion, for every probability estimate change and status evaluation delta in people's heads? Have I become entwined with your status, via the Copenhagen Interpretation, and am I now responsible for it? What does anything even have to do with anything?

Should I have to worry about how my informing you about Bayesian probability impacts the price of tea in China?

Why should the burden be on me to explain "should" here, anyway? I'm not claiming a duty, I'm claiming a negative, a lack of duty - I'm saying that I do not, by sharing information, thereby take on responsibility for all negative consequences of that information to individuals, in the form of others making Bayesian updates, to the extent of having to prevent them.

Whether or not I appreciate their efforts, or wish them higher or lower status! Even if I do wish them higher status, it should not be my priority in the conversation to worry about that.

Thus, if you think that I should be responsible, then I would turn the question around and ask you what normative/meta-ethical framework you are invoking. Because the burden here seems not to be on me, unless you think that the primary thing we do when we communicate is raise and lower the status of people. In which case, I have better ways of doing that than being here at LW, and so do you!

comment by Zvi · 2019-07-15T20:59:46.723Z · LW(p) · GW(p)

(2) (Splitting these up to allow threading)

Sharing true information will cause people to update.

If they update in a way that causes your status to become lower, why should we presume that this update is a mistake?

If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, would not a proper Bayesian expect me to do that, and thus use my praise only as evidence of the degree to which I think others should update negatively on the basis of the information I share later?

If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, but only some of the time, what is going on there? Am I being forced to make a public declaration of whether I wish you to be raised or lowered in status? Am I being forced to acknowledge that you belong to a protected class of people whose status one is not allowed to lower in public? Am I worried about being labeled as biased against groups you belong to if I am seen as sufficiently negative towards you? (E.g. "I appreciate all the effort you have put in towards various causes, I think that otherwise you're a great person and I'm a big fan of [people of the same reference group] and support all their issues and causes, but I feel you should know that I really wish you hadn't shot me in the face. Twice.")
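(A toy version of the Bayesian point two paragraphs up, with purely made-up numbers: once opening praise becomes the standard offset for incoming criticism, the praise itself is evidence that criticism is coming.)

```python
# Toy numbers, purely illustrative.
# Suppose criticism follows in 20% of conversations, and that opening praise
# is standard practice (95% of the time) when criticism is coming,
# versus 30% of the time otherwise.

def p_criticism_given_praise(p_crit, p_praise_given_crit, p_praise_given_no_crit):
    """Bayes' rule: probability that criticism follows, given opening praise."""
    num = p_crit * p_praise_given_crit
    return num / (num + (1 - p_crit) * p_praise_given_no_crit)

print(p_criticism_given_praise(0.2, 0.95, 0.3))  # ~0.44
# A proper Bayesian more than doubles their expectation of incoming criticism
# the moment they hear the praise -- the "offset" carries the bad news with it.
```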