Comment by Ruby on [deleted post] 2019-07-15T19:04:00.765Z

Also relevant here (especially to Version 3) is my shortform post on Communal Buckets.

But I can see a way in which being wrong/making mistakes (and being called out for this) is upsetting even if you personally aren't making a bucket error. The issue is that you might fear that other people have the two variables collapsed into one. Even if you might realize that making a mistake doesn't inherently make you a bad person, you're afraid that other people are now going to think you are a bad person because they are making that bucket error.
The issue isn't your own buckets, it's that you have a model of the shared "communal buckets" and how other people are going to interpret whatever just occurred. What if the community/social reality only has a single bucket here?

Even if you have no intention to attack someone, have complete respect for them, just want to share what you think is true, etc., your public criticism could be correctly perceived as damaging to them. For reasons like that, I think it's well worth it to expend a little effort dispelling the likely incorrect* interpretation of the significance of your speech acts.

*Something I haven't touched on is self-deception/motivated cognition behind speech acts, where criticisms are actually tinged with political motives even if the speaker doesn't recognize them. Putting in effort to ensure you're not sending any such signals ("intentionally" or accidentally) is something of a guard against your subconscious, elephant-in-brain motives to criticize-as-attack under the guise of criticize-as-helpful.

More succinctly: you might actually be in Version 2 (actually "intentionally" sending hostile signals) even when you believe you aren't, and putting in some effort to be nice/considerate/conciliatory is a way to protect against that.

Comment by Ruby on [deleted post] 2019-07-15T18:37:35.330Z
I shouldn't have to be raising the status of people just because I have information that lowers it, and am sharing that information.

Zvi, what's the nature of this "should"? Where does its power come from? I feel unsure of the normative/meta-ethical framework you're invoking.

Relatedly, what's the overall context and objective for you when you're sharing information which you think lowers other people's status? People are doing something you think is bad, you want to say so. Why? What's the objective/desired outcome? I think it's the answer to these questions which shapes how one should speak.

I'm also interested in your response to Ray's comment.

Comment by Ruby on [deleted post] 2019-07-15T18:31:57.630Z

mr-hire has a recent related post on the topic here. He brings different arguments than mine for approximately the same conclusion. (I haven't thought about his S1/motivated reasoning line of reasoning enough to fully back it, but it seems quite plausible.)


Comment by Ruby on [deleted post] 2019-07-15T18:16:12.842Z

Comments from the Google Doc.

Zvi:

Version 3 is usually best. I shouldn't have to be raising the status of people just because I have information that lowers it, and am sharing that information. Version 1 is pretty sickening to me. If I said that, I'd probably be lying. I have to pay lip service to everyone involved being good people because they're doing nominally good-labeled things, in order to point out those things aren't actually good? Why?

Ruby:

Take Jim's desire to speak out against the harms of advocating (poor) vegan diets. I'd like him to be able to say that and I'd like it to go well, with people either changing their minds or changing their behavior.
I think the default is that people feel attacked and it all goes downhill from there. This is not good, this is not how I want it to be, but it seems the pathway towards "people can say critical things about each other and that's fine" probably has to pass through "people are critical but try hard to show that they really don't mean to attack."
I definitely don't want you to lie. I just hope there's something you could truthfully say that shows that you're not looking to cast these people out or damage them. If you are (and others are towards you), then either no one can ever speak criticism (current situation, mostly) or you get lots of political conflicts. Neither of those seems to get towards the maximum number of people figuring out the maximum number of true things.

Ray:

Partly in response to Zvi's comment elsethread:
1) I think Version 1 as written comes across very politics-y-in-a-bad-way. But this is mostly a fact about the current simulacrum 3/4 world of "what moves are acceptable."
2) I think it's very important people not be obligated (or even nudged) to write "I applaud X" if they don't (in their heart-of-hearts) applaud X.
But, separate from that, I think people who don't appreciate X for their effort (in an environment like the rationalsphere) are usually making a mistake, in a similar way to pacifists who say "we should disband the military" are making a mistake. There is a weird missing mood here.
I think fully articulating my viewpoint here is a blogpost or 10, so probably not going to try in this comment section. But the tl;dr is something like "it's just real damn hard to get anything done. Yes, lots of things turn out to be net negative, but I currently lean towards 'it's still better to err somewhat on rewarding people who tried to do something real.'"
This is not a claim about what the conversation norms or nudges should be, but it is a claim about what you'd observe in a world where everyone is roughly doing the right thing.

Comment by ruby on Habryka's Shortform Feed · 2019-07-15T05:14:33.032Z · score: 4 (2 votes) · LW · GW
The complexity limit of any individual idea in your field is a lot higher, since the ideas get primarily transmitted via high-bandwidth channels

Depends if you're sticking specifically to "presentation at a conference", which I don't think is necessarily that "high bandwidth". Very loosely, I think it's something like (ordered by "bandwidth"): repeated small-group or individual interaction (e.g. apprenticeship, collaboration) >> written materials >> presentations. I don't think I could have learned Kaj's models of multi-agent minds from a conference presentation (although possibly from a lecture series). I might have learned even more if I was his apprentice.

Comment by ruby on Ben Pace's Shortform Feed · 2019-07-14T19:12:01.767Z · score: 2 (1 votes) · LW · GW

Seems related to Causal vs Social Reality.

Comment by ruby on Diversify Your Friendship Portfolio · 2019-07-14T16:04:57.786Z · score: 6 (3 votes) · LW · GW

I suspect there are challenges for rationalists in joining new communities beyond introversion. I've found it jarring to be getting along with some new folks and then have people start saying ridiculous things - but worse, having no real interest in determining whether the things they say are actually true. Or, even when they try, being terrible at discussion. I don't need to nitpick everything or correct every "wrong" thing I hear, but it is hard when it feels like beliefs aren't real to people - that they're just things you say. A performance.

There are people outside the rationality community who are fine at the above, but being used to rationalists does introduce some novel challenges. It'd be nice if we ever accumulated communal knowledge on how to bridge such cultural gaps.

Comment by ruby on Open Thread July 2019 · 2019-07-10T22:52:52.399Z · score: 6 (3 votes) · LW · GW

We can also look at the composition of commenting frequency. Here I've binned the commenters for each week by how often they comment and looked at how the bins have changed. Top graph is overall volume, bottom graph is the percentage of the commenting population in each frequency bucket:

I admit that we must conclude that high-frequency commenters (4+ comments/week) have diminished in absolute numbers and as a percentage over time, though with a slight upward trend in the last six months.
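For concreteness, the binning itself is simple. Here's a minimal sketch of the kind of aggregation behind these graphs, assuming a comments table with user_id and date columns - the column names, bucket boundaries, and toy data are made up for illustration, not the actual LessWrong pipeline:

```python
import pandas as pd

# Hypothetical input: one row per comment, with the commenter's id and timestamp.
comments = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 2, 2, 3],
    "date": pd.to_datetime([
        "2019-06-03", "2019-06-04", "2019-06-05", "2019-06-06",
        "2019-06-03", "2019-06-05", "2019-06-04",
    ]),
})

# Count comments per commenter per week.
weekly = (
    comments
    .groupby(["user_id", pd.Grouper(key="date", freq="W")])
    .size()
    .rename("n_comments")
    .reset_index()
)

# Bin each commenter-week by frequency; "4+" is the high-frequency bucket.
bins = [0, 1, 3, float("inf")]
labels = ["1/week", "2-3/week", "4+/week"]
weekly["bucket"] = pd.cut(weekly["n_comments"], bins=bins, labels=labels)

# Top graph: overall volume per bucket; bottom graph: each bucket's share
# of that week's commenting population.
volume = weekly.groupby(["date", "bucket"]).size().unstack(fill_value=0)
share = volume.div(volume.sum(axis=1), axis=0)
```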

Comment by ruby on Open Thread July 2019 · 2019-07-10T22:52:21.344Z · score: 12 (3 votes) · LW · GW

The question is fuzzier than it might seem at first. The issue is that the size of the commenter population changes too. You can have a world where the number of very frequent commenters has gone up but the average per commenter has gone down, because the number of infrequent commenters has grown even faster than the number of frequent commenters.
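To make that composition effect concrete with made-up numbers (a toy illustration, not our actual data):

```python
# Year 1: 10 frequent commenters (5 comments each) + 10 infrequent (1 each).
year1_mean = (10 * 5 + 10 * 1) / (10 + 10)   # 3.0 comments per commenter

# Year 2: frequent commenters grew to 15, but infrequent ones grew to 60.
year2_mean = (15 * 5 + 60 * 1) / (15 + 60)   # 1.8 comments per commenter

# Frequent commenters went up in absolute terms (10 -> 15), yet the
# average number of comments per commenter went down (3.0 -> 1.8).
```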


There are also multiple possible causes for growth/decline, changes in frequency, etc., so I don't think you could really link them to a mechanism as specific as being more educated about where your opinion is valuable. Though I'd definitely link the number of comments per person to the number of other active commenters and the number of conversations going - a network-effects kind of thing.


Anyhow, some graphs:

Indeed, the average (both mean and median) comments per active commenter each week has gone down.

But it's generally the case that the number of comments and commenters went way down, recovering only in late 2017 around the time of Inadequate Equilibria and LessWrong 2.0.

Comment by ruby on Ruby's Short-Form Feed · 2019-07-10T18:31:08.297Z · score: 4 (2 votes) · LW · GW

Yeah, I think the overall fear would be something like "I made a mistake but now overall people will judge me as a bad person" where "bad person" is above some threshold of doing bad. Indeed, each bad act is an update towards the threshold, but the fear is that in the minds of others, a single act will be generalized and put you over. The "fear of attribution error" seems on the mark to me.

Comment by ruby on Black hole narratives · 2019-07-10T16:22:07.309Z · score: 4 (2 votes) · LW · GW

Somewhat relevant here is a recent shortform post I wrote: Communal Buckets.

But I can see a way in which being wrong/making mistakes (and being called out for this) is upsetting even if you personally aren't making a bucket error. The issue is that you might fear that other people have the two variables collapsed into one. Even if you might realize that making a mistake doesn't inherently make you a bad person, you're afraid that other people are now going to think you are a bad person because they are making that bucket error.
The issue isn't your own buckets, it's that you have a model of the shared "communal buckets" and how other people are going to interpret whatever just occurred. What if the community/social reality only has a single bucket here?

Comment by ruby on Can I automatically cross-post to LW via RSS? · 2019-07-10T06:00:38.351Z · score: 4 (2 votes) · LW · GW

I'd actually been intending to message you about this. Glad you beat me to it - would love to see your stuff on here.

Comment by ruby on Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea? · 2019-07-10T04:23:12.903Z · score: 4 (2 votes) · LW · GW

I recall there being startups in this space. I think Laura Vaughn might have been working at one for a time. I think they were easy but maybe expensive.

Comment by ruby on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-09T05:10:35.665Z · score: 7 (3 votes) · LW · GW

I haven't thought about this topic much and don't have a strong opinion here yet, but I wanted to chime in with some personal experience which makes me suspect there might be distinct categories:

I worked in a workplace where lying was commonplace, conscious, and system 2. Clients asking if we could do something were told "yes, we've already got that feature (we hadn't) and we already have several clients successfully using it (we didn't)." Others were invited to be part of an "existing beta program" alongside others just like them (in fact, they would have been the very first). When I objected, I was told "no one wants to be the first, so you have to say that." Another time, they denied that they ever lied, but they did, and it was more than motivated cognition. There is a very vast gulf between "we've built this feature already" and "we haven't even asked the engineers what they think," and no amount of motivated cognition bridges it. It's less work than faking data, but it's no more subtle.

Motivated cognition is bad, but some people are really very willing to abandon truth for their own benefit in a completely adversarial way. The motivated cognition comes in to justify why what they're doing is okay, but they have a very clear model of the falsehoods they're presenting (they must in order to protect them).

I think they lie to themselves that they're not lying (so that if you search their thoughts, they never think "I'm lying"), but they are consciously aware of the different stories they have told different people, and of the ones that actually constrain their expectations. And it's such a practiced way of being that even though it involves System 2, it's fluid: each context activates which story to tell, etc., in a way that appears natural from the outside. Maybe that's offline S2, online S1? I'm not sure. I think people who interact like that have a very different relationship with the truth than do most people on LW.


Comment by ruby on This is a test post · 2019-07-07T21:04:44.028Z · score: 2 (1 votes) · LW · GW

Test comment. Does this show up?

Comment by ruby on LW authors: How many clusters of norms do you (personally) want? · 2019-07-07T20:42:29.690Z · score: 9 (4 votes) · LW · GW

I'm not sure I'd place the emphasis on authors as strongly as you did. In terms of words written, hours spent, and intellectual progress generated, commenters might be equal to post authors. Further, it's frequent commenters who pay the high cost of there being multiple norm sets.

Comment by ruby on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-07T17:36:28.189Z · score: 3 (2 votes) · LW · GW
This post isn't particularly concerned with the full suite of tools you'd need to deploy to rigorously show that my argument is correct. And that isn't its intention. I wrote this because people asked me to surface all the relevant considerations I was aware of for home purchase.

I haven't been following the thread here, but from a quick glance, it seems like it would have been good to include an epistemic status or other preamble about your intention with the post. From my quick read, I had the impression that it was very carefully considered/rigorous.

Comment by ruby on Ruby's Short-Form Feed · 2019-07-07T01:23:10.678Z · score: 15 (4 votes) · LW · GW

Communal Buckets

A bucket error is when someone erroneously lumps two propositions together, e.g. "I made a spelling error" automatically entails "I can't be a great writer" - they're in one bucket when really they're separate variables.

In the context of criticism, it's often mentioned that people need to learn not to make the bucket error of "I was wrong" or "I was doing a bad thing" -> "I'm a bad person." That is, you being a good person is compatible with making mistakes, being wrong, and causing harm, since even good people make mistakes. This seems right and true and a good thing to realize.

But I can see a way in which being wrong/making mistakes (and being called out for this) is upsetting even if you personally aren't making a bucket error. The issue is that you might fear that other people have the two variables collapsed into one. Even if you might realize that making a mistake doesn't inherently make you a bad person, you're afraid that other people are now going to think you are a bad person because they are making that bucket error.

The issue isn't your own buckets, it's that you have a model of the shared "communal buckets" and how other people are going to interpret whatever just occured. What if the community/social reality only has a single bucket here?

We're now in the territory of common knowledge challenges (this might not require full-blown common knowledge, but each person knowing what all the others think). For an individual to no longer be worried about automatic entailment between "I was wrong -> I'm bad", they need to be convinced that no one else is thinking that. Which is hard, because I think that people do think that.

(Actually, it's worse, because other people can "strategically" make or not make bucket errors. If my friend does something wrong, I'll excuse it and say they're still a good person. If it's someone I already disliked, I'll take any wrongdoing as evidence of their inherent evil nature. There's a cynical/pessimistic model here where people are likely to get upset anytime something is shared which might be something they can be attacked with (e.g. criticism of their mistakes of action/thought), rightly or wrongly.)

Comment by ruby on Causal Reality vs Social Reality · 2019-07-07T00:11:37.979Z · score: 2 (1 votes) · LW · GW
should be done only with very large hesitation and after many iterations of conversations, and I expect this to stay our site policy for the foreseeable future.

For a long-standing community member, this does seem correct to me.

Comment by ruby on Causal Reality vs Social Reality · 2019-07-05T02:09:47.646Z · score: 11 (4 votes) · LW · GW

That is pretty much my picture. I agree completely about the trickiness of it all.

and this might not be the best path to go down.

At some point I'd be curious to know your thoughts on the other potential paths.

Comment by ruby on Causal Reality vs Social Reality · 2019-07-05T02:06:52.027Z · score: 2 (1 votes) · LW · GW

I agree with that. Granting to yourself that you feel legitimately defensive because of a true external attack does not equate to necessarily responding directly (or in any other way). You might say "I am legitimately defensive and it is good my mind caused me to notice the threat", and then still decide to "suck it up."

Comment by ruby on Causal Reality vs Social Reality · 2019-07-04T23:56:14.051Z · score: 7 (3 votes) · LW · GW
>> I, at least, am a social monkey.
I basically don’t find this compelling, for reasons analogous to No, It’s not The Incentives, it’s you.

I think that's a good complaint and I'm glad Vaniver pointed it out.

The question I have for anyone who says this sort of thing is… do you endorse this reaction? If you do, then don’t hide behind the “social monkey” excuse; honestly declare your endorsement of this reaction, and defend it, on its own merits. Don’t say “I got defensive, as is only natural, what with your tone and all”; say “you attacked me”, and stand behind your words.

I think this is a very good question. Upon reflection, my answer is that I do endorse it on many occasions (I can't say that I endorse it on all occasions, especially in the abstract, but many). I think that myself and others find ourselves feeling defensive not merely because of uncleared bucket errors, but because we have been "attacked" to some lesser or greater extent.

You are right, the "social monkey" thing is something of an excuse, arguably born out of perhaps excessive politeness. You offer such an excuse when requesting someone else change in order to be polite, to accept some of the blame for the situation yourself rather than be confrontational and say it's all them - trying to paint a way out of conflict where they can save face. (If someone's behavior already feels uncomfortably confrontational to you and you want to de-escalate, the polite behavior is what comes to mind.)

In truth though, I think that my "monkey brain" (and those of others) pick up on real things: real slights, real hostility, real attempts to do harm. Some are minor, but they're still real, and it's fair to push back on them. Some defensiveness is both justified and adaptive.

Comment by ruby on Causal Reality vs Social Reality · 2019-07-04T20:48:45.811Z · score: 2 (1 votes) · LW · GW

I agree. Was planning to request this.

Comment by ruby on Raemon's Shortform · 2019-07-04T02:21:15.864Z · score: 7 (3 votes) · LW · GW

If Ray's talking about me as the newly onboarded member, I can say I didn't examine any individual votes outside of due process. (I recall one such case of due process where multiple users were reporting losing karma on multiple posts and comments - we traced it back to a specific cause.)

I do a lot of the analytics, so when I first joined I was delving into the data, but mostly at the aggregate metrics level. Since I was creating new ways to query the data, Ray correctly initiated a conversation to determine our data handling norms. I believe this was last September.

For further reassurance, I can say that vote data is stored only with inscrutable ID numbers for comments, posts, and users. We have to do multiple lookups/queries if we want to figure out who voted on something, which is more than enough friction to ensure we don't ever accidentally see individual votes.
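To illustrate the kind of friction I mean - with a made-up, in-memory stand-in, not the actual LessWrong schema - resolving a vote to a human-readable name takes several deliberate lookups:

```python
# Hypothetical stand-in for storage where votes reference only opaque ids.
votes = [
    {"voter_id": "u_4f21", "target_id": "c_9a3b", "power": 1},
]
usernames = {"u_4f21": "some_user"}        # separate lookup table
targets = {"c_9a3b": {"kind": "comment"}}  # another separate lookup table

def who_voted_on(target_id: str) -> list[str]:
    # Nothing identifiable is visible at a glance; each resolution step
    # below corresponds to an extra deliberate query.
    voter_ids = [v["voter_id"] for v in votes if v["target_id"] == target_id]
    return [usernames[vid] for vid in voter_ids]

print(who_voted_on("c_9a3b"))  # ['some_user']
```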

We do look at aggregate vote data that isn't about a user voting on a specific thing, e.g. overall number of votes, whether a post is getting a very large proportion of downvotes (anyone can approximately infer this by comparing karma and number of votes via the hover-over).

Comment by ruby on Causal Reality vs Social Reality · 2019-07-03T05:57:32.018Z · score: 6 (3 votes) · LW · GW

Thank you.

Comment by ruby on Self-consciousness wants to make everything about itself · 2019-07-03T05:29:31.195Z · score: 20 (6 votes) · LW · GW
I don't think being a good person is a concept that is really that meaningful.

My not-terribly-examined model is that good person is a social concept masquerading as a moral one. Human morality evolved for social reasons to service social purposes [citation needed]. Under this model, when someone is anxious about being a good person, their anxiety is really about their acceptance by others, i.e. good = has met the group's standards for approval and inclusion.

If this model is correct, then saying that being a good person is not a meaningful moral concept could be interpreted (consciously or otherwise) by some listeners to mean "there is no standard you can meet which means you have gained society's approval". Which is probably damned scary if you're anxious about that kind of thing.

My quick thought on the implications is something like: maybe it's good to generally dissolve "good person" as a moral concept and then consciously factor out the things you do to gain acceptance within groups (according to their morality) from the actions you take because you're trying to optimize for your own personally-held morality/virtues/values.

Comment by ruby on Causal Reality vs Social Reality · 2019-07-03T04:53:40.166Z · score: 26 (8 votes) · LW · GW

Some Updates and an Apology:

I've been thinking about this thread as well as discourse norms generally. After additional thought, I've updated that I responded poorly throughout this thread and misjudged quite a few things. I think I felt disproportionately attacked by Zack's initial comment (perhaps because I haven't been active enough online to ever receive a direct combative comment like that one), and after that I was biased to view subsequent comments as more antagonistic than they probably were.

Zack's comments contain some reasonable and valuable points. I think they could have been written better to let the good points be readily seen (content, structure, and tone), but notwithstanding that, it's probably on the whole good that Zack contributed them, including the first one as written.

The above update makes me also update towards more caution around norms which dictate how one communicates. I think it probably would have been bad if there'd been norms I could have invoked to punish or silence Zack when I felt upset with him and his comments. (This isn't a final statement of my thoughts, just an interim update, as I continue to think more carefully about this topic.)

So lastly, I'm sorry @Zack. I shouldn't have responded quite as I did, and I regret that I did. I apologize for the stress and aggravation that I am responsible for causing you. Thank you for your contributions and persistence. Maybe we'll have some better exchanges in the future!?

Comment by ruby on Causal Reality vs Social Reality · 2019-07-02T20:31:28.017Z · score: 16 (5 votes) · LW · GW

To be fair, in this context, I did say upthread that I wanted to ban Zack from my posts and possibly the entire site. As someone with moderator status (though I haven't been moderating very much to date) I should have been much more cautious about mentioning banning people, even if that's just me, no matter my level of aggravation and frustration.

I'm not sure what the criteria for "interesting" are, but my current personal leaning would be to exert more pressure than banning just crackpots and people who "violated norms really hard", though I haven't thought about this or discussed it all that much. I would do so before advocating hard for a particular standard to be adopted widely.

But these are my personal feelings, not ones I've really discussed with the team and definitely not any team consensus about norms or policies.

(Possibly relevant or irrelevant: I wrote this before habryka's most recent comment below.)

Comment by ruby on Raemon's Shortform · 2019-07-02T03:28:31.613Z · score: 4 (2 votes) · LW · GW

Quick comment to say that I think there are some separate disagreements that I don't want to get collapsed together. I think there's 1) "politeness/there are constraints on how you speak" vs "no or minimal constraints on how you speak", and 2) Combat vs Nurture / Adversarial vs Collaborative. I think the two are correlated but importantly distinct dimensions. I really don't want Combat culture, as I introduced the term, to get rounded off to "no or minimal constraints on how you can speak".

Comment by ruby on Causal Reality vs Social Reality · 2019-07-01T23:24:45.472Z · score: 7 (3 votes) · LW · GW

The debate here feels like something more than combat vs other cultures of discussion. There are versions of combative cultures which are fine and healthy and which I like a lot, but also versions which are much less so. I would be upset if anyone thought I was opposed to combative discussion altogether, though I do think they need to be done right and with sensitivity to the significance of the speech acts involved.

Addressing what you said:

I think in an ideal world there would be two vibrant LW2s, one for each conversational culture, because right now it's not clear where people who strongly prefer combat culture are supposed to go.

I think there's some room on LessWrong for that. Certainly under the Archipelago model, authors can set the norms they prefer for discussions on their posts. Outside of that, it seems fine, even good, if users who've established trust with each other and have both been seen to opt into a combative culture choose to have exchanges which go like that.

I realize this isn't quite the same as a website where you universally know, without checking, that in any place on the site one can abide by their preferred norms. So you might be right - the ideal world might require more than one LessWrong, and anything else is going to fall short. Possibly we build "subreddits" and those could have an established universal culture where you just know "this is how people talk here".

I can imagine a world where eventually it was somehow decided by all (or enough of the relevant) parties that the default on LessWrong was an unfiltered, unrestrained combative culture. I could imagine being convinced that actually that was best . . . though it'd be surprising. If it was known as the price of admission, then maybe that would work okay.

Comment by ruby on Causal Reality vs Social Reality · 2019-07-01T22:45:16.529Z · score: 2 (1 votes) · LW · GW

I appreciate you noting that. I'm hoping to wrap up my involvement on this thread soon, but maybe we will find future opportunities to discuss further.

Comment by ruby on Causal Reality vs Social Reality · 2019-07-01T15:44:06.827Z · score: -3 (6 votes) · LW · GW

Edit 19-07-02: I think I went too far with this post and I wish I'd said different things (both in content and manner, some of the positions and judgments I made here I think were wrong). With more thought, this was not the correct response in multiple ways. I'm still processing and will eventually say more somewhere.

. . .

That is persuasive evidence that you respect my ability to think, and even flattering. I would have also taken it as strong evidence if you'd simply said "I respect your thinking" at some earlier point. Yet, 1) when I said that someone at least acting as though they respected my thinking was pivotal in whether I wanted to talk to them and expected the conversation to be productive, you forcefully argued that respect wasn't important. 2) You emphasized that it was important that when someone is wrong, everyone is made aware of it. In combination, this led me to think you weren't here to have a productive conversation with someone you thought was a competent thinker; instead you'd come in order to do me the favor of informing me I was flat-out, no questions about it, wrong.

I want to emphasize again that the key thing here is that someone acts in a way I interpret as showing some level of respect and consideration. It matters more to me that they be willing to act that way than that they actually feel it. Barring the last two comments (kind of), your writing here has not (as I've tried to explain) registered that way to me.

I am sympathetic to positions that fear certain norms prioritize politeness over truth-seeking and information exchange. I wrote Conversational Cultures: Combat vs Nurture, in which I expressed that the combative style was natural to me, but I also wrote a follow-up arguing that each culture depends on an appropriate context. I am not combative (I am not sure if I would describe your style this way, but maybe approximately) in online comments - certainly not with someone I don't know and don't feel safe with. The conditions for it do not hold here.

I am at my limit of explaining my position regarding respect and politeness, etc., and why I think those are necessary. I grant that there are legitimate fears and also that I haven't fully expressed my comprehension of them and countered them fully and rigorously. But I'm at my limit. I'm inclined to think that your behavior is in part the result of considered principles which aren't crazy, though naive and maybe willfully dismissive of counter-considerations.

I can see that at the core you are a person with ideas who is theoretically worth talking to and with whom I could have a valuable discussion. But also this entire exchange has been stressful and aggravating. Your initial comments were already unpleasant, and this further exchange about conversational norms has been the absolute lowlight of my weekend (indeed, receiving your comments has made my whole week feel worse). I am not sure if your excitedness as expressed by your bangs (!) indicates that you're having fun, but I'm not. I've persisted because it seemed like the right thing to do. I'm at my limit of explaining why my position is reasonable. I'm at my limit of willingness to talk to you.

I am strongly tempted to ban you from commenting on any of my posts to save myself further aggravation (as any user above the 50-karma [on Personal blogposts] or 2000-karma [Frontpage posts] threshold can do). I generally want people to know they don't have to put up with stuff like this. I hesitate because some of my posts are posted somewhat in a non-personal capacity, such as the welcome page, FAQ, and even my personal thoughts about LessWrong strategy; I feel less authorized to unilaterally ban you from those. Though were it up to me, I think I would probably ban you from the entire site. I think you are making conversation worse, and I fear that for everyone who talks in your style, people experience lots of really unpleasant feelings and we lose a dozen potential commenters who don't want to be in a place where people discourse like this. (My low-fidelity shoulder habryka is suspicious of this kind of reasoning, but we can clarify that later.) Given that I think those abstract people are being quite reasonable, I am sad to lose them and I feel like I want to protect the garden:

But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay.  Being too humble, doubting themselves an order of magnitude more than I would have doubted them.  It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.
This about the Internet:  Anyone can walk in.  And anyone can walk out.  And so an online community must stay fun to stay alive.  Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late. [emphasis added]

I see the principles behind your writing style and why these seem reasonable to you. I am telling you how I perceive them and the reaction they provoke in me (stress, aggravation - not fun). I am writing this to say that if you make no alterations to how you write (which you are not forced to, generally), then I do not want to talk to you and would personally advocate for your removal from our communal places of discussion.

This is not because I think politeness is more important than truth. Emphatically not. It is because I think your naive (and perhaps willfully oblivious) stances emphatically get in the way of productive, valuable truth-seeking discussion between humans as they exist (and I don't think those humans are being unreasonable).

I place few limits on what people can say to each other content-wise and would fight against any norms that get in the way of that. I don't think anyone should ever have to hide what they think, why, or that they think something is really dumb. I do think people ought to invest some effort in communicating in a way that indicates some respect and consideration for their interlocutors (for their feelings even if not their thinking). I grant that that can be somewhat costly and effortful - but I think it's a necessary cost, and I'm unwilling to converse with people unwilling to go to that effort, barring exceptional exceptions. Unwillingness to do so (to me) reads as someone prioritizing their own effort and experience as completely outweighing my own.

(A nice signal that you cared about how I felt would have been if, after I'd said your bangs (!) and rhetorical question marks (?) felt condescending to me, you'd made an effort to reduce your usage rather than ramping them up to 11 - at least for this conversation, to show some good will. I'm actually quite annoyed about this. You said "I don't know how I could have written my post more politely without making it worse"; I pointed out a few things. You responded by doing more of those things. Way more. Literally 11 bangs and 9 question marks.)

It's not just about respecting my thinking, it's about someone showing that they care at all about how I feel and how their words impact me. A perhaps controversial opinion is that I think claims of "if you cared about truth, you'd be happy to learn from my ideas no matter how I speak" are used to excuse an emotional selfishness ("I want to write like this, it'd be less fun for me if I had to do otherwise - can't you see how much I'm doing you a favor by telling you you're wrong?"), and that if we accept such arguments, we give people basically a free license to be unpleasant jerks who can get away with rudeness, bullying, belittling, attacks, etc., all under the guise of "information exchange".

Comment by ruby on What's the best explanation of intellectual generativity? · 2019-07-01T05:11:35.239Z · score: 2 (1 votes) · LW · GW

Good comment. Strong upvote.

Comment by ruby on What's the best explanation of intellectual generativity? · 2019-07-01T01:41:48.190Z · score: 2 (1 votes) · LW · GW

I see the problem. As you identified, there are two questions here. 1) Is it necessary to have new jargon here? Can't we just say "creativity"? 2) Assuming new jargon is warranted, how do we ensure it is properly defined and introduced?

I was addressing the first question, though I completely agree the second is awfully important. I'm not sure how much of a definition is warranted at this point, but I do think the OP should have offered at least a few sentences describing the thing rather than introducing it solely as a term which is often used in their circle.

Comment by ruby on Causal Reality vs Social Reality · 2019-07-01T00:16:50.548Z · score: 18 (7 votes) · LW · GW
Whether or not you respect them as a thinker is off-topic.

Unless I evaluate someone else to be far above my level or I have a strong credence that there's definitely something I have to learn from them, then my interest in conversing heavily depends on whether I think they will act as though they respect me. It's not just on-topic, it's the very default fundamental premise on which I decide to converse with people or not - and a very good predictor of whether the conversation will be at all productive. I have greatly reduced motivation to talk to people who have decided that they have no respect for my reasoning, are only there to "enlighten" me, and are going to transparently act that way.

"You said X, but this is wrong because of Y" isn't a personal attack!

Not inherently. But "tone" is a big deal and yours is consistently one of attack around statements which needn't be so.

For example, I'm not sure how I'm supposed to rewrite my initial comment on this post to be more collaborative without making it worse writing.

Some examples of unnecessary aspects of your writing which make it hostile and worse:

What? Why?

As you said yourself, this was rhetorical, even feigned surprise, and that you were attempting "naive Socratic ducking."

Of course, not everyone has the skillset to do biotechnology work!
That's a pretty small and indirect effect, though!
Whoops!

The tone of these sentences, appending an exclamation mark to trivial statements, registers to me as actually condescending and rude. It's as though you're trying to educate me as though I were a child, adding energy and surprise to your lessons.

Content-wise, it's a bit similar. You're using a lot of examples to back up a very simple point (that clamoring in streets isn't an effective strategy). In fact I think you misunderstood my point (which is not unfair, the motivating example was only a paragraph), and if you'd simply said "I don't see how this is surprising, that wouldn't be very strategic", I could have clarified that since my predictions of people's behavior do not assume people are strategic, the fact that something isn't strategic doesn't reduce my surprise. Or maybe that wasn't quite the misunderstanding or complaint - but you could expect it was something.

In practice, you spent 400 words elaborating on points which are kinda basic and with which I don't disagree. Decomposing my impression that your comment was hostile, I think a bunch of it stemmed from the fact that you thought you needed to explain those points to me - that you thought my statement which seemed wrong to you was based on my failure to comprehend a very simple thing rather than perhaps me failing to have communicated a more complicated thing to you which made more sense.

Thinking about it, I think your comment is worse writing for how you've gone about it. A more effective version might go like this:

"I find your statement surprising. Of course people aren't clamoring in the streets - that'd hardly be effective. Effective things might be something like my friend's company . . .

You needn't educate me about comparative advantage and earning to give. I don't assume you've been paying attention, but you might have noticed that I post a moderate amount on LessWrong and am in fact a member of the LessWrong 2.0 team. I'm not new to this community. Probably, I've heard of comparative advantage and earning to give.

It also feels to me (though I grant this could entirely be in my head) like you're maybe leaning a bit too much on your sources/references/links for credibility in a way that also registers as condescending. As though you're 1) trying to make your position seem like the obviously correct one because of the scholarship backing it up despite those links going to elementary resources and concepts, 2) trying to make yourself seem like the teacher who is educating me.

Rules that require people to pretend to be more uncertain than they actually are (because disagreement is disrespect) run a serious risk of degenerating into "I accept a belief from you if you accept a belief from me" social exchange.

I disagree. I haven't seen that happen in any rationalist conversation I've been a part of. What I have seen (and made the mistake myself too many times) is people being overconfident that they're correct. A norm, aka cultural wisdom, that says maybe you're not so obviously right as you think helps correct for this in addition to the fact that conversations go better when people don't feel they're being judged and talked down to.

Yes, marketing is important.

1) As another example, this is dismissive and rude too. 2) I don't think anything I described fits a reasonable definition of marketing. I want to guess that marketing here is being used somewhat pejoratively, and at best as a noncentral fallacy.

At this point, I think we'd better wrap up this discussion. I doubt either of us is going to start feeling more warmly towards the other with further comments, nor do I expect us to communicate much more information than we already have. I'm happy to read another reply, but I probably won't respond further.


Comment by ruby on What's the best explanation of intellectual generativity? · 2019-06-30T21:32:25.029Z · score: 5 (2 votes) · LW · GW

It does seem that "creativity" could technically be used instead. A guess for why someone coined "intellectual generativity" instead is that creativity primarily has connotations of the creative arts: painting, fiction, poetry, music. So when I think of someone being "intellectually creative", I'm imagining them coming up with lots of interesting, zany hypotheses. "Generative" has less of that connotation to me; it's more about just having lots of intellectual output.

Comment by ruby on Causal Reality vs Social Reality · 2019-06-30T19:34:20.635Z · score: 3 (2 votes) · LW · GW

There's an implication in your comment I don't necessarily agree with, now that you point it out: "we should be much more hesitant about assuming an object-level error we spot is real" -> "we should ask for clarification when we notice something."

Person A argues X, Person B thinks X is wrong and wants to respond with argument Y. I don't think they have to ask for clarification, I think it's enough that they speak in a way that grants that maybe they're missing something, in a way that's consistent with having some non-negligible prior that the other person is correct. More about changing how you say things than what you say. So if asking for clarification isn't helpful, don't do it.

Comment by ruby on Causal Reality vs Social Reality · 2019-06-30T19:26:43.260Z · score: 4 (2 votes) · LW · GW
I think the point that Benquo and I have been separately trying to make is that admonishing people to be angry independently of their anger having some specific causal effect on a specific target, doesn't make sense in the context of trying to explain the "causal reality vs. social reality" frame.

I wonder if this is a point where I am being misunderstood. Based on this and a few in-person conversations, people think I'm taking a normative stance here. I'm not. Not primarily. I am trying to understand a thing I am confused about and to explain my observations. I observe that my models lead me to expect that people would be doing X, but I do not observe that - so what am I missing?

For the record, for all those reading:

This post isn't trying to tell anyone to do anything, and I'm not actively stating a judgment. I haven't thought about what people should be doing. I'm not saying they should be clamoring in the streets. There is no active admonishing directed at anyone here. There is no thesis. I haven't thought about what people should be doing enough - I haven't thought through what would actually be strategic for them. So I don't know. Not with any confidence, not enough to tell them what to do.

Given this is about my confusion about what I expect people to do, and that I don't expect people to be strategic, the question of whether or not doing X would be strategic isn't really relevant. My model doesn't predict people to be strategic, so the fact that the strategic action might not be to do X doesn't make me less confused.

(A valid counter to my confusion is saying that people are in fact strategic, but I'm rather incredulous. I'm not sure if you or Benquo were saying that?)

The magnitude of the chance matters! Have you read the Overly Convenient Excuses Sequence? I think Yudkowsky explained this well in the post "But There's Still a Chance, Right?".

I am a bit confused, I might not be reading you carefully enough, but it feels here like you're trying to explain people's behavior with reference to normative behavior rather than descriptive (in this comment and earlier ones).

It's precisely because I expect most people to think "but there's still a chance, right?" that I would expect the possibility of life extension to motivate people to action - more so than if they cared about the magnitude. (Also, caring about magnitude is a causal-reality thing, I would say, as is the notion of probabilities, seemingly.)

Comment by ruby on Causal Reality vs Social Reality · 2019-06-30T19:09:25.782Z · score: 5 (3 votes) · LW · GW
I would think that if someone's reasoning is obviously wrong, then that person and everyone else should be informed that they are wrong

As you say in your next paragraph, one should be careful before asserting someone is obviously wrong. But sometimes they are. And if the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private - but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. The harsh calling out might be effective for onlookers, I suppose. But the strength of the "wrongness assertion" really should come from the arguments behind it, not the rhetorical force of the speaker. If the arguments are solid, they should be damning even with a gentle tone. If people ought to update that my reasoning is poor, they can do so even if the speaker was being polite and according respect.

Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express "I think there is still some prior that you are correct and I am curious to hear your thoughts", or failing that, "You are very clearly wrong here, yet I still respect you as a thinker who is worth my time to discourse with." If neither of those is true, you're in a tough position. Maybe you want them to go away, or you just want other people not to believe false things. There's an icky thing here: I feel like for there to be productive and healthy discussion, you have to act as though at least one of the above statements is true, even if it isn't. No one is going to respond well to discussion with someone who they think doesn't respect them and is happy to broadcast that judgment to everyone else (doing so is legitimately quite a hostile social move).

The hard thing here is that it's about perceptions more than intentions. People interpret things differently, people have different fears and anxieties, and that means things can come across as more hostile than they're intended. Or the receiver is more worried about what others will think than the speaker is (reasonably - the receiver has more at stake).

Though around here, people are pretty good at admitting they're wrong. But I think certain factors about how they're communicated with can determine whether it feels like a helpful correction vs a personal attack.

More generally, I'm in favor of politeness norms where politeness doesn't sacrifice expressive power,

This might be because I'm overly confident in my writing ability, but I don't think maintaining politeness would ever curtail my expressive power, although admittedly it can take a lot more time. Do you have any examples, real or fictional, where you feel expressiveness was sacrificed to politeness?

but I'm wary of excessive emphasis on collaborative norms (what some authors would call "tone-policing") being used to obfuscate information exchange or even shut it down

At the risk of sparking controversy, can you link to any examples of this on LessWrong from the past few years? I want to know if we're actually in danger of this at all.

(what some authors would call "tone-policing") being used to obfuscate information exchange or even shut it down (via what Yudkowsky characterized as appeal-to-egalitarianism conversation-halters).

I think the tough thing is that all norms can be weaponized and abused. Basically, if you have a goal which isn't truth-seeking (which we all do), then there is no norm I can imagine which on its own will stop you. The absence of tone-policing permits heated angry exchanges, attacks, and bullying - but so does a tone policing which is selectively enforced.

On LessWrong, I think we need to strike a balance. We should never say "you used a mean tone, therefore you are wrong and must immediately leave the discussion" or "all opinions must be given equal respect" (cue your link); but we should still say "no, you can't call people idiots here" and "if you're going to argue with someone, this can go a lot better if you're open to the fact that you could be the wrong one."

Naturally, there's a lot of grey area in the middle. I like the idea of us being a community where we discuss what ideal discussion looks like, continually refining our norms to something that works really well.

(Hence my writing all this at length - trying to get my own thoughts in order and have something to refer back to/later compile into a post.)

Comment by ruby on Causal Reality vs Social Reality · 2019-06-30T18:19:48.807Z · score: 4 (2 votes) · LW · GW

(Meta: Yup, that's much better. I appreciate the effort. To share some perspective from my end, I think this has been my most controversial post to date. I think I understand now why many people say posting can be very stressful. I know of one author who removed all their content from LW after finding the comments on their posts too stressful. So there's probably a trade-off [I also empathize with the desire to express emphatic opinions as you feel them], where writing more directly can end up dissuading many people from posting or commenting at all.)

Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I'm instead interpreting as a contrast between transhumanist social reality vs. "normie" social reality.

I think that's a reasonable point. My counter is that I'd argue "transhumanist social reality" is more connected to the causal world than mainstream social reality. Transhumanists, even if they are biased and over-optimistic, etc., at least invoke arguments and evidence from the general physical world: telomeres, nanotechnology, the fact that turtles live a really long time, experiments on worms, etc. Maybe they repeat each other's socially sanctioned arguments, but those arguments invoke causal reality.

In contrast, the mainstream social reality appears to be very anchored on the status quo and history to date. You might be able to easily imagine that there's an upper bound on humanly-achievable medical technology, but I'd wager that's not the thought process most people go through when (assuming they ever even consider the possibility) they judge whether they think life-extension is possible or not. To quote the Chivers passage again:

“The first thing that pops up, obviously, is I vaguely assume my children will die the way we all do. My grandfather died recently; my parents are in their sixties; I’m almost 37 now. You see the paths of a human’s life each time; all lives follow roughly the same path. They have different toys - iPhones instead of colour TVs instead of whatever - but the fundamental shape of a human’s life is roughly the same.

Note that he's not making an argument from physics or biology or technology at all. This argument is from comparison to other people. "My children will die the way we all do," "all lives follow roughly the same path." One might claim that isn't unreasonable evidence. The past is a good prior, it's a good outside view. But the past also shows tremendous advances in technology and medical science - including dramatic increases in lifespan. My claim is that these things aren't considered in the ontology most people think within, one where how other people do things is dominant.

If I ask my parents, if I stop and ask people on the street, I don't expect them to say they thought about radical life extension and dismissed it because of arguments about what is technologically realistic. I don't expect them to say they're not doing anything towards it (despite it seeming possible) because they see no realistic path for them to help. I expect them to not have thought about it, I expect them to have anchored on what human life has been like to date, or I expect them to have thought about it just long enough to note that it isn't a commonly-held belief and conclude therefore it's just a thing another group believes.

Even if the contrast is "transhumanist social reality", I ask how did that social reality come to be, and how did people join it? I'm pretty sure most transhumanists weren't born to transhumanist families, educated in transhumanist schools, or surrounded by transhumanist friends. Something at some point prompted them to join this new social group - and I'd wager that in many cases it's because on their own they reasoned that how humans are now isn't how they have to be. Rightly or wrongly, they invoke a belief about what broader reality allows, beyond commonly held opinion or practice to date. Maybe that's a social reality too, but it's a really different one.

The reason why the disease and death example is confusing to me is partly because I expect people to be highly emotional and unstrategic - willing to invest a great deal for only a small chance. People agonize over "maybe I could have done something" often enough. They demand doctors do things "so long as there's a chance." One can doubt that radical life extension is possible, but I don't think one can be reasonably certain that it isn't. I expect that if people thought there was any non-trivial chance that we didn't need to let millions of people decay and die each year, they would be upset about it (especially given first-hand experience), and do something. As it is, I think most people take death and decay for granted. That's just how it is. That's what people do. That's my confusion. How can you so blithely ignore the progress of the last few hundred years? Or the technological feats we continue to pull off? You think it's reasonable for there to be giant flying metal cans? For us to split the atom and go to the moon? To edit genes and have artificial hearts? To have double historical lifespans already? Yet to never wonder whether life could be better still? To never be upset that maybe the universe doesn't require it to be this way - that instead we (humanity) just haven't got our shit together, and that's a terrible tragedy?

This perspective is natural to me. Obvious. The question I am trying to explain is: why am I different? I think I am the weird one (i.e., the unusual one). But what am I doing differently? How is my reality (social or otherwise) different? And one of the reasonable answers is that I invoke a different type of reasoning to infer what is possible. My evidence is that I don't encounter people responding with like-kind arguments (or even having considered the question) to questions of eliminating decay and death.

The concept of "terrible" doesn't exist in causal reality. (How does something being "terrible" pay rent in anticipated experiences?)

"Terrible" is a moral judgment. The anticipated experience is that when I point my "moral evaluator unit" at a morally terrible thing, it outputs "terrible."



Comment by ruby on Causal Reality vs Social Reality · 2019-06-30T17:38:49.020Z · score: 11 (4 votes) · LW · GW

Connotation, denotation, implication, and subtext all come into play here, as does the underlying intent one can infer from them. If you don't understand someone's point, it's entirely right to state that, but there are diverse ways of expressing incomprehension. Contrast:

  • Expressing incomprehension + a request for further clarification, e.g. "I don't understand why you think X, especially in light of Y, what am I missing?", as opposed to
  • Expressing incomprehension + judgment, opposition, e.g. "I don't understand, how could anyone think X given that Y!?"

Though inferences about underlying intent and mindstates are still only inferences, I'd say the first version is a lot more expected from a stance of "I assign some credence to your having a point that I missed (or at least act as though I do for the sake of productive discussion) and I'm willing to listen so that we can talk and figure out which of us is really correct here." When I imagine the second one, it feels like it comes from a place of "You are obviously wrong. Your reasoning is obviously wrong. I want you and everyone else to know that you're wrong and your beliefs should be dismissed." (It doesn't have to mean that - and among people where there is common knowledge that everyone respects everyone else's reasoning it could even be good - but that's not the situation in the public comments here.)

The first version of expressing incomprehension, I'd read as coming from a desire to figure out who is right here (hence collaborative). The second feels more like someone is already sure they are right and wishes to demolish what they see as wrong (more adversarial).

Comment by ruby on Causal Reality vs Social Reality · 2019-06-30T17:23:47.248Z · score: 2 (3 votes) · LW · GW

1. True, for tens of thousands of years of human history, it has been that way. But "there is no precedent or direct empirical evidence that anything else is possible" emphatically does not cut it. Within only a few hundred years the world has been transformed: we have magical god-devices that connect us across the world, we have artificial hearts, we can clean someone's blood by pumping it out and then back in, we operate on brains, we put a man on the moon. In recent years you've got the rise of AI and gene editing. Lifespans are already double what they were for most of history. What has held for tens of thousands of years is no longer so. It is not that hard to see that humankind's mastery over reality is only continuing to grow. Precedent? Maybe not. But reason for hope? Yes. Actually a pretty reasonable expectation that our medical science is not maxed out? Definitely.

This isn't speculative. The scientific and technological progress should be apparent to anyone who has lived through more than a few decades of recent history.

2. Anger doesn't always have to have a target. But if you need one then pick society, pick science, pick research, pick doctors, pick your neighbours.

3. Watching your loved ones decay and die is anguish. If people are going to yell at the doctors that they should do something, that something must be possible (though some would argue this is fake/performative), then let them also yell at the state of the world - that this unnecessary circumstance has come to be. Yell at the universe.

4. The alternative explanation to saying that people see the world overwhelmingly via social reality is that people simply have terrible causal models. Perhaps to me the scientific/technological progress of the last few hundred years is obviously, obviously reason to believe far more is possible (and better today than in fifty years), but not to others. Perhaps I'm wrong about it, though I don't think I am.

And you needn't be absolutely certain that curing death and aging is possible to demand we try. A chance should be enough. If you demand that doctors do things which only might prolong grandma's life, then why not demand better science, since there's a chance of that working too?

Perhaps people really didn't get enough of an education to appreciate science and technology (that we manipulate light itself to communicate near-instantaneously sparks no wonder and awe, for example). In that case I'd say they are overly anchored on the status quo. It is not so much that they are bound by social reality as by how things are now, without extrapolation even fifty years forward or back - even when they themselves have lived through so much change.

5. I pick the example of disease and death because it is so personal, so immediate, so painful for many. It doesn't require that we posit any altruistic motivation, and it's a situation where I expect to see a lot of powerful emotion revealing how people relate to reality (rather than them strategically choosing among the options they think are immediately available).


Comment by ruby on Matt Goldenberg's Short Form Feed · 2019-06-25T20:19:52.676Z · score: 2 (1 votes) · LW · GW

I responded to your original comment here. I don't know the Kegan types well enough (perhaps I should) to say whether that's a framing I agree with or not.

Comment by ruby on Causal Reality vs Social Reality · 2019-06-25T20:15:00.515Z · score: 6 (4 votes) · LW · GW

It feels hard to respond to your comment directly, like there's an ontology mismatch or something. But here are thoughts in response:

The things it feels I strongly care about are experiences and preferences. These exist in causal reality just the same as human minds themselves do. People "getting along" somehow feels a bit instrumental, at least stated that way. It does seem that people in social reality are putting more effort into getting along, but often by sacrificing everything else? I certainly have the feeling that social reality very often makes people miserable. Also, something like: even within social reality, I still expect most people to be optimizing for their own position/wellbeing within that social reality, not for the wellbeing of the social collective.

A line of thought, inspired by your comment (perhaps just rewritten in my own ontology), is that having a shared conception of what is good is extremely important for coordination and connection, including abiding by that shared conception of the good.

My post was definitely motivated by thinking that many people are wrongly forgetting about causal reality because they're so stuck in social reality. Probably the opposite happens some too, but it doesn't strike me as obviously the cause of as much harm/lost potential.

Comment by ruby on Causal Reality vs Social Reality · 2019-06-25T19:30:18.632Z · score: 8 (9 votes) · LW · GW

Meta-note: while your comment adds very reasonable questions and objections which you went to the trouble of writing up at length (thanks!), its tone is slightly more combative than I'd like discussion of my posts to be. I don't think the conditions that would make that the ideal style pertain here. I should perhaps put something like this in my moderation guidelines (update: now added).

I'd be grateful if you'd write future comments with a little more . . . not sure how to articulate . . . something like charity and less expression of incomprehension, more collaborative truth-seeking. Comment as though someone might have a reasonable point even if you can't see it yet.

Comment by ruby on Causal Reality vs Social Reality · 2019-06-25T19:29:25.576Z · score: 5 (7 votes) · LW · GW
Even if we interpret "clamoring in the streets" as a metonym for other forms of mass political action—presumably with the aim of increasing government funding for medical research?

Yes, "clamoring in the streets" is not to be taken too literally here. I mean that it is something people have strong feelings about, something that they push for in whatever way. They seen grandma getting sicker and sicker, suffering more and more, and they feel outrage: why have we not solved this yet?

I don't think the question of strategicness is relevant here. For one thing, humans are not automatically strategic. But beyond that, I believe my point stands because most people are not taking any actions based on a belief that aging and death are solvable and that it's terrible we're not going as fast as we could be. I maintain this is evidence they are not living in a world (in their minds) where this is a real option. Your friend is an extreme outlier, and so are you, if your Rust example holds up.

I think the exposition here would be more compelling if you explicitly mention the social pressures in both the pro-Vibrams and anti-Vibrams directions: some people will tease you for having "weird" toe-shoes, but some people will think better of you.

It's true the social pressures exist in both directions. The point of that statement is merely that social considerations can be weighed within a causal frame and traded off against other things which are not social. I don't think an exhaustive enumeration of the different social pressures helps make that point further.

The phrase "doesn't involve being so weird" makes me wonder if this is meant as deliberate irony? ("Being weird" is a social-reality concept!) You might want to rewrite this paragraph to clarify your intent.

Yes, that paragraph was written from the mock-perspective of someone inhabiting a social reality frame, not my personal outside-analyzing frame as the OP. I apologize if that wasn't adequately clear from context.

What evidence do you use to distinguish between people who are playing the "talk about life extension" group game, and people who are actually making progress on making life extension happen in the real, physical universe? (I think this is a very hard problem!)

I agree this is a very hard problem and I have no easy answer. My point here was that a person in the social reality frame might not even be able to recognize the existence of people who are working on life extension simply because they actually, really care about life extension - their whole assessment remains in the social frame (particularly at the S1 level).

Comment by ruby on Causal Reality vs Social Reality · 2019-06-25T03:37:16.980Z · score: 8 (6 votes) · LW · GW

I think a few things play into this specific case:

1. Global warming is about defending the status quo of nature from the actions of people (keep the temperature as it is), whereas anti-aging is trying to change the state of nature.

2. Global warming results from pollution and there's already a social narrative around pollution being bad. And pollution is quite simple to understand too.

3. Global warming can be viewed as a moral failing of mankind and that fits in with a lot of existing stories. (There are stories about the pursuit of eternal youth, but I think they tend to have the message that the search doesn't end well.)

Generally, though, in this post I haven't explored the interaction between the two realities. Things from causal reality necessarily feed into social reality; I don't have a clear model of how yet.

Causal Reality vs Social Reality

2019-06-24T23:50:19.079Z · score: 37 (28 votes)
Comment by ruby on LW2.0: Technology Platform for Intellectual Progress · 2019-06-20T05:26:59.425Z · score: 2 (1 votes) · LW · GW

Thanks! Fixed.

Comment by ruby on LW2.0: Community, Culture, and Intellectual Progress · 2019-06-19T22:37:00.395Z · score: 2 (1 votes) · LW · GW

Yes, but only before Wednesday 15:36 PT on 19 June 2019.

LW2.0: Technology Platform for Intellectual Progress

2019-06-19T20:25:20.228Z · score: 26 (6 votes)

LW2.0: Community, Culture, and Intellectual Progress

2019-06-19T20:25:08.682Z · score: 27 (4 votes)
Comment by ruby on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-19T18:02:01.278Z · score: 3 (2 votes) · LW · GW

Yes, matches my own thoughts on the book. Might write up some further thoughts if I get the chance.

Discussion Thread: The AI Does Not Hate You by Tom Chivers

2019-06-17T23:43:00.297Z · score: 34 (9 votes)

Welcome to LessWrong!

2019-06-14T19:42:26.128Z · score: 78 (30 votes)

LessWrong FAQ

2019-06-14T19:03:58.782Z · score: 56 (15 votes)

An attempt to list out my core values and virtues

2019-06-09T20:02:43.122Z · score: 26 (6 votes)

Feedback Requested! Draft of a New About/Welcome Page for LessWrong

2019-06-01T00:44:58.977Z · score: 30 (5 votes)

A Brief History of LessWrong

2019-06-01T00:43:59.408Z · score: 18 (10 votes)

The LessWrong Team

2019-06-01T00:43:31.545Z · score: 22 (6 votes)

Site Guide: Personal Blogposts vs Frontpage Posts

2019-05-31T23:08:07.363Z · score: 34 (9 votes)

A Quick Taxonomy of Arguments for Theoretical Engineering Capabilities

2019-05-21T22:38:58.739Z · score: 29 (6 votes)

Could humanity accomplish everything which nature has? Why might this not be the case?

2019-05-21T21:03:28.075Z · score: 8 (2 votes)

Could humanity ever achieve atomically precise manufacturing (APM)? What about a much-smarter-than-human-level intelligence?

2019-05-21T21:00:30.562Z · score: 8 (2 votes)

Data Analysis of LW: Activity Levels + Age Distribution of User Accounts

2019-05-14T23:53:54.332Z · score: 27 (9 votes)

How do the different star-types in the universe (red dwarf, etc.) relate to habitability for human-like life?

2019-05-11T01:01:52.202Z · score: 6 (1 votes)

How many "human" habitable planets/stars are in the universe?

2019-05-11T00:59:59.648Z · score: 6 (1 votes)

How many galaxies could we reach traveling at 0.5c, 0.8c, and 0.99c?

2019-05-08T23:39:16.337Z · score: 6 (1 votes)

How many humans could potentially live on Earth over its entire future?

2019-05-08T23:33:21.368Z · score: 9 (3 votes)

Claims & Assumptions made in Eternity in Six Hours

2019-05-08T23:11:30.307Z · score: 46 (13 votes)

What speeds do you need to achieve to colonize the Milky Way?

2019-05-07T23:46:09.214Z · score: 6 (1 votes)

Could a superintelligent AI colonize the galaxy/universe? If not, why not?

2019-05-07T21:33:20.288Z · score: 6 (1 votes)

Is it definitely the case that we can colonize Mars if we really wanted to? Is it reasonable to believe that this is technically feasible for a reasonably advanced civilization?

2019-05-07T20:08:32.105Z · score: 8 (2 votes)

Why is it valuable to know whether space colonization is feasible?

2019-05-07T19:58:59.570Z · score: 6 (1 votes)

What are the claims/arguments made in Eternity in Six Hours?

2019-05-07T19:54:32.061Z · score: 6 (1 votes)

Which parts of the paper Eternity in Six Hours are iffy?

2019-05-06T23:59:16.777Z · score: 18 (5 votes)

Space colonization: what can we definitely do and how do we know that?

2019-05-06T23:05:55.300Z · score: 31 (9 votes)

What is corrigibility? / What are the right background readings on it?

2019-05-02T20:43:45.303Z · score: 6 (1 votes)

Speaking for myself (re: how the LW2.0 team communicates)

2019-04-25T22:39:11.934Z · score: 47 (17 votes)

[Answer] Why wasn't science invented in China?

2019-04-23T21:47:46.964Z · score: 77 (25 votes)

Agency and Sphexishness: A Second Glance

2019-04-16T01:25:57.634Z · score: 27 (14 votes)

On the Nature of Agency

2019-04-01T01:32:44.660Z · score: 30 (10 votes)

Why Planning is Hard: A Multifaceted Model

2019-03-31T02:33:05.169Z · score: 37 (15 votes)

List of Q&A Assumptions and Uncertainties [LW2.0 internal document]

2019-03-29T23:55:41.168Z · score: 25 (5 votes)

Review of Q&A [LW2.0 internal document]

2019-03-29T23:15:57.335Z · score: 25 (4 votes)

Plans are Recursive & Why This is Important

2019-03-10T01:58:12.649Z · score: 61 (24 votes)

Motivation: You Have to Win in the Moment

2019-03-01T00:26:07.323Z · score: 49 (21 votes)

Informal Post on Motivation

2019-02-23T23:35:14.430Z · score: 29 (16 votes)

Ruby's Short-Form Feed

2019-02-23T21:17:48.972Z · score: 11 (4 votes)

Optimizing for Stories (vs Optimizing Reality)

2019-01-07T08:03:22.512Z · score: 45 (15 votes)

Learning-Intentions vs Doing-Intentions

2019-01-01T22:22:39.364Z · score: 58 (21 votes)

Four factors which moderate the intensity of emotions

2018-11-24T20:40:12.139Z · score: 60 (18 votes)

Combat vs Nurture: Cultural Genesis

2018-11-12T02:11:42.921Z · score: 36 (11 votes)

Conversational Cultures: Combat vs Nurture

2018-11-09T23:16:15.686Z · score: 120 (41 votes)

Identities are [Subconscious] Strategies

2017-10-15T18:10:46.042Z · score: 20 (9 votes)

Meetup : LW Copenhagen: December Meetup

2014-12-04T17:25:24.060Z · score: 1 (2 votes)

Meetup : Copenhagen September Social Meetup - Botanisk Have

2014-09-21T11:50:44.225Z · score: 1 (2 votes)

Meetup : LW Copenhagen - September: This Wavefunction Has Uncollapsed

2014-09-07T08:19:46.172Z · score: 1 (2 votes)

Motivators: Altruistic Actions for Non-Altruistic Reasons

2014-06-21T16:32:50.825Z · score: 19 (22 votes)

Meetup : July Rationality Dojo: Disagreement

2014-06-12T14:23:04.899Z · score: 1 (2 votes)