Conversational Cultures: Combat vs Nurture (V2) 2020-01-17T20:23:53.772Z · score: 129 (44 votes)
Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 2019-12-23T03:46:31.060Z · score: 18 (5 votes)
LW Team Updates - December 2019 2019-12-05T20:40:39.473Z · score: 41 (14 votes)
[LW Team] Request for User-Interviews about Tagging/Search/Wikis 2019-11-14T22:16:35.840Z · score: 14 (4 votes)
LW Team Updates - November 2019 (Subscriptions & More) 2019-11-08T02:39:29.498Z · score: 30 (13 votes)
[Team Update] Why we spent Q3 optimizing for karma 2019-11-07T23:39:55.274Z · score: 65 (19 votes)
[Site Update] Subscriptions, Bookmarks, & Pingbacks 2019-10-29T04:32:31.109Z · score: 95 (24 votes)
Open & Welcome Thread - October 2019 2019-10-01T23:10:57.782Z · score: 10 (3 votes)
LW Team Updates - October 2019 2019-10-01T23:08:18.283Z · score: 32 (11 votes)
Novum Organum: Introduction 2019-09-19T22:34:21.223Z · score: 81 (24 votes)
Open & Welcome Thread - September 2019 2019-09-03T02:53:21.771Z · score: 10 (4 votes)
LW Team Updates - September 2019 2019-08-29T22:12:55.747Z · score: 41 (13 votes)
[Resource Request] What's the sequence post which explains why you should continue to believe things about a particle that's moving beyond your ability to observe it? 2019-08-04T22:31:37.063Z · score: 7 (1 votes)
Open & Welcome Thread - August 2019 2019-08-02T23:56:26.343Z · score: 13 (5 votes)
Do you fear the rock or the hard place? 2019-07-20T22:01:48.392Z · score: 43 (14 votes)
Why did we wait so long for the bicycle? 2019-07-17T18:45:09.706Z · score: 49 (19 votes)
Causal Reality vs Social Reality 2019-06-24T23:50:19.079Z · score: 37 (28 votes)
LW2.0: Technology Platform for Intellectual Progress 2019-06-19T20:25:20.228Z · score: 27 (7 votes)
LW2.0: Community, Culture, and Intellectual Progress 2019-06-19T20:25:08.682Z · score: 28 (5 votes)
Discussion Thread: The AI Does Not Hate You by Tom Chivers 2019-06-17T23:43:00.297Z · score: 36 (10 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 100 (54 votes)
LessWrong FAQ 2019-06-14T19:03:58.782Z · score: 59 (18 votes)
An attempt to list out my core values and virtues 2019-06-09T20:02:43.122Z · score: 26 (6 votes)
Feedback Requested! Draft of a New About/Welcome Page for LessWrong 2019-06-01T00:44:58.977Z · score: 30 (5 votes)
A Brief History of LessWrong 2019-06-01T00:43:59.408Z · score: 20 (12 votes)
The LessWrong Team 2019-06-01T00:43:31.545Z · score: 24 (7 votes)
Site Guide: Personal Blogposts vs Frontpage Posts 2019-05-31T23:08:07.363Z · score: 34 (9 votes)
A Quick Taxonomy of Arguments for Theoretical Engineering Capabilities 2019-05-21T22:38:58.739Z · score: 29 (6 votes)
Could humanity accomplish everything which nature has? Why might this not be the case? 2019-05-21T21:03:28.075Z · score: 8 (2 votes)
Could humanity ever achieve atomically precise manufacturing (APM)? What about a much-smarter-than-human-level intelligence? 2019-05-21T21:00:30.562Z · score: 8 (2 votes)
Data Analysis of LW: Activity Levels + Age Distribution of User Accounts 2019-05-14T23:53:54.332Z · score: 27 (9 votes)
How do the different star-types in the universe (red dwarf, etc.) relate to habitability for human-like life? 2019-05-11T01:01:52.202Z · score: 6 (1 votes)
How many "human" habitable planets/stars are in the universe? 2019-05-11T00:59:59.648Z · score: 6 (1 votes)
How many galaxies could we reach traveling at 0.5c, 0.8c, and 0.99c? 2019-05-08T23:39:16.337Z · score: 6 (1 votes)
How many humans could potentially live on Earth over its entire future? 2019-05-08T23:33:21.368Z · score: 9 (3 votes)
Claims & Assumptions made in Eternity in Six Hours 2019-05-08T23:11:30.307Z · score: 46 (13 votes)
What speeds do you need to achieve to colonize the Milky Way? 2019-05-07T23:46:09.214Z · score: 6 (1 votes)
Could a superintelligent AI colonize the galaxy/universe? If not, why not? 2019-05-07T21:33:20.288Z · score: 6 (1 votes)
Is it definitely the case that we can colonize Mars if we really wanted to? Is it reasonable to believe that this is technically feasible for a reasonably advanced civilization? 2019-05-07T20:08:32.105Z · score: 8 (2 votes)
Why is it valuable to know whether space colonization is feasible? 2019-05-07T19:58:59.570Z · score: 6 (1 votes)
What are the claims/arguments made in Eternity in Six Hours? 2019-05-07T19:54:32.061Z · score: 6 (1 votes)
Which parts of the paper Eternity in Six Hours are iffy? 2019-05-06T23:59:16.777Z · score: 18 (5 votes)
Space colonization: what can we definitely do and how do we know that? 2019-05-06T23:05:55.300Z · score: 31 (9 votes)
What is corrigibility? / What are the right background readings on it? 2019-05-02T20:43:45.303Z · score: 6 (1 votes)
Speaking for myself (re: how the LW2.0 team communicates) 2019-04-25T22:39:11.934Z · score: 47 (17 votes)
[Answer] Why wasn't science invented in China? 2019-04-23T21:47:46.964Z · score: 80 (27 votes)
Agency and Sphexishness: A Second Glance 2019-04-16T01:25:57.634Z · score: 27 (14 votes)
On the Nature of Agency 2019-04-01T01:32:44.660Z · score: 30 (10 votes)
Why Planning is Hard: A Multifaceted Model 2019-03-31T02:33:05.169Z · score: 37 (15 votes)
List of Q&A Assumptions and Uncertainties [LW2.0 internal document] 2019-03-29T23:55:41.168Z · score: 25 (5 votes)


Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:34:44.383Z · score: 2 (1 votes) · LW · GW

Appendix 3: How to Nurture

These are outtakes from a draft revision for Nurture Culture which seemed worth putting somewhere:

A healthy epistemic Nurture Culture works to make it possible to safely have productive disagreement by showing that disagreement is safe. There are better and worse ways to do this. Among them:

  • Adopting a “softened tone” which holds the viewpoints as object and at some distance: “That seems mistaken to me, I notice I’m confused” as opposed to “I can’t see how anyone could possibly think that”.
  • Expending effort to understand: “Okay, let me summarize what you’re saying and see if I got it right . . .”
  • Attempting to be helpful in the discussion: “I’m not sure what you’re saying; is it this: <some description or model>?”
  • Mentioning what you think is good and correct: “I found this post overall very helpful, but paragraph Z seems gravely mistaken to me because <reasons>.” This counters perceived reputational harms and can put people at ease.

Things which are not very Nurturing:

  • “What?? How could anyone think that”
  • A comment that only says “I think this post is really wrong.”
  • “You’re not accounting for X, Y, Z.” <insert multiple paragraphs explaining issues at length>

Items in the first list start to move the dial on the dimensions of collaborativeness and are likely to be helpful in many discussions, even relatively Combative ones; however, they have the important additional Nurturing effect of strongly signaling that a conversation has the goal of mutual understanding and reaching truth together – a goal whose salience shifts the significance of attacking ideas to purely practical rather than political.

While items in this second list can be extremely valuable epistemic contributions, they can heighten the perception of reputational and other harms [1] and thereby i) make conversations unpleasant (counterfactually causing them not to happen), and ii) raise the stakes of a discussion, making participants less likely to update.

Nurture Culture concludes that it’s worth paying the costs of more complicated and often indirect speech in order to make truth-seeking discussion a more positive experience for all.

[1] So much of our wellbeing and success depends on how others view us. It is reasonable for people to be very sensitive to how others perceive them.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:33:01.757Z · score: 2 (1 votes) · LW · GW

Appendix 2: Priors of Trust

I’ve said that Combat Culture requires trust. Social trust is complicated and warrants many dedicated posts of its own, but I think it’s safe to say that having the following priors helps one feel safe in a “combative” environment:

  • A prior that you are wanted, welcomed and respected,
  • that others care about you and your interests,
  • that your status or reputation is not under a high level of threat,
  • that having dumb ideas is safe and that’s just part of the process,
  • that disagreement is perfectly fine and dissent will not be punished, and 
  • that you won’t be punished for saying the wrong thing.

If you have strong priors for the above, you can have a healthy Combat Culture.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:32:22.077Z · score: 2 (1 votes) · LW · GW

Appendix 1: Conversational Dimensions

Combat and Nurture point at regions within conversation space; however, as commenters on the original pointed out, there are actually quite a few different dimensions relevant to conversations. (I focus here on truth-seeking conversations.)

Some of them:

  • Competitive vs Cooperative: within a conversation, is there any sense of one side trying to win against the others? Is there a notion of “my ideas” vs “your ideas”? Or is it just us trying to figure it out together?
    • Charitability is a related concept.
  • Willingness to Update: how likely are participants to change their position within a conversation in response to what’s said?
  • Directness & Bluntness: how straightforwardly do people speak? Do they say “you’re absolutely wrong” or do they say, “I think that maybe what you’re saying is not 100%, completely correct in all ways”?
  • Filtering: Do people avoid saying things in order to avoid upsetting or offending others?
  • Degree of Concern for Emotions: How much time/effort/attention is devoted to ensuring that others feel good and have a good experience? How much value is placed on this?
  • Overhead: how much effort must be expended to produce acceptable speech acts? How many words of caveats, clarification, softening? How carefully are the words chosen?
  • Concern for Non-Truth Consequences: how much are conversation participants worried about the effects of their speech on things other than obtaining truth? Are people worrying about reputation, offense, etc?
  • Playfulness & Seriousness: is it okay to make jokes? Do participants feel like they can be silly? Or is it no laughing business, too much at stake, etc.?

Similarly, it’s worth noting the different objectives conversations can have:

  • Figuring out what’s true / exchanging information.
  • Jointly trying to figure out what’s true vs trying to convince the other person.
  • Fun and enjoyment.
  • Connection and relationship building. 

The above are conversational objectives that people can share. There are also objectives that most directly belong to individuals:

  • To impress others.
  • To harm the reputation of others.
  • To gain information selfishly.
  • To enjoy themselves (benignly or malignantly).
  • To be helpful (for personal or altruistic gain).
  • To develop relationships and connection.

We can see which positions along these dimensions cluster together and which correspond to the particular clusters that are Combat and Nurture.

A Combat Culture is going to be relatively high on bluntness and directness, and can be more competitive (though isn’t strictly); if there is concern for emotions, it’s going to be a lower priority and probably less effort will be invested.

A Nurture Culture may inherently be prioritizing the relationships between and experiences of participants more. Greater filtering of what’s said will take place and people might worry more about reputational effects of what gets said.

These aren’t exact, and different people will encounter cultures which differ along all of these dimensions. I think of Combat vs Nurture as tracking an upstream generator that impacts how various downstream parameters get set.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:30:46.315Z · score: 2 (1 votes) · LW · GW

[2] A third possibility is someone who is not really enacting either culture: they feel comfortable being combative towards others but dislike it if anyone acts in kind towards them. I think this is straightforwardly not good.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:30:28.520Z · score: 2 (1 votes) · LW · GW

[1] I use the term attack very broadly to include any action which may cause harm to a person acted upon. The harm caused by an attack could be reputational (people think worse of you), emotional (you feel bad), relational (I feel distanced from you), or opportunal (opportunities or resources are impacted).

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:30:05.476Z · score: 2 (1 votes) · LW · GW


Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:29:08.454Z · score: 7 (1 votes) · LW · GW

Changes from V1 to V2

This section describes the most significant changes from version 1 to version 2 of this post:

  • The original post opened with a strong assertion that it intended to be descriptive. In V2, I’ve been more prescriptive/normative.
  • I clarified that the key distinction between Combat and Nurture is the meaning assigned to combative speech-acts.
  • I changed the characterization of Nurture Culture to be less about being “collaborative” (which can often be true of Combat), and more about intentionally signaling friendliness/non-hostility.
  • I expanded the description of Nurture Culture which in the original was much shorter than the description of Combat, including the addition of a hopefully evocative example.
  • I clarified that Combat and Nurture aren’t a complete classification of conversation-culture space – far from it. I further described degenerate neighbors: Combat without Safety, Nurture without Caring.
  • I added appendices which cover:
    • Dimensions along which conversations and conversational cultures vary.
    • Factors that contribute to social trust.


Shout out to Raemon, Bucky, and Swimmer963 for their help with the 2nd Version.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:27:18.894Z · score: 7 (1 votes) · LW · GW


Please do post comments at the top level.

Comment by ruby on Please Critique Things for the Review! · 2020-01-17T06:08:06.736Z · score: 3 (2 votes) · LW · GW

Yeah, true, that seems like a fair reason to point out for why there wouldn't be more reviews. Thanks for sharing your personal reasons.

Comment by ruby on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-14T01:48:54.965Z · score: 9 (4 votes) · LW · GW

[EDIT: When I say that posts earlier in the list got 25-50% more votes, I mean simply the number of non-neutral votes cast on those items, regardless of direction or magnitude. It would perhaps be more accurate to say these posts had 25-50% more people vote on them.]

The posts available for review are presented in (what I guess is) a consistent order that is (so far as I know) the same for everyone. I expect this to mean that posts presented earlier will get more votes.

Good call. I looked into this and found an effect of somewhere between 25-50% more votes for posts being displayed earlier in the list. The team rolled out a fix to randomize loading this morning.

Interestingly, the default sort order was by number of nominations in ascending order, so the most heavily nominated (approximately, the most popular) posts were being displayed last. These posts were getting as many votes as those at the beginning of the list (though possibly not as many as they might have otherwise), and it's the posts in the middle that were getting fewer.

This was an oversight which we're glad to have caught. We're around halfway through the voting, and the second half will have the deadline rush, so hopefully this bias will get countered in the coming week.

Unfortunately you just make mistakes the first time you're doing things. :/

Comment by ruby on Please Critique Things for the Review! · 2020-01-12T07:53:19.170Z · score: 2 (1 votes) · LW · GW

Okay, so 80% of the reviewers have > 1000 karma, and 90% have >= 463; which means I think the "20-25% of eligible review voters are writing reviews" number is correct, if this methodology actually makes sense.

Comment by ruby on Please Critique Things for the Review! · 2020-01-12T07:46:44.960Z · score: 2 (1 votes) · LW · GW

Re: the ratio
The ratio isn't obviously bad to me, depending on your expectation? Between the beginning of the review on Dec 8th and Jan 3rd [1], there have been 199 posts (excluding question posts but not excluding link posts), but of those:

- 149 posts written by 66 users with over 100 karma

- 95 written by 33 users above 1000 karma (the most relevant comparison)

- 151 posts written by 75 people whose accounts were first active before 2019.

Comparing those with the 82 reviews by 32 reviewers gives a ratio of reviews to posts of between 1:1 and 1:2.
I'm curious if you'd been expecting something much different. [ETA: because of the incomplete data you might want to say 120 posts vs 82 reviews, which is about 1:1.5.]
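For concreteness, here's the ratio arithmetic in a few lines of Python (the figures are the ones quoted in this comment; the variable names are mine):

```python
# Figures quoted in this comment (as of the Jan 3rd data pull).
reviews = 82
posts_1000_plus_karma = 95    # the most relevant comparison
posts_100_plus_karma = 149

# Posts per review; both land between 1:1 and 1:2.
print(posts_1000_plus_karma / reviews)  # ~1.2
print(posts_100_plus_karma / reviews)   # ~1.8
```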

Re: the effort
It's not clear to me that the effort involved means you should expect more reviews: 1) I think the cost-benefit ratio for posts is higher even if they take longer, 2) reviewing a post only happens if you've read the post and it impacted you enough to remember it and feel motivated to say stuff about it, 3) when I write posts, it's about something I've been thinking about and am excited about; I haven't developed any habit around being excited about reviews, since I'm not used to it.

[1] That's when I last pulled that particular data onto my machine, and I'm being a bit lazy because 8 more days isn't going to change the overall picture; though it means the relative numbers are a bit worse for reviews.

Comment by ruby on Please Critique Things for the Review! · 2020-01-12T06:36:31.219Z · score: 2 (1 votes) · LW · GW

Do you think there are any ways the 2018 Review as we've been doing it could be modified to be better along the dimensions you're concerned about?

Comment by ruby on Please Critique Things for the Review! · 2020-01-12T06:11:14.549Z · score: 2 (1 votes) · LW · GW

That makes sense. As I'm wont to say, there are often risks/benefits/costs in each direction.

Ways in which I think communal and collaborative review are imperative:

  • Public reviews help establish the standards of reasoning expected in the community.
  • By reading other people's evaluations, you can better learn how to perform your own.
  • It's completely time-prohibitive for me to thoroughly review every post that I might reference; instead, I trust the author. Dangerously, many people might do this, and a post can become highly cited despite flaws that would be exposed if a person or two spent several hours evaluating it.*
  • I might be competent to understand and reference a paper, but lack the domain expertise to review it myself. The review of another domain expert can help me understand the shortcomings of a post.
  • And as I think has been posted about, having a coordinated "review festival" is ideally an opportunity for people with different opinions about controversial topics to get together and hash it out. In an ideal world, review is the time when the community gets together to resolve what debates it can.

*An example is the auditing work I began on the paper Eternity in Six Hours, which is tied to the Astronomical Waste argument. Many people reference that argument, but as far as I know, few people have spent much time attempting to systematically evaluate its claims. (I do hope to finish that work and publish more on it sometime.)

Comment by ruby on Please Critique Things for the Review! · 2020-01-12T05:25:49.585Z · score: 15 (4 votes) · LW · GW

Raw numbers to go with Bendini's comment:

As of the time of writing this comment, there've been 82 reviews on the 75 qualified (i.e., twice-nominated) posts by 32 different reviewers. 24 reviews were by 18 different authors on their own posts. 

Whether this counts as a shortage, is puzzling, or is concerning is a harder question to answer. 

My quick thoughts:

  • Personally, I was significantly surprised by the level of contribution to the 2018 Review. It's really hard to get people to do things (especially things that are New and Work) and I wouldn't have been puzzled at all if the actual numbers had been 20% of what they actually are. Even the more optimistic LW team members had planned for a world where the team hunkered down and wrote all the reviews ourselves.
  • If we consider the relevant population of potential reviewers to be the same as those eligible to vote, i.e., users with 1000+ karma, then there are ~130 [1] such users who view at least one post on the site each week (~150 at the monthly timescale). That gives us 20-25% of active eligible voters writing reviews.
    • If you look at all users above 100 karma, the number is 8-10% of candidate reviewers engaging in the Review. People below 100 karma won't have written many comments and/or probably haven't been around for that long, so they aren't likely candidates.

Relative to the people who could reasonably be expected to review, I think we're doing decently, if something like 10-20% of people who could do something are doing it. Of course, there's another question of why there aren't more people with 100+ or 1000+ karma around to begin with, but it's probably not to do with the incentives or mechanics of the review.
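The participation arithmetic above can be sketched quickly (the figures are those quoted in this comment; the variable names are mine):

```python
# Figures quoted in this comment.
reviewers = 32                 # distinct users who wrote reviews
weekly_active_eligible = 130   # ~1000+ karma users viewing a post each week
monthly_active_eligible = 150  # same, at the monthly timescale

low = reviewers / monthly_active_eligible   # ~21%
high = reviewers / weekly_active_eligible   # ~25%
print(f"{low:.0%}-{high:.0%} of active eligible voters wrote reviews")
```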

[1] For reference, there are 430 users in the LessWrong database with more than 1000 karma.

Comment by ruby on Please Critique Things for the Review! · 2020-01-12T04:40:06.930Z · score: 2 (1 votes) · LW · GW

concept of "pruning" output in this way

I'd be curious to learn the alternative ways you favor, or more detail on why this approach is flawed. Standard academic peer review has its issues, but it seems a community should have some way to review material and determine what's great, what needs work, and what is plain wrong.

Comment by ruby on Circling as Cousin to Rationality · 2020-01-08T01:46:28.162Z · score: 4 (2 votes) · LW · GW

Seconding this.

I also go to T-Group (have been around a half-dozen times). T-Group, more so than other flavors of Circling, has a very rigid and restrictive format that couldn't possibly work for everyday life. It took me many tries to be remotely good at it, but it's helped me improve less heavily used aspects of my communicating/relating/connecting.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-04T21:44:55.611Z · score: 24 (6 votes) · LW · GW

[Update: the new version is now live!!]

[Author writing here.]

The initial version of this post was written quickly on a whim, but given the value people have gotten from this post (as evidenced by the 2018 Review nomination and reviews), I think it warrants a significant update, which I plan to write in time for possible publication in a book, and ideally for the Review voting stage.

Things I plan to include in the update:

  • Although dichotomies (X vs. Y) are easy to remember and talk about, different conversational cultures differ on multiple dimensions, and that ought to be addressed explicitly.
  • It's easy to round off the cultures to something simpler than I intended, and I want to ward against that. For example, the healthy Combat Culture I advocate requires a basis of trust between participants. Absent that, you don't have the culture I was pointing at.
  • Relatedly, an updated post should incorporate some of the ideas I mentioned in the sequel about the conditions that give rise to different cultures.
  • A concept which crystallized for me since writing the post is that of "the significance of a speech act" and how this crucially differs between cultures.
  • The tradeoffs between the two cultures can be addressed more explicitly.

Overall, I think my original post did something valuable in pointing clearly at two distinct regions in conversation-culture space and giving them sufficiently good labels, which enabled people to talk about them and better notice them in practice. The fact that they've gotten traction has surprised me a bit, since it suggests there was perhaps a hole in our communal vocab.

Crisply pointing at these two centroids in the large space necessarily meant sacrificing the nuance and detail from the multiple dimensions in the space. I think the ideal treatment of the topic provides both easy-to-use handles for discussion and a more thorough theory of conversational cultures. In truth, probably a sequence of posts is warranted rather than just a single behemoth post or something.

A point of interest to me is that the post differs somewhat in style from my other posts. With my other posts, I usually try to be very technically precise (and end up sounding a bit like a textbook or academic paper). This post's style was meant to be more engaging, more entertaining, more emotional, and I'm guessing that was part of its appeal. I'm not sure if that's entirely a good thing, since I think trying to write in an evocative way is in tension with writing in the most technically precise, model-rich, theoretically-accurate way.

In updating the post, I expect to move it more toward the latter style and make it a relatively "more boring" read even as I make it more accurate. I could imagine the ideal for authors being to have one highly-engaging, evocative post for a topic that draws people in, and another with the same models in their full technical glory.

Lastly, I mention that I think there's so much detail in this domain that alternative takes, e.g. Abram Demski's Combat vs Nurture & Meta-Contrarianism, feel like they're describing real and true things, yet somehow different things than what I addressed. I don't have a meta-theory yet that manages to unify all the models in this space, though that would be nice.

Comment by ruby on Melting Gold, and Organizational Capacity · 2019-12-26T03:05:51.177Z · score: 2 (3 votes) · LW · GW

as early as possible, no matter what your job is, you should make a part of your job to find new people to start sharing the load

But what if you're irrayplaceable?

Comment by ruby on Noticing the Taste of Lotus · 2019-12-25T22:13:00.271Z · score: 2 (1 votes) · LW · GW

I have a guess, but I think that's outside the purview of these reviews.

I haven't been deeply involved in the 2018 Review design process (maybe Ben and Ray have specific ideas), but my own vote is that authors should feel free to share whatever thoughts they have in response to their posts without worrying about going out of bounds.

I could imagine it being better if non-author reviews try to stay focused, but I'd vote that authors feel quite free to share all their current thoughts.

Comment by ruby on Decoupling vs Contextualising Norms · 2019-12-25T21:32:34.292Z · score: 2 (1 votes) · LW · GW

At the same time, I don't want to fall for the Fallacy of the Undistributed Middle and assume that both perspectives are equally valid.

Minor possible quibble: based on the definition in the link given, I think the Fallacy of the Undistributed Middle doesn't refer to assuming the deep-wisdom position that two sides of a debate each have merit.

The fallacy of the undistributed middle (Lat. non distributio medii) is a formal fallacy that is committed when the middle term in a categorical syllogism is not distributed in either the minor premise or the major premise. It is thus a syllogistic fallacy.

Comment by ruby on Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 · 2019-12-23T23:24:50.643Z · score: 2 (1 votes) · LW · GW

I'm glad to hear you planned to run them by the team before posting. I know you don't mean to make them announcements, but my fear is that it might be very hard to make them not come across that way.

Comment by ruby on Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 · 2019-12-23T21:17:27.115Z · score: 2 (1 votes) · LW · GW

Everything quoted below seems pretty plausible, but I'd be keen to get more gears.

I strongly believe that "Thinking out loud" is one of the key virtues to cultivate in the technological era we're in, and has major positive externalities, 

Is it a virtue because of the positive externalities, or for other reasons? How does "technological era" factor in?

and any moves to hide discussion and thought, especially when simultaneously centralising power, often have surprisingly disastrous consequences.

There's probably something to that, but just because there's a rock on one side doesn't mean there isn't a hard place on the other.

There are also details which affect the situation; for instance, there are different ways you might have people share their thinking:

1) Everyone shares thoughts whenever they feel like it, even when they're likely to be misconstrued and difficult to correct.

2) Everyone shares their individual thoughts, but only after care has been taken to ensure there's no misunderstanding, e.g. sharing a group statement about moderation to which individuals append their individual thinking.

The latter does add some friction and is stifling, but for some topics that might just be the correct balance? It's not clear to me yet that your considerations outweigh the others.

Comment by ruby on Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 · 2019-12-23T21:06:22.807Z · score: 2 (1 votes) · LW · GW

I've had a bunch of moderation posts I've wanted to write for a while, and that should improve things.

This makes me a bit anxious. My inner story is that you'll post various posts about moderation policy, intending them as "thinking aloud", but many people will relate to them as official announcements since they come from a LW mod. I'll then feel compelled to weigh in where I disagree and have a lengthy comment exchange trying to clarify the overall [incomplete] state of the team's thinking. And it'll be exhausting and stressful. And I imagine it feeling unilateralist curse-y too, because in your mind it was fine to do and in my mind it wasn't, but now I'm committed to this conversation when I'd have preferred a higher-bandwidth, in-person one first.

Comment by ruby on Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 · 2019-12-23T21:05:11.871Z · score: 6 (3 votes) · LW · GW

I think that the council situation is totally fine, if there's a public process of decision-making. [emphasis added]

I can imagine a system where individual council members speak their minds freely, yet the subjects of the realm know that no law gets passed without appropriate process, working quite well; but that "if" is doing a lot of work, and I think it currently does not hold for the LW team.

Certain topics get debated from time to time, but so far as I can recall, not in the context of "we're laying down the law now, come weigh in." I fear that people get anxious whenever those topics come up because they feel it might be their fleeting chance to make things go right.

Currently, the main place on LW where law gets formed is in actual decision announcements, and key decisions in that realm are always public, explained, and have comment sections, and everyone is allowed to write comments and posts critiquing those decisions.

I think this is false. For the most part, we have little announced law. We've got a few posts on Frontpage vs Personal blogposts, and we've got the Frontpage guidelines, but nothing much broader about what's okay vs not-okay communication, what happens if we don't like something you're doing, etc. Though individual team members operate in accordance with a number of solid underlying principles, those principles aren't really public or agreed upon across the team, and so I'd venture that many decisions seem quite ad hoc.

Most moderation decisions get made behind the proverbial closed doors (in practice we keep our door open to keep CO2 down, but you know– proverbial), and I wouldn't call those decisions law, though maybe they count as precedent.


I'm optimistic we can rectify this, and I think most of the team sees it as a likely top priority for Q1. But until we build trust (and this is difficult to do) and firmly establish a process of law being developed in public, we don't necessarily get the privilege of sharing random thoughts here and there that are of pretty large significance.

Comment by ruby on Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 · 2019-12-23T03:51:00.053Z · score: 4 (2 votes) · LW · GW

On the inquisitive side, I would be interested in hearing your full models, Ben, of the value of being able to "think for myself out loud on LW." It's not improbable that your model is richer than mine.

Comment by ruby on Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 · 2019-12-23T03:48:33.219Z · score: 7 (4 votes) · LW · GW

Continuing the thread:

I was talking specifically about moderation and matters of LW policy and norms. I think I see the value of being able to freely express our thoughts without concern for consensus, but I now think there are factors whose importance competes with the value of being able to talk freely online.

The update (which I contributed to somewhat, but which I think Ray first crystallized and propagated within the team) occurred in the wake of a number of lengthy (20+ hour?) conversations we were having with people on- and offline a few months ago about moderation stuff. I think you were on sabbatical then, Ben, which might be why you didn't share this update as strongly as me or Ray. The update was that notwithstanding caveats, people remained confused (and very concerned) in the wake of us "thinking aloud".

To quickly sketch out some factors that I don't think we can ignore and that might warrant us being more careful about what we say:

  • The five of us are extremely powerful custodians of a public commons that many people are heavily invested in and care deeply about. By "extremely powerful", I mean that structurally we are able to govern it and enact our decisions without requiring any kind of affirmative assent from the community. The community has informal means of complaint (which are quite powerful), but this isn't obvious to everyone, and I think many people fear that if we start taking things in the wrong direction, there goes LW.
    • In light of that, I think it's a reasonable reaction for people, upon hearing a LW moderator espouse a view about how LW might go (one they like or fear), to update that LW might actually enact such a view, and therefore to react with pleasure, fear, or panic; or at the very least to update their models of what might happen.
    • 1) It's non-standard for members of an organization to freely express individual ideas about policy. 2) It's cognitively difficult to keep track of five (six if you include Vaniver) models of moderation when trying to model what LW policy is. I don't blame people if they end up lumping things we individually say together, losing track, and getting confused about what's going to happen with their public commons.
  • In a world where any opinion we express gets taken seriously as an expression of what LessWrong is, we're subject to the unilateralist's curse. A and B of us might hold back from expressing an individual opinion we don't want people to mistakenly attribute as proper LW policy, but then C thinks it's fine and does so, at which point A and B feel obligated to swoop in and correct the record, leading to . . .
  • Appearances of a breakdown in internal team communication. This is a gnarly consideration, but if people are to trust us as a team, they need to believe we're capable of resolving disagreements and deciding joint policy (especially for high-level vision stuff, culture, and mod policy). It can be both good and risky for us to debate this publicly. Good because it allows others to participate in our discussion, bad if it looks like we failed to communicate internally and are now duking it out publicly, trying to correct each other, etc. I think it's fair for people to update a bit negatively depending on how that happens.


My analogy for punchiness:

Suppose you're a farmer who fled your old kingdom, ruled by an incompetent tyrant, for a new kingdom ruled by a council of "five fairly-wise men." You believe the rulers are well intentioned, but they're not elected and can enact their wishes almost immediately without restraint. 

When you hear one of the men of the council opining in the town square about grain taxation and water quota policies that would seriously affect you, you have reason to worry that this might soon become reality, even if the council elder proclaims "this is just my individual view, I'm just thinking aloud." It's worse if you only stumble upon the conversation partway through and missed his disclaimers three comments earlier, or forgot them in the midst of the lengthy rant. 

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2019-12-22T19:14:02.730Z · score: 6 (3 votes) · LW · GW

My other worry about including this in the 2018 review is that it makes a claim about what the default should be. If the post claims that Nurture Culture should be the default, does that then imply this is how LW should be? This matters even more since the post is by a member of the LW team.

I agree it should be clear about which normative stances taken in the post are statements about what should be true of LW.

At the time I wrote this post, I'd begun discussions about joining the LW team and had done maybe a couple dozen hours of remote analytics work; I then began a full-time trial, but I didn't become a full-time team member until March 2019. I'd be more careful now.

The LW team doesn't currently have a firm stance on where LW should fall on the dimensions outlined in the OP/discussion; that's something we're likely to work on in the next quarter. We've got the Frontpage commenting guidelines so far, but those don't really state things in the terms of Combat/Nurture.

My own thinking on the topic has been enriched by my much greater participation in LW discussion, including discussion around communication styles. I'd begun typing a paragraph here of some of my current thoughts, but it's probably best to hold off till I've thought more at length and am speaking alongside the rest of the team. (An update in recent discussions of moderation and conversation norms is that the team should be careful not to confuse people by saying different things individually.)

I think it is safe for me to say that while I still think that something in the Nurture cluster is a good default for most contexts, that doesn't mean that LW might not have good reasons to deviate from that default.

Comment by ruby on Book Recommendations for social skill development? · 2019-12-21T15:14:31.725Z · score: 2 (1 votes) · LW · GW

Not easy to do, necessarily, but managers are often both incentivized and well-positioned to do this since your overall workplace performance matters to them and they can observe you interact with others. This is where I got the most mentoring.

Comment by ruby on Is Causality in the Map or the Territory? · 2019-12-19T17:35:53.388Z · score: 2 (1 votes) · LW · GW

Yes, I suppose that's right too. A voltage source can't supply infinite current, i.e. it can't maintain that voltage if the load's resistance is too low, e.g. a perfectly conductive path.

Comment by ruby on Is Causality in the Map or the Territory? · 2019-12-18T06:41:03.498Z · score: 4 (2 votes) · LW · GW

Possibly helpful resource for people on this topic (and the source of my knowledge here): Academian's slides on What Causality Is, covering Pearl's stuff.

Comment by ruby on Is Causality in the Map or the Territory? · 2019-12-18T06:32:21.476Z · score: 9 (5 votes) · LW · GW

Yeah, that all seems fair/right/good and I see what you're getting at. I got nerdsniped by the current source example because it was familiar and I felt that, as phrased, it got in the way of the core idea you were going for.

The person who properly introduced me to Pearl's causality stuff had an example which seems good here and definitely erodes the notion of causality being uni-directional in time. It seems equivalent to the thermostat one, I think. 

 Suppose I'm a politician seeking election:

  • At time t0, I campaign on a platform which causes people to vote for me at time t1.
  • On one hand, my choice of campaign is seemingly the cause of people voting for me afterwards.
  • On another hand, I chose the platform I did because of an action which would occur afterwards, i.e. the voting. If I didn't have a model that people would vote for a given platform, I wouldn't have chosen that platform. My model/prediction is of a real-world thing. So it kinda seems a bit like the causality flows backwards in time. The voting causes the campaign choice, just as the temperature changing in response to knob-turning causes the knob-turning.
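A toy structural sketch (my own illustration; the platform names and vote counts are invented) makes the loop explicit: the campaign choice at t0 is computed from a prediction of the voting at t1, so the later event's modeled outcome sits causally upstream of the earlier choice.

```python
# Hypothetical toy model: platforms and vote counts are made up for illustration.
voter_response = {        # what the electorate would actually do at t1
    "lower_taxes": 60,
    "raise_taxes": 40,
}

def choose_platform(predicted_response):
    """Campaign choice at t0 is a function of the (predicted) voting at t1."""
    return max(predicted_response, key=predicted_response.get)

def votes(platform):
    """Voting at t1 is a function of the campaign at t0."""
    return voter_response[platform]

platform = choose_platform(voter_response)  # the politician's model is accurate here
print(platform, votes(platform))            # lower_taxes 60
```

Changing `voter_response` (the future behavior) changes the past choice via the prediction, which is the sense in which the causal arrow seems to run backwards, as in the thermostat case.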

I like the framing that the questions can be posed both for voltage supply and current supply, that seems more on track to me.

Comment by ruby on Big Community Solstice · 2019-12-18T03:14:13.606Z · score: 2 (1 votes) · LW · GW

I wonder if it'd be a good idea to have something like "Rationalist Christmas" or Rationalist Christmas traditions: things that build on the existing holiday, e.g. Rationalists decorate their trees with depictions of the 12 virtues of rationality, Rationalists listen to and sing the X-Days of X-Risk. 


Comment by ruby on Is Causality in the Map or the Territory? · 2019-12-18T01:03:59.269Z · score: 13 (4 votes) · LW · GW

I don't fully trust my knowledge in this domain, but this particular example seems questionable to me just because "current sources" are kind of weird. I'll throw out a few ideas from my undergrad EE (I didn't focus on the electromagnetism side, so I'm a bit weak here).

  • Mentally, I use the abstraction that voltage (differences in electrical potential) causes current flows.
  • This probably isn't quite right, but "current sources" are in some sense a bit fictitious. The defining feature is that they maintain constant current regardless of the load placed across the terminals, but in practice, you can set up a device that behaves like that by supplying whatever voltage is necessary to maintain a fixed current. So you can model a "current-source" as "a device which adapts its voltage difference to produce constant current", which is compatible with a "voltage causes current" paradigm.
    • All real current sources have a limited range they can operate over dependent on how much voltage they can supply. If you had a truly ideal current source, you'd have an infinite energy machine.
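To make the adaptive-voltage picture above concrete, here's a tiny Python sketch of my own (the numbers and names are invented for illustration): constant-current behavior implemented by a device that adjusts its voltage per Ohm's law, with a compliance limit beyond which the "constant current" abstraction breaks.

```python
# Toy model: a "current source" as a voltage source that adapts V to hold I fixed.

def adaptive_voltage(target_current, load_resistance, v_max=100.0):
    """Voltage the device must supply to hold I = target across the load R."""
    v_needed = target_current * load_resistance  # Ohm's law: V = I * R
    # Real devices have a compliance limit: beyond v_max the current sags.
    return min(v_needed, v_max)

def delivered_current(target_current, load_resistance, v_max=100.0):
    v = adaptive_voltage(target_current, load_resistance, v_max)
    return v / load_resistance  # I = V / R

# Within the compliance range, current stays fixed while voltage adapts:
print(delivered_current(2.0, 10.0))   # 2.0 A (V adapts to 20 V)
print(delivered_current(2.0, 40.0))   # 2.0 A (V adapts to 80 V)
# Beyond the compliance limit, the abstraction breaks down:
print(delivered_current(2.0, 100.0))  # 1.0 A (V capped at 100 V)
```

This is just the "a current source is an adaptive voltage source" framing in code; a truly ideal current source would correspond to `v_max` being infinite.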

See the Wiki entry on current sources, particularly the implementations. I didn't read through these closely, but a glance at several shows that on the inside they involve configurations of voltage sources. Figure 3 is a pretty clear demonstration of how a "current source" is made from an adaptive voltage source.


Caption: In an op-amp voltage-controlled current source the op-amp compensates the voltage drop across the load by adding the same voltage to the exciting input voltage.

Now, notwithstanding that, there are still interesting questions about causality. (Again, proceeding with pretty entry-level knowledge of the physics here; I'm hoping someone will show up and add certainty and clarity.) There might be some clarity in thinking about charge instead of voltage and current. We observe that if you have electric potential differences (more charge concentrated in one place than elsewhere) and a conductive path between them, then you get current flows. Of course, you get differences in charge concentrations by moving charges around in the first place, i.e. also current flows. [The "charge movement" picture gets more complicated by how things like moving magnetic fields create voltage differences/current flows; I'm not really sure how to unify the two.]

Instructively, electric potential energy is similar in some ways to gravitational potential energy. At least, both arise from conservative forces obeying inverse square laws. I can get gravitational potential energy by moving two bits of mass apart. If I release them and there's a path, the potential energy gets turned into kinetic energy and they move together. Of course, to separate them I had to move mass around. The motion of rolling a boulder up a hill and the motion of letting it roll back down are both just motion, but one stores potential energy while the other releases it.

Electric potentials seem the same (at least when thinking about electrostatics). Separating charge (current flow) creates potential differences which can be released and translate into motion (current flow).

In terms of the causality though, there seems to be something asymmetric. In some cases I'm putting energy into the system, causing motion, and building up potential energy (be it electric or gravitational). In other cases, I'm extracting energy from the system by letting it be used up to create motion.

In cases where a current source is giving you energy, it probably is the case that the potential difference can be described as the cause of the flow (even if the potential difference produced by the device adapts somehow to get a fixed rate of motion/current). No one thinks that the motion of the car causes the combustion (use of chemical potential energy) rather than the other way round, even if I built my engine to produce a fixed speed no matter the mass of the vehicle it's in.

I would venture that any competent electrical engineer has a picture at least this detailed, and thinks of voltage sources and current sources not as black boxes but as high-level descriptions of underlying physics which lead to very concrete and different predictions.

Comment by ruby on Book Recommendations for social skill development? · 2019-12-14T06:42:07.452Z · score: 2 (1 votes) · LW · GW

I second that having a mentor is very valuable, especially if they're able to observe you in social situations and provide feedback. Mentors point out blindspots which can be very hard to notice yourself. 

OTOH, sometimes the theory in books really helps, especially if you're a theory-driven learner like me.

Comment by ruby on Book Recommendations for social skill development? · 2019-12-14T06:40:19.134Z · score: 25 (13 votes) · LW · GW

Last year, I wrote up a document of social skills resources I'd prepared for sharing with others and possibly making a post. I haven't yet made that post, so I'll dump what I've got here. Sorry for not including links and better presentation generally, I'm short on time just at the moment.

Dump of a Google Doc from June 2018.

Introductory Notes

Although this document lists the books which have helped me interact better with others, I’d attribute the bulk of any gains I’ve made in social skills to simply observing others carefully. More than any theory or recommendations, I’ve found that paying careful attention to how people say things, their body language, their responses, etc., has helped my social skill and understanding. My best guess is that if you throw enough data at your mind (at least of this sort), it will learn what to do with it. So read the books in the doc, but I equally advise paying more attention to others if you want to improve. 

Also, a major component of social skills is having your own shit together. Or at least, my own ability to interact well with others has increased alongside mastery of myself. Emotions, fears, insecurities, desires, agendas, prejudices, etc., all interfere with one’s ability to interact well with others. 


Start With No by Jim Camp

This is a book on negotiation (mostly enterprise and corporate), but it drove home some crucial general lessons for me that hadn’t fully sunk in from elsewhere: 1) the utmost importance of inhabiting the world of your “adversary” (or conversation partner), 2) the skills of listening well and asking good questions, 3) investing the effort to think about what your “adversary” really wants, 4) focusing on what you can control (actions) rather than outcomes, 5) investing the time to understand what your adversary wants, even when they themselves don’t know.

Personality Types: Using the Enneagram for Self-Discovery

Personality Type systems aren’t always the most rigorous or predictive models, but those responsible for the Enneagram have paid a lot of attention to humans and what drives them. I’ve found Enneagram materials to be very useful for recognizing underlying patterns of motivation and behavior in myself and others; and in particular, it helped me appreciate how what’s driving other people is quite different from what’s driving me most of the time.

The Charisma Myth

It’s been a few years and I’m due for a re-read, but I liked this book. It links charisma to traits, states of mind, and actions which can be learned.

Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity

This book is aimed at bosses and managers and does a fantastic job at describing how to set-up a two-way, feedback-rich relationship notwithstanding professional context and power asymmetries. Worth reading for most people.

How to Talk So Kids Will Listen & Listen So Kids Will Talk

Related to non-violent communication, this book is really good for getting you to think about the emotional state and desires of your interlocutor as well as your own. It generalizes well to adults.

Non-Violent Communication

Makes for good relationships when generally adopted. The skill of learning to say things without making others defensive is definitely worth learning.

Circling & Authentic Relating [not a book, but start attending groups and sessions on these.]

Elephant in the Brain

The Gervais Principle & Be Slightly Evil: A Playbook for Sociopaths

These two books by Venkatesh Rao are great resources on status. While Elephant in the Brain explains why status is such a fundamental motivation and shows how it explains broad macro features of society, Rao’s books analyze status in individual interactions.

While I encourage people who don’t think much about status to become more aware of it, I strongly caution them not to obsess over it.

Books I Have Only Read Small Parts Of

  • Impro
  • Games People Play
  • I’m OK-You’re OK
  • Influence: The Psychology of Persuasion by Cialdini
    • Definitely the Dark Arts. I read a few chapters and it’s interesting to see the subtle tricks people can employ to get us to say yes to things. If you say yes more often than you’d like, it’s worth reading.

Books I Have Collected but Not Yet Read

(It appears I have a habit of buying books on social skills whenever I see them and then forgetting about them.)

  • How to Speak How to Listen by Mortimer Adler
  • The Charisma Code
  • It’s Not All About “Me”
  • Superhuman Social Skills
  • The Social Skills Guidebook
  • Emotional Intelligence 2.0
  • The Definitive Book of Body Language

Books on Relationships

A General Theory of Love

You might expect a book about romantic love, but it instead spends a lot of time focusing on infant attachment. Still, very interesting for modeling human attachment in general.

Avoidant: How to Love (or Leave) A Dismissive Avoidant Partner

A generally helpful book on the topic of attachment styles. Useful for understanding that different people have learnt (or deeply ingrained) different patterns of behavior in relationships, and that the interaction of these patterns matters.

The Mastery of Love: A Practical Guide to the Art of Relationship

This is a crazy book. But it has some great stuff on letting your partner be who they are and recognizing that you’re not responsible for solving all your partner’s problems. 


Comment by ruby on LW Team Updates - December 2019 · 2019-12-07T00:59:19.951Z · score: 2 (1 votes) · LW · GW

The team actually considered bookmarks for comments (which would cover shortform as well since shortform posts are implemented technically as comments), but it's a bit more complicated. However, I think it makes a lot of sense to have them for comments since comments are relatively harder to find again.

I'd be curious to hear about whether you've been using bookmarks and if so, in what ways.

For tags, I'm not sure. Definitely to begin with they'd be only for posts. It would probably make sense to have them for shortform too at least.

Comment by ruby on How do you assess the quality / reliability of a scientific study? · 2019-12-03T19:37:06.453Z · score: 10 (3 votes) · LW · GW

It would be good to know if offering prizes like this is helpful in producing counterfactually more and better responses. So, to all those who responded with the great answers, I have a question:

How did the offer of a prize influence your contribution? Did it make any difference? If so, how come?

Comment by ruby on How do you assess the quality / reliability of a scientific study? · 2019-12-03T19:22:39.387Z · score: 4 (2 votes) · LW · GW

Hopefully this will give me a better idea of what works and I may write an updated guide next year.

I'd be excited to see that.

As a data point r.e. the prize, I’m pretty sure that if the prize wasn’t there I would have done my usual and intended to write something and never actually got round to it. I think this kind of prize is particularly useful for questions which take a while to work on and attention would otherwise drift.

Oh, that's helpful to know, and it reminds me that I intended to ask respondents how the offer of a prize affected their contributions.

Comment by ruby on How do you assess the quality / reliability of a scientific study? · 2019-12-03T14:40:50.146Z · score: 6 (3 votes) · LW · GW

Forgive me if I rant a little against this curation notice.

hard-earned insights about what's bad about science and how to get truth out of it anyway.

I'm not sure I'd frame people's responses quite this way, i.e., I think that's framing people as having a very negative valence towards current science in a way I'm not sure is there and I would be reluctant to assign to them. Or maybe more importantly, I don't think that captures the prompt being replied to. If I'd authored a response here, I'd dislike this notice for somehow trying to make my response "political" in a way I don't endorse, like it's taking the opportunity for a "boo science" that wasn't the point for me.

Conservatively, I read people's responses as being built on the basis that studies vary in trustworthiness and answers are about methods for assessing trustworthiness/strength of evidence. Answers are about how scientific studies can be done poorly, but aren't a response to the prompt of "what are ways in which science is bad?"

Sorry, I'm probably reading too much into the wording of a single sentence. Charitably, I could read the notice as saying the answers given contain ways in which scientific studies can be bad and how to filter those ones out (or trust them to that appropriate extent).


Comment by ruby on How do you assess the quality / reliability of a scientific study? · 2019-12-02T23:51:38.504Z · score: 26 (5 votes) · LW · GW

Awards for the Best Answers

When this question was posted a month ago, I liked it so much that I offered $100 of my own money for what I judged to be the best answer and another $50 to the best distillation. Here's what I think:

Overall prize for best answer ($100): Unnamed 

Additional prizes ($25): waveman, Bucky

I will reach out to these authors via DM to arrange payment.

No one attempted what seemed to me like a proper distillation of other responses, so I won't be awarding the distillation prize here; however, I intend to write and publish my own distillation/synthesis of the responses soon.

Some thoughts on each of the replies:

Unnamed [winner]: This answer felt very thorough and detailed, and it feels like a guide I could really follow to dramatically improve my ability to assess studies. I'm assuming limitations of LW's current editor meant the formatting couldn't be nicer, but I also really like how Unnamed broke down his overall response into three main questions ("Is this just noise?", "Is there anything interesting going on here?" and "What is going on here?") and then presented further sub-questions and examples to help one assess the high-level questions. 

I'd like to better summarize Unnamed's response, but you should really just read it all.

waveman [winner]: waveman's reply hits a solid amount of breadth in how to assess studies. I feel like his response is an easy guide I could pin up on my wall and step through while reading papers. What I would really like to see is this response further fleshed out with examples and resources, e.g. "read these specific papers or books on how studies get rigged." I'll note that I do have some pause with this response since other responders contradicted at least one part of it, e.g., Kristin Lindquist saying not to worry about the funding source of a study. I'd like to see these (perhaps only surface-level) disagreements resolved. Overall though, a really solid answer that deserves its karma.

Bucky [winner]: Bucky's answer is deliciously technical. Rather than discussing high-level qualitative considerations to pay attention to (e.g. funding source, whether there have been replications), Bucky dives in and provides actual formulas and guidance about sample sizes, effect sizes, etc. What's more, Bucky discusses how he applied this approach to concrete studies (80k's replication quiz) and the outcome. I love the detail of the reply and that it's backed up by concrete usage. I will mention that Bucky opens by saying that he uses subconscious thresholds in his assessments but is interested in discussing the levels other people use.

I do suspect that learning to apply the kinds of calculations Bucky points at is tricky and vulnerable to mistaken application. Probably a longer resource/more training is needed to be able to apply Bucky's approach successfully, but his answer at the least sets one on the right path.

Kristin Lindquist: Kristin's answer is really very solid but feels like it falls short of the leading responses in terms of depth and guidance and doesn't add too much, though I do appreciate the links that were included. It's a pretty good summary, and also one of the best formatted of all the answers given. I would like to see waveman and Kristin reach agreement on the question of looking at funding sources.

jimrandomh: Jim's answer was short but added important points to the conversation that no one else had stated. I think his suggestion of asking yourself how you ended up reading a particular study is excellent and crucial. I'm also intrigued by his claim that controlling for confounds is much, much harder than people typically think. I'd very much like to see a longer essay demonstrating this.

Elizabeth: I feel like this answer solidly reminds me to think about core epistemological questions when reading a study, e.g., "how do they know this?"

Romeostevensit: this answer added a few more things to look for not included in other responses, e.g. giving more credit to authors who discuss what can't be concluded from their study. I also like his mentioning that spurious effects can sneak in despite the honest intentions of moderately competent scientists. My experience with data analysis supports this. I'd like to see a discussion between Romeostevensit and jimrandomh since they both seem to have thoughts about confounds (and I further know they both have an interest in nutrition research).

Charlie Steiner: Good additional detail in this one, e.g. the instruction to compare papers to other similar papers and general encouragement to get a sense of what methods are reasonable. This is a good answer, just not as good as the very top answers. Would like to see some concrete examples to learn from with this one. I appreciate the clarification that this response is for Condensed Matter Physics. I'd be curious to see how other researchers feel it generalizes to their domains.

whales: Good advice and they could be right that a lot of key knowledge is tacit (in the oral tradition) and not included in papers or textbooks. That seems like something well worth remembering. I'd be rather keen to see whales's course on layperson evaluation of science.

The Major: The response seems congruent with other answers but is much shorter and less detailed than them.

Comment by ruby on Incorrect hypotheses point to correct observations · 2019-12-02T07:47:46.748Z · score: 5 (2 votes) · LW · GW

I've found myself referencing this post repeatedly since reading it. It's improved my reaction to ideas and models that seem definitely wrong. Now instead of just thinking "that's clearly wrong", I'm moved to ask "but what true observations are leading to this model?" and it feels like I see more value in even wrong models. Sometimes, I think, I learn to see where they're not wrong.

I also want to give this post credit for feeling quite original to me. Many other posts are refinements or clarifications of ideas which exist in some form elsewhere. While I can't be sure, this post feels like it really cemented something new in me rather than just helping a thing I already knew about stick.

All in all, I really like this post.


Comment by ruby on Babble · 2019-12-02T00:43:07.303Z · score: 2 (1 votes) · LW · GW

Babble and Prune has stuck as a concept in my mind since reading this sequence. The need for both has informed how I approach intellectual work personally and also how I think about the communal process I try to support on LessWrong. Like so many things, it's a balance we need to juggle.

Comment by ruby on The Tails Coming Apart As Metaphor For Life · 2019-12-02T00:13:44.091Z · score: 4 (2 votes) · LW · GW

This post together with its predecessor has solidly installed this concept/observation in my head, and it's become an idea that I've recurringly employed since learning it. It also forms part of my deep sense of how Goodharting operates.

Comment by ruby on Player vs. Character: A Two-Level Model of Ethics · 2019-11-30T00:24:06.215Z · score: 2 (1 votes) · LW · GW

The ideas in this post feel similar to those of Hanson, Simler, and others, but I still found something really crisp about it. Since reading it, I've mentioned this framing to others and used it internally repeatedly. The ideas here easily push towards something like cynicism, but they just seem so correct.

Comment by ruby on RAISE post-mortem · 2019-11-25T19:40:31.933Z · score: 12 (4 votes) · LW · GW

I wasn't thinking of it being publicly available yet, but I'm happy to share. The list is really a sample tag I've been testing with our in-development, early-stage tagging MVP. We probably won't release tagging for several months due to design complexity/risks (assuming we conclude it's the correct choice at all), however you can see this list I've been making here:

As you'll see, the UI isn't really complete.

Comment by ruby on The LessWrong 2018 Review · 2019-11-24T23:49:40.036Z · score: 5 (2 votes) · LW · GW

The Overton window concept describes a process of social-pressure mind control, not rational deliberation: an idea is said to be "outside the Overton window" not on account of its being wrong, but on account of its being unacceptably unpopular. If a mathematician were to describe a debate with their colleagues about mathematics (as opposed to some dumb non-math thing like tenure or teaching duties) as an "Overton-window fight", I would be pretty worried about the culture of that mathematics department, wouldn't you?!


I think it would be ominous if Raemon used the word with that intended meaning, but I'm guessing he didn't (and most people around here don't?). When I think "Overton window", I just think "what is considered reasonable to discuss without it being regarded as weird or extreme, or requiring extreme evidence to overcome a very low prior", and I think of the term as agnostic about how that got decided. In this sense, our community has an Overton window that definitely includes physics and history, presently excludes Reiki and astrology, and perhaps has meditation/IFS on the border. I think the process by which we've ended up with this window has been much better overall than what most of broader society uses.

My understanding of Ray's comments about "concentrating Overton window fights" was that this was a period when, more than usual, we'd communally debate (using the correct and normative laws of reasoning) ideas that were still contentious within the community, increasing consensus on whether they were good or not, based on their epistemic merits.


It's a separate question what the best way to use the term "Overton window" is, and I don't have a strong opinion on that at present.


Comment by ruby on RAISE post-mortem · 2019-11-24T18:52:24.460Z · score: 29 (16 votes) · LW · GW

Great write-up! I generally think postmortems and retrospectives are very valuable*, and this one does a great job of presenting what you did and the lessons learnt. I feel the lessons you presented are both broadly correct and valuable to have described within the context of a real-world project.

I'm someone who was not in favor of some of your past plans, but having read this postmortem, I'm excited to see what you end up doing in the future. Good luck at the bank!

*I've been collecting a list of postmortem/retrospective posts on LessWrong and I'll be glad to add this one to it.

Comment by ruby on The LessWrong Team · 2019-11-24T18:39:32.920Z · score: 4 (2 votes) · LW · GW

To add some detail, LessWrong doesn't use "off the shelf" forum software like WordPress or phpBB. It's a custom codebase originally built on a forum framework called Vulcan, but it has since been extensively developed and customized by the dev team.

Comment by ruby on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2019-11-24T16:04:01.499Z · score: 3 (2 votes) · LW · GW

Seconding Vaniver.