Comment by ruby on Plans are Recursive & Why This is Important · 2019-03-13T20:41:56.169Z · score: 4 (2 votes) · LW · GW

Wow, that's really cool to learn! I only have a beginner level knowledge of functional programming concepts and was not aware of hylomorphisms and unfolds (just basics like fold left, fold right). Thanks for bringing that to my attention, I might try to read that whole series.

Comment by ruby on Why should I care about rationality? · 2019-03-11T23:40:18.112Z · score: 4 (2 votes) · LW · GW

Not so long ago I hit upon a definition of rationality which better captures everything people actually use the term to refer to.

I think a solid definition of rationality is: trying to do better on purpose.

Instead of following your default procedure, you pull back and do something different to get a better result. This might be thinking in ways other than the default or doing things in ways other than the default.

A natural consequence of trying to do better on purpose is that you look for higher-level improvements rather than purely immediate, domain-specific concrete ones. Many people think to practice and train skills, but the rationalist seeks to reflect on how they train and practice. Many people work towards goals; the rationalist pauses to reflect on their selection of goals.

So why care about rationality? Because you want to do better.

If you believe that it is possible to do better, and that doing better results in more of what you want - then surely you would want that.

Comment by ruby on Plans are Recursive & Why This is Important · 2019-03-10T19:26:05.518Z · score: 3 (2 votes) · LW · GW

It was in the context of God's actions, but I've always liked the phrase they taught me in school:

סוף סוף במחשבה תחילה
The end, the end, [that which was] first in thought.

Translates imprecisely, but flows so nicely in Hebrew.

Plans are Recursive & Why This is Important

2019-03-10T01:58:12.649Z · score: 56 (19 votes)
Comment by ruby on Motivation: You Have to Win in the Moment · 2019-03-08T17:56:43.135Z · score: 2 (1 votes) · LW · GW

No worries! Maybe we can get to the bottom of it another time, maybe another place. :)

Comment by ruby on Motivation: You Have to Win in the Moment · 2019-03-08T03:43:30.380Z · score: 4 (2 votes) · LW · GW

Thanks for the elaboration. Yes, I see what you mean by brute force, and I also see how my post might be read to be advising an approach similar to what you described. I don't know whether a pragmatic approach like that is a good developmental stage to go through. Maybe for a bit, but I'm not sure.

If the post didn't shed any light on how a brute force approach is not the only option and not necessarily the best, I think it's because I forgot that someone might approach motivation in that way. Only reading your description brought it back into my mind.

Going back five to six years, I did have a phase when I was very big on "discipline", and I certainly tried to muster willpower to make myself do things - but it was never that successful or systematized. Around the time I did begin making more serious efforts to be productive, I was already engaged with CFAR, reading, and generally being coached into an approach of non-willpower-reliance and non-self-coercion. Yet it must have been long enough ago that I'd forgotten there's a very natural approach to motivation where you pile on productivity tricks in a not quite sustainable/healthy way.

So, thanks for pointing that all out. That's a good reminder.

For the public record, I think ideal motivation is attained when you have something resembling a state of harmony in your mind and with yourself. You might take actions to make actions seem more attractive and/or do things to decrease temptation, but it isn't coercive or depleting. This is difficult to achieve and requires a lot of introspection, self-awareness, resolving inner conflicts, etc., etc. If you're doing it right, you're not suffering. You don't crash. It doesn't feel like you're coercing yourself.

It's possible I should have stated something like that in the post itself.

Comment by ruby on Motivation: You Have to Win in the Moment · 2019-03-05T18:38:38.727Z · score: 10 (2 votes) · LW · GW

Hmm, I'd like to step back and tally the different claims that have been surfaced so we can at least be clear where we disagree, even if we don't end up agreeing. Among the claims:

A: Abstractions are sometimes useful.

B: Abstractions eventually break down and the underlying complexity needs to be understood for further usefulness.

C: The abstraction in my post is only compelling from a certain stage of development / it's limited (but this assertion was accompanied by the statement that this doesn't mean it was wrong or not useful).

D: The abstraction in my post is unlikely to help many people.

E: The particular abstraction in my post is leaky, is limited, and can become harmful after a certain point in development.

F: Abstractions are indispensable and are needed to guide understanding even when you dip to lower layers.

G: It's harmful to always be trying to look at lower levels of abstraction without higher levels.

H: Not understanding lower levels means that you don't understand very much at all.

That's not every assertion, but it's maybe enough to start getting some clarity. I think that Gordon, mr-hire, and myself all agree on A and B broadly, though we might disagree on where the line is drawn for each. Gordon, you write:

This sounds like the crux of the disagreement: I think no abstraction is sufficiently non-leaky that you don't (eventually) need to understand more of the underlying complexity within the context I see this post sitting in, which is the context of what we might call cognitive, personal, psychological, or personal development (or to put it in non-standard term, the skill of being human). Unless your purpose is only to unlock a little of what you can potentially do as a human and not all of it, every abstraction is eventually a hindrance to progress, even if it is a skillful hinderance during certain phases along the path that helps you progress until it doesn't.

I mean, I wouldn't disagree with that in general. Though I think a lot of work is being done by the words "eventually" and "sufficiently non-leaky". I think there are contexts where you get away without needing to go all the way deep. Most (I would think the overwhelming majority) of people who code don't know assembly and certainly don't understand how logic gates are implemented - and this is pretty fine 99.99% of the time.

It is fair to say that the abstraction/model in my post is not nearly as good as the abstraction of high-level computer languages. That's true. I mean, actually it breaks pretty quickly. Part 2 of this post will dive deeper. Nonetheless, I do think it's quite useful even if one doesn't read further. Gordon, I'm unclear what your stance is - you first state that it's useful and then state that it's unlikely to help many people, so I'm not sure of your actual view.

I do disagree with C (compelling only from a certain stage of development) in that I think even once you have much deeper understanding, the higher levels of abstraction remain crucially important. Just because you understand electromagnetism really well and know the limits of conventional circuit theory (e.g. designing super duper tiny transistors), doesn't mean you want to throw out circuit theory and just solve Maxwell's equations everywhere - even if eventually sometimes you have to.

I don't disagree that you need more detail for a lot of applications. As mentioned, this unfortunately couldn't make it into the first single post. As I wrote:

Saying that motivation is a matter of winning in the moment is all very good, but how does one actually do that?
Unfortunately, a proper treatment of this not-so-small topic would make this post far too long and instead requires its own post (Motivation Part 2: How to Win, coming soon to a screen near you!). Nonetheless, I can offer a high-level summary here:

But again, I don't think what was presented here stops being compelling later on.

I also think D (unlikely to help many people) is somewhat false, depending on what counts as "many people". Another commenter felt this post was quite useful, someone else on FB found it rather revelatory, and I'd infer that several more benefited even if I don't know of it directly. That's beyond the inside view that the abstraction/model presented can be applied already. mr-hire also states simpler ideas worked well for a really long time (though I'm not sure which simpler ideas, or what counts as "brute force").

Back to B (abstractions break down, eventually become a hindrance). Definitely agree here.

I think if your initial comment, Gordon, had been something like:

I think this model/abstraction is correct and useful to an extent, but I want to flag that it is just a very high-level abstraction which is missing a lot of the very messy detail that is relevant (and quite necessary) for this domain. It'll help you on its own at a certain stage, but after that you'll need more.

Then I wouldn't have disagreed at all. I think the disagreement might mostly be around a) how quickly abstractions break down, b) how much you still need them even once you understand.

To be honest, I did bristle at some of the way things were phrased, but that's on me. It felt like there was some kind of implication that I personally didn't have any deeper understanding, and that stung.

That is, what's going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, "where do the beliefs that power motivations come from?" and "why do you even prefer one thing to another?") you'll see it explode apart in a way that will make you go "oh, I had it right, but I didn't really understand it before" the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say "oh, I'm amazed I even had such good understanding before given how little I really understood".

This felt like a slight, since I think the post references much more detailed resources and even flags in the opening lines that what's presented is the "crudest simplification". Yet quite possibly you were addressing the hypothetical reader rather than me. But even if not, I still shouldn't let that influence my response too much. The additional words of caution about the limitations of abstractions (especially here) are worthwhile. I regret that, because of these very long comments, readers might not see this point and our overall agreement on it.

Hmm, I do think the thing I haven't addressed here is my stance that better abstractions and a better understanding of abstractions (something I think is neglected in the domain of self-understanding and self-improvement) are actually key to using lower-level understanding more systematically and in a less ad-hoc way. Perhaps save that for another very long comment :P

By the way, thanks for engaging so much. Don't think I've ever dived into such a detailed discussion.

Comment by ruby on Motivation: You Have to Win in the Moment · 2019-03-03T18:09:57.343Z · score: 6 (4 votes) · LW · GW

Thanks! It’s encouraging words like those which keep me writing.

I’d say one of the things I attempt to do with my writing (and in my thinking too) is clarify the foundations sufficiently that you can never forget them and therefore always apply them appropriately. I find that points that initially feel obvious to me are actually still a bit murky and haven’t been fully absorbed, and therefore I don’t actually use them or appreciate their many applications. Getting clear and accurate explanations (just for myself) makes them accessible enough to my mind that they become a lot more useful.

A further benefit, and the original reason I found myself doing this style of writing, is that clear foundations allow you to express more complicated, profound, nuanced pieces within a solid context. Being clear on the foundations, at least for me, makes the more advanced pieces fall into place and seem much more necessary than if I had them on their own. It gives you a framework to hang things on. The "make your champion stronger vs. make the competition weaker" distinction is an example of this.

The overall result is a bunch of writing that doesn’t register as particularly profound, but is very clear. Or so I’ve been told.

My entry point for this is actually thinking and writing about planning. I’ve been writing something of a sequence/book on planning and any practical planning book for humans would benefit greatly from a decent treatment of motivation (and planning around it). The insights/models in this post arose naturally from that context.

With that background, I think I can explain why I disagree with many of your points.

First, I agree that you get the model right, but it's a model that is only very compelling from a certain stage of development, my strongest evidence being it was once very compelling to me and now it's more like the kind of understanding I would have if I was asked to manifest my understanding without explaining below a certain level of detail, and the other being I think I've seen a similar pattern of discovering this and then focusing on other thing in the writing of others. That doesn't make any of it wrong or not useful, but it does suggest it's rather limited, as I think fellow commenter Romeo also points out. That is, what's going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, "where do the beliefs that power motivations come from?" and "why do you even prefer one thing to another?") you'll see it explode apart in a way that will make you go "oh, I had it right, but I didn't really understand it before" the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say "oh, I'm amazed I even had such good understanding before given how little I really understood".

The insights in the post exist at a certain level of abstraction, as you say, it doesn’t manifest understanding below a certain level of detail. That’s quite intentional. I disagree that it is only compelling from a certain stage of development or is rather limited. Far from it.

I’ve been aware of the underlying details (just see the references) for a lot longer than I’ve appreciated the high-level general points here, because I think the lower-level points easily obscure the higher-level picture. This is perhaps related to your assertion that other writers haven’t done justice to the breadth of the ideas here. If you can’t stop thinking about transistors, you will find it hard to focus on and fully appreciate the boolean algebra you’re executing on your logic gates made out of transistors. It’d be even harder to teach someone to understand boolean algebra (let’s say minimizing digital circuits) if you want them to keep transistor operation in mind at all times. And if your abstractions are tight (not leaky) enough, you actually don’t need to understand the underlying complexity for them to be useful. Transistors and logic gates are human designs, though. A better example might be understanding evolutionary selection processes. If you can't abstract away from the biological implementation of sexual reproduction for a few minutes, you're likely to miss the higher-level picture of why sexual reproduction is even a thing. What problem was it trying to solve, and what does it imply for the implementation?

In this context though. Hmm. I think the insights/models here aren’t sufficient on their own to help you manage your motivation well, and perhaps that’s your point. The original post had to be split into two parts because adding in the further models needed would easily have ballooned the post to six thousand words. If your point is “there isn’t enough detail here to be practically useful”, that’s kind of true.

A major hope for this piece is that if you appreciate the abstraction at this level, you will understand why all the lower-level pieces look how they look. Many people have created lists of anti-akrasia/motivation-enhancing techniques and also highly-detailed reviews of how motivation works (just see Luke’s review). A goal with this post is that you see enough of the general picture that it is clear why various motivation-techniques work and when they're needed. If I launched into talking about Hebbian learning and prospect theory, I assert you'd probably miss the very design problem that, at its heart, the mind/motivation system is trying to solve. Hence holding off on that lower level for other posts.

Second, just to set expectations, it's unfortunately unlikely that having this model will actually help many people. Yes, it will definitely help some who are ready to see it, but years of trying to explain my insights has taught me that one of the great frustrations is that fundamental insights come in a particular order, they build on each other, and the deeper you go the smaller the audience of people explaining your insights to will help.

I think I’m more optimistic than you about communicating ideas, though perhaps I’m just sufficiently early in my writing “career” to be naive. I’m working on the premise that sufficiently clear explanations, delivered systematically in the right order, can recreate in the minds of others much of the understanding you have in your own mind. That does require that people are willing to invest the time, but I think people do invest in reading writing that is sufficiently enjoyable and valuable-seeming.

Comment by ruby on Motivation: You Have to Win in the Moment · 2019-03-01T06:49:20.041Z · score: 5 (3 votes) · LW · GW

You're right. Thanks for pointing that out.

Unfortunately, I couldn't include them in a single post of reasonable length. Temporal motivation theory / the procrastination equation will feature in the eventual Part 2 to this post.

Comment by ruby on Motivation: You Have to Win in the Moment · 2019-03-01T03:15:05.418Z · score: 4 (2 votes) · LW · GW

This is a good clarification. Technically you're right, you can maintain motivation so long as you reaffirm commitment in each moment you are tempted, which admittedly might not be every moment (consider flow states).

Though I'd still argue that you should still be thinking about causing yourself to win in every moment. You might maintain motivation through to the completion of a task because either: a) you successfully reaffirmed commitment, or b) no alternative candidate winner was surfaced in a given moment to begin with, yet it matters that one of those is true for every moment of necessary execution.

Though not covered properly in this post, the eventual goal here is to explore how to engineer circumstances, both internal and external, so that you win in each moment whether it be because of a) or b). And that applies to every moment.

Motivation: You Have to Win in the Moment

2019-03-01T00:26:07.323Z · score: 49 (21 votes)
Comment by ruby on Ruby's Short-Form Feed · 2019-02-28T19:35:53.312Z · score: 10 (2 votes) · LW · GW

Over the years, I've experienced a couple of very dramatic yet rather sudden and relatively "easy" shifts around major pain points: strong aversions, strong fears, inner conflicts, or painful yet deeply ingrained beliefs. My post Identities are [Subconscious] Strategies contains examples. It's not surprising to me that these are possible, but my S1 says they're supposed to require a lot of effort: major existential crises, hours of introspection, self-discovery journeys, drug trips, or dozens of hours with a therapist.

Having recently undergone a really big one, I noted my surprise again. Surprise, of course, is a property of bad models. (Actually, the recent shift occurred precisely because of this line of thought: I noticed I was surprised and dug in, leading to an important S1 shift. Your strength as a rationalist and all that.) Attempting to come up with a model which wasn't as surprised, this is what I've got:

The shift involved S1 models. The S1 models had been there a long time, maybe a very long time. When that happens, they begin to seem like how the world just *is*. If emotions arise from those models, and those models are so entrenched they become invisible as models, then the emotions too begin to be taken for granted - a natural way to feel about the world.

Yet the longevity of the models doesn’t mean that they’re deep, sophisticated, or well-founded. They might be very simplistic, such that they ignore a lot of real-world complexity. They might have been acquired in formative years, before one learned much of one’s epistemic skill. They haven’t been reviewed, because it was hardly noticed that they were beliefs/models rather than just “how the world is”.

Now, if you have a good dialog with your S1, if your S1 is amenable to new evidence and reasoning, then you can bring up the models in question and discuss them with your S1. If your S1 is healthy (and is not being entangled with threats), it will be open to new evidence. It might very readily update in the face of that evidence. “Oh, obviously the thing I’ve been thinking was simplistic and/or mistaken. That evidence is incompatible with the position I’ve been holding.” If the models shift, then the feelings shift.

Poor models held by an epistemically healthy "agent" can rapidly change when presented with the right evidence. This is perhaps not surprising.

Actually, I suspect that difficulty updating often comes from the S1 models and instances of the broccoli error: “If I updated to like broccoli then I would like broccoli, but I don’t like broccoli, so I don’t want that.” “If I updated that people aren’t out to get me then I wouldn’t be vigilant, which would be bad since people are out to get me.” Then the mere attempt to persuade that broccoli is pretty good / people are benign is perceived as threatening and hence resisted.

So maybe a lot of S1 willingness to update is very dependent on S1 trusting that it is safe, that you’re not going to take away any important, protective beliefs or models.

If there are occasions where I achieve rather large shifts in my feelings from relatively little effort, maybe it is just that I’ve gotten to a point where I’m good enough at locating the S1 models/beliefs that are causing inner conflict, good enough at feeling safe messing with my S1 models, and good enough at presenting the right reasoning/evidence to S1.

Comment by ruby on Informal Post on Motivation · 2019-02-25T01:57:02.878Z · score: 4 (2 votes) · LW · GW

Thanks, really appreciate that! Both the fixes and your thoughts. :) Updates me towards it being worthwhile to post things with lower levels of polish rather than not at all.

Informal Post on Motivation

2019-02-23T23:35:14.430Z · score: 29 (16 votes)

Ruby's Short-Form Feed

2019-02-23T21:17:48.972Z · score: 11 (4 votes)
Comment by ruby on From Personal to Prison Gangs: Enforcing Prosocial Behavior · 2019-02-02T04:26:24.215Z · score: 4 (2 votes) · LW · GW

Good post! Thanks for writing this, it adds additional clarity and generality to things I've been thinking about recently.

Comment by ruby on Combat vs Nurture & Meta-Contrarianism · 2019-01-13T02:14:01.527Z · score: 10 (2 votes) · LW · GW

I've been at a workshop and haven't had much chance to engage with this post. Thanks for writing it, it's an excellent reply and says many things better than I managed to. I especially like the hierarchy which swings between nurture and combat; that seems well described to me. Also strong endorsement for meeting conversations where they're at.

Comment by ruby on Optimizing for Stories (vs Optimizing Reality) · 2019-01-07T23:39:42.602Z · score: 4 (2 votes) · LW · GW

I probably didn't emphasize this enough in the main post, but the idea I'm really going for is that there is a difference between optimizing for stories and optimizing for reality. There's a difference in goal and intention. Even if it's the case that humans never see "rock-bottom reality" itself and everything is mediated through experience, there is still a big difference between a) someone attempting to change an aspect of the underlying reality such that actually different things happen in the world, and b) someone attempting to change the judgments of another person by inputting the right series of bits into them.

Optimizing stories is really a mono-focus on optimizing the specific corners of reality which exist inside human heads.

Comment by ruby on Rationalization · 2019-01-07T16:07:30.272Z · score: 2 (1 votes) · LW · GW

Oh, right. Once upon a time I knew that was the word. Thanks.

Comment by ruby on Rationalization · 2019-01-07T08:35:58.449Z · score: 3 (2 votes) · LW · GW

I didn't know that was the word for excuse, but I think it's an excellent word to use for rationalization. No synonym required. ״רצה״ is the root for "want" and "הַתְרָצָה" is the reflexive conjugation, so it's approximately "self-wanting". Which is exactly what rationalization is - reasoning towards what you want to be true.

Optimizing for Stories (vs Optimizing Reality)

2019-01-07T08:03:22.512Z · score: 43 (14 votes)
Comment by ruby on Learning-Intentions vs Doing-Intentions · 2019-01-03T01:48:06.424Z · score: 2 (1 votes) · LW · GW

The intended meaning of the post is that there can be "producing in order to produce" and "producing in order to learn". The producing-to-learn might involve very real producing, but the underlying goal is different. You might be trying to get real investment from real investors, but the goal could be a) receiving money, or b) testing your assumptions about whether you can raise successfully.

In practice, I think you're right that sometimes (or often) both intentions are necessary. You need to get users both to learn and to survive. Still, the two intentions trade off against each other and it's possible to forget about one or the other. My primary recommendation is to be aware and deliberate about your intentions so that you have the right ones at the right time in the right amount.

Comment by ruby on Learning-Intentions vs Doing-Intentions · 2019-01-02T08:58:35.553Z · score: 3 (2 votes) · LW · GW

Thanks for the link! Sorry to change from the term "mindset" to "intention" on you.

Learning-Intentions vs Doing-Intentions

2019-01-01T22:22:39.364Z · score: 58 (21 votes)
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-12-05T21:48:53.482Z · score: 10 (4 votes) · LW · GW

I emphatically agree with Zvi about the mistakenness of saying "you're dumb."

In my own words:

1) "You're absolutely wrong" is strong language, but not unreasonable in a combative culture if that's what you believe and you're honestly reporting it.

2a) "You're saying/doing something dumb" is a bit more personal than making a statement about a particular view. I think it's rare that one has need to say this, and it's only appropriate when levels of trust and respect are very high.

2b) "You're being dumb" is a little harsher than "saying/doing something dumb." The two don't register as much different to me, though perhaps they do to Mary Chernyshenko.

3) "You're dumb" (introduced in this discussion by Benquo) is now making a general statement about someone else and is very problematic. It erodes the assumptions of respect which make combative-type cultures feasible in the first place. I'd say that conversations where people are calling others dumb to their faces are not situations I'd think of as healthy, good-faith, combative-type conversations.

[As an aside, even mild "that seems wrong to me"-type statements should be recognized as potentially combative. There are many contexts where any explicit disagreement registers as hostile or contrarian.]

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-29T01:43:27.485Z · score: 2 (1 votes) · LW · GW

Seconded. Would like to hear the in-depth version.

Comment by ruby on Four factors which moderate the intensity of emotions · 2018-11-28T02:10:00.413Z · score: 6 (3 votes) · LW · GW

Thanks for surfacing these! I've now edited the post to mention these sources and your comment.

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-26T23:11:44.834Z · score: 5 (3 votes) · LW · GW

Thanks, that was clarifying and helpful.

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-26T19:36:37.038Z · score: 6 (4 votes) · LW · GW
I’d propose is whether the participants are trying to maximize (and therefore learn a lot) or minimize (and therefore avoid conflict) the scope of the argument.

Interesting, though I'm not sure I fully understand your meaning. Do you mind elaborating your examples a touch?

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-26T07:24:19.052Z · score: 19 (7 votes) · LW · GW

Two dimensions independent of the two cultures

Having been inspired by the comments here, I'm now thinking that there are two communication dimensions at play within the Cultures. The correlation between these dimensions and the Cultures is incomplete, which has been causing confusion.

1) The adversarial-collaborative dimension. Adversarial communication is each side attacking the other's views while defending their own. Collaborative communication is openness and curiosity to each other's ideas. As Ben Pace describes it:

I'll say a thing, and you'll add to it. Lots of 'yes-and'. If you disagree, then we'll step back a bit, and continue building where we can both see the truth. If I disagree, I won't attack your idea, but I'll simply notice I'm confused about a piece of the structure we're building, and ask you to add something else instead, or wonder why you'd want to build it that way.

2) The "emotional concern and effort" dimension. Communication can be conducted with little attention or effort placed on ensuring the emotional comfort of the participants, often resulting in directness or bluntness (because it's assumed people are fine and don't need things softened). Alternatively, communication can be conducted with each participant putting in effort to ensure the other feels okay (feels validated/respected/safe/etc.). At this end of the spectrum, words, tone, and expression are carefully selected, and overall a model of the other is used to ensure they are taken care of.

My possible bucket error

It was easy for me to notice "adversarial, low effort towards emotional comfort" as one cluster of communication behaviors and "collaborative, high concern" as another. Those two clusters are what I identified as Combat Culture and Nurture Culture.

Commenters here, including at least Raemon, Said, and Ben Pace, have rightly made comments to the effect that you can have communication where participants are being direct, blunt, and not proactively concerned for the feelings of the other while nonetheless still being open, being curious, and working collaboratively to find the truth with a spirit of "being on the same team". This maybe falls under Combat Culture too, but it's a less central example.

On the other side, I think it's entirely possible to be acting combatively, i.e. with an external appearance of aggression and hostility, while nonetheless being very attentive to the feelings and experience of the other. Imagine two fencers sparring in the practice ring: during a bout, each is attacking and trying to win; however, they're also taking great care not to actually injure the other. They would stop the moment they suspected they had, and switch to an overtly nurturing mode.

A 2x2 grid?

One could create a 2x2 grid with the two dimensions described in this comment. Combat and Nurture cultures most directly fit in two of the quadrants, but I think the other two quadrants are populated by many instances of real-world communication. In fact, these other two quadrants might contain some very healthy communication.

Comment by ruby on Four factors which moderate the intensity of emotions · 2018-11-24T23:06:35.207Z · score: 6 (4 votes) · LW · GW

Epistemic status tag added. Thanks.

Comment by ruby on Four factors which moderate the intensity of emotions · 2018-11-24T22:12:48.275Z · score: 6 (4 votes) · LW · GW

Personal observation. I don't have any particular sources for anything here, though my thinking is influenced by some academic reading about emotions over the years. These are models more than conclusions, and my intention is that readers evaluate them using their own observations rather than accept them based on sources, studies, or my say-so.

Four factors which moderate the intensity of emotions

2018-11-24T20:40:12.139Z · score: 60 (18 votes)
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-13T20:10:14.899Z · score: 2 (2 votes) · LW · GW
I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

I second that experience.

Comment by ruby on Combat vs Nurture: Cultural Genesis · 2018-11-13T20:08:05.601Z · score: 2 (2 votes) · LW · GW

I completely agree that Nurture Culture has capabilities far beyond getting along without conflict.

When I think of examples of Nurture Culture at its most powerful, much of what comes to mind is the mode of relating used in Focusing, Internal Double-Crux, and Internal Family Systems. There's a mode of relating that facilitates hazy, not-necessarily-articulate, reticent, even fearful parts of oneself voicing themselves, by being open, encouraging, validating, and non-judgmental (i.e., traits which are not particularly the hallmarks of Combat Culture).

I've found that increased skill with "advanced" Nurture Culture helped me relate to parts of myself far better alongside relating to others better.

At the risk of being a little repetitive, I think the modeling required for this mode of relating is not that of beliefs but of feelings. You model (and are attentive and responsive to) the feelings of the other (internal or external) in the context: continuously gauging their comfort, willingness, and needs within the conversation, pushing and giving space as required.

Comment by ruby on Combat vs Nurture: Cultural Genesis · 2018-11-12T22:54:36.825Z · score: 4 (3 votes) · LW · GW

Also, in the context of discussion and debate, Nurture Culture is a stance of:

"What you're saying sounds alien and crazy and wrong, but I will operate as though you have something valuable to say and I will orient towards you with openness, curiosity, and patience. Even though I don't understand what you're saying or think it's wrong, I still welcome you. We are not fighting."

This stance is warranted precisely when similarity is low and ITT-passing is a distant possibility (although it is this attitude which could move you towards it).

Comment by ruby on Combat vs Nurture: Cultural Genesis · 2018-11-12T22:07:03.375Z · score: 1 (1 votes) · LW · GW

Edit: Plausibly what I'm describing here is what you call a "degenerate case of nurture that is just about nice and polite" but I think there's a lot more to it than common notions of niceness and politeness. 1) In the ideal case, it's motivated by real caring, not social convention. 2) It's more demanding than mere pleases and thank yous.


I think you have something different in mind by "Nurture Culture" than what I do (possibly quite real, but still something else). For what I'm thinking of, ITT is two to three orders of magnitude more modeling than required, and probably the wrong kind of modeling, i.e., of beliefs rather than of feelings.

Here's a slightly longer example of what I was thinking of as Nurture Culture:

Bob: *Is an employee at Ad Corp. He enters the conference room to present the budget figures he's calculated to his manager, Alice, and some other colleagues.*
Alice: *Notices several bad mistakes in the budget.*
Alice: "Thanks Bob! We appreciate you putting in the long hours to get this done before the deadline. Okay, hmm. I like how you're breaking down ad spend across channels, that seems right . . . . can you walk me through columns F and G? Those aren't clear to me."

Alice isn't doing anything profound here; she isn't scrying Bob's soul or getting at any deep, difficult understanding of a complicated worldview that he has. She's just making a few assumptions about how someone new might feel and acting on them:

a) recognize that even if he made mistakes, Bob put in hard work, wants to do a good job, and probably wants her (his manager's) approval.

b) although the mistakes were most immediately salient to her, she models that Bob might be hurt (and poorly conditioned) if she zeroes in on them first. Instead, she starts by thanking and validating Bob so that he knows the overall context is one where he's valued and is getting approval.

c) Once she's gone through the process of getting Bob comfortable, she starts to gently bring his attention towards the mistakes and surface them for discussion in a way that doesn't shame him.

This takes some skill, practice, and effort, which is why it gets taught in management books and the feedback training courses HR runs at workplaces, but it's not beyond most people. I don't know the Kegan levels, but I don't think it should take a high one? When I say "more complicated social routine", I just mean it's more complicated than "say exactly what you're thinking and feeling with little filter."

[I'll also note that whatever the culture, if Bob is a new employee, then he might be justifiably doubtful about how he and his work are judged from the outset, such that he benefits from being Nurtured rather than having his mistakes placed front and center in his first week on the job. Though once this scene has played out fifty times and Alice and Bob deeply trust and respect each other - whatever the baseline culture was - I imagine that Alice will be a lot more direct, because she doesn't need to freshly establish the trust and respect.]

Combat vs Nurture: Cultural Genesis

2018-11-12T02:11:42.921Z · score: 36 (11 votes)
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-11T22:44:28.278Z · score: 48 (18 votes) · LW · GW

This content was moved from the main body of the post to this comment. After receiving some good feedback, I've decided I'll follow the template of "advice section in comments" for most of my posts.

Some Quick Advice


  • See if you can notice conversational cultures/styles which match what I’ve described.
  • Begin noticing if you lean towards a particular style.
  • Begin paying attention to whether those you discuss with might have a particular style, especially if it’s different from yours.
  • Start determining if different groups you’re a member of, e.g. clubs or workplaces, lean in one cultural direction or another.


  • Reflect on the advantages that cultures/styles different to your own have, and why others might use them instead.
  • Consider that on some occasions styles different to yours might be more appropriate.
  • Don’t assume that alternatives to your own culture are obviously wrong, stupid, bad, or lacking in skills.


  • Push yourself a little in the direction of adopting a non-default style for you. Perhaps you already do but push yourself a little more. Try doing so and feeling comfortable and open, if possible.

Ideal and Degenerate Forms of Each Culture

Unsurprisingly, each of the cultures has its advantages and weaknesses, mostly to do with when and where they're most effective. I hope to say more in future posts, but here I'll quickly list what I think the cultures look like at their best and worst.

Combat Culture

At its best

  • Communicators can more fully focus their attention on the ideas and content rather than devoting thought to the impact of their speech acts on the emotions of others.
  • Communication can be direct and unambiguous when it doesn’t need to be “cushioned” to protect feelings.
  • The very combativeness and aggression prove to all involved that they’re respected and included.

At its worst

  • The underlying truth-seeking nature of conversation is lost and instead becomes a fight or competition to determine who is Right.
  • The combative style around ideas is abused to dismiss, dominate, bully, belittle, or exclude others.
  • It devolves into a status game.

Nurture Culture

At its best

  • Everyone is made to feel safe, welcomed, and encouraged to participate without fear of ridicule, dismissal, or judgment.
  • People assist each other to develop their ideas, seeking to find their strongest versions rather than attacking their weak points. Curiosity pervades.

At its worst

  • Fear of inducing a negative feeling in others and the need to create positive feelings and impressions of inclusion dominate over any truth-seeking goal.
  • Empathy becomes pathological and ideas are never criticized.
  • Communicators spend most of their thought and attention on the social interaction itself rather than the ideas they’re trying to exchange.
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-10T07:24:59.395Z · score: 17 (9 votes) · LW · GW


I agree that Nurture Culture can be exploited for status too, perhaps equally so. When I was writing the post, I was thinking that Combat Culture more readily heads in that direction, since in Combat Culture you are already permitted to act in ways which in other contexts would be outright power-plays, e.g. calling someone's ideas dumb. With Nurture Culture, it has to be more indirect, e.g. the whole "you are not conforming to the norm" thing. Thinking about it more, I'm not sure. It could be they're on par for status exploitability.

An increase in combativeness alongside familiarity and comfort matches my observation too, but I don't think it's universal - possibly a selection effect for those more natively Combative. To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends. Possibly nurturing is one way to show that you care and this causes it to become ramped up in some very close relationships. Possibly it's that receiving nurturing creates warm feelings of safety, security, and comfort for some such that they provide this to each other to a higher extent in closer relationships. I'm not sure, I haven't thought about this particular aspect in depth.

Conversational Cultures: Combat vs Nurture

2018-11-09T23:16:15.686Z · score: 120 (41 votes)
Comment by ruby on Kenshō · 2018-01-21T19:52:50.814Z · score: 35 (9 votes) · LW · GW

One more pointer - clarity on the purpose of a post is paramount. From your comments, it seems like a few different purposes got mixed in:

a) Kensho/Looking are very powerful, I want to motivate you to try them.

b) There is a puzzle around communicating things which you can only conceptually understand once you've experienced them. (I'd focus mostly on the puzzle and make it clear Kensho is but an example in this post.)

There's a dictum: "1) Tell them what you're going to tell them, 2) Tell them, 3) Tell them what you've told them." Going by your CFAR classes too, I feel like you don't like telling people what you're going to tell them (you even want them to be confused). I think this unsurprisingly results in confusion.

Comment by ruby on Kenshō · 2018-01-21T19:29:26.153Z · score: 57 (16 votes) · LW · GW
Appreciation for you, Ruby. :-)
I’m honestly flummoxed about how to create the type of post you’re suggesting. Given the clarity of everything else you’ve written here about this, I’m inclined to believe you. And I’d much like to write that post, or see it written. Any pointers?

Thanks! Okay, some pointers :) You asked for them!

Your writing style is characteristically evocative - the kind of writing I'd use to point at the majesty of stars, the tragedy of death, and the grandeur of all that could be. It's emotional, and that is perhaps both its strength and its weakness.

You have the right style to conjure strong feelings around things one already believes and endorses (perfect for Solstice), but perhaps less so to convince people of things they're skeptical of. A pastor's rousing sermon about Jesus's love for all mankind, while moving to his congregation, does little to convince me about the matter.


Unfortunately, it seems that people who don’t know how to intentionally Look literally cannot conceptually understand what Looking is for . . .

I emphatically reject this. You've observed that you don't feel understood when you explain your experience and inferred that this is a deficiency on the part of the listener rather than the explainer. I think that's the wrong inference, even if many explainers have struggled similarly. Explaining is hard. But even supposing you are completely right, most listeners are not going to respond charitably to claims of "you couldn't possibly understand". (I'll be directly harsh and say I think accusing someone of not engaging in good faith rather than doubting your own communication is suggestive of the wrong attitude.)

Rightly or wrongly, beneath the post there is an undertone with a few sentiments: "Oh my god, guys!!", "This is something really, really important and you couldn't possibly understand, I'm frustrated", and "You don't get it! Only special people get it." (And perhaps a hint of enjoying the fact you have a special secret that others don't. We're all human, after all.)

The tone I think would be persuasive is along the lines of "I think I'm onto something big, I think it's had big benefits, I'd like you to benefit too, this is difficult to convey, but please hear out my best case."


At the end of the day, I think this is about providing a clear and solid case for why you believe what you believe. Sketching it out lightly, the case I might make could look like:

Observations: I spent time meditating; I have experienced benefits X and Y.
Model: Meditation and mindfulness consist of moving parts A1, A2, A3, which predict results X and Y. (Here are my models of neuroscience, attention, etc.)
Claim: Meditation and mindfulness practice has given me benefits X and Y.

Listeners might then doubt any of the pieces. They might be incredulous that I experienced such extreme benefits (your claims are pretty extreme), they might doubt that, even if I experienced these benefits, they were attributable to what I'm claiming is the cause (rather than, say, placebo or mania), or they might find my model implausible (brains don't work that way!). But at least if I have a 3rd-person, mechanistic model, we can argue about its correctness.

Maybe I should add that we can analogize Kensho/enlightenment to consciousness. If we imagine some unconscious AIs modelling the possible existence, possible purpose, and expected observations you would get if humans have this "consciousness" thing, I think they could reasonably do that even if there was no way for them to experience consciousness from the inside with their own minds. They could talk about how it worked and what its benefits were without "seeing" it from the inside. I think they could use that understanding to decide if they want to self-modify to have consciousness, and that a convincing case could be made "from the outside".

Summing up a rambly response, I think a good post on enlightenment has at least one of the following:

1) Your observations, inferences, and why the reader should trust them.

2) A 3rd party perspective, mechanistic model for how enlightenment works and the resultant predictions.

To close, the post I'd write would largely be: this is what I've experienced, this is the evidence, and this is my model for WHY.

Comment by ruby on Kenshō · 2018-01-20T06:58:16.079Z · score: 35 (15 votes) · LW · GW

I think that a) Val has obtained a real and valuable skill, b) Oli is engaging in good faith and making a reasonable request, and c) that there is a type of post that Val could conceivably write which Oli would find satisfactory.

I hope to eventually prove this by achieving enough skill in this area myself (assuming I'm correct in understanding what Val's skill is), obtaining the value, and then conveying this in a convincing manner such that anyone reasoning as Oli does is motivated by my case.

Comment by ruby on Kenshō · 2018-01-20T03:35:48.366Z · score: 8 (5 votes) · LW · GW

Glad to hear you've given it a decent shot. That being the case, I think it is pretty legitimate for you to not invest further time.

I do think that meditation/mindfulness can offer things not obtainable via the alternatives you listed, but I don't think I could make a successful case for it briefly. My only remaining recommendation would be, if you haven't, to spend some time meditating with a focus on your sensations and emotional state, instead of the more typical breathing. I especially recommend it when experiencing stronger emotions.

But I suppose I'll just have to go off and do some remarkable things!

Comment by ruby on Kenshō · 2018-01-20T02:47:16.034Z · score: 18 (7 votes) · LW · GW

Glad it's helpful!

Psychological resilience and motivated cognition are difficult to measure, but I'm very certain they're real things. Not everything real and which has a large causal effect on the world is easily measured. I'm not inclined to sketch out protocols for measuring these things in this comment thread, but I'd recommend How To Measure Anything as the book I'd turn to if I was to try.

Comment by ruby on Kenshō · 2018-01-20T02:11:06.208Z · score: 50 (16 votes) · LW · GW

I haven't achieved any state profound enough that I'd consider it enlightenment, but I'll answer based on my understanding and what I've experienced so far.

I don't think there is a trivially-verifiable power conferred by enlightenment, but I would wager that people who have experienced enlightenment will perform systematically better at certain tasks, including:

  • Maintaining emotional stability and wellbeing regardless of circumstance, e.g. intense stress, uncertainty, tragic loss.
  • Better ability to stare directly at uncomfortable truths, and resultantly, less motivated cognition.

It's a useful state to achieve if you plan to wake up each day, confront the sheer magnitude of the suffering that exists in the world, or carry the burden of trying to ensure the far future is as good as it could be, while hoping to be a psychologically well-adjusted and effective human. All the more so if the tasks you carry out push you to your limits[1].

It'd take resource-intensive experiments to measure these effects, but I'd still wager on their existence. Much of my confidence is because each time I feel myself move along these dimensions, I reap marginal benefits.

[1] I think many EAs suffer because they take on these tasks without the mental infrastructure required to bear them and still flourish.

Comment by ruby on Kenshō · 2018-01-20T01:58:40.356Z · score: 17 (5 votes) · LW · GW

I don't think you need to approach meditation as a wager of vast resources for a gain obtained only at the end. My experience is that a modest amount of meditation, properly approached, has offered me substantial benefits. My recommendation is to spend a modest number of hours trying meditation out, and use the information obtained to judge whether or not it is worth further investment.

I have some detailed models of what meditation accomplishes and why, and I hope to write about them eventually. Till then, I'm happy to chat. I'd also recommend the Science of Enlightenment by Shinzen Young; definitely heavy on the grand promises, but he offers more models of what's going on than most texts.

Comment by ruby on Kenshō · 2018-01-20T01:42:23.196Z · score: 38 (12 votes) · LW · GW

My sense is that "enlightenment" is a perceptual-emotional shift rather than any change of belief or judgment, and this makes the communication difficult, same as communicating any other qualia to a person who hasn't had it. It's not unlike trying to communicate what a hypothetical novel color looks like to someone who hasn't seen it.

Of course, if I can see ultraviolet colors (due to some novel CRISPR treatment or something), I can offer a good description of the mechanics producing my unique experience, i.e. "I can see a wavelength you can't." In the case of enlightenment, however, we don't have commonly accepted and understood models like wavelength of light. If we did for qualia too, I think Val could communicate in an understandable way what was going on in his mind, even if the mechanical description cannot convey the actual experience. (I'm reminded of the Mary's Room thought experiment.)

In the case of Val's Kensho, I don't think I've ever occupied that mental state, but I've experienced enough variations in relevant dimensions of perception, emotion, and relation to reality that I get that he's gone in a certain direction in a certain coordinate system of sorts. Through my understanding alone I don't occupy the same perceptual-mental state, but I feel like I could follow if I did the right things.

I think the advice to get used to using fake frames as leading towards this is on point, since it's close to the skill of shifting one's perceptual-emotional state. Rationalists focus on having a map which matches the territory and are therefore constantly drawing in new lines and editing old ones; Val's pointing at the skill of reconsidering the ontology of the representation. What if roads, houses, and trees weren't the basic units of a map? This thought maneuver requires pulling back from one's "object level models", and I see that pulling back generalizing to pulling back entirely from models and being able to see "raw perception-emotion". At that level, there are mental transformations possible which aren't about beliefs or judgments. You don't shift to consider death less bad, but your relationship to it is changed, even if it is still horrific.

"Okay" is such an underqualified word for what I think Val is trying to convey. At least if it's the same thing I have a sense of.

Comment by ruby on Kenshō · 2018-01-20T01:10:52.319Z · score: 8 (3 votes) · LW · GW

Good post! I'm excited for your milestone. I'm not sure if I can discern between my having enough experience with mindfulness and acceptance to get what you're pointing at, or if I'm simply using my closest conceptual bucket, but I believe your experience is real (if not always your interpretation of it).

Comment by Ruby on [deleted post] 2017-10-23T01:15:09.824Z

This is a great post. In addition to the main points, your example around Guess-/Ask-/Tell-Cultures was useful for perspective taking in a way that somehow feels like it generalizes beyond the specific example for me.

Identities are [Subconscious] Strategies

2017-10-15T18:10:46.042Z · score: 20 (9 votes)
Comment by Ruby on [deleted post] 2017-10-15T02:20:53.239Z

I feel that Nate Soares's post Rest in Motion is relevant here, and, by extension, my own response to that post.

Comment by ruby on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-09T04:28:13.064Z · score: 4 (4 votes) · LW · GW

I'm surprised by this idea of treating SSC as a rationalist hub. I love Scott, Scott's blog, and Scott's writing. Still, it doesn't seem like it is a "rationality blog" to me. Not directly at least. Scott is applying a good deal of epistemic rationality to his topics of interest, but the blog isn't about epistemic rationality, and even less so about practical rationality. (I would say that Brienne's and Nate's 'self-help' posts are much closer to that.) By paying attention, one might extract the rationality principles Scott is using, but they're not outlined.

There's a separate claim that while Scott's blog isn't about rationality in the same way LW is, it has attracted the same audience, and therefore can be a rationality attractor/hub. This has some legitimacy, but I still don't like it. LW has attracted a lot of people who like to debate interesting topics and ideas on the internet, with a small fraction who are interested in going out and doing things (or just staying in, but actually changing themselves). Scott's blog, being about ideas, seems like it also attracts lots of people who simply like mental stimulation, but without a filter for those most interested in doing. I'd really like our rationality community hubs to select for those who want to take rationality seriously and implement it in their minds and actions.

On this count of selecting for - or at least being about - doing, the EA Forum is actually quite good.

Lastly, maybe I feel strong resistance to trying to open Scott's blog up because it seems like it really is his personal blog about things he wants to write about - and just because he's really successful and part of the community doesn't mean we get to tell him now 'open it up'/'give it over'/co-opt it for the rest of the community.

Comment by ruby on Meetup : LW Copenhagen: December Meetup · 2014-12-16T15:46:21.850Z · score: 0 (0 votes) · LW · GW

No page on, I'm afraid.

Meetup : LW Copenhagen: December Meetup

2014-12-04T17:25:24.060Z · score: 1 (2 votes)
Comment by ruby on Meetup : Copenhagen September Social Meetup - Botanisk Have · 2014-09-27T12:35:32.940Z · score: 0 (0 votes) · LW · GW

I'm on a bench near the Botanisk Have Butik. Entrance to the park is corner of Gothersgade and Øster Voldgade.

Meetup : Copenhagen September Social Meetup - Botanisk Have

2014-09-21T11:50:44.225Z · score: 1 (2 votes)

Meetup : LW Copenhagen - September: This Wavefunction Has Uncollapsed

2014-09-07T08:19:46.172Z · score: 1 (2 votes)
Comment by ruby on Motivators: Altruistic Actions for Non-Altruistic Reasons · 2014-06-24T01:21:47.343Z · score: 2 (4 votes) · LW · GW

You are very kind, good sir.

Do me one more favour - share a thought you have in response to something I wrote. There is much to still be said, but there has been no discussion.

Comment by ruby on Motivators: Altruistic Actions for Non-Altruistic Reasons · 2014-06-22T00:09:02.453Z · score: 4 (4 votes) · LW · GW

Thanks! Fixed.

Motivators: Altruistic Actions for Non-Altruistic Reasons

2014-06-21T16:32:50.825Z · score: 19 (22 votes)
Comment by ruby on On Terminal Goals and Virtue Ethics · 2014-06-19T13:04:50.833Z · score: 0 (0 votes) · LW · GW

A goal I set is a state of the world I am actively trying to bring about, whereas a value is something which . . . has value to me. The things I value dictate which world states I prefer, but whether for lack of resources or because of conflicts, I only pursue the world states resulting from a subset of my values.

So not everything I value ends up being a goal. This includes terminal goals. For instance, I think that it is true that I terminally value being a talented artist - greatly skilled in creative expression - being so would make me happy in and of itself, but it's not a goal of mine because I can't prioritise it with the resources I have. Values like eliminating suffering and misery are ones which matter to me more, and get translated into corresponding goals to change the world via action.

I haven't seen a definition provided, but if I had to provide one for 'terminal goal' it would be that it's a goal whose attainment constitutes fulfilment of a terminal value. Possessing money is rarely a terminal value, and so accruing money isn't a terminal goal, even if it is intermediary to achieving a world state desired for its own sake. Accomplishing the goal of having all the hungry people fed is the world state which lines up with the value of no suffering, hence it's terminal. They're close, but not quite the same thing.

I think it makes sense to possibly not work with terminal goals on a motivational/decision making level, but it doesn't seem possible (or at least likely) that someone wouldn't have terminal values, in the sense of not having states of the world which they prefer over others. [These world-state-preferences might not be completely stable or consistent, but if you prefer the world be one way than another, that's a value.]

Comment by ruby on On Terminal Goals and Virtue Ethics · 2014-06-19T12:38:54.660Z · score: 1 (1 votes) · LW · GW

I feel like there's not much of a distinction being made here between terminal values and terminal goals. I think they're importantly different things.

Meetup : July Rationality Dojo: Disagreement

2014-06-12T14:23:04.899Z · score: 1 (2 votes)

Australian Mega-Meetup 2014 Retrospective

2014-05-22T01:59:02.912Z · score: 21 (22 votes)

Credence Calibration Icebreaker Game

2014-05-16T07:29:25.527Z · score: 15 (19 votes)

Meetup : Melbourne June Rationality Dojo: Memory

2014-05-15T12:53:45.469Z · score: 1 (2 votes)

Meetup : LW Australia Mega-Meetup

2014-04-13T11:23:34.500Z · score: 4 (5 votes)

LW Australia Weekend Retreat

2014-04-07T09:45:35.729Z · score: 8 (9 votes)