Comment by ruby on Combat vs Nurture & Meta-Contrarianism · 2019-01-13T02:14:01.527Z · score: 2 (1 votes) · LW · GW

I've been at a workshop and haven't had much chance to engage with this post. Thanks for writing it - it's an excellent reply and says many things better than I managed to. I especially like the hierarchy which swings between nurture and combat; that seems well described to me. Also, strong endorsement for meeting conversations where they're at.

Comment by ruby on Optimizing for Stories (vs Optimizing Reality) · 2019-01-07T23:39:42.602Z · score: 4 (2 votes) · LW · GW

I probably didn't emphasize this enough in the main post, but the idea I'm really going for is that there is a difference between optimizing for stories and optimizing for reality. There's a difference in goal and intention. Even if humans never see "rock-bottom reality" itself and everything is mediated through experience, there is still a big difference between a) someone attempting to change an aspect of the underlying reality such that actually different things happen in the world, and b) someone attempting to change the judgments of another person by inputting the right series of bits into them.

Optimizing stories is really about a mono-focus on optimizing the specific corners of reality which exist inside human heads.

Comment by ruby on Rationalization · 2019-01-07T16:07:30.272Z · score: 2 (1 votes) · LW · GW

Oh, right. Once upon a time I knew that was the word. Thanks.

Comment by ruby on Rationalization · 2019-01-07T08:35:58.449Z · score: 3 (2 votes) · LW · GW

I didn't know that was the word for excuse, but I think it's an excellent word itself to use for rationalization. No synonym required. ״רצה״ is the root for "want" and "הַתְרָצָה" is the reflexive conjugation, so it's approximately "self-wanting." Which is exactly what rationalization is - reasoning towards what you want to be true.

Optimizing for Stories (vs Optimizing Reality)

2019-01-07T08:03:22.512Z · score: 43 (14 votes)
Comment by ruby on Learning-Intentions vs Doing-Intentions · 2019-01-03T01:48:06.424Z · score: 2 (1 votes) · LW · GW

The intended meaning of the post is that there can be "producing in order to produce" and "producing in order to learn". Producing to learn might involve very real producing, but the underlying goal is different. You might be trying to get real investment from real investors, but the goal could be a) receiving the money, or b) testing your assumptions about whether you can raise successfully.

In practice, I think you're right that sometimes (or often) both intentions are necessary. You need to get users both to learn and to survive. Still, the two intentions trade off against each other and it's possible to forget about one or the other. My primary recommendation is to be aware and deliberate about your intentions so that you have the right ones at the right time in the right amount.

Comment by ruby on Learning-Intentions vs Doing-Intentions · 2019-01-02T08:58:35.553Z · score: 3 (2 votes) · LW · GW

Thanks for the link! Sorry to change from the term "mindset" to "intention" on you.

Learning-Intentions vs Doing-Intentions

2019-01-01T22:22:39.364Z · score: 58 (21 votes)
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-12-05T21:48:53.482Z · score: 10 (4 votes) · LW · GW

I emphatically agree with Zvi about the mistakenness of saying "you're dumb."

In my own words:

1) "You're absolutely wrong" is strong language, but not unreasonable in a combative culture if that's what you believe and you're honestly reporting it.

2a) "You're saying/doing something dumb" is a bit more personal than a statement about a particular view. Though I think it's rare that one needs to say this, and it's only appropriate when levels of trust and respect are very high.

2b) "You're being dumb" is a little harsher than "saying/doing something dumb," though the two don't register as very different to me - perhaps they do to Mary Chernyshenko.

3) "You're dumb" (introduced in this discussion by Benquo) is now making a general statement about someone else and is very problematic. It erodes the assumptions of respect which make combative-type cultures feasible in the first place. I'd say that conversations where people are calling others dumb to their faces are not situations I'd think of as healthy, good-faith, combative-type conversations.

[As an aside, even mild "that seems wrong to me"-type statements should be recognized as potentially combative. There are many contexts where any explicit disagreement registers as hostile or contrarian.]

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-29T01:43:27.485Z · score: 2 (1 votes) · LW · GW

Seconded. Would like to hear the in-depth version.

Comment by ruby on Four factors which moderate the intensity of emotions · 2018-11-28T02:10:00.413Z · score: 6 (3 votes) · LW · GW

Thanks for surfacing these! I've now edited the post to mention these sources and your comment.

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-26T23:11:44.834Z · score: 5 (3 votes) · LW · GW

Thanks, that was clarifying and helpful.

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-26T19:36:37.038Z · score: 6 (4 votes) · LW · GW
I’d propose is whether the participants are trying to maximize (and therefore learn a lot) or minimize (and therefore avoid conflict) the scope of the argument.

Interesting, though I'm not sure I fully understand your meaning. Do you mind elaborating your examples a touch?

Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-26T07:24:19.052Z · score: 19 (7 votes) · LW · GW

Two dimensions independent of the two cultures

Having been inspired by the comments here, I'm now thinking that there are two communication dimensions at play within the Cultures. The correlation between these dimensions and the Cultures is incomplete, which has been causing confusion.

1) The adversarial-collaborative dimension. In adversarial communication, each side attacks the other's views while defending its own. Collaborative communication is marked by openness and curiosity toward each other's ideas. As Ben Pace describes it:

I'll say a thing, and you'll add to it. Lots of 'yes-and'. If you disagree, then we'll step back a bit, and continue building where we can both see the truth. If I disagree, I won't attack your idea, but I'll simply notice I'm confused about a piece of the structure we're building, and ask you to add something else instead, or wonder why you'd want to build it that way.

2) The "emotional concern and effort" dimension. Communication can be conducted with little attention or effort given to ensuring the emotional comfort of the participants, often resulting in directness or bluntness (because it's assumed people are fine and don't need things softened). Alternatively, communication can be conducted with each participant putting in effort to ensure the other feels okay (feels validated/respected/safe/etc.). At this end of the spectrum, words, tone, and expression are carefully selected, using a model of the other person to ensure they are taken care of.

My possible bucket error

It was easy for me to notice "adversarial, low effort towards emotional comfort" as one cluster of communication behaviors and "collaborative, high concern" as another. Those two clusters are what I identified as Combat Culture and Nurture Culture.

Commenters here, including at least Raemon, Said, and Ben Pace, have rightly noted that you can have communication where participants are direct, blunt, and not proactively concerned with the feelings of the other, while nonetheless being open, curious, and working collaboratively to find the truth in a spirit of "being on the same team." This maybe falls under Combat Culture too, but it's a less central example.

On the other side, I think it's entirely possible to be acting combatively, i.e. with an external appearance of aggression and hostility, while nonetheless being very attentive to the feelings and experience of the other. Imagine two fencers sparring in the practice ring: during a bout, each is attacking and trying to win; however, they're also taking great care not to actually injure the other. They would stop the moment they suspected they had, and switch to an overtly nurturing mode.

A 2x2 grid?

One could create a 2x2 grid with the two dimensions described in this comment. Combat and Nurture cultures most directly fit in two of the quadrants, but I think the other two quadrants are populated by many instances of real-world communication. In fact, these other two quadrants might contain some very healthy communication.

Comment by ruby on Four factors which moderate the intensity of emotions · 2018-11-24T23:06:35.207Z · score: 6 (4 votes) · LW · GW

Epistemic status tag added. Thanks.

Comment by ruby on Four factors which moderate the intensity of emotions · 2018-11-24T22:12:48.275Z · score: 6 (4 votes) · LW · GW

Personal observation. I don't have any particular sources for anything here, though my thinking is influenced by some academic reading about emotions over the years. These are models more than conclusions, and my intention is that readers evaluate them using their own observations rather than accept them based on sources, studies, or my say-so.

Four factors which moderate the intensity of emotions

2018-11-24T20:40:12.139Z · score: 60 (18 votes)
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-13T20:10:14.899Z · score: 2 (2 votes) · LW · GW
I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

I second that experience.

Comment by ruby on Combat vs Nurture: Cultural Genesis · 2018-11-13T20:08:05.601Z · score: 2 (2 votes) · LW · GW

I completely agree that Nurture Culture has capabilities far beyond getting along without conflict.

When I think of examples of Nurture Culture at its most powerful, much of what comes to mind is the mode of relating used in Focusing, Internal Double-Crux, and Internal Family Systems. There's a mode of relating that facilitates hazy, not-necessarily-articulate, reticent, even fearful parts of oneself voicing themselves, by being open, encouraging, validating, and non-judgmental (i.e., traits which are not particularly the hallmarks of Combat Culture).

I've found that increased skill with "advanced" Nurture Culture helped me relate to parts of myself far better alongside relating to others better.

At the risk of being a little repetitive, I think the modeling required for this mode of relating is not that of beliefs but of feelings. You model (and are attentive and responsive to) the feelings of the other (internal or external) in the context: continuously gauging their comfort, willingness, and needs within the conversation, pushing and giving space as required.

Comment by ruby on Combat vs Nurture: Cultural Genesis · 2018-11-12T22:54:36.825Z · score: 4 (3 votes) · LW · GW

Also, in the context of discussion and debate, Nurture Culture is a stance of:

"What you're saying sounds alien and crazy and wrong, but I will operate as though you have something valuable to say and I will orient towards you with openness, curiosity, and patience. Even though I don't understand what you're saying or think it's wrong, I still welcome you. We are not fighting."

This stance is warranted precisely when similarity is low and ITT-passing is a distant possibility (although it is this attitude which could move you towards it).

Comment by ruby on Combat vs Nurture: Cultural Genesis · 2018-11-12T22:07:03.375Z · score: 1 (1 votes) · LW · GW

Edit: Plausibly what I'm describing here is what you call a "degenerate case of nurture that is just about nice and polite" but I think there's a lot more to it than common notions of niceness and politeness. 1) In the ideal case, it's motivated by real caring, not social convention. 2) It's more demanding than mere pleases and thank yous.


I think you have something different in mind by "Nurture Culture" than what I do (possibly quite real, but still something else). For what I'm thinking of, ITT is two to three orders of magnitude more modeling than required, and probably the wrong kind of modeling, i.e., of beliefs rather than of feelings.

Here's a slightly longer example of what I was thinking of as Nurture Culture:

Bob: *Is an employee at Ad Corp. He enters the conference room to present the budget figures he's calculated to his manager, Alice, and some other colleagues.*
Alice: *Notices several bad mistakes in the budget.*
Alice: "Thanks Bob! We appreciate you putting in the long hours to get this done before the deadline. Okay, hmm. I like how you're breaking down ad spend across channels, that seems right . . . . can you walk me through columns F and G? Those aren't clear to me."

Alice isn't doing anything profound here; she isn't scrying Bob's soul or getting at any deep, difficult understanding of a complicated worldview that he has. She's just making a few assumptions about how someone new might feel and acting on them:

a) She recognizes that even if he made mistakes, Bob put in hard work, wants to do a good job, and probably wants her (his manager's) approval.

b) Although the mistakes were most immediately salient to her, she models that Bob might be hurt (and poorly conditioned) if she zeroes in on them first. Instead, she starts by thanking and validating Bob so that he knows the overall context is one where he's valued and is getting approval.

c) Once she's gone through the process of getting Bob comfortable, she starts to gently bring his attention towards the mistakes and surface them for discussion in a way that doesn't shame him.

This takes some skill, practice, and effort, which is why it gets taught in management books and the feedback training courses HR runs at workplaces, but it's not beyond most people. I don't know the Kegan levels well, but I don't think it should take a high one. When I say "more complicated social routine", I just mean it's more complicated than "say exactly what you're thinking and feeling with little filter."

[I'll also note that whatever the culture, if Bob is a new employee, then he might justifiably be doubtful about how he and his work are judged from the outset, such that he benefits from being Nurtured rather than having his mistakes placed front and center in his first week on the job. Though once this scene has played out fifty times and Alice and Bob deeply trust and respect each other - whatever the baseline culture was - I imagine that Alice will be a lot more direct, because she doesn't need to freshly establish the trust and respect.]

Combat vs Nurture: Cultural Genesis

2018-11-12T02:11:42.921Z · score: 36 (11 votes)
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-11T22:44:28.278Z · score: 46 (17 votes) · LW · GW

This content was moved from the main body of the post to this comment. After receiving some good feedback, I've decided I'll follow the template of "advice section in comments" for most of my posts.

Some Quick Advice


  • See if you can notice conversational cultures/styles which match what I’ve described.
  • Begin noticing if you lean towards a particular style.
  • Begin paying attention to whether those you discuss with might have a particular style, especially if it’s different from yours.
  • Start determining if different groups you’re a member of, e.g. clubs or workplaces, lean in one cultural direction or another.


  • Reflect on the advantages that cultures/styles different from your own have and why others might use them instead.
  • Consider that on some occasions styles different to yours might be more appropriate.
  • Don’t assume that alternatives to your own culture are obviously wrong, stupid, bad, or lacking in skills.


  • Push yourself a little in the direction of adopting a non-default style for you. Perhaps you already do but push yourself a little more. Try doing so and feeling comfortable and open, if possible.

Ideal and Degenerate Forms of Each Culture

Unsurprisingly, each of the cultures has its advantages and weaknesses, mostly to do with when and where it's most effective. I hope to say more in future posts, but here I'll quickly list what I think the cultures look like at their best and worst.

Combat Culture

At its best

  • Communicators can more fully focus their attention on the ideas and content rather than devoting thought to the impact of their speech acts on the emotions of others.
  • Communication can be direct and unambiguous when it doesn’t need to be “cushioned” to protect feelings.
  • The very combativeness and aggression prove to all involved that they’re respected and included.

At its worst

  • The underlying truth-seeking nature of conversation is lost and instead becomes a fight or competition to determine who is Right.
  • The combative style around ideas is abused to dismiss, dominate, bully, belittle, or exclude others.
  • It devolves into a status game.

Nurture Culture

At its best

  • Everyone is made to feel safe, welcomed, and encouraged to participate without fear of ridicule, dismissal, or judgment.
  • People assist each other to develop their ideas, seeking to find their strongest versions rather than attacking their weak points. Curiosity pervades.

At its worst

  • Fear of inducing a negative feeling in others and the need to create positive feelings and impressions of inclusion dominate over any truth-seeking goal.
  • Empathy becomes pathological and ideas are never criticized.
  • Communicators spend most of their thought and attention on the social interaction itself rather than the ideas they’re trying to exchange.
Comment by ruby on Conversational Cultures: Combat vs Nurture · 2018-11-10T07:24:59.395Z · score: 17 (9 votes) · LW · GW


I agree that Nurture Culture can be exploited for status too, perhaps equally so. When I was writing the post, I was thinking that Combat Culture more readily heads in that direction since in Combat Culture you are already permitted to act in ways which in other contexts would be outright power-plays, e.g. calling their ideas dumb. With Nurture Culture, it has to be more indirect, e.g. the whole "you are not conforming to the norm" thing. Thinking about it more, I'm not sure. It could be they're on par for status exploitability.

An increase in combativeness alongside familiarity and comfort matches my observation too, but I don't think it's universal - possibly a selection effect for those more natively Combative. To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends. Possibly nurturing is one way to show that you care and this causes it to become ramped up in some very close relationships. Possibly it's that receiving nurturing creates warm feelings of safety, security, and comfort for some such that they provide this to each other to a higher extent in closer relationships. I'm not sure, I haven't thought about this particular aspect in depth.

Conversational Cultures: Combat vs Nurture

2018-11-09T23:16:15.686Z · score: 120 (41 votes)
Comment by ruby on Kenshō · 2018-01-21T19:52:50.814Z · score: 35 (9 votes) · LW · GW

One more pointer - clarity on the purpose of a post is paramount. From your comments, it seems like a few different purposes got mixed in:

a) Kensho/Looking are very powerful, I want to motivate you to try them.

b) There is a puzzle around communicating things which you can only conceptually understand once you've experienced them. (I'd focus mostly on the puzzle and make it clear Kensho is but an example in this post.)

There's a dictum: "1) Tell them what you're going to tell them, 2) Tell them, 3) Tell them what you've told them." Going by your CFAR classes too, I feel like you don't like telling people what you're going to tell them (you even want them to be confused). I think this unsurprisingly results in confusion.

Comment by ruby on Kenshō · 2018-01-21T19:29:26.153Z · score: 57 (16 votes) · LW · GW
Appreciation for you, Ruby. :-)
I’m honestly flummoxed about how to create the type of post you’re suggesting. Given the clarity of everything else you’ve written here about this, I’m inclined to believe you. And I’d much like to write that post, or see it written. Any pointers?

Thanks! Okay, some pointers :) You asked for them!

Your writing style is characteristically evocative - the kind of writing I'd use to point at the majesty of stars, the tragedy of death, and the grandeur of all that could be. It's emotional, and that is perhaps both its strength and its weakness.

You have the right style to conjure strong feelings around things one already believes and endorses (perfect for Solstice), but perhaps less so to convince people of things they're skeptical of. A pastor's rousing sermon about Jesus's love for all mankind, while moving to his congregation, does little to convince me about the matter.


Unfortunately, it seems that people who don’t know how to intentionally Look literally cannot conceptually understand what Looking is for . . .

I emphatically reject this. You've observed that you don't feel understood when you explain your experience and inferred that this is a deficiency on the part of the listener rather than the explainer. I think that's the wrong inference, even if many explainers have struggled similarly. Explaining is hard. But even supposing you are completely right, most listeners are not going to respond charitably to claims of "you couldn't possibly understand". (I'll be directly harsh and say I think accusing someone of not engaging in good faith rather than doubting your own communication is suggestive of the wrong attitude.)

Rightly or wrongly, beneath the post there is an undertone with a few sentiments: "Oh my god, guys!!", "This is something really, really important and you couldn't possibly understand, I'm frustrated", and "You don't get it! Only special people get it." (And perhaps a hint of enjoying the fact you have a special secret that others don't. We're all human, after all.)

The tone I think would be persuasive is along the lines of "I think I'm onto something big, I think it's had big benefits, I'd like you to benefit too, this is difficult to convey, but please hear out my best case."


At the end of the day, I think this is about providing a clear and solid case for why you believe what you believe. Sketching it out lightly, the case I might make could look like:

Observations: I spent time meditating; I have experienced benefits X and Y.
Model: Meditation and mindfulness consist of moving parts A1, A2, A3, which predict results X and Y. (Here are my models of neuroscience, attention, etc.)
Claim: Meditation and mindfulness practice has given me benefits X and Y.

Listeners might then doubt any of the pieces. They might be incredulous that I experienced such extreme benefits (your claims are pretty extreme), they might doubt that even if I experienced these benefits, they were attributable to what I'm claiming is the cause (rather than, say, placebo or mania), or they might find my model implausible (brains don't work that way!). But at least if I have a 3rd-person, mechanistic model, we can argue about its correctness.

Maybe I should add that we can analogize Kensho/enlightenment to consciousness. If we imagine some unconscious AIs modelling the possible existence, possible purpose, and expected observations you would get if humans have this "consciousness" thing, I think they could reasonably do that even if there was no way for them to experience consciousness from the inside with their own minds. They could talk about how it worked and what its benefits were without "seeing" it from the inside. I think they could use that understanding to decide if they want to self-modify to have consciousness, and that a convincing case could be made "from the outside".

Summing up a rambly response, I think a good post on enlightenment has at least one of the following:

1) Your observations, inferences, and why the reader should trust them.

2) A 3rd party perspective, mechanistic model for how enlightenment works and the resultant predictions.

To close, the post I'd write would largely be: this is what I've experienced, this is the evidence, and this is my model for WHY.

Comment by ruby on Kenshō · 2018-01-20T06:58:16.079Z · score: 35 (15 votes) · LW · GW

I think that a) Val has obtained a real and valuable skill, b) Oli is engaging in good faith and making a reasonable request, and c) that there is a type of post that Val could conceivably write which Oli would find satisfactory.

I hope to eventually prove this by achieving enough skill in this area myself (assuming I'm correct in understanding what Val's skill is), obtaining the value, and then conveying this in a convincing manner such that anyone reasoning as Oli does is motivated by my case.

Comment by ruby on Kenshō · 2018-01-20T03:35:48.366Z · score: 8 (5 votes) · LW · GW

Glad to hear you've given it a decent shot. That being the case, I think it is pretty legitimate for you to not invest further time.

I do think that meditation/mindfulness can offer things not obtainable via the alternatives you listed, but I don't think I could make a successful case for it briefly. My only remaining recommendation would be, if you haven't, to spend some time meditating with a focus on your sensations and emotional state, instead of the more typical breathing. I especially recommend it when experiencing stronger emotions.

But I suppose I'll just have to go off and do some remarkable things!

Comment by ruby on Kenshō · 2018-01-20T02:47:16.034Z · score: 18 (7 votes) · LW · GW

Glad it's helpful!

Psychological resilience and motivated cognition are difficult to measure, but I'm very certain they're real things. Not everything real and which has a large causal effect on the world is easily measured. I'm not inclined to sketch out protocols for measuring these things in this comment thread, but I'd recommend How To Measure Anything as the book I'd turn to if I was to try.

Comment by ruby on Kenshō · 2018-01-20T02:11:06.208Z · score: 50 (16 votes) · LW · GW

I haven't achieved any state profound enough that I'd consider it enlightenment, but I'll answer based on my understanding and what I've experienced so far.

I don't think there is a trivially verifiable power conferred by enlightenment, but I would wager that people who have experienced enlightenment will perform systematically better at certain tasks, including:

  • Maintaining emotional stability and wellbeing regardless of circumstance, e.g. intense stress, uncertainty, tragic loss.
  • A better ability to stare directly at uncomfortable truths, and, as a result, less motivated cognition.

It's a useful state to achieve if you plan to wake up each day, confront the sheer magnitude of the suffering that exists in the world, or carry the burden of trying to ensure the far future is as good as it could be, while hoping to be a psychologically well-adjusted and effective human. All the more so if the tasks you carry out push you to your limits[1].

It'd take resource-intensive experiments to measure these effects, but I'd still wager on their existence. Much of my confidence comes from the fact that each time I feel myself move along these dimensions, I reap marginal benefits.

[1] I think many EAs suffer because they take on these tasks without the mental infrastructure required to bear them and still flourish.

Comment by ruby on Kenshō · 2018-01-20T01:58:40.356Z · score: 17 (5 votes) · LW · GW

I don't think you need to approach meditation as a wager of vast resources for a gain obtained only at the end. My experience is that a modest amount of meditation, properly approached, has offered me substantial benefits. My recommendation is to spend a modest number of hours trying meditation out, and use the information obtained to judge whether or not it is worth further investment.

I have some detailed models of what meditation accomplishes and why, and I hope to write about them eventually. Till then, I'm happy to chat. I'd also recommend the Science of Enlightenment by Shinzen Young; definitely heavy on the grand promises, but he offers more models of what's going on than most texts.

Comment by ruby on Kenshō · 2018-01-20T01:42:23.196Z · score: 38 (12 votes) · LW · GW

My sense is that "enlightenment" is a perceptual-emotional shift rather than any change of belief or judgment, and this makes the communication difficult, the same as communicating any other qualia to a person who hasn't experienced them. It's not unlike trying to communicate what a hypothetical novel color looks like to someone who hasn't seen it.

Of course, if I could see ultraviolet colors (due to some novel CRISPR treatment or something), I could offer a good description of the mechanics producing my unique experience, i.e. "I can see a wavelength you can't." In the case of enlightenment, however, we don't have commonly accepted and understood models like wavelength of light. If we did for qualia too, I think Val could communicate understandably what was going on in his mind, even if the mechanical description cannot convey the actual experience. (I'm reminded of the Mary's Room thought experiment.)

In the case of Val's Kensho, I don't think I've ever occupied that mental state, but I've experienced enough variations along the relevant dimensions of perception, emotion, and relation to reality that I get that he's gone in a certain direction in a certain coordinate system of sorts. I don't occupy the same perceptual-mental state through my understanding alone, but I feel like I could follow if I did the right things.

I think the advice to get used to using fake frames is on point as leading towards this, since it's close to the skill of shifting one's perceptual-emotional state. Rationalists focus on having a map which matches the territory and are therefore constantly drawing in new lines and editing old ones; Val's pointing at the skill of reconsidering the ontology of the representation. What if roads, houses, and trees weren't the basic units of a map? This thought maneuver requires pulling back from one's "object-level models", and I see that pulling back generalizing to pulling back entirely from models and being able to see "raw perception-emotion". At that level, there are mental transformations possible which aren't about beliefs or judgments. You don't shift to consider death less bad, but your relationship to it is changed, even if it is still horrific.

"Okay" is such an underqualified word for what I think Val is trying to convey. At least if it's the same thing I have a sense of.

Comment by ruby on Kenshō · 2018-01-20T01:10:52.319Z · score: 8 (3 votes) · LW · GW

Good post! I'm excited for your milestone. I'm not sure whether I can discern that I have enough experience with mindfulness and acceptance to get what you're pointing at, or whether I'm simply using my closest conceptual bucket, but I believe your experience is real (if not always your interpretation of it).

Comment by Ruby on [deleted post] 2017-10-23T01:15:09.824Z

This is a great post. In addition to the main points, your example around Guess-/Ask-/Tell-Cultures was useful for perspective taking in a way that somehow feels like it generalizes beyond the specific example for me.

Identities are [Subconscious] Strategies

2017-10-15T18:10:46.042Z · score: 20 (9 votes)
Comment by Ruby on [deleted post] 2017-10-15T02:20:53.239Z

I feel that Nate Soares's post Rest in Motion is relevant here, and, by extension, my own response to that post.

Comment by ruby on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-09T04:28:13.064Z · score: 4 (4 votes) · LW · GW

I'm surprised by this idea of treating SSC as a rationalist hub. I love Scott, Scott's blog, and Scott's writing. Still, it doesn't seem like it is a "rationality blog" to me. Not directly at least. Scott is applying a good deal of epistemic rationality to his topics of interest, but the blog isn't about epistemic rationality, and even less so about practical rationality. (I would say that Brienne's and Nate's 'self-help' posts are much closer to that.) By paying attention, one might extract the rationality principles Scott is using, but they're not outlined.

There's a separate claim that while Scott's blog isn't about rationality in the same way LW is, it has attracted the same audience, and therefore can be a rationality attractor/hub. This has some legitimacy, but I still don't like it. LW has attracted a lot of people who like to debate interesting topics and ideas on the internet, with a small fraction who are interested in going out and doing things (or just staying in, but actually changing themselves). Scott's blog, being about ideas, seems that it also attracts lots of people who simply like mental stimulation, but without a filter for those most interested in doing. I'd really like our rationality community hubs to select for those who want to take rationality seriously and implement it in their minds and actions.

On this front of selecting for - or at least being about - doing, the EA Forum is actually quite good.

Lastly, maybe I feel strong resistance to trying to open Scott's blog up because it seems like it really is his personal blog about things he wants to write about - and just because he's really successful and part of the community doesn't mean we get to tell him now 'open it up'/'give it over'/co-opt it for the rest of the community.

Comment by ruby on Meetup : LW Copenhagen: December Meetup · 2014-12-16T15:46:21.850Z · score: 0 (0 votes) · LW · GW

No page on, I'm afraid.

Meetup : LW Copenhagen: December Meetup

2014-12-04T17:25:24.060Z · score: 1 (2 votes)
Comment by ruby on Meetup : Copenhagen September Social Meetup - Botanisk Have · 2014-09-27T12:35:32.940Z · score: 0 (0 votes) · LW · GW

I'm on a bench near the Botanisk Have Butik. Entrance to the park is corner of Gothersgade and Øster Voldgade.

Meetup : Copenhagen September Social Meetup - Botanisk Have

2014-09-21T11:50:44.225Z · score: 1 (2 votes)

Meetup : LW Copenhagen - September: This Wavefunction Has Uncollapsed

2014-09-07T08:19:46.172Z · score: 1 (2 votes)
Comment by ruby on Motivators: Altruistic Actions for Non-Altruistic Reasons · 2014-06-24T01:21:47.343Z · score: 2 (4 votes) · LW · GW

You are very kind, good sir.

Do me one more favour - share a thought you have in response to something I wrote. There is much to still be said, but there has been no discussion.

Comment by ruby on Motivators: Altruistic Actions for Non-Altruistic Reasons · 2014-06-22T00:09:02.453Z · score: 4 (4 votes) · LW · GW

Thanks! Fixed.

Motivators: Altruistic Actions for Non-Altruistic Reasons

2014-06-21T16:32:50.825Z · score: 19 (22 votes)
Comment by ruby on On Terminal Goals and Virtue Ethics · 2014-06-19T13:04:50.833Z · score: 0 (0 votes) · LW · GW

A goal I set is a state of the world I am actively trying to bring about, whereas a value is something which . . . has value to me. The things I value dictate which world states I prefer, but for either lack of resources or conflict, I only pursue the world states resulting from a subset of my values.

So not everything I value ends up being a goal. This includes terminal goals. For instance, I think that it is true that I terminally value being a talented artist - greatly skilled in creative expression - being so would make me happy in and of itself, but it's not a goal of mine because I can't prioritise it with the resources I have. Values like eliminating suffering and misery are ones which matter to me more, and get translated into corresponding goals to change the world via action.

I haven't seen a definition provided, but if I had to provide one for 'terminal goal' it would be that it's a goal whose attainment constitutes fulfilment of a terminal value. Possessing money is rarely a terminal value, and so accruing money isn't a terminal goal, even if it is intermediary to achieving a world state desired for its own sake. Accomplishing the goal of having all the hungry people fed is the world state which lines up with the value of no suffering, hence it's terminal. They're close, but not quite the same thing.

I think it makes sense to possibly not work with terminal goals on a motivational/decision making level, but it doesn't seem possible (or at least likely) that someone wouldn't have terminal values, in the sense of not having states of the world which they prefer over others. [These world-state-preferences might not be completely stable or consistent, but if you prefer the world be one way rather than another, that's a value.]

Comment by ruby on On Terminal Goals and Virtue Ethics · 2014-06-19T12:38:54.660Z · score: 1 (1 votes) · LW · GW

I feel like there's not much of a distinction being made here between terminal values and terminal goals. I think they're importantly different things.

Comment by ruby on On Terminal Goals and Virtue Ethics · 2014-06-19T10:25:50.029Z · score: 1 (1 votes) · LW · GW

Level-1 is about rules which your habit and instinct can follow, but I wouldn't say they're ways to describe it. Here we're talking about normative rules, not descriptive System 1/System 2 stuff.

Comment by ruby on On Terminal Goals and Virtue Ethics · 2014-06-19T10:21:38.080Z · score: 3 (3 votes) · LW · GW

My understanding is that when Hare says rules or principles for level-1 he means it generically and is agnostic about what form they'd take. "Always be kind" is also a rule. For clarity, I'd substitute the word 'algorithm' for 'rules'/'principles'. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best - be it inviolable deontological rules, character-based virtue ethics, or something else.

Comment by ruby on On Terminal Goals and Virtue Ethics · 2014-06-18T03:23:42.395Z · score: 21 (21 votes) · LW · GW

If ever you want to refer to an elaboration and justification of this position, see R. M. Hare's two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).

To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.

So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word 'implant'; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.

How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.

Meetup : July Rationality Dojo: Disagreement

2014-06-12T14:23:04.899Z · score: 1 (2 votes)
Comment by ruby on Australian Mega-Meetup 2014 Retrospective · 2014-05-24T02:32:06.423Z · score: 7 (7 votes) · LW · GW

The whole thing hinges on how much you trust people when they assure you that you can say potentially upsetting thing X to them. Generally, not very much. I would never trust a sticker or declaration to the extent that I wouldn't model someone's response; it's just an update on that model.

It was emphasised that people didn't have to answer any question, but the empathy should have been equally pushed.

On this occasion, askers were very hesitant to ask questions they thought would be too personal, but those being asked invariably responded without any hesitation or unease. Discovering that you could ask personal questions you were curious about with only the positive consequences of closeness and openness was a win.

But this does all include a good deal of judgment. Not an exercise for a group not high in empathy or generally unconcerned about others' responses, nor for those who are easily pressured.

Comment by ruby on Australian Mega-Meetup 2014 Retrospective · 2014-05-23T13:25:05.945Z · score: 6 (8 votes) · LW · GW

I have updated towards your position.

Australian Mega-Meetup 2014 Retrospective

2014-05-22T01:59:02.912Z · score: 21 (22 votes)

Credence Calibration Icebreaker Game

2014-05-16T07:29:25.527Z · score: 15 (19 votes)

Meetup : Melbourne June Rationality Dojo: Memory

2014-05-15T12:53:45.469Z · score: 1 (2 votes)
Comment by ruby on Welcome to Less Wrong! (6th thread, July 2013) · 2014-04-20T10:00:46.701Z · score: 0 (0 votes) · LW · GW

And we're in action!

Comment by ruby on Meetup : Christchurch, NZ Inaugural Meetup · 2014-04-13T11:34:27.809Z · score: 1 (1 votes) · LW · GW

I know it's a long way, but if you're eager for LW company it'd be super great to have you guys at our LW Australia Mega-Meetup weekend retreat next month. We've already got one person from Auckland considering it. :)

Either way, best of luck growing your communities!

Meetup : LW Australia Mega-Meetup

2014-04-13T11:23:34.500Z · score: 4 (5 votes)

LW Australia Weekend Retreat

2014-04-07T09:45:35.729Z · score: 8 (9 votes)
Comment by ruby on Welcome to Less Wrong! (6th thread, July 2013) · 2014-04-07T04:41:27.065Z · score: 5 (5 votes) · LW · GW

Hey everyone,

This a new account for an old user. I've got a couple of substantial posts waiting in the wings and wanted to move to an account with different username from the one I first signed up with years ago. (Giving up on a mere 62 karma).

I'm planning a lengthy review of self-deception used for instrumental ends and a look into motivators vs. reasons, by which I mean something like: social approval is a motivator for donating, but helping people is the reason.

Those, and I need to post about a Less Wrong Australia Mega-Meetup which has been planned.

So pretty please, could I get the couple of karma points needed to post again?