Matt Goldenberg's Short Form Feed

post by mr-hire · 2019-06-21T18:13:54.275Z · score: 32 (6 votes) · LW · GW · 105 comments

Where I write up some small ideas that I've been having that may eventually become their own top-level posts. I'll start by populating it with a few ideas I've posted as Twitter/Facebook thoughts.

105 comments

Comments sorted by top scores.

comment by mr-hire · 2019-07-10T19:44:45.268Z · score: 29 (8 votes) · LW · GW

FEELINGS AND TRUTH SEEKING NORMS

Stephen Covey says that maturity is being able to find the balance between Courage and Consideration. Courage is the desire to express yourself and say your truth; consideration is recognizing the consequences of what you say on others.

I often wish that this was common knowledge in the rationality community (or just society in general) because I see so many fights between people who are on opposite sides of the spectrum and don't recognize the need for balance.

Courage is putting your needs first, consideration is putting someone else's needs first, and the balance is weighing your needs and theirs equally. There are some other dichotomies that I think point to a similar distinction.

Courage------->Maturity----->Consideration

From parenting literature:

Authoritarian------->Authoritative----->Permissive

From a course on confidence:

Aggressive------->Assertive----->Passive

From attachment theory:

Avoidant------->Secure----->Preoccupied

From my three types of safe spaces: [LW · GW]

We'll make you grow------->Own Your Safety----->We'll protect you.

--------------------------------------------------------------------

Certain people may be wondering how caring about your feelings and others' feelings relates to truth seeking. The answer is that our feelings are based on system 1 beliefs. I suspect this isn't strictly 100% true, but it's a useful model, one behind Focusing, Connection Theory, Cognitive Behavioral Therapy, Internal Double Crux, and a good portion of other successful therapeutic interventions.

How this cashes out is that being able to fully express yourself is a necessary prerequisite to being able to bring all your beliefs to bear on a situation. Now sometimes, when someone is getting upset, it's not a belief like "this thing is bad" but "I believe that believing what you're saying is unsafe for my identity" or some similar belief.

However, if they think it's unsafe to express THAT belief, you end up in a situation where people have to protect themselves under the veneer of motivated reasoning. You end up in a situation where everybody is still protecting themselves, but they're all pretending to do it in pursuit of the truth (or whatever the group says it values).

In this sense, tone arguments are vitally important to keeping clean epistemic norms [LW · GW]. If I'm not allowed to express the belief that the way you're phrasing things means I'm going to die horribly and live alone forever (which may be an actual system 1 belief), then I have to come up with FAKE arguments against the thing you're saying, or leave the group where that belief of mine isn't being respected.

Which brings me back to the definition of Maturity. If you put your need to express what you think is true, in the way you feel is true (which again, is based on your beliefs), over my feeling that I'm going to be alone forever if people take your arguments seriously, you are not only acting immaturely, but fostering an immature community of people who aren't in touch with their own beliefs. What was wrong with this example:

The conversation of the group shifted at the point when Susan started to cry. From that moment, the group did not discuss the actual issue of the student community. Rather, they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault.

Was not that the group considered Susan's feelings, but that they put Susan's feelings above their own beliefs, instead of on equal footing.

------------------------------------------------------------

Here are some situations I've encountered where I wish people knew about the definition of Maturity:

A rationalist friend of mine got upset about being repeatedly asked about a situation after he asked the other person to stop. Another rationalist friend told him, "The mature thing to do would be to control your feelings, like this other rationalist I know." The mature thing is sometimes to control your feelings, but also sometimes to express them loudly, depending on the needs of the moment.

A lover told me that they weren't going to lie to me, they were going to tell it like it is. I said that was in general fine, but that I wanted them to consider how the way and time they told me things affected my feelings. They said no, they would express themselves when and how they wanted, and they expected me to do the same. That relationship didn't last long.

People taking care of a friend to the detriment of their own health.

Soooo many more.

------------------------------------------------------------

Lately, I've been considering adding a third factor, so it's no longer a dichotomy but a trichotomy. Courage, Consideration, and Consequences.

I know there's a strong push around norms in the rationality community to go full courage (expressing your true beliefs) and have other people mind themselves and ignore the consequences (decoupling norms). As I've said elsewhere and above, I think in actuality this leads to a community that trains people to hide certain beliefs and lie about their motives, but do it in a way that can't be called out.

I think you should obviously think about the effects of what you say, on the culture, on the world, and on the person you're speaking to. I have beliefs about this, which cash out in me feeling very upset when people express the truth at all costs, because they're sacrificing their terminal values for instrumental ones. But I'm punished in the rationality community for saying this, so I'm less likely to express it. So the truth-seeking norm is stifling my ability to tell the truth.

I think in general I'd love to see WAY more truth-seeking norms in society, but I think that's because most of society is immature: they're way too much on the side of consideration, with barely a thought to consequences and courage.

Meanwhile, some of the rationality community has gone way too much towards courage, ignoring consideration and consequences.


comment by ChristianKl · 2019-08-12T12:45:20.084Z · score: 8 (3 votes) · LW · GW

I found Taber's radical honesty workshops very useful for a framing of how to deal with telling the truth.

According to him, telling the truth is usually about choosing pain now instead of pain in the future. However, not all kinds of pain are equal. A person who practices yoga has to be able to tell the pain of stretching from the pain of hurting their joints. In the same way, a person who speaks in a radically honest way should be aware of the pain that the statement produces and be able to distinguish whether it's healthy or not.

Courage is only valuable when it comes with the wisdom to know when the pain you are exposing yourself to is healthy and when it isn't. The teenager who acts courageously to signal courage to his friends, without any sense of whether the risk he takes is worth it, isn't mature.

Building up thick emotional walls and telling "the truth" without any consideration of the effects of the act of communication doesn't lead to honest conversation in the radical honesty sense. As it turns out, it also doesn't have much to do with real courage as it's still avoiding the conversations that are actually difficult.

comment by mr-hire · 2019-08-12T13:25:39.636Z · score: 2 (1 votes) · LW · GW

I like this framing, the idea of useful and non-useful pain. It seems like a similarly useful definition of maturity.

comment by ChristianKl · 2019-08-13T09:11:21.412Z · score: 4 (2 votes) · LW · GW

One difference is that different types of pain come with slightly different qualia. This allows communication that's in contact with what's felt in the moment, which isn't there in notions of maturity where maturity is about following rules that certain things shouldn't be spoken.

comment by Lukas_Gloor · 2019-07-20T09:04:36.981Z · score: 3 (6 votes) · LW · GW

Excellent comment!

I know there's a strong idea around norms in the rationality community to go full courage (expressing your true beliefs) and have other people mind thmeselves and ignore the consequences (decoupling norms).

"Have other people mind themselves and ignore the consequences" comes in various degrees and flavors. In the discussions about decoupling norms I have seen (mostly in the context of Sam Harris), it appeared me that they (decoupling norms) were treated as the opposite of "being responsible for people uncharitably misunderstanding what you are saying." So I worry that presenting it as though courage = decoupling norms makes it harder to get your point across, out of worry that people might lump your sophisticated feedback/criticism together with some of the often not-so-sophisticated criticism directed at people like Sam Harris. No matter what one might think of Harris, to me at least he seems to come across as a lot more empathetic and circumspect and less "truth over everything else" than the rationalists whose attitude about truth-seeking's relation to other virtues I find off-putting.

Having made this caveat, I think you're actually right that "decoupling norms" can go too far, and that there's a gradual spectrum from "not feeling responsible for people uncharitably misunderstanding what you are saying" to "not feeling responsible about other people's feelings ever, unless maybe if a perfect utilitarian robot in their place would also have well-justified instrumental reasons to turn on facial expressions for being hurt or upset". I just wanted to make clear that it's compatible to think that decoupling norms are generally good as long as considerateness and tact also come into play. (Hopefully this would mitigate worries that the rationalist community would lose something important by trying to reward considerateness a bit more.)

comment by mr-hire · 2019-07-02T01:22:53.145Z · score: 18 (4 votes) · LW · GW

LESS BAD ORGANIZATIONS

The Gervais Principle says that when an organization is run by Sociopaths, it inevitably devolves into infighting and politics that the sociopaths use to make decisions, and then blame them on others. What this creates is a misaligned organization - people aren't working towards the same thing, and therefore much wasted work goes towards undoing what others have done, or assigning blame to someone that isn't yourself. Organizations with people that aren't aligned can sometimes luck into good outcomes, especially if the most skilled players (the most skilled sociopaths) want them to. They aren't necessarily dead players, but they're running on borrowed time - borrowed against their usefulness to the sociopaths.

Dead organizations are those that are run by Rao's clueless (or less commonly, by Rao's losers, in which case you have a Bureaucracy that outlived its founder). They can't do anything new because they're run by people who can't question the rulesets they're in. As a clueless leading a dead organization, one effective strategy seems to be to accept the memes around you unquestioningly and really execute on them. The most successful people in Silicon Valley make their own rules, but the next tier are the people who take the memes of Silicon Valley and follow them unquestioningly. This is how organizations enter Mythic Mode [LW · GW] - they believe in the culture around them so much that they channel the god of that culture, and are able to attract funding, customers, results, etc. purely through the resulting aura.

Running Good Organizations

Framing the Gervais principle in terms of Kegan:

Losers - Kegan 3

Clueless - Kegan 4

Sociopaths - Kegan 4.5

To run a great organization, the first thing you need is to be led not by a sociopath, but by someone who is Kegan 5. Then you need sociopath repellent.

Short Form Feed is getting too long. I'll write more on good organizations at some point soon.

comment by mr-hire · 2019-09-16T18:43:48.859Z · score: 15 (7 votes) · LW · GW

Been mulling over doing a podcast in which each episode is based on acquiring a particular skillset (self-love, focus, making good investments) instead of just interviewing a particular person.

I interview a few people who have a particular skill (e.g. self-love, focus, creating cash flow businesses), and model the cognitive strategies that are common between them. Then interview a few people who struggle a lot with that skill, and model the cognitive strategies that are common between them. Finally, interview a few people who used to be bad at the skill but are now good, and model the strategies that were common in making the switch.

The episode is cut to tell a narrative of what skills are to be acquired, what beliefs/attitudes need to be let go of and acquired, and the process to acquire them, rather than focusing on interviewing a particular person.

If there's enough interest, I'll do a pilot episode. Comment with what skillset you'd love to see a pilot episode on.

Upvote if you'd have 50% or more chance of listening to the first episode.

comment by Viliam · 2019-09-16T21:51:48.300Z · score: 3 (2 votes) · LW · GW

Sounds interesting!

The question is, how good are people at introspection: what if the strategies they report are not the strategies they actually use? For example, because they omit the parts that seem unimportant, but that actually make the difference. (Maybe positive or negative thinking is irrelevant, but imagining blue things is crucial.)

Or what if "the thing that brings success" causes the narrative of the cognitive strategy, but merely changing the cognitive strategy will not cause "the thing that brings success"? (People imagining blue things will be driven to succeed in love, and also to think a lot about fluffy kittens. However, thinking about fluffy kittens will not make you imagine blue things, and therefore will not bring you success in love. Even if all people successful in love report thinking about fluffy kittens a lot.)

comment by mr-hire · 2019-09-16T23:09:59.615Z · score: 2 (1 votes) · LW · GW

I think it's probably likely that gaining knowledge in this way will have systematic biases. (OK, this is probably true of all types of knowledge acquisition strategies, but you pointed out some good ones for this particular knowledge-gathering technique.)

Anyways, based on my own research (and practical experience over the past few months doing this sort of modelling for people with/without procrastination issues) here are some of the things you can do to reduce the bias:

  • Try to inner sim using the strategy yourself and see if it works.
  • Model multiple people, and find the strategies that seem to be commonalities.
  • Check for congruence with people as they're talking. Use common indicators of cached answers like instant answers or lack of emotional charge.
  • Make sure people are embodied in a particular experience as they discuss, rather than trying to "figure themselves out" from the outside.
  • Use introspection tools from a variety of disciplines like thinking at the edge, coherence therapy, etc that can allow people to get better access to internal models.

All that being said, there will still be bias, but I think with these techniques there's not SO much bias that it's a useless endeavor.

comment by William_Darwin · 2019-09-16T22:05:53.796Z · score: 2 (2 votes) · LW · GW

Sounds interesting. I think it may be difficult to find a person, let alone multiple people on a given topic, who have a particular skill but are also able to articulate it and/or identify the cognitive strategies they use successfully.

Regardless, I'd like to hear about how people reduce repetitive talk in their own heads - how to focus on new thoughts as opposed to old, recurring ones...if that makes sense.

comment by mr-hire · 2019-09-17T00:51:39.508Z · score: 4 (3 votes) · LW · GW

Is this ruminating, AKA repetitively going over bad memories and negative thoughts? Or is it more getting stuck with cached thoughts and not coming up with original things?

comment by mr-hire · 2019-07-10T20:47:32.304Z · score: 15 (5 votes) · LW · GW

SOCIOPATH REPELLENT FOR GOOD ORGANIZATIONS AND COMMUNITIES

The role of the Kegan 5 in a good organization:

1. Reinvent the rules and mission of the organization as the landscape changes, and frame them in a way that makes sense to the Kegan 3's and 4's.

2. Notice when sociopaths are arbitraging the difference between the rules and the terminal goals, and shut it down.

----------

Sociopaths (in the Gervais principle sense) are powerful because they're Kegan 4.5. They know how to take the realities of Kegan 4's and 3's and deftly manipulate them, forcing them into alignment with whatever is a good reality for the Sociopath.

The most effective norm I know to combat this behavior is Radical Transparency. Radical transparency is different from radical honesty. Radical honesty says that you should ignore consideration and consequences in favor of courage [LW · GW]. Radical transparency doesn't make any suggestions about what you should say, only that everyone in the organization should be privy to things everyone says. This makes it exceedingly hard for sociopaths to maintain multiple realities.

  • One way to implement radical transparency is to do what David Ogilvy used to do: if someone used BCC in their emails too much, he would fire them. That's an effective Sociopath repellent.
  • Another way to implement radical transparency is to record all your conversations and make them available to everyone, like Bridgewater does. That's also an effective Sociopath repellent.

Once I was part of an organization that was trying to create a powerful culture. Someone had just told us about the practice of recording all conversations, so another leader in the organization and I decided to try it in one of our conversations. We found we had to keep pausing the recording because the level of honesty we were having with each other would cause our carefully constructed narratives with everyone else to crumble. We were acting as sociopaths, and we had constructed an awful organization.

I left shortly after, but it would have been an exceedingly painful process to convert to a good organization at that time. Creating sociopath-repellent organizations is painful because most of us act like sociopaths some of the time, and operating from a place of universal common knowledge means that we have to be prepared to bring our full selves to every situation, instead of crafting a self for the person in front of us.

---------

The second most effective norm I know to act as sociopath repellent is that anyone should be able to apply the norms to anyone else. Here's how I described that in a previous post:

Anyone should be able to apply the values to anyone else. If "Give critical feedback ASAP, and receive it well" is a value, then the CEO should be willing to take feedback from the new mail clerk. As soon as this stops being the case, the 3's go looking for their validation elsewhere, and the 4's get disillusioned.

Besides selective realities, another way that sociopaths gain advantage is through selective application of the norms when it suits them. By creating norms that anyone can apply to anyone else (and making them clear by providing the opposites, as well as examples) you prevent this behavior from sociopaths and take away one of their main weapons.

Once, I was the leader of an organization (OK, I was actually the captain of a team in high school, but same thing). I was elected leader because I exemplified the norms as well as or better than most others, and had the skills to back it up. Once I became the leader, I eventually ran into challenges with sociopathic (again, in the Gervais Principle sense) behavior trying to undermine my authority. Instead of leaning back on the principles that had earned me the position, I leaned on my power to force people to do what I wanted, while ignoring the principles that got me there. This made others lose faith in the principles and killed morale, leading to infighting and politics.

The lesson for me as a leader was to lead with influence based on moral authority, not power. But the lesson for me as an organization designer was to allow ANYBODY to enforce the norms, not just the leader, and to make this ability part of the norms themselves. This would have immediately prevented me from ruining team morale when I descended into petty behavior.

-------

The final important behavior for sociopath repellent is to notice when the instrumental values of the organization aren't serving the terminal goals, and relentlessly redefine the core values to bring them closer to the spirit, rather than the letter. This is important because Gervais Sociopaths ALSO have this ability to notice when the instrumental values aren't serving the terminal goals, and will arbitrage this difference for their own gain. A good Kegan 5 leader will be able to point to the values, show how they're meant to lead to the results, then lead the organization in redefining them so that sociopaths can't get away with anything.

Occasionally, Kegan 5 leaders will have to take a look at the landscape, notice it's changed, and make substantial changes to the values or mission of an organization to keep up with the current reality.

------

The next question becomes: if you want a long-lasting organization, and a skilled Kegan 5 leader is necessary for a long-running organization, how do you get a steady stream of Kegan 5 leaders? This is The Succession Problem. One answer is to create Deliberately Developmental organizations [LW · GW] that put substantial effort into helping their members become more developed humans. That will be the subject of the next post in the sequence.

comment by ChristianKl · 2019-08-08T15:36:24.806Z · score: 4 (3 votes) · LW · GW

It feels to me unwise to use the term Sociopaths in this way because it means that you lose the ability to distinguish clinical sociopaths from people who aren't.

Distinguishing clinical sociopaths from people who aren't is important because interaction with them is fundamentally different. Techniques for dealing with grief that were taught to prisoners helped reduce recidivism rates for the average prisoner but increased them for sociopaths.

comment by mr-hire · 2019-08-08T16:23:00.823Z · score: 2 (1 votes) · LW · GW

I'm importing the term from Venkatesh Rao and his essays on the Gervais Principle. I agree this is an instance of word inflation, which is generally bad. From now on I'll start referring to this as "Gervais Sociopaths" in my writing.

comment by Viliam · 2019-09-17T21:46:10.553Z · score: 2 (1 votes) · LW · GW

Radical transparency doesn't make any suggestions about what you should say, only that everyone in the organization should be privy to things everyone says. This makes it exceedingly hard for sociopaths to maintain multiple realities.

Seems like it could work, but I wonder what other effects it could have. For example, if someone makes a mistake, you can't tell them discreetly; the only way to provide feedback on a minor mistake is to announce it to the entire company.

By the way, are you going to enforce this rule after working hours? What prevents two bad actors from meeting in private and agreeing to pretend to have some deniable bias in order to further their selfish goals? Like, some things are measurable, but some things are a matter of subjective judgment, and two people could agree to always have their subjective judgment colored in each other's favor, and against their mutual enemy. In a way that even if other people notice, you could still insist that what X does simply feels right to you, and what Y does rubs you the wrong way even if you can't explain why.

Also, people in the company would be exposed to each other, and perhaps the vulnerability would cancel out. But then someone leaves, is no longer part of the company, but still has all the info on the remaining members. Could this info be used against the former colleagues? The former colleagues still have info on the one that left, but not on his new colleagues. Also, if someone strategically joins only for a while, he could take care not to expose himself too much, while everything else would be exposed to him.

the CEO should be willing to take feedback from the new mail clerk.

This assumes the new mail clerk will be a reasonable person. Someone who doesn't understand the CEO's situation or loves to create drama could use this opportunity to give the CEO tons of useless feedback. And then complain about hypocrisy when others tell him to slow down.

comment by mr-hire · 2019-09-17T23:07:08.256Z · score: 2 (1 votes) · LW · GW

Seems like it could work, but I wonder what other effects it could have. For example, if someone makes a mistake, you can't tell them discreetly; the only way to provide feedback on a minor mistake is to announce it to the entire company. By the way, are you going to enforce this rule after working hours?

What prevents two bad actors from meeting in private and agreeing to pretend to have some deniable bias in order to further their selfish goals? Like, some things are measurable, but some things are a matter of subjective judgment, and two people could agree to always have their subjective judgment colored in each other's favor, and against their mutual enemy. In a way that even if other people notice, you could still insist that what X does simply feels right to you, and what Y does rubs you the wrong way even if you can't explain why.

Also, people in the company would be exposed to each other, and perhaps the vulnerability would cancel out. But then someone leaves, is no longer part of the company, but still has all the info on the remaining members. Could this info be used against the former colleagues? The former colleagues still have info on the one that left, but not on his new colleagues. Also, if someone strategically joins only for a while, he could take care not to expose himself too much, while everything else would be exposed to him.

I had already updated away from this particular tool, and this comment makes me update further. I still have the intuition that this can work well in a culture that has transcended things like blame and shame, but for 99% of organizations radical transparency might not be the best tool.

This assumes the new mail clerk will be a reasonable person. Someone who doesn't understand the CEO's situation or loves to create drama could use this opportunity to give the CEO tons of useless feedback. And then complain about hypocrisy when others tell him to slow down.

Yes, there are in fact areas where this can break down. Note that ANY rule can be gamed, and the proper thing to do is to refer back to values rather than trying to make ungameable rules. In this case, the others might in fact point out that the values of the organization are such that everyone should be open to feedback, including mail clerks. If this happened persistently with say 1 in every 4 people, then the organization would look at their hiring practices to see how to reduce that. If this happened consistently with new hires, the organization would look at their training practices, etc.

The sociopath repellent here only works in the context of the other things I've written about good organizations, like strongly teaching and ingraining the values and making sure decisions always point back to them, having strong vetting procedures, etc. Viewing this or other posts in the series as a list of tips risks taking them out of context.


comment by An1lam · 2019-09-17T23:51:26.400Z · score: 1 (1 votes) · LW · GW

This note won't make sense to anyone who isn't already familiar with the Sociopath framework in which you're discussing this, but I did want to call out that Venkat Rao (at least when he wrote the Gervais Principle) explicitly stated that sociopaths are amoral and has fairly clearly (especially relative to his other opinions) stated that he thinks having more Sociopaths wouldn't be a bad thing. Here are a few quotes from Morality, Compassion, and the Sociopath which talk about this:

So yes, this entire edifice I am constructing is a determinedly amoral one. Hitler would count as a sociopath in this sense, but so would Gandhi and Martin Luther King.

In all this, the source of the personality of this archetype is distrust of the group, so I am sticking to the word “sociopath” in this amoral sense. The fact that many readers have automatically conflated the word “sociopath” with “evil” in fact reflects the demonizing tendencies of loser/clueless group morality. The characteristic of these group moralities is automatic distrust of alternative individual moralities. The distrust directed at the sociopath though, is reactionary rather than informed.

Sociopaths can be compassionate because their distrust only extends to groups. They are capable of understanding and empathizing with individual pain and acting with compassion. A sociopath who sets out to be compassionate is strongly limited by two factors: the distrust of groups (and therefore skepticism and distrust of large-scale, organized compassion), and the firm grounding in reality. The second factor allows sociopaths to look unsentimentally at all aspects of reality, including the fact that apparently compassionate actions that make you “feel good” and assuage guilt today may have unintended consequences that actually create more evil in the long term. This is what makes even good sociopaths often seem callous to even those among the clueless and losers who trust the sociopath’s intentions. The apparent callousness is actually evidence that hard moral choices are being made.

When a sociopath has the resources for (and feels the imperative towards) larger scale do-gooding, you get something like Bill Gates’ behavior: a very careful, cautious, eyes-wide-open approach to compassion. Gates has taken on a world-hunger sized problem, but there is very little ceremony or posturing about it. It is sociopath compassion. Underlying the scale is a residual distrust of the group — especially the group inspired by oneself — that leads to the “reluctant messiah” effect. Nothing is as scary to the compassionate and powerful sociopath as the unthinking adulation and following inspired by their ideas. I suspect the best among these lie awake at night worrying that if they were to die, the headless group might mutate into a monster driven by a frozen, unexamined moral code. Which is why the smartest attempt to engineer institutionalized doubt, self-examination and formal checks and balances into any systems they design.

I hope my explanation of the amorality of the sociopath stance makes a response mostly unnecessary: I disagree with the premise that “more sociopaths is bad.” More people taking individual moral responsibility is a good thing. It is in a sense a different reading of Old Testament morality — eating the fruit of the tree of knowledge and learning to tell good and evil apart is a good thing. An atheist view of the Bible must necessarily be allegorical, and at the risk of offending some of you, here’s my take on the Biblical tale of the Garden of Eden: Adam and Eve were clueless, having abdicated moral responsibility to a (putatively good) sociopath: God. Then they became sociopaths in their own right. And were forced to live in an ecosystem that included another sociopath — the archetypal evil one, Satan — that the good one could no longer shield them from. This makes the “descent” from the Garden of Eden an awakening into freedom rather than a descent into baseness. A good thing.

I apologize if this just seems like nitpicking your terminology, but I'm calling it out because I'm curious whether you agree with his abstract definition but disagree with his moral assessment of Sociopaths, vice versa, or something else entirely? As a concrete example, I think Venkat would argue that early EA was a form of Sociopath compassion and that for the sorts of world-denting things a lot LWers tend to be interested in, Sociopathy (again, as he defines it) is going to be the right stance to take.

comment by mr-hire · 2019-09-18T00:00:04.876Z · score: 2 (1 votes) · LW · GW

Rao's sociopaths are Kegan 4.5, they're nihilistic and aren't good for long lasting organizations because they view the notion of organizational goals as nonsensical. I agree that there's no moral bent to them but if you're trying to create an organization with a goal they're not useful. Instead, you want an organization that can develop Kegan 5 leaders.

comment by Raemon · 2019-09-18T00:15:03.244Z · score: 7 (2 votes) · LW · GW

This doesn't seem like it's addressing An1lam's question, though. Gandhi doesn't seem nihilist. I assume (from this quote, which was new to me) that in Kegan terms, Rao probably meant something ranging from 4.5 to 5.

comment by mr-hire · 2019-09-18T00:59:12.541Z · score: 5 (2 votes) · LW · GW

I think Rao was at Kegan 4.5 when he wrote the sequence and didn't realize Kegan 5 existed. Rao was saying "There's no moral bent" to Kegan 4.5 because he was at the stage of realizing there was no such thing as morals.

At that level you can also view Kegan 4.5's as obviously correct and as the ones who end up moving society forward in interesting directions; they're forces of creative destruction. There's no view of Kegan 5 at that level, so you'll mistake Kegan 5's for either Kegan 3's or other Kegan 4.5's, which may be the cause of the confusion here.

comment by mr-hire · 2019-08-09T13:56:02.837Z · score: 13 (6 votes) · LW · GW

There's a pattern I've noticed in myself that's quite self-destructive.

It goes something like this:

  • Meet new people that I like, try to hide all my flaws and be really impressive, so they'll love me and accept me.
  • After getting comfortable with them, noticing that they don't really love me if they don't love the flaws that I haven't been showing them.
  • Stop taking care of myself, downward spiral, so that I can see they'll take care of me at my worst and I know they REALLY love me.
  • People justifiably get fed up with me not taking care of myself, and reject me. This triggers the thought that I'm unlovable.
  • Because I'm not lovable, when I meet new people, I have to hide my flaws in order for them to love me.

This pattern is destructive, and has been one of the main things holding me back from becoming as self-sufficient as I'd like. I NEED to be dependent on others to prove they love me.

What's interesting about this pattern is how self-defeating it is. Do people not wanting to support me mean that they don't love me? No, it just means that they don't want to support another adult. Does hiding all my flaws help people accept me? No, it just sets me up for a crash later. Does constantly crashing from successful ventures help any of this? No, it makes it harder to seem successful, AND harder to be able to show my flaws without having people run away.

comment by Raemon · 2019-08-09T22:21:08.250Z · score: 3 (1 votes) · LW · GW

Don't have much else to say for now but :(

comment by ChristianKl · 2019-08-12T11:06:04.107Z · score: 2 (1 votes) · LW · GW

That sounds to me like the belief "I'm not lovable" causes you trouble and it would make sense to get rid of it. Transform Yourself provides one framework of how to go about it. The Lefkoe method would be a different one.

comment by mr-hire · 2019-08-12T11:15:29.004Z · score: 5 (2 votes) · LW · GW

I've tried both of those, as well as a host of other tools. I only recently (in the past year) developed the belief "I am lovable", which allowed me to see this pattern. I can now belief-report both "I am lovable" and "I'm not lovable".

comment by mr-hire · 2019-07-22T20:50:38.309Z · score: 10 (5 votes) · LW · GW

I think philosophical bullet biting is usually wrong. It can be useful to make a theory that you KNOW is wrong, and bite a bullet in order to make progress on a philosophical problem. However, I think it can be quite damaging to accept a theory of ethics that feels practical and consistent to you, but breaks some of your major moral intuitions. In this case I think it's better to say "I don't know how to come up with a consistent theory for this part of my actions, but I'll follow my gut instead."

Note that this is the opposite of becoming a robust agent. [LW · GW] However, the alternative is CREATING a robust agent that is not in fact aligned with its creator. I've seen people who adopted a moral view for consistency, and now make choices that they NEVER would have endorsed before they chose to bite bullets for consistency.

I think this is one of my major disagreements with Raemon's view of becoming a robust agent.

comment by Said Achmiz (SaidAchmiz) · 2019-07-22T22:23:27.507Z · score: 5 (2 votes) · LW · GW

Related: this old comment of mine about rules and exceptions [LW · GW].

comment by Raemon · 2019-07-22T22:48:05.018Z · score: 3 (1 votes) · LW · GW

FYI I think that'd make a good post with a handy title that'd make it easier to refer to

comment by Said Achmiz (SaidAchmiz) · 2019-07-23T03:39:21.787Z · score: 5 (3 votes) · LW · GW

Done [LW · GW].

comment by Pattern · 2019-07-23T02:29:58.042Z · score: 0 (0 votes) · LW · GW

"There are no exceptions." "Rules contain exceptions." "How to make Rules." "How to make Exceptions."

comment by Raemon · 2019-07-22T21:33:44.380Z · score: 3 (1 votes) · LW · GW

Thanks for the crisp articulation.

One short answer is: "I, Raemon, do not really bite bullets. What I do is something more like 'flag where there were bullets I didn't bite, or areas that I am confused about, and mark those on my Internal Map with a giant red-pen PLEASE EVALUATE LATER WHEN YOU HAVE TIME AND/OR ARE WISER label.'"

One example of this: I describe my moral intuitions as "Sort of like median-preference utilitarianism, but not really. Median-preference-utilitarianism seems to break slightly less often in ways slightly more forgiveable than other moral theories, but not by much."

Meanwhile, my decision-making is something like "95% selfish, 5% altruistic within the 'sort of but not really median-preference-utilitarian-lens', but I look for ways for the 95% selfish part to get what it wants while generating positive externalities for the 5% altruistic part." And I endorse people doing a similarly hacky system as they figure themselves out.

(Also, while I don't remember exactly how I phrased things, I don't actually think robust agency is a thing people should pursue by default. It's something that's useful for certain types of people who have certain precursor properties. I tried to phrase my posts like 'here are some reasons it might be better to be more robustly-agentic, where you'll be experiencing a tradeoff if you don't do it', but not making the claim that the RA tradeoffs are correct for everyone)

comment by Raemon · 2019-07-22T21:35:36.002Z · score: 5 (2 votes) · LW · GW

On the flipside, I think a disagreement I have with habryka (or did, a year or two ago), was something like habryka saying: "It's better to build an explicit model, try to use the model for real, and then notice when it breaks, and then build a new model. This will cause you to underperform initially but eventually outclass those who were trying to hack together various bits of cultural wisdom without understanding them."

I think I roughly agree with that statement of his; I just think that the costs of lots of people doing this at once are fairly high, and that you should instead do something like 'start with vague cultural wisdom that seems to work and slowly replace it with more robust things as you gain skills that enable you to do so.'

comment by mr-hire · 2019-07-22T21:51:26.590Z · score: 4 (2 votes) · LW · GW
start with vague cultural wisdom that seems to work and slowly replace it with more robust things as you gain skills that enable you to do so.

I think the thing I actually do here most often is start with a bunch of incompatible models that I learned elsewhere, then try to randomly apply them and see my results. Over time I notice that certain parts work and don't, and that certain models tend to work in certain situations. Eventually, I examine my actual beliefs on the situation and find something like "Oh, I've actually developed my own theory of this that ties together the best parts of all of these models and my own observations." Sometimes I help this along explicitly by introspecting on the switching rules/similarities and differences between models, etc.

This feels related to the thing that happens with my moral intuitions, except that there are internal models that didn't seem to come from outside or my own experiences at all, basic things I like and dislike, and so sometimes all these models converge and I still have a separate thing that's like NOPE, still not there yet.

comment by Raemon · 2019-07-22T22:03:59.881Z · score: 3 (1 votes) · LW · GW
I think the thing I actually do here most often is start with a bunch of incompatible models that I learned elsewhere, then try to randomly apply them and see my results.

This seems basically fine, but I mean my advice to apply to, like, 4 and 12 year olds who don't really understand what a model is. Anything model-shaped or robust-shaped has to bootstrap from something that's more cultural-wisdom-shaped. (But I probably agree that you can have cultural wisdom that more directly bootstraps you into 'learn to build models'.)

comment by mr-hire · 2019-07-22T22:14:31.961Z · score: 3 (2 votes) · LW · GW

I think I was viewing "cultural wisdom" as basically its own black-box model, and in practice I think this is basically how I treat it.

Nitpick: humans are definitely creating models at 12, and able to understand that what they're creating are models.

comment by Pattern · 2019-07-23T02:12:09.635Z · score: 1 (1 votes) · LW · GW

How does this compare with empiricism - specifically saying "This is testable, so let's test it."?

comment by mr-hire · 2019-07-23T18:36:25.509Z · score: 2 (1 votes) · LW · GW

I think there's an inferential distance step I'm missing here, because I'm actually a bit at a loss as to how to relate my post to empiricism.

comment by mr-hire · 2019-08-28T13:57:14.771Z · score: 9 (5 votes) · LW · GW

The four levels of listening, from some old notes:

1. Content - Do you actually understand what this person is saying? Do they understand that you understand?

2. Subtext - Do you actually understand how this person feels about what they're saying? Do they understand that you understand?

3. Intent- Do you actually understand WHY this person is saying what they're saying? Do they understand that you understand?

4. Paradigm - Do you actually understand what all of the above says about who this person is and how they view the world? Do they understand that you understand?

comment by mr-hire · 2019-08-22T21:13:05.464Z · score: 9 (6 votes) · LW · GW

A frequent failure mode that I have as a leader:

  • Someone comes on to a new project, and makes a few suggestions.
  • All of those suggestions are things we/I have thought about and discussed in detail, and we have detailed reasons why we've made the decisions we have.
  • I tell the person those reasons.
  • The person comes away feeling like the project isn't really open to criticism or feedback, and their ideas won't be heard.

I think a good policy is to just say yes to WHATEVER experiment someone who is new to the project proposes, and let them take their own lumps or be pleasantly surprised.

But, despite having known this for a bit, I always seem to forget to do this when it matters. I wonder if I can add this to our onboarding checklists.

comment by mr-hire · 2019-08-24T15:10:54.076Z · score: 6 (4 votes) · LW · GW

Some concrete updates I had around this idea, based on discussion on Facebook.

  • One really relevant factor is the criticism coming from a person in authority; leaders should be extra careful about criticizing ideas. By steering them towards other, less authoritative figures who you think will give valid critiques, you can avoid this failure.
  • Another potential obvious pitfall here is people feeling like they were set up to fail by not having all the relevant information. The idea here is to make people feel like they have agency, obviously not to hide information.
  • Even if you do the above, people can feel patronized if it seems like you're doing this as a tactic because you think they can't take criticism. This can be true even if giving them criticism would indeed be harmful for the team dynamic. Thus, emphasizing ways to increase agency, rather than avoiding criticism, is key here.

comment by Raemon · 2019-08-24T17:29:58.443Z · score: 5 (3 votes) · LW · GW

This combination of failure modes seems pretty dicey.

I think I've encountered something similar in relationships, where my naive thought was "they're doing something wrong/harmful and I should help them avoid it" but I eventually realized "them having an internal locus of control and not feeling like I'm out to micromanage them is way more important than any given suboptimal thing they're doing."

comment by An1lam · 2019-08-22T22:29:26.015Z · score: 6 (6 votes) · LW · GW

I've rarely seen teams do this well and agree that your proposed approach is much better than the alternative in many cases. I've definitely seen cases where insiders thought something was impossible and then a new person went and did it. (I've been the insider, the new person who ignored the advice and succeeded, and the new person who ignored the advice and failed.)

That said, I think there's a middle ground where you convey why you chose not to do something but also leave it open for the person to try anyway. The downside of just letting them do it without giving context is they may fail for a silly rather than genuine reason.

What I'm suggesting could look something like the following.

That's an awesome idea! This is something some of us explored a bit previously and decided not to pursue at the time for X, Y, and Z reasons. However, as insiders, we are probably biased towards viewing things as hard, so it's important for team health to have new people re-try and re-explore things we may have already thought about. You should definitely not take our reasons as final and feel free to try The Thing if you still feel like it might work or you'll learn something by doing so.

comment by mr-hire · 2019-08-18T15:16:21.541Z · score: 9 (5 votes) · LW · GW

I've had a draft sitting in my posts section for months about shallow, deep, and transfer learning. Just made a Twitter thread that gets at the basics. And figured I'd post here to gauge interest in a longer post with examples.

Love Kindle, love Evernote. But never highlight good ideas. It's level one reading. Instead, use written notes and link important ideas to previous concepts you know.

Level 1: What's important? What does this mean?

Level 2: How does this link to compare/contrast to previous concepts or experiences? Do I believe this?

Level 3: How is this a metaphor for seemingly unrelated concepts? How can this frame my thinking?

4 questions to get to level 2:

  • How is this similar to other things I know?
  • How is this different from other things I know?
  • What previous experiences can I relate this to?
  • In what circumstances would I use this knowledge? How would I use it?

3 Questions to ask to get to level 3:

  • How does it feel to view the world through this lens?
  • How does this explain everything?
  • What is this a metaphor for?

comment by Raemon · 2019-08-18T17:38:22.843Z · score: 4 (2 votes) · LW · GW

I notice that this all makes perfect sense but that I don't expect to use it that much.

Which I think is more of a failure on my part to set up my life such that I can use my "deliberate effort" brain while reading. I mostly do reading in the evening when I'm tired (where the base situation was "using Facebook or something", and I was trying to at least get extra value out of my dead-brain state).

Currently my "deliberate effort" hours go into coding, and writing. This seems probably bad, but it feels like a significant sacrifice to do less of either. Mrr.

comment by mr-hire · 2019-08-18T22:36:36.438Z · score: 3 (2 votes) · LW · GW

Note that this mostly doesn't feel like deliberate effort anymore now that it's a habit for me. It took maybe 3 months of being deliberate effort, but now my mind just automatically notices something important while I'm learning and asks "what is this related to?"

I haven't checked if reading is more tiring than before, but I also haven't noticed anything to that effect.

comment by Raemon · 2019-08-18T22:41:01.396Z · score: 4 (2 votes) · LW · GW

That all makes sense – once the habit is ingrained I wouldn't expect it to be deliberate effort per se (but, would still require me to make time for this that isn't 'right before I go to sleep while lying in bed')

comment by mr-hire · 2019-08-04T17:51:46.309Z · score: 9 (7 votes) · LW · GW

I've had a similar conversation many times recently related to Kegan's levels of development and Constructive-developmental theory:

X: Okay, but isn't this just pseudoscience like Myers-Briggs?

Me: No, there's been a lot of scientific research into constructive-developmental theory.

X: Yeah, but does it have strong inter-rater reliability?

Me: Yes, it has both strong inter-rater reliability and test-retest reliability. In addition, it has strong correlation with other measures of adult development that themselves have a strong evidence base.

X: Sure, but it seems so culturally biased.

Me: There are also strong preliminary reports on cross-cultural validity.


It makes me want to write a post summarizing the evidence for Constructive-Developmental Theory so people don't keep pattern-matching it to less-valid psychometrics like Myers-Briggs.

comment by Raemon · 2019-08-04T19:04:06.172Z · score: 23 (8 votes) · LW · GW

I'd be interested in a post that was just focused on laying out what the empirical evidence was (preferably decoupled from trying to sell me on the theory too hard)

comment by Raemon · 2019-08-04T19:22:42.163Z · score: 8 (4 votes) · LW · GW

(a bit more details on how I'm thinking about this. Note that this is just my own opinion, not necessarily representing any LW team consensus)

I'm generally interested in getting LW to a state where

  • it's possible to bring up psych theories that seem wooey at first glance, but
  • it's also clearer:
    • what the epistemic status of those theories are
    • what timeframes are reasonable to expect that epistemic status to reach a state where we have a better sense of how true/useful the theory is
    • have some kind of plan to deprecate weird theories if they turn out to be BS

I think there are some additional constraints on developmental theories [LW · GW], where for social reasons I think it makes sense to lean harder in the "strong standards of evidence" direction. I think Dan Speyer's suspicions (articulated on FB) are pretty reasonable, and whether they're reasonable or not, they also point to a fact-of-the-matter that needs to be addressed anyhow.

I've recently updated that developmental theories might be pretty important, but I think there's a lot of ways to use them poorly and I wanna get it right.

comment by Said Achmiz (SaidAchmiz) · 2019-08-10T07:11:40.918Z · score: 11 (5 votes) · LW · GW

I have seen much talk on Less Wrong lately of “development stages” and “Kegan” and so forth. Naturally I am skeptical; so I do endorse any attempt to figure out if any of this stuff is worth anything. To aid in our efforts, I’d like to say a bit about what might convince me to be a little less skeptical.

A theory should explain facts; and so the very first thing we’d have to do, as investigators, is figure out if there’s anything to explain. Specifically: we would have to look at the world, observe people, examine their behavior, their patterns of thinking and interacting with other people, their professed beliefs and principles, etc., etc., and see if these fall into any sorts of patterns or clusters, such that they may be categorized according to some scheme, where some people act like this [and here we might give some broad description], while other people act like that.

(Clearly, the answer to this question would be: yes, people’s behavior obviously falls into predictable, clustered patterns. But what sort, exactly? Some work would need to be done, at least, to enumerate and describe them.)

Second, we would have to see whether these patterns that we observe may be separated, or factored, by “domain”, whereby there is one sort of pattern of clusters in how people think and act and speak, which pertains to matters of religion; and another pattern, which pertains to relationship to family; and another pattern, which pertains to preferences of consumption; etc. We would be looking for such “domains” which may be conceptually separated—regardless of whether there were any correlation between clustering patterns in one domain or another.

(Here again, the answer seems clearly to be that yes, such domains may be defined without too much difficulty. However, the intuition is weaker than for the previous question; and we are less sure that we know what it is we’re talking about; and it becomes even more important to be specific and explicit.)

Now we would ask two further questions (which might be asked in parallel). Third: does categorization of an individual into one cluster or another, in any of these domains, correlate with that individual’s category membership in categories pertaining to any observable aspect of human variation? (Such observable aspects might be: cultural groupings; gender; weight; height; age; ethnicity; socioeconomic status; hair color; various matters of physical health; or any of a variety of other ways in which people demonstrably differ.) And fourth: may the clusters in any of these domains sensibly be given a total ordering (and the domain thereby be mapped onto a linear axis of variation)?

Note the special import of this latter question. Prior to answering it, we are dealing exclusively with nominal data values. We now ask whether any of the data we have might actually be ordinal data. The answer might be “no” (for instance, you prefer apples, and I prefer oranges; this puts us in different clusters within the “fruit preferences” domain of human psychology, but in no sense may these clusters be arranged linearly).

Our fifth question (conditional on answering yes to all four of the previous questions) is this: among our observed domains of clustering, and looking in particular at those for which the data is of an ordinal nature, are there any such that the dimension of variation has any normative aspect? That is: is there a domain such that we might sensibly say that it is better to belong to clusters closer to one end of its spectrum of variation, than to belong to clusters closer to the other end? (Once more note that the answer might be “no”: for example, suppose that some people fidget a lot, while others do not fidget very much. Is it better to be a much-fidgeter than a not-much-fidgeter? Well… not really; nor the reverse; at least, not in any general way. Maybe fidgeting has some advantages, and not fidgeting has others, etc.; who knows? But overall the answer is “no, neither of these is clearly superior to the other; they’re just one of those ways in which people differ, in a normatively neutral way”.)

Finally, our sixth question is: does there exist any domain of clustering in human behavioral/psychological variation for which all of these are true:

  • That its clusters may naturally be given a total order (i.e., arranged linearly);
  • That this linear dimension has normative significance;
  • That membership in its categories is correlated primarily with category membership pertaining to one aspect of human variation (rather than being correlated comparably with multiple such aspects);
  • That in particular, membership in this domain’s clusters is correlated primarily with age.

Note that we have asked six (mostly[1]) empirical questions about humanity. And we have had six chances to answer in the negative.

And note also that if we answer any of these questions in the negative, then any and all theories of “moral development” (or any similar notion) are necessarily nonsense—because they purport to explain facts which (in this hypothetical scenario) we simply do not observe. Without any further investigation, we can dispose of the lot of them with extreme prejudice, because they are entirely unmotivated by the pre-theoretical facts.

So, this is what I would like to see from any proponents of Kegan’s theory, or any similar ones: a detailed, thorough, and specific examination (with plenty of examples!) of the questions I give in this comment—discussed with utter agnosticism about even the concept of “moral development”, “adult development” or any similar thing. In short: before I consider any defense of any theory of “adult development”, I should like to be convinced of such a theory’s motivation.


  1. The question of normative import is not quite empirical, but it may be operationalized by considering intersubjective judgments of normative import; that is, in any case, more or less what we are talking about in the first place. ↩︎

comment by mr-hire · 2019-08-10T13:33:51.189Z · score: 2 (1 votes) · LW · GW

Why must a developmental theory be normative? A descriptive theory that says all humans go through stages where they get less moral over time still works as an interesting descriptive theory. Similarly, there are certain developmental stages that probably aren't normative if everyone around you is at a lower developmental stage, but such a stage can still be descriptive, as the next stage most humans go through if they do progress.

comment by Said Achmiz (SaidAchmiz) · 2019-08-10T13:49:47.800Z · score: 2 (1 votes) · LW · GW

I did not say anything about the theory being normative. “A descriptive theory that says all humans go through stages where they get less moral over time” is entirely consistent with what I described. Note that “moral” is a quality with normative significance—compare “get less extraverted over time” or “get less risk-seeking over time”.

comment by mr-hire · 2019-08-10T14:26:15.753Z · score: 2 (1 votes) · LW · GW

Ahh, so is the idea just that you don't care about a specific type of development if it doesn't have consequences that matter?

comment by Said Achmiz (SaidAchmiz) · 2019-08-10T22:43:43.832Z · score: 0 (2 votes) · LW · GW

Whether I care is hardly at issue; all the theories of “adult development” and similar clearly deal with variation along normatively significant dimensions.

If, for some reason, you propose to defend a theory of development that has no such normative aspect, then by all means remove that requirement from my list. (Kegan’s theory, however, clearly falls into the “normatively significant variation” category.)

comment by mr-hire · 2019-08-11T00:25:58.800Z · score: 2 (1 votes) · LW · GW

I think that, e.g., constructive-developmental theory studiously avoids normative claims. The level that fits best is context-dependent on the surrounding culture.

comment by Said Achmiz (SaidAchmiz) · 2019-08-11T00:42:49.111Z · score: 2 (1 votes) · LW · GW

Fair enough. Assuming that’s the case, then anyone proposing to defend that particular theory is exempt from that particular question.

comment by mr-hire · 2019-08-11T01:33:50.015Z · score: 3 (2 votes) · LW · GW

Just in case it isn't clear, constructive-developmental theory and "Kegan's levels of development" are two names for the same thing.

comment by Said Achmiz (SaidAchmiz) · 2019-08-11T06:26:47.180Z · score: 2 (1 votes) · LW · GW

Ah, my mistake.

However, in that case I don’t really understand what you mean. But, in any case, the rest of my original comment stands.

I look forward to any such detailed commentary on the fact-based motivation for any sort of developmental theory, from anyone who feels up to the task of providing such.

comment by mr-hire · 2019-08-07T03:46:35.233Z · score: 5 (2 votes) · LW · GW

Looks like Sarah Constantin beat me to it, although I think her lit review missed a few studies I've seen.

https://srconstantin.wordpress.com/2017/04/06/are-adult-developmental-stages-real/

comment by ChristianKl · 2019-08-08T15:25:20.783Z · score: 4 (2 votes) · LW · GW

From her post:

In a study of West Point students, average inter-rater agreement on the Subject-Object Interview was 63%, and students developed from stage 2 to stage 3 and from stage 3 to stage 4 over their years in school.

Are you calling that 63% strong inter-rater reliability, or are you referring to other studies?

comment by mr-hire · 2019-08-08T20:21:48.762Z · score: 2 (1 votes) · LW · GW

There are, as far as I know, 3 studies on this. She found the one with 63% agreement, whereas the previous two studies had about 80% agreement.
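(Context for the statistic being debated: raw percent agreement is the simplest inter-rater measure, and psychometrics papers often report it alongside a chance-corrected statistic such as Cohen's kappa, which is stricter. A minimal sketch with made-up ratings, assuming Python and scikit-learn:)

```python
# Minimal sketch of two common inter-rater statistics, using made-up
# Subject-Object-Interview-style stage ratings (not data from the studies).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([3, 3, 4, 2, 3, 4, 4, 3, 2, 3])
rater_b = np.array([3, 4, 4, 2, 3, 4, 3, 3, 2, 3])

# Raw percent agreement: fraction of subjects given the same stage.
percent_agreement = (rater_a == rater_b).mean()

# Cohen's kappa corrects for the agreement expected by chance alone.
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"percent agreement: {percent_agreement:.0%}, kappa: {kappa:.2f}")
```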

comment by Raemon · 2019-08-07T04:09:40.430Z · score: 3 (1 votes) · LW · GW

Oh, I was looking for that recently. Apparently it predates the LessWrong integration with her blog.

comment by habryka (habryka4) · 2019-08-08T20:23:35.246Z · score: 2 (1 votes) · LW · GW

My general takeaway from that post was that in terms of psychometric validity, most developmental psychology is quite bad. Did I miss something?

This doesn't necessarily mean the underlying concepts aren't real, but in terms of the quality metrics that psychometrics tends to assess things on, I don't think the evidence base is very good.

comment by mr-hire · 2019-08-09T00:32:43.546Z · score: 2 (1 votes) · LW · GW

I haven't looked into general developmental theories the way Sarah Constantin did, but I have looked into the studies on constructive-developmental theory.

My takeaway (mostly supported by her research, although she misses a lot) is that basically all the data points towards confirming the theory, with high information value from further research:

  • high interrater reliability
  • high test-retest reliability
  • good correlation with age
  • good correlations with age in multiple cultures
  • good correlation with measures of certain types of achievement like leadership

As Sarah points out, the biggest thing missing is evidence that the steps proceed in order with no skipping, but as far as I can tell there's no counterevidence for that either. Also missing: replications of the other findings.

Perhaps if I had gone into this having looked at a bunch of other failed developmental theories, my priors would have been such that I would have described it as "not enough evidence to confirm the theory". However, given this is the only developmental theory I've looked into, my takeaway was "promising theory with preliminary support, needs more confirming research".

comment by mr-hire · 2019-08-04T21:03:31.188Z · score: 5 (2 votes) · LW · GW

Yes, this is what I'm imagining. A simple post that just summarizes the epistemic status, potentially as the start of a sequence for later posts that use it as a building block for other ideas.

comment by mr-hire · 2019-06-21T18:31:23.983Z · score: 9 (5 votes) · LW · GW

CHANGE IS GOOD

Something I've been noticing lately in a lot of places is that many people have the intuition that change is bad, and the default should be to maintain the status quo. This is epitomized by the Zvi article Change is Bad.

I tend to have the exact opposite intuition, and feel a sense of dread or foreboding when I see a lack of change in institutions or individuals I care about, and work to create that change when possible. Here are some of the models that seem to be behind this:

  • Change is inevitable. The broader systems in which the systems I care about exist are always changing (the culture, the economic system, etc). Trying to keep things static is MORE effort than going with the flow, so I don't buy the "conserve your energy" argument. Anyone who has ever TRIED to fight the flow of broader systems within their local system knows this to be true.
  • By default, change is inevitable but usually non-directed. What I mean by that is that, as stated above, the systems are always changing. However, many times this is a result of local actors following local incentives and acting within local constraints. Rarely are the trickle-down effects on the things you care about in any way, shape, or form directed towards making that thing better for human flourishing. This means that there's much gain to be had by simply working to direct and shape the change that will be happening anyway to make it actually GOOD. This is also an argument for EAs being less shy about systemic change.
  • Even if change isn't inevitable, entropy is. Even in a relatively stable system, the default is not for things to stay the same, but for them to fall or drift apart. I've found that change injects a NEWNESS into the system that provides its own momentum. This is all metaphorical, but will probably hit for anyone who has run an organization that meets regularly. If you keep doing the same thing, there's a staleness that causes people to drift away. Trying to rally the troops and prevent this drifting in the face of the staleness is like pulling teeth. However, doing something NEW in the organization, organizing a new event, a new initiative, anything, provides new energy that makes people excited to continue, and is actually EASIER than simply struggling against the staleness.
comment by mr-hire · 2019-09-02T16:40:15.229Z · score: 8 (3 votes) · LW · GW

Does anyone here struggle with perfectionism? I'd love to talk to you and get an understanding of your experience.

comment by mr-hire · 2019-08-27T14:47:14.125Z · score: 8 (5 votes) · LW · GW

One of the enduring insights I've gotten from elityre is that different world models are often about the weight and importance of different ideas, not about how likely those things are to be true. For instance, The Elephant in the Brain isn't about whether or not signalling exists, it's about how central signalling is to the worldview of Simler and Hanson. Similarly with Antifragility and Nassim Taleb.

One way to say this is that disagreement is often about the importance of an idea, not its truth.

Another way to say this is that worldview differences are often about the centrality and interconnectedness of a node within a graph, and not its existence.

A third way to say this is that disagreements are often about tradeoffs, not truths.

I've used all of these when trying to point to this idea, but I'd like a single, catchy word or phrase to use and a blog post I can point to so that this idea can enter the rationalist lexicon. Does this blogpost already exist? If not, any ideas for what to name this?
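To make the graph framing concrete, here's a toy sketch (assuming Python and networkx; the nodes, edges, and weights are invented purely for illustration). The same "signalling" node exists in both worldview graphs, but its centrality differs sharply:

```python
# Toy illustration (invented edges and weights): the same node can exist in
# two worldview graphs while having very different centrality in each.
import networkx as nx

def worldview(weighted_edges):
    g = nx.Graph()
    g.add_weighted_edges_from(weighted_edges)
    return g

# A Simler/Hanson-flavored map: "signalling" connects to nearly everything.
signalling_central = worldview([
    ("signalling", "art", 1.0), ("signalling", "charity", 1.0),
    ("signalling", "education", 1.0), ("signalling", "medicine", 1.0),
    ("happiness", "education", 0.2),
])

# Another map that also contains "signalling", but at the periphery.
signalling_peripheral = worldview([
    ("happiness", "art", 1.0), ("happiness", "charity", 1.0),
    ("happiness", "education", 1.0), ("signalling", "charity", 0.2),
])

for name, g in [("central", signalling_central), ("peripheral", signalling_peripheral)]:
    score = nx.pagerank(g, weight="weight")["signalling"]
    print(f"{name} worldview: PageRank of 'signalling' = {score:.2f}")
```

Both maps answer "does signalling exist?" with yes; what they disagree about is its centrality.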

comment by Pattern · 2019-08-27T19:16:08.165Z · score: 4 (3 votes) · LW · GW
any ideas for what to name this?

A Matter of Degree

comment by cousin_it · 2019-08-27T15:06:35.167Z · score: 3 (1 votes) · LW · GW

Yeah. This problem is especially bad in politics. I've been calling it "importance disagreements", e.g. here [LW · GW] and here [LW · GW]. There's no definitive blogpost, you're welcome to write one :-)

comment by mr-hire · 2019-08-27T15:43:32.534Z · score: 2 (1 votes) · LW · GW

Note that I think we're talking about similar things, but have slightly different framings. For instance, you say:

I've had similar thoughts but formulated them a bit differently. It seems to me that most people have the same bedrock values, like "pain is bad". Some moral disagreements are based on conflicts of interest, but most are importance disagreements instead. Basically people argue like "X! - No, Y!" when X and Y are both true, but they disagree on which is more important, all the while imagining that they're arguing about facts. You can see it over and over on the internet.

I think "Value Importance" disagreements definitely do happen, and Ruby talks about them in "The Rock and the Hard Place [LW · GW]".

However, I'm also trying to point at "Fact Importance" as a thing that people often assume away when trying to model each other. I'd even go so far as to say that what often seem like intractable "value importance" debates are really hidden-assumption "fact importance" debates.

For instance, we might both have the belief that signalling affects people's behaviors, and the belief that people are trying to achieve happiness, and we both assign moderately high probability to each of these factors. However, unless I understand, in the other person's world model, how MUCH they think signalling affects behaviors in comparison to seeking happiness, I've probably just unknowingly imported my own importance weights onto those items.

Any time you're using heuristics (which most good thinkers are), it's important to go up and model the meta-heuristics that allow you to choose how much a given heuristic affects a given situation.

comment by cousin_it · 2019-08-27T16:19:28.203Z · score: 5 (2 votes) · LW · GW

Yeah, I guess I wasn't separating these things. A belief like "capitalists take X% of the value created by workers" can feel important both for its moral urgency and for its explanatory power - in politics that's pretty typical.

comment by Pattern · 2019-08-27T19:15:14.396Z · score: 1 (1 votes) · LW · GW

Depends on the value of X.

comment by Ruby · 2019-08-30T06:13:38.989Z · score: 2 (1 votes) · LW · GW

Just wanted to quickly assert strongly that I wouldn't characterize my post cited above as being only about value disagreements (value disagreements might even be a minority of applicable cases).

Consider Alice and Bob who are aligned on the value of not dying. They are arguing heatedly over whether to stay where they are vs run into the forest.

Alice: "If we stay here the axe murderer will catch us!" Bob: "If we go into the forest the wolves will eat us!!" Alice: "But don't you see, the axe murderer is nearly here!!!"

Same value, still a rock and hard place situation.

comment by mr-hire · 2019-08-27T15:48:47.982Z · score: 2 (1 votes) · LW · GW

Similarly, we might both agree on the meta-heuristics in a specific situation, but I have models that apply a heuristic to 50x the situations that you do, so even though you agree that the heuristic is true, you disagree on how important it is because you don't have the models to apply it to all the situations that I can.

comment by Slider · 2019-08-30T14:57:34.780Z · score: 1 (1 votes) · LW · GW

If you make it explicit, like "X is important" vs. "X is not important", I have a hard time using the word "disagree" for it. Like, if A and B both emphasize signaling and have it as similarly central in their worldviews, saying "we agree on signaling" sounds wrong. Also, saying stuff like "I disagree with racism" sounds like a funky way to get that point across.

comment by mr-hire · 2019-08-30T15:07:55.341Z · score: 2 (1 votes) · LW · GW

I think "disagree" is not semantically accurate for the thing I'm trying to point at, but internally it still often feels like "We have a fundamental disagreement about how to view this situation". It may make more sense to talk about "our models being in agreement" than about us being in agreement.

comment by mr-hire · 2019-07-03T16:14:46.376Z · score: 8 (2 votes) · LW · GW

RUNNING GOOD ORGANIZATIONS

Framing the Gervais principle in terms of Kegan:

Losers - Kegan 3

Clueless - Kegan 4

Sociopaths - Kegan 4.5

To run a great organization, the first thing you need is to be led not by a sociopath, but by someone who is Kegan 5. Then you need sociopath repellent.

The Gervais principle works on the fact that at the bottom, the losers see what the sociopaths are doing and opt-out, finding enjoyment elsewhere. The clueless, in the middle, believe the stories the sociopaths are telling them and hold the party line. The sociopaths, at the top, are infighting and trying to use the organization to get their own needs met.

In a good organization, the people at the top are Kegan 5. They have varying rules and models in their head for how the organization should act, and they use this as a best guess for the VALUES the organization should have, given the current environment - IE, they do their best to synthesize their varying models into a legible set of rules that will achieve their terminal goals (which, because they're Kegan 5, aren't pure solipsism)

The reason that they need to do this distillation process is that they need something that works for the Kegan 3's and Kegan 4's. The Kegan 4's SHARE the terminal goal of the Kegan 5 (or some more simplified version of it), and believe in the values and mission of the organization as the ONE TRUE WAY to achieve that goal.

Because the rules of the organization are set up to be legible and reward actions that actually help the terminal goal, the Kegan 3's can get their belonging and good vibes in highly legible, easy ways that are simple to understand. Notice now that the 3's, 4's, and 5's are all aligned, working towards the same ends instead of fighting each other.

Two important things about the values, mission, and rules of the organization.

1. The values must have sincere opposites that you could plausibly use for real decision making; otherwise they don't help the Kegan 3's and they disillusion the Kegan 4's. You can't run an organization or make decisions based on "being unproductive", so "productivity" isn't a valid goal. You can make decisions that trade off short term productivity for long term productivity, so "move fast and break things" is a valid value, as is "Move slowly and plan carefully."

2. Anyone should be able to apply the values to anyone else. If "Give critical feedback ASAP, and receive it well" is a value, then the CEO should be willing to take feedback from the new mail clerk. As soon as this stops being the case, the 3's go looking for their validation elsewhere, and the 4's get disillusioned.

Two good examples of values: Principles by Ray Dalio, The Scribe Culture Bible

The role of the Kegan 5 in this organization is twofold:

1. Reinvent the rules and mission of the organization as the landscape changes, and frame them in a way that makes sense to the Kegan 3's and 4's.

2. Notice when sociopaths are arbitraging the difference between the rules and the terminal goals, and shut it down.

Short Form Feed is getting too long. Next time, I'll write more about sociopath repellent.

comment by mr-hire · 2019-06-21T18:19:20.188Z · score: 8 (4 votes) · LW · GW

ON SAFE SPACES

There's at least 3 types of psychological "safe spaces":

1. We'll protect you.
We'll make sure there's nothing in the space that can actively touch your wounds. This is a place to heal with plenty of sunshine and water. Anyone who's in this space is agreeing to be extra careful to not poke any wounds, and the space will actively expel anyone who does. Most liberal arts colleges are trying to achieve this sort of safety.

2. Own your safety.
There may or may not be things in this space that can actively touch your wounds. You're expected to do what's necessary to protect them, up to and including leaving the space if need be. You have an active right to know your own boundaries and participate or not as needed. Many self-help groups are looking to achieve this sort of safety.

3. We'll make you grow.
This space is meant to poke at your wounds, but only to make you grow. We'll probably waterboard the shit out of you, but we won't let you drown. Anyone who's too fragile for this environment should enter at their own peril. This is Bridgewater, certain parts of the US Military, and other DDOs.

This is a half-formed thought that seems to ping enough of my other important concepts that it seems worth sharing. Which one you think should be the default relates a lot to how you view the world.

It relates to:

- Why you would choose decoupling vs. contextualizing norms (https://www.lesswrong.com/…/decoupling-vs-contextualising-n… [LW · GW])

- Why you would allow or not allow punch bug (https://medium.com/@Th…/in-defense-of-punch-bug-68fcec56cd6b)

- Whether you want to protect Clueless, Losers, or Sociopaths (https://www.ribbonfarm.com/…/the-gervais-principle-or-the-…/)

- The left/right culture war.

comment by mr-hire · 2019-06-21T18:15:33.965Z · score: 8 (4 votes) · LW · GW

WHY VIBING IS IMPORTANT

Vibing is a type of communication where the content is a medium through which you can play with the emotional rhythm. I've said before that the Berkeley rationalist community is missing this, and that that's important, but have never really explained why vibing is important.

Firstly, vibing is one of the purest forms of play - if you're playing with others, but you're not vibing, there's an important emotional connection component missing from your play.

Secondly, vibing is a way to screen for people whose emotional rhythm can sync up with a group. It's a vital screening mechanism to figure out if you can brainstorm well together, work well together, and get along.

Finally, the speed at which you communicate when vibing means you're communicating almost purely from System 1, expressing your actual felt beliefs. It makes deception, both of yourself and others, much harder. It's much more likely to reveal your true colors. This allows it to act as a values screening mechanism as well.

comment by moses · 2019-06-22T04:15:00.241Z · score: 1 (1 votes) · LW · GW

I'm so curious about this. I presume there isn't, like, a video example of "vibing"? I'd love to see that

comment by mr-hire · 2019-06-22T05:17:52.583Z · score: 2 (1 votes) · LW · GW

I don't think vibing is that unusual a method of communication; most people have seen it and participated in it... rationalists in Berkeley just happen to be really bad at it.

Unfortunately I can't find a video example (don't know what to search for) but I did write up a post that was trying to explain it from the inside. https://www.lesswrong.com/posts/jXHwYYnqynhB3TAsc/what-vibing-feels-like [LW · GW]

comment by moses · 2019-06-22T14:45:46.619Z · score: 3 (2 votes) · LW · GW

Yeah, I've read that one, and I guess that would let someone who's had the same experience understand what you mean, but not someone who hasn't had the experience.

I feel similarly to when I read Valentine's post on kensho—there is clearly something valuable, but I don't have the slightest idea of what it is. (At least unlike with kensho, in this example it is possible to eventually have an objective account to point to, e.g. video.)

comment by mr-hire · 2019-08-21T20:39:14.231Z · score: 7 (3 votes) · LW · GW

Here are some of the common criticisms I get of myself. If you know me, either in person or through secondhand accounts, feel free to comment with your thoughts on which ones feel correct to you, plus any nuance or comments you'd like to make. Full license for this particular thread to operate on Crocker's rules and not take my feelings into account. If you don't feel comfortable commenting publicly, also feel free to message me with your thoughts.


  • I have too low epistemic rigor.
  • Too confident in myself
  • Not confident enough in myself.
  • Too focused on status.
  • I don't keep good company.
  • I'm too impulsive.
  • Too risk seeking.
comment by mr-hire · 2019-10-11T22:44:03.500Z · score: 6 (3 votes) · LW · GW

*Virtual Procrastination Coach*

For the past few months I've been doing a deep dive into procrastination, trying to find the cognitive strategies that people who have no trouble with procrastination use to get things done.
--------------
This deep dive has involved:

* Introspecting on my own cognitive strategies
* Reading the self help literature and mining cognitive strategies
* Scouring the scientific literature for reviews and meta studies related to overcoming procrastination, and mining the cognitive strategies.
* Interviewing people who have trouble with procrastination, and people who have overcome it, and modelling their cognitive strategies.

I then took these ~18 cognitive strategies, split them into 7 lessons, and spent ~50 hours taking people individually through the lessons and seeing what worked, what didn't and what was missing.

This resulted in me doing another round of research, adding a whole new set of cognitive strategies (for a grand total of 25 cognitive strategies taught over the course of 10 lessons), and then spending another ~50 hours testing these strategies in 1-on-1 lessons to see what worked for people.
-------------------------------------
The first piece of more scalable testing is now ready. I used Spencer Greenberg's GuidedTrack tool to create a "virtual coach" for overcoming procrastination. I suspect it won't be very useful without the lessons (I'm writing up a LW sequence with those), but nevertheless am still looking for a few people who haven't taken the lessons to test it out and see if it's helpful.

The virtual coach walks you through all the parts of a work session and holds your hand. If you feel unmotivated, indecisive, or overwhelmed, it's there to help. If you feel ambiguity, perfectionism, or fear of failure, it's there to help.

If you're interested in alpha testing, let me know!

comment by mr-hire · 2019-06-25T20:01:36.061Z · score: 6 (3 votes) · LW · GW

STEELMANNING KEGAN 3 (OR, KEGAN 3, TO THE TUNE OF KEGAN 4)

Ruby recently made an excellent post called Causal Reality vs. Social Reality [LW · GW]. One way to frame what he was writing is that he was pointing out that 58% of the population is at Kegan's stage 3, and a lot of what rationality is doing is trying to move people to stage 4.

I made a reply to that (knowing it might not be that well received) essentially trying to steelman Kegan 3 from a Kegan 4 perspective - that is, is there a valid systemic reason, based on long-term goals, to act as if all you care about is how you make yourself and others feel?

Here's my slightly edited attempt:

The thing we actually care about... Is it how everyone feels? People being happy and content and getting along, love and meaning - it seems to be based in large part on the fundamental question of how people feel about other people, how we get along - the questions that are asked in Kegan 3.

It might be understandable if you're a person that cares about a world where people love and cherish each other, and are able to pursue meaning - you might think that the near-term effects of how people think and feel and relate affect the long term of how people think and feel and relate as well. If you don't have a lot of power, you might even subconsciously think that the flowthrough effects from your ability to affect how people around you feel are your best chance at affecting the "ultimate goal" of everyone getting along.

And when you run into someone who (in your mind) doesn't care about that reality of how their actions affect the harmony of the group, and instead is focused on weird rules that discard those obvious effects, you might think them cold and calculating and, importantly, in opposition to that ultimate goal.

Then you might write up a post about how sure, rules and Kegan 4 and principles of action are important sometimes, but the important thing is just being good and kind to other people, and things will work themselves out - That Kegan 3 actions are actually the best way to achieve Kegan 4 goals.

comment by Raemon · 2019-06-25T20:05:47.789Z · score: 4 (2 votes) · LW · GW
The thing we actually care about... Is it how everyone feels?

I happen to roughly agree with this but be warned that there are people who get off this train right about here.

comment by habryka (habryka4) · 2019-06-25T21:10:04.952Z · score: 4 (3 votes) · LW · GW

*raises hand and gets off the train*

comment by mr-hire · 2019-06-25T21:59:13.708Z · score: 3 (2 votes) · LW · GW

You strike me as someone very heaven-focused, so I am surprised you got off the train right about here.

I wonder, if you expand the concept of "how everyone feels" to include eudaimonic happiness - that is, it's not just about how they feel, but second-order ideas of how they would feel about the meaningfulness/rightness of their own feelings (and how you feel about the meaningfulness/rightfulness of their actions) - do you still get off the train?

comment by habryka (habryka4) · 2019-06-25T23:22:14.114Z · score: 7 (4 votes) · LW · GW

Yeah, it seems pretty plausible that I care about things that don't have any experience. It seems likely that I prefer a universe tiled with amazing beautiful paintings but no conscious observers to a universe filled with literal mountains of feces but no conscious observers. I don't really know how much I prefer one over the other, but if you give me the choice between the two I would definitely choose the first one.

comment by mr-hire · 2019-06-25T20:16:06.959Z · score: 2 (1 votes) · LW · GW

There are a lot of underlying models here around the "Heaven and Enlightenment" dichotomy that I've been playing with. That is, it seems like when introspecting, people either seem to want to get to a point where everyone feels great, or to get to a point where they can feel great/ok/at peace with everyone not feeling great. (Some people are in the middle, and for instance want to create heaven with their proximate tribe or family, and enlightenment around the suffering of the broader world).

One of the things I found out recently that makes me put more weight into the heaven and enlightenment dichotomy is that research into Kegan stage 5 has found there are two types of Kegan stage 5 - people who get really interested in other people and how they feel and how to make them do better (heaven), and people who get really interested in their own experience and their own body and what's going on internally (enlightenment). That is, when you've discarded all your instrumental values and ontologies as fluid and contextual and open to change and growth, what's left is your terminal values - either heaven or enlightenment.

comment by Ruby · 2019-06-25T20:19:52.676Z · score: 2 (1 votes) · LW · GW

I responded to your original comment here [LW · GW]. I don't know the Kegan types well enough (perhaps I should) to say whether that's a framing I agree with or not.

comment by mr-hire · 2019-07-12T17:30:38.164Z · score: 5 (3 votes) · LW · GW

INSTRUMENTAL RATIONALITY CURRICULUM

A few weeks ago I ran a workshop at the EA hotel that taught my Framework for internal debugging [LW · GW]. It went well, but there was obviously too much content, and I have doubts about its ability to consistently affect people in the real world.

I've started planning for the next workshop, and creating test content. The idea is to teach the material as a series of habits where specific mental sensations/smells are associated with specific mental moves. These implementation intentions can be practiced through focused meditations. There are 3 "sets" of habits that each have seven or 8 meditations attached to them.

The idea is that the first course, The Way of Focus, teaches people the basic skills of working with intentions and focusing that are needed to not procrastinate. That is, there are basic skills to focusing that you still need in order to get things done, even if you don't have any internal conflict or trauma. The first course starts with those.

THE WAY OF FOCUS (Overcoming Akrasia).

1. Noticing dropped intention -> Restabilizing intention

2. Noticing Competing Intention -> Loving Snooze (+ Setting Up Pomodoros or Consistent Break Schedule)

3. Noticing Potential Intention -> Mental Contrasting

4. Noticing Coercive Intention -> Switching to Non-coercive Possibility

5. Noticing Ambiguous/Overwhelming Intention -> Generating Specific Next Action

6. Noticing Context Switch -> Intention Clearing (+ Habits for Removing Distractions)

7. Noticing Productivity - > Reinforcing Self-Concept as Productive Person (+ Changing Environment to That of Productive Person)

THE WAY OF LETTING GO (Overcoming Trauma)

Sometimes, you'll have competing intentions come up that are very persistent, because they're related to deep emotional issues/trauma. You can find them by looking for feelings of avoidance or the inability to avoid, and then use the following techniques to dispel them.

1. Noticing Avoidance-> Fuse with the Feeling

2. Noticing Magnetism -> Dissociate from Feeling

3. Inhabiting Feeling -> Finding Emotional Core

4. Finding Emotional Core -> Re-experience Memories

5. Sticky Belief -> Question Belief Via Work of Byron Katie

6. Sticky Feeling -> Let Go of Feeling Via Sedona Method

7. Sticky Memories -> Reframe Memories Via Lefkoe Belief Process

8. Process Fails-> Find Second Layer Emotion.

THE WAY OF ALIGNMENT (Overcoming Internal Conflict)

Sometimes, you'll notice competing intentions that aren't unambiguously negative or positive, and it's hard to know what to do. In those cases, you can notice the "conflicted" feeling, and use the following habits to deal with them over a period of time.

0. Noticing Conflict -> Fuse/Dissociate With Feeling (Already Taught)

0. Easy to Fuse/Dissociate -> Find Emotional Core (Already Taught)

1. Familiar Conflict-> Alternate Fusing/Dissociating (practice switching perspectives)

2. Easy to shift perspectives -> Practice holding both at once

3. Easy to hold both at once -> Internal Double Crux

4. Memory Reconsolidated -> Stack Attitudes

5. Attitudes Stacked -> Core Transformation

6. Core Transformed -> Parental Timeline Reimprinting

7. Timeline Reimprinted -> Modality Mind Palace

ASK:

I'm just finishing up the content for THE WAY OF FOCUS, and I'm looking for people to help test the material. It will involve committing 30 minutes a day over the internet for 7 days: 10 minutes to practice previous meditations, 10 minutes to teach the new material, and 10 minutes to practice the new material via a new type of meditation.

comment by mr-hire · 2019-06-22T19:56:32.906Z · score: 5 (2 votes) · LW · GW

POST-RATIONALITY IS SYSTEMATIZED WINNING

John is a Greenblot, a member of the species that KNOWS that the ultimate goal, the way to win, is to minimize the amount of blue in the world [LW · GW], and maximize the amount of green.

The Greenblots have developed theories of cooperation, that allow them to work together to make more green. And complicated theories of light to explain the true nature of green, and several competing systems of ethics that describe the greenness or blueness of various actions, in a very complicated sense that actually clearly leads to the color.

One day, John meets Ted. Ted is a member of the Lovelots. John is aghast when he finds out that Lovelots can't perceive the difference between Blue and Green. Ted is aghast that John can't perceive the difference between love and hate. They both go on their merry way.

The next day, John is doing his daily meditation, imagining the cessation of endless blue and the ascendance of endless green, but thoughts of Ted and his inability to perceive this situation keep intruding. Suddenly, John experiences a subject-object shift [LW · GW]. He is able to perceive his meditation as Ted perceives it, with both colors being the same. In the next moment, he has a flash of the Greenblots celebrating when they've achieved their goal, and John now knows what it's like to experience the thing Ted called love.

John is confused; he thought the Greenblots had built a foolproof theory of winning, of how to maximize the green and minimize the blue. But then he experienced endless green, and knew what it was like for that to not be winning at all. And he experienced the thing Ted was describing, and the sensation of winning felt the same. John thought he knew everything about winning, but in fact he knew nothing.

John vows to understand the true nature of winning, and develop the discipline of being able to work with the sensation just like he previously was able to work with beliefs about making things greener. John will become the Greenblots' first post-rationalist.

comment by mr-hire · 2019-10-21T19:14:18.375Z · score: 4 (2 votes) · LW · GW
  • Today I had a great chat with a friend on the difference between #Fluidity and #Congruency
  • For the past decade+ my goal has been #Congruency (also often called #Alignment), the idea that there should be no difference between who I am internally, what I do externally, and how I represent myself to others
  • This worked well for quite a long time, and led me great places, but the problems with #Congruency started to show more obviously recently.
  • Firstly, my internal sense of "rightness" wasn't easily encapsulated in a single set of consistent principles; it's very fuzzy and context-specific. And furthermore, what I can even define as "right" shifts as my #Ontology shifts.
  • Secondly, and in parallel, as the idea of #Self starts to appear less and less coherent to me, the whole base that the house is built on starts to collapse.
  • This has led me to begin a shift from #Congruency to #Fluidity. #Fluidity is NOT about behaving by an internally and externally consistent set of principles; rather, it's about being able to find that sense of "Rightness" - the right way forward - in increasingly complex and nuanced situations.
  • This "rightness" in any given situation is influenced by the #Ontology's that I'm operating under at any given time, and the #Ontologies are influenced by the sense of "rightness".
  • But as I hone my ability to fluidly shift ontologies, and my ability to have enough awareness to be in touch with that sense of rightness, it becomes easier to find that sense of rightness/wrongness in a given situation. This is as close as I can come to describing what is sometimes called #SenseMaking.
comment by mr-hire · 2019-10-21T19:22:31.815Z · score: 3 (2 votes) · LW · GW

Sorry for all the hashtags; this was originally written in Roam.

comment by Pattern · 2019-10-22T05:11:56.367Z · score: 1 (1 votes) · LW · GW

Is Roam as useful a medium for you to read in, as it is for you to write in?

comment by mr-hire · 2019-08-11T22:54:50.899Z · score: 3 (3 votes) · LW · GW

I had one of my pilot students for the akrasia course I'm working on point out today that something I don't cover in my course is indecision. I used to have a bit of a problem with that, but not enough to have sunk a lot of time into determining the qualia and mental moves related to defeating it.

Has anyone reading this gone from being really indecisive (and procrastinating because of it) to much more decisive? Or are you currently working on making the switch? I'd love to talk to you/model you.

As a bonus thank you, you'll of course get a free version of the course (along with all the guided meditations and audios) when it's complete.

comment by mr-hire · 2019-06-21T18:17:34.807Z · score: 3 (2 votes) · LW · GW

ON HEAVEN AND ENLIGHTENMENT

https://scontent-sjc3-1.xx.fbcdn.net/v/t1.0-9/56656099_10220056198495676_9079758874621247488_n.jpg?_nc_cat=107&_nc_oc=AQm42c-keDXguTwDHsVQz7hGt5AK-DkYK_eG13XXmHcybXql4JvgoYZC4r0Uy4LvMAU&_nc_ht=scontent-sjc3-1.xx&oh=bb4a1f996cfde07165c9e22fdfe7c06d&oe=5D901596

At the extremes, people have one of four life goals: to achieve a state of nothingness (hinayana enlightenment), to achieve a state of oneness (mahayana enlightenment), to achieve a utopia of meaning (Galt's Gulch), or to achieve a utopia of togetherness (hivemind).

In practice, most people exist somewhere in the middle, depending on how much they want to change their conception of the world (enlightenment) vs. changing the world itself (heaven), and depending on how much they view their identity as separate from other things (individualism) or the same as other things (collectivism).

I think I'm already past stream entry, and this is why the above diagram scares the shit out of me:

It seems like hinayana enlightenment may be an attractor state even if I have a significant amount of values that would want to create a utopia of meaning.

If I was confident that I could go the mahayana path, there's the "Bodhisattva option" - stepping back from your enlightenment to bring others in, thus creating heaven.

But it's not clear to me that I won't end up at nothingness instead of oneness, and I'm not aware of a path to step back from nothingness and create a utopia of meaning, in fact they feel almost diametrically opposed.

Hence 'Stream entry considered harmful.'

comment by Raemon · 2019-06-22T07:42:22.893Z · score: 3 (1 votes) · LW · GW

I'm interested in a medium-fleshed-out version of this comment that holds my hand more than the current one does. (Not sure whether I'd want the full fledged post version yet)

(In general, happy to see more people using shortform feeds)

((also, you probably didn't mean to call it a short-term feed))

comment by mr-hire · 2019-06-22T23:34:24.384Z · score: 2 (1 votes) · LW · GW

Will do.

comment by Elo · 2019-06-23T00:02:06.935Z · score: 2 (3 votes) · LW · GW

You should add integral's interior and exterior to the diagram.

comment by mr-hire · 2019-07-10T21:04:16.164Z · score: 2 (1 votes) · LW · GW

Interior and exterior is one component of heaven and enlightenment. It's possible to break up that one axis into several axes, but it's usually correlated enough to not have to do that for the vast majority of people and organizations.

comment by Aleksi Liimatainen (aleksi-liimatainen) · 2019-07-11T07:16:33.616Z · score: 1 (1 votes) · LW · GW
At the extremes, people have one of four life goals: To achieve a state of nothingness (hinayana enlightenment), to achieve a state of oneness (mahayana enlightenment), to achieve a utopia of meaning (galts gulch), or to achieve a utopia of togetherness (hivemind).

These are not distinct things - they're alternative ways to frame one thing. All roads lead to Rome, so to speak. The way I see it, full enlightenment entails attaining all four at once. Just don't get distracted by the taste of lotus on the way.

comment by mr-hire · 2019-07-11T17:37:52.107Z · score: 2 (1 votes) · LW · GW

This is a common belief and it may in fact be true, but it's at odds with the ontology as presented. There are tradeoffs between which one you choose in this ontology.

comment by Aleksi Liimatainen (aleksi-liimatainen) · 2019-07-12T10:06:21.934Z · score: 4 (2 votes) · LW · GW

Ontologically distinct enlightenments suggest path dependence. That seems correct on reflection; updating and reframing.

Enlightenment is caused by a certain observation about mind/reality that is salient, obvious in retrospect and reliably triggers major updates. The referent of this observation is universal and invariant but its interpretation and the resulting updates may not be; the mind can only work with what it has.

In other words, enlightenment has one referent in the territory but the resulting maps are path dependent. This seems consistent with what I know about spirituality-related failure modes and doctrinal disagreements. Also, the sixties.

So yeah. Caution is warranted. Just keep in mind that your skull is an information bottleneck, not an ontological boundary.