Matt Goldenberg's Short Form Feed

post by Matt Goldenberg (mr-hire) · 2019-06-21T18:13:54.275Z · LW · GW · 302 comments

Where I write up some small ideas I've been having that may eventually become their own top-level posts. I'll start by populating it with a few ideas I've posted as Twitter/Facebook thoughts.

302 comments

Comments sorted by top scores.

comment by Matt Goldenberg (mr-hire) · 2024-10-11T19:29:33.278Z · LW(p) · GW(p)

I desperately want people to stop using "I asked Claude or ChatGPT" as a stand-in for "I got an objective third party to review"

LLMs are not objective.  They are trained on the internet, which has specific sets of cultural, religious, and ideological biases, and then further trained via RL to be biased in the way that a specific for-profit entity wanted them to be.

Replies from: gwern, thomas-kwa, shankar-sivarajan, RamblinDash, weightt-an, Seth Herd, MondSemmel, raunakchhatwal
comment by gwern · 2024-10-12T01:42:34.445Z · LW(p) · GW(p)

Perhaps the norm should be to use some sort of LLM-based survey service like https://news.ycombinator.com/item?id=36865625 in order to try to get a more representative population sample of LLM outputs?

This seems like it could be a useful service in general: do the legwork to take base models (not tuned models), and prompt in many ways and reformulate in many ways to get the most robust distribution of outputs possible. (For example, ask a LLM to rewrite a question at various levels of details or languages, or switch between logically equivalent formulations to avoid acquiescence bias; or if it needs k shots, shuffle/drop out the shots a bunch of times.)
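A minimal sketch of what that reformulate-and-aggregate loop might look like, assuming a hypothetical query_model stand-in for whatever base-model completion call such a service would use (none of this is a real provider's API, and the 0.3 dropout rate is arbitrary):

```python
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Stand-in for whatever base-model completion call you use (hypothetical)."""
    raise NotImplementedError("wire up your own model call here")

def robust_poll(question: str, rephrasings: list[str], shots: list[str],
                n_samples: int = 20, seed: int = 0) -> Counter:
    """Ask many logically equivalent variants of one question and tally the answers.

    Each sample uses a randomly chosen rephrasing and a shuffled, randomly
    dropped-out subset of the few-shot examples, so no single phrasing or
    shot ordering dominates the distribution of outputs.
    """
    rng = random.Random(seed)
    answers: Counter = Counter()
    for _ in range(n_samples):
        phrasing = rng.choice(rephrasings or [question])
        kept_shots = [s for s in shots if rng.random() > 0.3]  # shot dropout
        rng.shuffle(kept_shots)
        prompt = "\n\n".join(kept_shots + [phrasing])
        answers[query_model(prompt).strip()] += 1
    return answers
```

The point would be to report the whole tally across phrasings and shot orderings, rather than a single completion, which is closer to "a representative sample of LLM outputs" than one chat reply.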

comment by Thomas Kwa (thomas-kwa) · 2024-10-11T19:37:14.617Z · LW(p) · GW(p)

Disagree. If ChatGPT is not objective, most people are not objective. If we ask a random person who happens to work at a random company, they are more biased than the internet, which at least averages out the biases of many individuals.

Replies from: LosPolloFowler, Thane Ruthenis, mr-hire, HNX, AAA
comment by Stephen Fowler (LosPolloFowler) · 2024-10-11T22:52:24.314Z · LW(p) · GW(p)

I'll grant that ChatGPT displays less bias than most people on major issues, but I don't think this is sufficient to dismiss Matt's concern.

My intuition is that if the bias of a few flawed sources (Claude, ChatGPT) is amplified by their widespread use, the fact that it is "less biased than the average person" matters less. 

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-10-12T15:44:20.152Z · LW(p) · GW(p)

Yes, this is an excellent point I didn't get across in the post above.

comment by Thane Ruthenis · 2024-10-12T13:28:47.903Z · LW(p) · GW(p)

LLMs are, simultaneously, (1) notoriously sycophantic, i. e. biased to answer the way they think the interlocutor wants them to, and (2) have "truesight", i. e. a literally superhuman ability to suss out the interlocutor's character (which is to say: the details of the latent structure generating the text) based on subtle details of phrasing. While the same could be said of humans as well – most humans would be biased towards assuaging their interlocutor's worldview, rather than creating conflict – the problem of "leading questions" rises to a whole new level with LLMs, compared to humans.

You basically have to interpret an LLM's answer to a question as if a human had been asked that question in the most biased phrasing possible.

Replies from: MichaelDickens, FractalSyn
comment by MichaelDickens · 2024-10-12T16:55:40.805Z · LW(p) · GW(p)

(2) have "truesight", i. e. a literally superhuman ability to suss out the interlocutor's character

Why do you believe this?

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2024-10-12T17:16:10.520Z · LW(p) · GW(p)

See e. g. this [LW · GW] and this [LW(p) · GW(p)], and it's of course wholly unsurprising, since it's literally what the base models are trained to do.

comment by FractalSyn · 2024-10-12T18:11:30.141Z · LW(p) · GW(p)

I wouldn't say that my experience with ChatGPT is in total agreement with your conclusion, but you're raising a good point and the distinction is helpful. I remember conversations in which the chatbot would both acknowledge and challenge my viewpoint, which I must admit is quite appreciated and not something that happens systematically in the biological realm. On the other hand, it is indeed common that pushing the chatbot to buy my arguments and adopt my stance is fairly easy.

In a way it's very related to humanlike intelligence; that is, when training an LLM-based chatbot[1] by reinforcement, the positive (rewarding) feedback comes both from confirming the interlocutor's beliefs and from matters like veracity, ethics, and so on. It's also what we humans have been experiencing.

Why and how does it rise to a whole new level when it comes to AI? I tend to think that we must understand the technologies we are using, so it's our responsibility to use chatbots properly and leverage their capabilities. When talking with a child, a young student, or generally someone we know is a newcomer, we adapt our questions, our arguments, and the way we process their responses. It's not an exact science for sure, but there's no reason to expect it to be with chatbots either.

  1. ^

    It seems more accurate than "LLM", as those have not yet been trained to have a chat with you.

comment by Matt Goldenberg (mr-hire) · 2024-10-11T21:23:32.750Z · LW(p) · GW(p)

Of course a random person is biased. Some people will have more authority than others, and we'll trust them more, and argument screens off authority.

What I don't want people to do is give ChatGPT or Claude authority. Give it to the wisest people you know, not Claude.

comment by HNX · 2024-10-12T12:31:11.276Z · LW(p) · GW(p)

[1] Can't they both be not objective? Why make it a point of one or the other? A bit of a false dichotomy, there. 

[2] There is no single "Internet" - there are specific spaces, forums, communities, blogs, you name it; comprising it. Each has its own, subjective, irrational, moderated (whether by a single individual, a team, or an overall sentiment of the community: promoting/exalting/hyping one subset of topics while ignoring others) mini/sub-culture. 

This last one, furthermore, necessarily only happens to care about its own specific niche; happily ignoring most of everything else. LessWrong used to be mostly about, well - being less wrong - back when it started out. Thus, the "rationality" philosophy. Then it has slowly shifted towards a broader, all-encompassing EA. Now it's mostly AI. 

Compare the 3k+ [? · GW] results for the former against the 8k+ [? · GW] results for the latter.

Every space is focused on its own topic, within whatever mini/sub-cultural norms are encouraged/rewarded or punished/denigrated by the people within it. That creates (virtually) unavoidable blind spots, as every group of people within each space only shares information about [A] its chief topic of interest, within [B] the "appropriate" sentiment for the time, while [C] contrasting itself against the enemy/out-group/non-rationalists, you name it. 

In addition to that, different groups have vastly different [I] amount of time on their hands, [II] social, emotional, ethical, moral "charge" with regards to the importance they assign to their topic of choice, and emergent from it come out [III] vastly different amounts of information, produced by the people within that particular space.

When you compile the data set for your LLM, you're not compiling a proportionately biased take on different topics. If that were the case, I'd happily agree with you. But you are clearly not. What you are compiling is a bunch of averaged sentiments, each biased and blindsided in its own way, each leaning heavily towards one social, semantic, political, epistemological position. Each will have its own memes, quirks, "hot takes". Each will have massively over-represented discussions of one topic at the expense of another. That's the web of today.

When you "train" your GPT on the resulting data set then, who is to say whether it is "averaging" the biases in between different groups? Can you open up any LLM to see its exact logic, reasoning, argumentation steps? Should there be any averaging going on, after all - how is it going to account for disproportionately represented takes of people, who simply have too much time and/or rage to spare? What of the people, who simply don't spend too much on the web to begin with? Is your GPT going to "average in" those as well, somehow?

What would prevent the resulting transformer from simply picking up on the likelihood of any given incoming prompt matching the overall "culture" of any single community, thus promptly completing it as if it was a part of an "average" discussion within that particular community there? Isn't it plain wishful, if not outright naive*, to imagine the algo will do what you hope it will do - instead of what is the easiest possible thing for it to do?

* the fact a given thought pattern is wishful/naive doesn't make you wishful/naive; don't take it personally, plz

comment by yc (AAA) · 2024-10-13T01:34:15.322Z · LW(p) · GW(p)

It’s probably less about the internet as a whole and more about the RLHF guidelines (I imagine the human reviewers receive a guideline based on the advice of the LLM-training company's policy, legal, and safety experts). I don't disagree, though, that it could present a relatively more objective view on some topics than a particular individual (depending on the definition of bias).

comment by Shankar Sivarajan (shankar-sivarajan) · 2024-10-11T21:16:52.139Z · LW(p) · GW(p)

Would you say the same thing of people saying they looked at the Wikipedia article?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-10-12T15:41:15.847Z · LW(p) · GW(p)

Yes, if people were using Wikipedia in the way they are using the LLMs.

In practice that doesn't happen, though: people cite Wikipedia for facts but use LLMs for judgment calls.

comment by RamblinDash · 2024-10-13T21:54:16.241Z · LW(p) · GW(p)

I treat ChatGPT as a vibes-ologist; it's good for answering questions like "which X is most popular" or "what do most people think about X". I agree it's less good for "X is true".

comment by weightt an (weightt-an) · 2024-10-13T08:24:11.050Z · LW(p) · GW(p)

It's not just biases; they are also just dumb. (Right now, that is; nothing against the 160-IQ models you have in the future.) They are often unable to notice important things, unable to spot problems, or unable to follow up on such observations.

comment by Seth Herd · 2024-10-11T19:34:20.740Z · LW(p) · GW(p)

What they're saying is I got a semi-objective answer fast.

If they'd googled for the answer all the same concerns would apply. You'd need to know the biases of whoever wrote the web content they read to get an answer.

I doubt the orgs got much of their own bias into the RLHF/RLAIF process. There are real cultural biases from the humans doing the RLHF ratings, and from the LLM itself via the training set and how it interpreted its constitution.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-10-11T19:40:22.437Z · LW(p) · GW(p)

What they're saying is I got a semi-objective answer fast. 

 

Exactly. Please stop saying this. It's not semi-objective. The trend of casually treating LLMs as an arbiter of truth leads to moral decay.

 

I doubt the orgs got much of their own bias into the RLHF/RLAIF process

This is obviously untrue; orgs spend lots of effort making sure their AI doesn't say things that would give them bad press, for example.

Replies from: Seth Herd
comment by Seth Herd · 2024-10-11T20:43:51.697Z · LW(p) · GW(p)

I should've specified - the orgs carefully train them to refuse to say things. I don't think they specifically train them to say things the orgs like or believe. The refusals are intentional; the bias is accidental, IMO.

And every source has bias.

So, do you want people to quit saying they googled for an answer? I'd just like them to say where they got the answer so I can judge how biased it might be.

comment by MondSemmel · 2024-10-12T16:00:31.826Z · LW(p) · GW(p)

Agreed, except for the small caveat of LLM answers which can be easily verified as approximately correct. E.g. answers to math problems where the solution is hard but the verification is easy; or Python scripts you've tested yourself and whose output looks correct; or reformatted text (like plaintext -> BBCode) if it looks correct on a word diff website.

Incidentally, are there any LLM services which can already do this kind of verification in specific domains?
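A minimal sketch of the "easy to verify" case for the reformatting example, assuming a hypothetical bbcode_preserves_text helper; the tag-stripping regex is illustrative, not a full BBCode parser:

```python
import re

def words(text: str) -> list[str]:
    """Lowercased word sequence, ignoring punctuation and whitespace."""
    return re.findall(r"[A-Za-z0-9']+", text.lower())

def bbcode_preserves_text(original: str, bbcode: str) -> bool:
    """Check that an LLM's plaintext -> BBCode reformatting kept every word, in order.

    Strips [b]...[/b]-style tags before comparing, so the check catches dropped,
    invented, or reordered words even though it can't judge whether the markup is right.
    """
    stripped = re.sub(r"\[/?\w+(?:=[^\]]*)?\]", " ", bbcode)  # drop [tag], [/tag], [tag=...]
    return words(original) == words(stripped)

# A faithful reformatting passes; one that drops a word fails.
assert bbcode_preserves_text("Hello bold world", "Hello [b]bold[/b] world")
assert not bbcode_preserves_text("Hello bold world", "Hello [b]world[/b]")
```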

comment by RaunakChhatwal (raunakchhatwal) · 2024-10-12T05:33:12.635Z · LW(p) · GW(p)

It still signals to the subject of my question that I put in some effort before coming to them.

comment by Matt Goldenberg (mr-hire) · 2019-07-10T19:44:45.268Z · LW(p) · GW(p)

FEELINGS AND TRUTH SEEKING NORMS

Stephen Covey says that maturity is being able to find the balance between Courage and Consideration. Courage being the desire to express yourself and say your truth, consideration being recognizing the consequences of what you say on others.

I often wish that this was common knowledge in the rationality community (or just society in general) because I see so many fights between people who are on opposite sides of the spectrum and don't recognize the need for balance.

Courage is putting your needs first, consideration is putting someone else's needs first, and the balance is weighing both sets of needs equally. There are some other dichotomies that I think are pointing to a similar distinction.

Courage------->Maturity----->Consideration

From parenting literature:

Authoritarian------->Authoritative----->Permissive

From a course on confidence:

Aggressive------->Assertive----->Passive

From attachment theory:

Avoidant------->Secure----->Preoccupied

From my three types of safe spaces: [LW(p) · GW(p)]

We'll make you grow---->Own Your Safety---> We'll protect you.

--------------------------------------------------------------------

Certain people may be wondering how caring about your feelings and others' feelings relates to truth seeking. The answer is that our feelings are based on system 1 beliefs. I suspect this isn't strictly 100% true, but it's a useful model, one behind Focusing, Connection Theory, Cognitive Behavioral Therapy, Internal Double Crux, and a good portion of other successful therapeutic interventions.

How this cashes out is that being able to fully express yourself is a necessary prerequisite to being able to bring all your beliefs to bear on a situation. Now sometimes, when someone is getting upset, it's not a belief like "this thing is bad" but "I believe that believing what you're saying is unsafe for my identity" or some similar belief.

However, if they think it's unsafe to express THAT belief, you end up in a situation where people have to protect themselves under the veneer of motivated reasoning. You end up in a situation where everybody is still protecting themselves, but they're all pretending to do it in pursuit of the truth (or whatever the group says it values).

In this sense, tone arguments are vitally important to keeping clean epistemic norms [LW · GW]. If I'm not allowed to express the belief that the way you're phrasing things means I'm going to die horribly and live alone forever (which may be an actual system 1 belief), then I have to come up with FAKE arguments against the thing you're saying, or leave the group where that belief of mine isn't being respected.

Which brings me back to the definition of Maturity. If you put your need to express what you think is true in the way you feel is true (which again, is based on your beliefs) over my feeling that I'm going to be alone forever if people take your arguments seriously, you not only are acting immature, but you're fostering an immature community with people who aren't in touch with their own beliefs. What was wrong with this example:

The conversation of the group shifted at the point when Susan started to cry. From that moment, the group did not discuss the actual issue of the student community. Rather, they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault.

Was not that the group considered Susan's feelings, but that they put Susan's feelings above their own beliefs, instead of on equal footing.

------------------------------------------------------------

Here are some situations I've encountered where I wish people knew about the definition of Maturity:

A rationalist friend of mine got upset about being repeatedly asked about a situation after he asked the other person to stop. The other rationalist friend told him, "The mature thing to do would be to be able to control your feelings, like this other rationalist I know." The mature thing is to control your feelings, but also sometimes to express them loudly, depending on the needs of the moment.

A lover told me that they weren't going to lie to me, they were going to tell it like it is. I said that was in general fine, but that I wanted them to consider how the way and time they told me things affected my feelings. They said no, they would express themselves when and how they wanted, and they expected me to do the same. That relationship didn't last long.

People taking care of a friend at detriment to their own health.

Soooo many more.

------------------------------------------------------------

Lately, I've been considering adding a third factor, so it's no longer a dichotomy but a trichotomy. Courage, Consideration, and Consequences.

I know there's a strong idea around norms in the rationality community to go full courage (expressing your true beliefs) and have other people mind themselves and ignore the consequences (decoupling norms). As I've said elsewhere and above, I think in actuality this leads to a community that trains people to hide certain beliefs and lie about their motives, but do it in a way that can't be called out.

I think you should obviously think about the effects of what you say, on the culture, on the world, and on the person you're speaking to. I have beliefs about this, which cash out in me feeling very upset when people express the truth at all costs, because they're sacrificing their terminal values for instrumental ones, but I'm punished in the rationality community for saying this, so I'm less likely to express it. So the truth-seeking norm is stifling my ability to tell the truth.

I think in general I'd love to see WAY more truth-seeking norms in society, but I think that's because most of society is immature; they're way too much on the side of consideration, with barely a thought for consequences and courage.

Meanwhile, some of the rationality community has gone way too much towards courage, ignoring consideration and consequences.


Replies from: ChristianKl, Lukas_Gloor
comment by ChristianKl · 2019-08-12T12:45:20.084Z · LW(p) · GW(p)

I found Taber's radical honesty workshops very useful for a framing of how to deal with telling the truth.

According to him, telling the truth is usually about choosing pain now instead of pain in the future. However, not all kinds of pain are equal. A person who practices yoga has to be able to tell the pain of stretching from the pain of hurting their joints. In the same way, a person who speaks in a radically honest way should be aware of the pain that the statement produces and be able to distinguish whether it's healthy or isn't.

Courage is only valuable when it comes with the wisdom to know when the pain you are exposing yourself is healthy and when it isn't. The teenager who expresses courage to signal courage to his friends without any sense of whether the risk he takes is worth it isn't mature.

Building up thick emotional walls and telling "the truth" without any consideration of the effects of the act of communication doesn't lead to honest conversation in the radical honesty sense. As it turns out, it also doesn't have much to do with real courage as it's still avoiding the conversations that are actually difficult.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-12T13:25:39.636Z · LW(p) · GW(p)

I like this framing, the idea of useful and non-useful pain. It seems like a similarly useful definition of maturity.

Replies from: ChristianKl
comment by ChristianKl · 2019-08-13T09:11:21.412Z · LW(p) · GW(p)

One difference is that different types of pain come with slightly different qualia. This allows communication that's in contact with what's felt in the moment, which isn't there in ideas of maturity where maturity is about following rules about which things shouldn't be spoken.

comment by Lukas_Gloor · 2019-07-20T09:04:36.981Z · LW(p) · GW(p)

Excellent comment!

I know there's a strong idea around norms in the rationality community to go full courage (expressing your true beliefs) and have other people mind thmeselves and ignore the consequences (decoupling norms).

"Have other people mind themselves and ignore the consequences" comes in various degrees and flavors. In the discussions about decoupling norms I have seen (mostly in the context of Sam Harris), it appeared me that they (decoupling norms) were treated as the opposite of "being responsible for people uncharitably misunderstanding what you are saying." So I worry that presenting it as though courage = decoupling norms makes it harder to get your point across, out of worry that people might lump your sophisticated feedback/criticism together with some of the often not-so-sophisticated criticism directed at people like Sam Harris. No matter what one might think of Harris, to me at least he seems to come across as a lot more empathetic and circumspect and less "truth over everything else" than the rationalists whose attitude about truth-seeking's relation to other virtues I find off-putting.

Having made this caveat, I think you're actually right that "decoupling norms" can go too far, and that there's a gradual spectrum from "not feeling responsible for people uncharitably misunderstanding what you are saying" to "not feeling responsible about other people's feelings ever, unless maybe if a perfect utilitarian robot in their place would also have well-justified instrumental reasons to turn on facial expressions for being hurt or upset". I just wanted to make clear that it's compatible to think that decoupling norms are generally good as long as considerateness and tact also come into play. (Hopefully this would mitigate worries that the rationalist community would lose something important by trying to reward considerateness a bit more.)

comment by Matt Goldenberg (mr-hire) · 2020-01-24T17:16:39.290Z · LW(p) · GW(p)

FITTING IN AND THE RATIONALITY COMMUNITY


One of my biggest learning experiences over the last few years was moving to the Bay Area, and attempting to be accepted into the "Rationality Tribe".

When I first took my CFAR workshop years ago and interacted with the people in the group, I was enamored. A group of people who were into saving the world, self-improvement, understanding their own minds, connecting with others - I felt like I had found my people.

A few short months later I moved to the Bay Area.

I had never been good at joining groups or tribes. From a very early age, I made my friend group (sometimes very small) by finding solid individuals that could connect to my particular brand of manic, ambitious, and open, and bringing them together through my own events and hangouts.

In Portland, where I was before moving to the Bay, I really felt I had a handle on this, meeting people at events (knowing there weren't many who would connect with me in Portland), then regularly hosting my own events like dinner parties and meetups to bring together the best people.

Anyway, when I got to the Bay, I for the first time tried really hard to be accepted into existing tribes. Not only did I finally think I had found a large group of people I would fit in with, I was also operating under the assumption that I needed to be liked by all these people because they were allies in changing the world for the better.

And honestly, this made me miserable. While I did find a few solid people I really enjoyed, trying to be liked and accepted by the majority of people in the rationality community was an exercise in frustration - Being popular has always run counter to my ability to express myself honestly and openly, and I kept having to bounce between the two choices.

And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that they feel like should be their tribe, but they really don't feel a close connection to most people in it, and feel alienated as a result.

What feels real to me is that there are people in the rationality community that I like, and love. And there are people outside of the rationality community that I like and love. And that it makes a lot of sense for me to stop trying to bounce from round hole to round hole, trying to see if my square peg fits in.

Instead, like always, I'll just make my island, and invite the people who want to be there with me.

Replies from: Viliam, Raemon, Isnasene
comment by Viliam · 2020-01-26T22:36:33.872Z · LW(p) · GW(p)

Being a rationalist is not the only trait the individual rationalists have. Other traits may prevent you from clicking with them. There may be traits frequent in the Bay Area that are unpleasant to you.

Also, being an aspiring rationalist is not a binary thing. Some people try harder, some only join for the social experience. Assuming that the base rate of people "trying things hard" is very low, I would expect that even among people who identify as rationalists, the majority is there only for social reasons. If you try to fit in with the group as a whole, it means you will mostly try to fit in with these people. But if you are not there primarily for social reasons, that is already one thing that will make you not fit in. (By the way, no disrespect meant here. Most of the people who identify as rationalists only for social reasons are very nice people.)

What you could do, in my opinion, is find a subgroup you feel comfortable with, and accept that this is the natural state of things. Also, speaking as an introvert, I can more easily connect with individuals than with groups. The group is simply a place where I can find such individuals with greater frequency, and conveniently meet more of them at the same place.

Or -- as you wrote -- you could create such subgroup around yourself. Hopefully, it will be easier in the Bay Area than it would be otherwise.

Replies from: mr-hire, Zack_M_Davis
comment by Matt Goldenberg (mr-hire) · 2020-01-27T01:37:09.414Z · LW(p) · GW(p)
What you could do, in my opinion, is find a subgroup you feel comfortable with, and accept that this is the natural state of things.

I'm pretty pessimistic about this; it's never worked for me before, nor did I find any existing subgroup in the rationality community with which I could do this.

Or -- as you wrote -- you could create such subgroup around yourself.

Definitely, but why limit it to just rationalists in that case?

Replies from: Viliam
comment by Viliam · 2020-01-27T19:46:15.778Z · LW(p) · GW(p)
Definitely, but why limit it to just rationalists in that case?

Good point.

Not sure how well a mixed group of rationalists and non-rationalists would function. But you could create more than one group.

comment by Zack_M_Davis · 2020-01-26T23:26:26.699Z · LW(p) · GW(p)

Hopefully, it will be easier in the Bay Area than it would be otherwise.

Speaking as a Bay Area native,[1] I would not use the word "hopefully" here!

(One would hope to find or create a subgroup, but it would be nicer if it were possible to do this somewhere with less-insane housing prices and ambient [LW · GW] culture. Hoping that it needs to be done here on account of just having moved here would be the sunk cost fallacy [LW · GW].)


  1. Raised in Walnut Creek, presently in Berkeley. ↩︎

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-01-27T01:35:27.947Z · LW(p) · GW(p)

Note that as someone who has up and moved multiple times, I can assure you that it's possible to make friends in other cities. If you've never moved out of your home city, I recommend doing it at least once, for a few years, even if you move back at the end.

comment by Raemon · 2020-01-25T02:51:57.547Z · LW(p) · GW(p)

I'm curious how much of this you attribute to (the following random hypotheses I just formed, as well as any other hypotheses you have):

  • tribal integration being generally hard
  • Bay rationalists being particularly bad at Tribal/friendship
  • Bay rationalists not having enough social infrastructure, or other problems distinct from "bad at Tribal" (i.e. I think the math may just not work out for how many friends you can expect to make quickly, and how much help you'll have making friends)
  • specific (possibly subtle) differences from the culture-you-wanted and the culture-that-was-there. (i.e. you pushing for changes or having opinions that ran against the status quo)
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-01-25T13:55:25.367Z · LW(p) · GW(p)

Are you asking about my particular realization here, or this part:

And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that they feel like should be their tribe, but they really don't feel a close connection to most people in it, and feel alienated as a result.

?

Replies from: Raemon
comment by Raemon · 2020-01-25T17:45:51.402Z · LW(p) · GW(p)

Hmm, either I guess. It definitely looks like there are some kind of issues in this space that I’d like to help the Bay community improve at, but am not sure what kind of improvements are tractable and am trying to just get a better shape of the situation. 

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-02-04T23:31:24.662Z · LW(p) · GW(p)

Some thoughts on this:

I personally just am not really made to fit into communities; I do a much better job building my own.

I'd say that in my particular case, this issue screens off a lot of the other issues.

In the case of Bay Area rationality as a whole, I think that in general it does a fairly bad job of being a friendly community for people who want to join communities. Some of the causes of this seem to be (in no particular order):

  • High levels of autism and autism spectrum disorder.
  • A large gender imbalance.
  • Weird status dynamics.
comment by Isnasene · 2020-01-25T01:29:04.396Z · LW(p) · GW(p)
And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that they feel like should be their tribe, but they really don't feel a close connection to most people in it, and feel alienated as a result.

As someone who has considered making the Pilgrimage To The Bay for precisely that reason, and as someone who decided against it partly due to that particular concern, I thank you for giving me a data point on it.

Being a rationalist in the real world can be hard. The set of people who actually worry about saving the world, understanding their own minds, and connecting with others is pretty small. In my bubble at least, picking a random hobby, incidentally becoming friends with someone at it, and then incidentally getting slammed and having an impromptu conversation has been the best-performing strategy so far in terms of success per opportunity-cost. As a result, looking from the outside at a rationalist community that cares about all these things looks like a fantastical life-changing ideal.

But, from the outside view, all the people I've seen who've aggressively targeted those ideals have gotten crushed. So I've adopted a strategy of Not Doing That.

(pssst: this doesn't just apply to the rationalist community! it applies to any community oriented around values disproportionately held by individuals who have been disenfranchised by broader society in any way! there are a lot of implications here and they're all mildly depressing!)

comment by Matt Goldenberg (mr-hire) · 2019-07-02T01:22:53.145Z · LW(p) · GW(p)

LESS BAD ORGANIZATIONS

The Gervais Principle says that when an organization is run by Sociopaths, it inevitably devolves into infighting and politics that the sociopaths use to make decisions, and then blame them on others. What this creates is a misaligned organization - people aren't working towards the same thing, and therefore much wasted work goes towards undoing what others have done, or assigning blame to someone that isn't yourself. Organizations with people that aren't aligned can sometimes luck into good outcomes, especially if the most skilled players (the most skilled sociopaths) want them to. They aren't necessarily dead players, but they're running on borrowed time - borrowed for as long as they're useful to the sociopaths.

Dead organizations are those that are run by Rao's clueless (or less commonly, by Rao's losers, in which case you have a Bureaucracy that outlived its founder). They can't do anything new because they're run by people that can't question the rulesets they're in. As a clueless leading a dead organization, one effective strategy seems to be to accept the memes around you unquestioningly and really execute on them. The most successful people in Silicon Valley make their own rules, but the next tier are the people who take the memes of Silicon Valley and follow them unquestioningly. This is how organizations enter Mythic Mode [LW · GW] - they believe in the culture around them so much that they channel the god of that culture, and are able to attract funding, customers, results, etc. purely through the resulting aura.

Running Good Organizations

Framing the Gervais principle in terms of Kegan:

Losers - Kegan 3

Clueless - Kegan 4

Sociopaths - Kegan 4.5

To run a great organization, the first thing you need is to be led not by a sociopath, but by someone who is Kegan 5. Then you need sociopath repellent.

Short Form Feed is getting too long. I'll write more on good organizations at some point soon.

comment by Matt Goldenberg (mr-hire) · 2020-07-01T19:57:48.803Z · LW(p) · GW(p)

THE THREE TYPES OF RATIONALITY AND EFFECTIVE LEADERSHIP

The Instrumental/Epistemic split is awful.  If rationality is systematized winning, all rationality is instrumental.

So then, what are three types of Instrumental Rationality?

  1. Generative Rationality
    1. What mental models will best help me/my organization/my culture generate ideas that will allow us to systematically win?
  2. Evaluative Rationality
    1. What mental models will best help me/my organization/my culture evaluate ideas, and predict which ones will allow us to systematically win?
  3. Effectuative Rationality
    1. What mental models will best  help me/my organization/my culture implement those ideas in an effective way that will help us to systematically win?

Evaluation typically gets lumped under "Epistemics", Effectuation typically gets lumped under "Instrumentals", and Generation is typically given the shaft - certainly creativity is undervalued as an explicit goal in the rationality community (although it's implicitly valued, in that people who create good ideas are given high status).

Great leaders can switch between these 3 modes at will.  

If you look at Steve Jobs' reality distortion field, it's him being able to switch between the 3 modes at will, only using evaluative rationality when choosing a direction - other times he's operating on Generative and Effectuative Rationality principles. This allows him to eventually shape reality to the vision he generated, using his effectuative principles. By using the proper types of rationality at the right time, he's actually able to shape reality instead of merely predicting it.

If you look at Walt Disney, he used to frequently say a phrase that indicates he knew how to switch between these 3 modes: he used to talk about how he was "actually 3 different Walts: The Dreamer, The Realist, and the Spoiler". Access to these 3 modes allowed Walt to do things that others would have looked at with their Evaluative Rationality and viewed as impossible.

You can see it with Elon Musk too. Look at the difference between how he acts with budgeting and how he acts with deadlines. When he's budgeting, he uses his evaluative rationality; when he's making deadlines, he's using his effectuative rationality - he knows large visions and hard-to-reach goals actually help people take better action. You shouldn't view his deadlines as predictions, but as motivation tools.

Are great leaders then liars?  No, great leaders are Kegan 5 players who don't just say things, but are actually operating through these 3 frameworks (to a first approximation) at any given time. When a great leader is generating, they're not worried about evaluating their ideas. When they're evaluating, they're not worried about effectuating those ideas. When they're effectuating, they're not generating.

They're using whatever framework can make the most MEANING out of the current situation, both now and in the long term. They're skillfully cycling through these frames in themselves - and outputting the truth of whatever ontology they're operating through at the given moment.

One of my worries with the talk about Simulacra Levels and how it relates to Moral Mazes is that it's not distinguishing between Kegan 2 players (who are lying and manipulating the system for their own gain), Kegan 4.5 players (who are lying and manipulating the system because they actually have no ontology to operate through except revenge and power), and Kegan 5 players (who are viewing truth and social dynamics as objects to be manipulated because there is no truth of which tribe they're a part of or what they believe about a specific thing - it's all dependent on what will generate the most meaning for them/their organization/their culture).

It's absolutely imperative that you create systems to filter out Sociopathic Kegan 4.5 lizard people if you want your organization to avoid being captured by self-interest. 

 At the same time, it's absolutely imperative that you have systems that can find, develop and promote Kegan 5 leaders that can create new systems and operate through all 3 types of rationality.  Otherwise your organization's/culture's values won't be able to evolve with the changing situation.

I worry that framing things as Simulacra levels doesn't distinguish between these two types of players.

Replies from: mr-hire, mr-hire, Raemon
comment by Matt Goldenberg (mr-hire) · 2020-07-01T20:16:35.222Z · LW(p) · GW(p)

P.S. Was thinking about writing this up more coherently as a top level post. Is there any interest in that?

Replies from: Dagon
comment by Dagon · 2020-07-01T21:36:48.887Z · LW(p) · GW(p)

I'd like to see it, and even more I'd like to see the tweaking and objections from people who see the levels as exclusive and incremental, rather than filters which can be simultaneously used or switched among as needed.

comment by Matt Goldenberg (mr-hire) · 2020-07-30T00:38:00.159Z · LW(p) · GW(p)

What happens if the parts of your mind responsible for generative rationality, the positive optimistic part, take over without input from Evaluative and Effectuative rationality?  It might look a little like Persistent Euphoric States.

comment by Raemon · 2020-07-01T20:38:30.357Z · LW(p) · GW(p)

One of my worries with the talk about Simulacra Levels and how it relates to Moral Mazes is that it's not distinguishing between Kegan 2 players (who are lying and manipulating the system for their own gain), Kegan 4.5 players (who are lying and manipulating the system because they actually have no ontology to operate through except revenge and power), and Kegan 5 players (who are viewing truth and social dynamics as objects to be manipulated because there is no truth of which tribe they're a part of or what they believe about a specific thing - it's all dependent on what will generate the most meaning for them/their organization/their culture).

At the same time, it's absolutely imperative that you have systems that can find, develop and promote Kegan 5 leaders that can create new systems and operate through all 3 types of rationality.  Otherwise your organization's/culture's values won't be able to evolve with the changing situation.

I worry that framing things as Simulacra levels doesn't distinguish between these two types of players.


This is an interesting concern. I think it's useful to distinguish these things. I'm not sure how big a concern it is for the Simulacra Levels thing to cover this case – my current worry is that the Simulacra concept is trying to do too many things. But, since it does look like Zvi is hoping to have it be a Grand Unified Theory, I agree the Grand Unified version of it should account for this sort of thing.

comment by Matt Goldenberg (mr-hire) · 2019-09-16T18:43:48.859Z · LW(p) · GW(p)

Been mulling over the idea of doing a podcast in which each episode is based on acquiring a particular skillset (self-love, focus, making good investments) instead of just interviewing a particular person.

I interview a few people who have a particular skill (e.g. self-love, focus, creating cash flow businesses), and model the cognitive strategies that are common between them. Then interview a few people who struggle a lot with that skill, and model the cognitive strategies that are common between them. Finally, model a few people who used to be bad at the skill but are now good, and model the strategies that are common for them to make the switch.

The episode is cut to tell a narrative of what the skills to be acquired are, what beliefs/attitudes need to be let go of or acquired, and the process to acquire them, rather than focusing on interviewing a particular person.

If there's enough interest, I'll do a pilot episode. Comment with what skillset you'd love to see a pilot episode on.

Upvote if you'd have 50% or more chance of listening to the first episode.

Replies from: Viliam, mr-hire, William_Darwin
comment by Viliam · 2019-09-16T21:51:48.300Z · LW(p) · GW(p)

Sounds interesting!

The question is, how good are people at introspection: what if the strategies they report are not the strategies they actually use? For example, because they omit the parts that seem unimportant, but that actually make the difference. (Maybe positive or negative thinking is irrelevant, but imagining blue things is crucial.)

Or what if "the thing that brings success" causes the narrative of the cognitive strategy, but merely changing the cognitive strategy will not cause "the thing that brings success"? (People imagining blue things will be driven to succeed in love, and also to think a lot about fluffy kittens. However, thinking about fluffy kittens will not make you imagine blue things, and therefore will not bring you success in love. Even if all people successful in love report thinking about fluffy kittens a lot.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-09-16T23:09:59.615Z · LW(p) · GW(p)

I think it's probably likely that gaining knowledge in this way will have systematic biases (OK, this is probably true of all types of knowledge acquisition strategies, but you pointed out some good ones for this particular knowledge-gathering technique).

Anyways, based on my own research (and practical experience over the past few months doing this sort of modelling for people with/without procrastination issues) here are some of the things you can do to reduce the bias:

  • Try to inner sim using the strategy yourself and see if it works.
  • Model multiple people, and find the strategies that seem to be commonalities.
  • Check for congruence with people as they're talking. Use common indicators of cached answers like instant answers or lack of emotional charge.
  • Make sure people are embodied in a particular experience as they discuss, rather than trying to "figure themselves out" from the outside.
  • Use introspection tools from a variety of disciplines like thinking at the edge, coherence therapy, etc that can allow people to get better access to internal models.

All that being said, there will still be bias, but I think with these techniques there's not SO much bias that it's a useless endeavor.

comment by Matt Goldenberg (mr-hire) · 2020-09-15T19:31:07.985Z · LW(p) · GW(p)

I'm doing interviews for this now.

I've gotten great feedback from people I've interviewed, saying it gave them a better understanding of themselves.

If you're interested in being interviewed, sign up here.

comment by William_Darwin · 2019-09-16T22:05:53.796Z · LW(p) · GW(p)

Sounds interesting. I think it may be difficult to find a person, let alone multiple people on a given topic, who have a particular skill but are also able to articulate it and/or identify the cognitive strategies they use successfully.

Regardless, I'd like to hear about how people reduce repetitive talk in their own heads - how to focus on new thoughts as opposed to old, recurring ones...if that makes sense.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-09-17T00:51:39.508Z · LW(p) · GW(p)

Is this ruminating, AKA repetitively going over bad memories and negative thoughts? Or is it more getting stuck with cached thoughts and not coming up with original things?

comment by Matt Goldenberg (mr-hire) · 2019-08-28T13:57:14.771Z · LW(p) · GW(p)

The four levels of listening, from some old notes:

1. Content - Do you actually understand what this person is saying? Do they understand that you understand?

2. Subtext - Do you actually understand how this person feels about what they're saying? Do they understand that you understand?

3. Intent- Do you actually understand WHY this person is saying what they're saying? Do they understand that you understand?

4. Paradigm - Do you actually understand what all of the above says about who this person is and how they view the world? Do they understand that you understand?

comment by Matt Goldenberg (mr-hire) · 2019-07-10T20:47:32.304Z · LW(p) · GW(p)

SOCIOPATH REPELLENT FOR GOOD ORGANIZATIONS AND COMMUNITIES

The role of the Kegan 5 in a good organization:

1. Reinvent the rules and mission of the organization as the landscape changes, and frame them in a way that makes sense to the Kegan 3s and 4s.

2. Notice when sociopaths are arbitraging the difference between the rules and the terminal goals, and shut it down.

----------

Sociopaths (in the Gervais principle sense) are powerful because they're Kegan 4.5. They know how to take the realities of Kegan 4's and 3's and deftly manipulate them, forcing them into alignment with whatever is a good reality for the Sociopath.

The most effective norm I know to combat this behavior is Radical Transparency. Radical transparency is different from radical honesty. Radical honesty says that you should ignore consideration and consequences in favor of courage [LW(p) · GW(p)]. Radical transparency doesn't make any suggestions about what you should say, only that everyone in the organization should be privy to things everyone says. This makes it exceedingly hard for sociopaths to maintain multiple realities.

  • One way to implement radical transparency is to do what David Ogilvy used to do: if someone used BCC in their emails too much, he would fire them. That's an effective Sociopath repellent.
  • Another way to implement radical transparency is to record all your conversations and make them available to everyone, like Bridgewater does. That's an effective Sociopath repellent.

Once I was part of an organization that was trying to create a powerful culture. Someone had just told us about the recording all conversations thing, so me and another leader in the organization decided to try it in one of our conversations. We found we had to keep pausing the recording because the level of honesty we were having with each other would cause our carefully constructed narratives with everyone else to crumble. We were acting as sociopaths, and we had constructed an awful organization.

I left shortly after, but it would have been an exceedingly painful process to convert to a good organization at that time. Creating sociopath-repellent organizations is painful because most of us act like sociopaths some of the time, and operating from a place of universal common knowledge means that we have to be prepared to bring our full selves to every situation, instead of crafting ourselves to the person in front of us.

---------

The second most effective norm I know to act as sociopath repellent is that anyone should be able to apply the norms to anyone else. Here's how I described that in a previous post:

Anyone should be able to apply the values to anyone else. If "Give critical feedback ASAP, and receive it well" is a value, then the CEO should be willing to take feedback from the new mail clerk. As soon as this stops being the case, the 3's go looking for their validation elsewhere, and the 4's get disillusioned.

Besides selective realities, another way that sociopaths gain advantage is through selective application of the norms when it suits them. By creating norms that anyone can apply to anyone else (and making them clear by providing the opposites, as well as examples) you prevent this behavior from sociopaths and take away one of their main weapons.

Once, I was the leader of an organization (ok, I was actually the captain of a team in high school, but same thing). I was elected leader because I exemplified the norms as well as or better than most others, and had the skills to back it up. Once I became the leader, I eventually ran into challenges with sociopathic (again in the Gervais principle sense) behavior trying to undermine my authority. Instead of leaning back on the principles that had earned me the position, I leaned on my power to force people to do what I wanted, while ignoring the principles that got me there. This made others lose faith in the principles, and killed morale, leading to infighting and politics.

The lesson for me as a leader was to lead with influence based on moral authority, not power. But the lesson for me as an organization designer was to allow ANYBODY to enforce the norms, not just the leader, and to make this ability part of the norms themselves. This would have immediately prevented me from ruining team morale when I descended into petty behavior.

-------

The final important behavior for sociopath repellent is to notice when the instrumental values of the organization aren't serving the terminal goals, and relentlessly redefine the core values to make them closer to spirit, rather than the letter. This is important because Gervais Sociopaths ALSO have this ability to notice when the instrumental values aren't serving the terminal goals, and will arbitrage this difference for their own gain. A good Kegan 5 leader will be able to point to the values, show how they're meant to lead to the results, then lead the organization in redefining them so that sociopaths can't get away with anything.

Occasionally, Kegan 5 leaders will have to take a look at the landscape, notice it's changed, and make substantial changes to the values or mission of an organization to keep up with the current reality.

------

The next question becomes: if you want a long-lasting organization, and a skilled Kegan 5 leader is necessary for a long-running organization, how do you get a steady stream of Kegan 5 leaders? This is The Succession Problem. One answer is to create Deliberately Developmental organizations [LW · GW] that put substantial effort into helping their members become more developed humans. That will be the subject of the next post in the sequence.

Replies from: ChristianKl, Viliam, None
comment by ChristianKl · 2019-08-08T15:36:24.806Z · LW(p) · GW(p)

It feels to me unwise to use the term Sociopaths in this way because it means that you lose the ability to distinguish clinical sociopaths from people who aren't.

Distinguishing clinical sociopaths from people that aren't is important because interaction with them is fundamentally different. Techniques for dealing with grief that were taught to prisoners helped reduce recidivism rates for the average prisoner but increased it for sociopaths.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-08T16:23:00.823Z · LW(p) · GW(p)

I'm importing the term from Venkatesh Rao and his essays on the Gervais principle. I agree this is an instance of word inflation, which is generally bad. From now on I'll start referring to this as "Gervais Sociopaths" in my writing.

comment by Viliam · 2019-09-17T21:46:10.553Z · LW(p) · GW(p)
Radical transparency doesn't make any suggestions about what you should say, only that everyone in the organization should be privy to things everyone says. This makes it exceedingly hard for sociopaths to maintain multiple realities.

Seems like it could work, but I wonder what other effects it could have. For example, if someone makes a mistake, you can't tell them discreetly; the only way to provide feedback on a minor mistake is to announce it to the entire company.

By the way, are you going to enforce this rule after working hours? What prevents two bad actors from meeting in private and agreeing to pretend to have some deniable bias in order to further their selfish goals? Like, some things are measurable, but some things are a matter of subjective judgment, and two people could agree to always have the subjective judgment colored in each other's favor, and against their mutual enemy. In a way that even if other people notice, you could still insist that what X does simply feels right to you, and what Y does rubs you the wrong way even if you can't explain why.

Also, people in the company would be exposed to each other, and perhaps the vulnerability would cancel out. But then someone leaves, is no longer part of the company, but still has all the info on the remaining members. Could this info be used against the former colleagues? The former colleagues still have info on the one that left, but not on his new colleagues. Also, if someone strategically joins only for a while, he could take care not to expose himself too much, while everything else would be exposed to him.

the CEO should be willing to take feedback from the new mail clerk.

This assumes the new mail clerk will be a reasonable person. Someone who doesn't understand the CEO's situation or loves to create drama could use this opportunity to give the CEO tons of useless feedback. And then complain about hypocrisy when others tell him to slow down.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-09-17T23:07:08.256Z · LW(p) · GW(p)
Seems like it could work, but I wonder what other effects it could have. For example, if someone makes a mistake, you can't tell them discreetly; the only way to provide feedback on a minor mistake is to announce it to the entire company. By the way, are you going to enforce this rule after working hours?
What prevents two bad actors from meeting in private and agreeing to pretend to have some deniable bias in order to further their selfish goals? Like, some things are measurable, but some things are a matter of subjective judgment, and two people could agree to always have the subjective judgment colored in each other's favor, and against their mutual enemy. In a way that even if other people notice, you could still insist that what X does simply feels right to you, and what Y does rubs you the wrong way even if you can't explain why.
Also, people in the company would be exposed to each other, and perhaps the vulnerability would cancel out. But then someone leaves, is no longer part of the company, but still has all the info on the remaining members. Could this info be used against the former colleagues? The former colleagues still have info on the one that left, but not on his new colleagues. Also, if someone strategically joins only for a while, he could take care not to expose himself too much, while everything else would be exposed to him.

I had already updated away from this particular tool, and this comment makes me update further. I still have the intuition that this can work well in a culture that has transcended things like blame and shame, but for 99% of organizations radical transparency might not be the best tool.

This assumes the new mail clerk will be a reasonable person. Someone who doesn't understand the CEO's situation or loves to create drama could use this opportunity to give the CEO tons of useless feedback. And then complain about hypocrisy when others tell him to slow down.

Yes, there are in fact areas where this can break down. Note that ANY rule can be gamed, and the proper thing to do is to refer back to values rather than trying to make ungameable rules. In this case, the others might in fact point out that the values of the organization are such that everyone should be open to feedback, including mail clerks. If this happened persistently with say 1 in every 4 people, then the organization would look at their hiring practices to see how to reduce that. If this happened consistently with new hires, the organization would look at their training practices, etc.

The sociopath repellent here only works in the context of the other things I've written about good organizations, like strongly teaching and ingraining the values and making sure decisions always point back to them, having strong vetting procedures, etc. Viewing this or other posts in the series as a list of tips risks taking them out of context.


comment by [deleted] · 2019-09-17T23:51:26.400Z · LW(p) · GW(p)

This note won't make sense to anyone who isn't already familiar with the Sociopath framework in which you're discussing this, but I did want to call out that Venkat Rao (at least when he wrote the Gervais Principle) explicitly framed sociopaths as amoral, and has fairly clearly (especially relative to his other opinions) stated that he thinks having more Sociopaths wouldn't be a bad thing. Here are a few quotes from Morality, Compassion, and the Sociopath which talk about this:

So yes, this entire edifice I am constructing is a determinedly amoral one. Hitler would count as a sociopath in this sense, but so would Gandhi and Martin Luther King.

In all this, the source of the personality of this archetype is distrust of the group, so I am sticking to the word “sociopath” in this amoral sense. The fact that many readers have automatically conflated the word “sociopath” with “evil” in fact reflects the demonizing tendencies of loser/clueless group morality. The characteristic of these group moralities is automatic distrust of alternative individual moralities. The distrust directed at the sociopath though, is reactionary rather than informed.

Sociopaths can be compassionate because their distrust only extends to groups. They are capable of understanding and empathizing with individual pain and acting with compassion. A sociopath who sets out to be compassionate is strongly limited by two factors: the distrust of groups (and therefore skepticism and distrust of large-scale, organized compassion), and the firm grounding in reality. The second factor allows sociopaths to look unsentimentally at all aspects of reality, including the fact that apparently compassionate actions that make you “feel good” and assuage guilt today may have unintended consequences that actually create more evil in the long term. This is what makes even good sociopaths often seem callous to even those among the clueless and losers who trust the sociopath’s intentions. The apparent callousness is actually evidence that hard moral choices are being made.

When a sociopath has the resources for (and feels the imperative towards) larger scale do-gooding, you get something like Bill Gates’ behavior: a very careful, cautious, eyes-wide-open approach to compassion. Gates has taken on a world-hunger sized problem, but there is very little ceremony or posturing about it. It is sociopath compassion. Underlying the scale is a residual distrust of the group — especially the group inspired by oneself — that leads to the “reluctant messiah” effect. Nothing is as scary to the compassionate and powerful sociopath as the unthinking adulation and following inspired by their ideas. I suspect the best among these lie awake at night worrying that if they were to die, the headless group might mutate into a monster driven by a frozen, unexamined moral code. Which is why the smartest attempt to engineer institutionalized doubt, self-examination and formal checks and balances into any systems they design.

I hope my explanation of the amorality of the sociopath stance makes a response mostly unnecessary: I disagree with the premise that “more sociopaths is bad.” More people taking individual moral responsibility is a good thing. It is in a sense a different reading of Old Testament morality — eating the fruit of the tree of knowledge and learning to tell good and evil apart is a good thing. An atheist view of the Bible must necessarily be allegorical, and at the risk of offending some of you, here’s my take on the Biblical tale of the Garden of Eden: Adam and Eve were clueless, having abdicated moral responsibility to a (putatively good) sociopath: God. Then they became sociopaths in their own right. And were forced to live in an ecosystem that included another sociopath — the archetypal evil one, Satan — that the good one could no longer shield them from. This makes the “descent” from the Garden of Eden an awakening into freedom rather than a descent into baseness. A good thing.

I apologize if this just seems like nitpicking your terminology, but I'm calling it out because I'm curious whether you agree with his abstract definition but disagree with his moral assessment of Sociopaths, vice versa, or something else entirely? As a concrete example, I think Venkat would argue that early EA was a form of Sociopath compassion and that for the sorts of world-denting things a lot of LWers tend to be interested in, Sociopathy (again, as he defines it) is going to be the right stance to take.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-09-18T00:00:04.876Z · LW(p) · GW(p)

Rao's sociopaths are Kegan 4.5: they're nihilistic and aren't good for long-lasting organizations because they view the notion of organizational goals as nonsensical. I agree that there's no moral bent to them, but if you're trying to create an organization with a goal they're not useful. Instead, you want an organization that can develop Kegan 5 leaders.

Replies from: Raemon
comment by Raemon · 2019-09-18T00:15:03.244Z · LW(p) · GW(p)

This doesn't seem like it's addressing Anlam's question though. Gandhi doesn't seem like a nihilist. I assume (from this quote, which was new to me) that in Kegan terms, Rao probably meant something ranging from 4.5 to 5.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-09-18T00:59:12.541Z · LW(p) · GW(p)

I think Rao was at Kegan 4.5 when he wrote the sequence and didn't realize Kegan 5 existed. Rao was saying there's "no moral bent" to Kegan 4.5 because he was at the stage of realizing there was no such thing as morals.

At that level you can also view Kegan 4.5's as obviously correct and as the ones who end up moving society forward in interesting directions; they're forces of creative destruction. There's no view of Kegan 5 at that level, so you'll mistake Kegan 5's for either Kegan 3's or other Kegan 4.5's, which may be the cause of the confusion here.

comment by Matt Goldenberg (mr-hire) · 2020-01-25T18:28:17.871Z · LW(p) · GW(p)

How to Read a Book is the quintessential how-to book on gaining knowledge from a modernist perspective. What would a metamodern version of HTRAB look like?

HTRAB says that the main question you should be asking when reading a book is "Is this true?" The relationship you're concerned with is between the material and the real world.

But in a meta-modern perspective, you want to consider many other relationships.

One of those is the three-way relationship between yourself, the material, and reality. Asking questions like "What new perspectives can I gain from this?" and "How does this relate to my other models of the world?"

Another is the relationship between the author and their source material. What does this writing say about the perspective of the author? Why did they choose to write this? This is bringing in a more post-modern/critical theory perspective.

HTRAB recommends "Synoptic Reading" - finding many books on the same subject or that circle around a specific topic to get a broad overview of the topic.

A meta-modern take would also look into other ways of grouping books. What about exploring facets of yourself through exploring authors that think differently and similarly to you? What about crafting a narrative as you dig into interesting parts of each book you move through?

What other takes would a Meta-Modern version of HTRAB encompass?

comment by Matt Goldenberg (mr-hire) · 2019-06-21T18:19:20.188Z · LW(p) · GW(p)

ON SAFE SPACES

There's at least 3 types of psychological "safe spaces":

1. We'll protect you.
We'll make sure there's nothing in the space that can actively touch your wounds. This is a place to heal with plenty of sunshine and water. Anyone who's in this space is agreeing to be extra careful to not poke any wounds, and the space will actively expel anyone who does. Most liberal arts colleges are trying to achieve this sort of safety.

2. Own your safety.
There may or may not be things in this space that can actively touch your wounds. You're expected to do what's necessary to protect them, up to and including leaving the space if need be. You have an active right to know your own boundaries and participate or not as needed. Many self-help groups are looking to achieve this sort of safety.

3. We'll make you grow.
This space is meant to poke at your wounds, but only to make you grow. We'll probably waterboard the shit out of you, but we won't let you drown. Anyone who's too fragile for this environment should enter at their own peril. This is Bridgewater, certain parts of the US Military, and other DDOs.

This is a half-formed thought that pings enough of my other important concepts that it seems worth sharing. Which one you think should be the default relates a lot to how you view the world.

It relates to:

- Why you would choose decoupling vs. contextualizing norms (https://www.lesswrong.com/…/decoupling-vs-contextualising-n… [LW · GW])

- Why you would allow or not allow punch bug (https://medium.com/@Th…/in-defense-of-punch-bug-68fcec56cd6b)

- Whether you want to protect Clueless, Losers, or Sociopaths (https://www.ribbonfarm.com/…/the-gervais-principle-or-the-…/)

- The left/right culture war.

comment by Matt Goldenberg (mr-hire) · 2019-08-09T13:56:02.837Z · LW(p) · GW(p)

There's a pattern I've noticed in my self that's quite self-destructive.

It goes something like this:

  • Meet new people that I like, try to hide all my flaws and be really impressive, so they'll love me and accept me.
  • After getting comfortable with them, noticing that they don't really love me if they don't love the flaws that I haven't been showing them.
  • Stop taking care of myself, downward spiral, so that I can see they'll take care of me at my worst and I know they REALLY love me.
  • People justifiably get fed up with me not taking care of myself, and reject me. This triggers the thought that I'm unlovable.
  • Because I'm not lovable, when I meet new people, I have to hide my flaws in order for them to love me.

This pattern is destructive, and has been one of the main things holding me back from becoming as self-sufficient as I'd like. I NEED to be dependent on others to prove they love me.

What's interesting about this pattern is how self-defeating it is. Does people not wanting to support me mean that they don't love me? No, it just means that they don't want to support another adult. Does hiding all my flaws help people accept me? No, it just sets me up for a crash later. Does constantly crashing from successful ventures help any of this? No, it makes it harder to seem successful, AND harder to be able to show my flaws without having people run away.

Replies from: mr-hire, ChristianKl, Raemon
comment by Matt Goldenberg (mr-hire) · 2020-07-02T12:38:17.704Z · LW(p) · GW(p)

I've made significant progress on this by working on self-love and self-trust.

comment by ChristianKl · 2019-08-12T11:06:04.107Z · LW(p) · GW(p)

That sounds to me like the belief "I'm not lovable" causes you trouble and it would make sense to get rid of it. Transform Yourself provides one framework for how to go about it. The Lefkoe method would be a different one.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-12T11:15:29.004Z · LW(p) · GW(p)

I've tried both of those, as well as a host of other tools. I only recently (in the past year) developed the belief "I am lovable", which allowed me to see this pattern. I can now belief report both "I am lovable" and "I'm not lovable".

comment by Raemon · 2019-08-09T22:21:08.250Z · LW(p) · GW(p)

Don't have much else to say for now but :(

comment by Matt Goldenberg (mr-hire) · 2019-12-04T22:40:53.609Z · LW(p) · GW(p)

As part of the Athena Rationality Project, we've recently launched two new prototype apps that may be of interest to LWers

Virtual Akrasia Coach

The first is a Virtual Akrasia Coach, which comes out of a few months of studying various interventions for Akrasia, then testing the resulting ~25 habits/skills through internet-based lessons to refine them.  We then took the resulting flowchart for dealing with Akrasia, and created a "Virtual Coach" that can walk you through a work session, ensuring your work is focused, productive, and enjoyable.

Right now about 10% of people find it useful to use in every session, 10% of people find it useful to use when they're procrastinating, and 10% of people find it useful to use when they're practicing the anti-akrasia habits. The rest don't find it useful, or think it would be useful but don't tend to use it.

I know many of you may be wondering how the idea of 25 skills fits in with the Internal Conflict model of akrasia. One way to frame the skills is that for people with chronic akrasia, we've found that they tend to have certain patterns that lead to internal conflict - for instance, one side thinks it would be good to work on something, but another side doesn't like uncertainty.  You can solve this by internal double crux, or you can have a habit of always knowing your next action so there's no uncertainty.  By using this and the other 24 tools you can prevent a good portion of internal conflict from showing up in the first place.

Habit Installer/Uninstaller App

The habit installer/uninstaller app is an attempt to create a better process for creating TAPs, using a modified Murphyjitsu process to anticipate setbacks for those TAPs.

Here's how it works.

  1. When you think of a new TAP to install, add it to your Habit Queue.
  2. When the TAP reaches the top of the Habit Queue, it gives you a "Conditioning Session" - these are a set of audio sessions that take you through processes to strengthen habits, such as visualization, memory re-consolidation, and mental contrasting.
  3. The app will check in with you about how frequently you've been executing the TAP, using a spaced repetition schedule, giving you more conditioning sessions when you're likely to fail at your habit, starting frequently then less and less frequently as you master the habit.
  4. When the habit is 10% mastered, you'll be walked through a murphyjitsu process, coming up with new habits and actions that can prevent you from failing to install this habit.
  5. Any new habits you create using the Murphyjitsu process are added to the habit queue, making the process fractal.
  6. When a habit is 100% mastered, you no longer receive conditioning sessions or checkins, allowing you room to install more TAPs.

The app is definitely in prototype form, and quite ugly and hacky, but I've personally found it quite useful for creating new habits.
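For the programmers here, below is a toy sketch of the bookkeeping the flow above implies. To be clear, this is only an illustration of the idea - the names, the mastery numbers, and the scheduling curve are all made up for the example, not taken from the app's actual code.

```typescript
// Toy model of the Habit Queue: each habit is a TAP plus a mastery estimate
// and a next check-in date that stretches out as mastery grows.
interface Habit {
  trigger: string;          // the "trigger" half of the TAP
  action: string;           // the "action plan" half of the TAP
  mastery: number;          // 0 to 1, estimated from check-ins
  murphyjitsuDone: boolean; // the Murphyjitsu pass happens once, around 10% mastery
  nextCheckIn: Date;
}

const habitQueue: Habit[] = [];

function addHabit(trigger: string, action: string): void {
  habitQueue.push({ trigger, action, mastery: 0, murphyjitsuDone: false, nextCheckIn: new Date() });
}

// Spaced-repetition flavor: check-ins get less frequent as mastery grows.
function scheduleNextCheckIn(habit: Habit): void {
  const days = Math.ceil(1 + habit.mastery * 30); // assumed curve: ~daily at first, ~monthly near mastery
  habit.nextCheckIn = new Date(Date.now() + days * 24 * 60 * 60 * 1000);
}

function recordCheckIn(habit: Habit, executedTap: boolean): void {
  habit.mastery = Math.max(0, Math.min(1, habit.mastery + (executedTap ? 0.05 : -0.02)));

  if (!habit.murphyjitsuDone && habit.mastery >= 0.1) {
    habit.murphyjitsuDone = true;
    // Murphyjitsu step: imagine ways this habit fails, then push preventative
    // habits back onto the queue, making the process fractal, e.g.:
    // addHabit("When I sit down at my desk", "write my next action on a sticky note");
  }

  if (habit.mastery < 1) {
    scheduleNextCheckIn(habit); // at 100% mastery, check-ins and conditioning sessions stop
  }
}
```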

As an experiment for this particular app based on learning from our previous Akrasia Coach prototype, we're charging a small ($2.99) fee for trying the prototype. The fee is basically to get more committed users, and will of course be refunded if at any point you decide the app is not for you or too early stage.

If you're interested in that, the link to test it out is here.

Note that we've been getting reports of the confirmation emails ending up in spam, so be sure to check your spam folder once you sign up.

Anyways, feel free to try both of those out, and if you have any questions, I'll do my best to answer.

comment by Matt Goldenberg (mr-hire) · 2020-09-15T19:27:22.111Z · LW(p) · GW(p)

Trying to describe a particular aspect of Moloch I'm calling hyper-inductivity:

 

The machine is hyper-inductive. Your descriptions of the machine are part of the machine. The machine wants you to escape, that is part of the machine. The machine knows that you know this. That is part of the machine.

Your trauma fuels the machine. Healing your trauma fuels the machine. Traumatizing your kids fuels the machine. Failing to traumatize your kids fuels the machine.

Defecting on the prisoner's dilemma fuels the machine. Telling others not to defect on the prisoner's dilemma fuels the machine.

Your intentional community is part of the machine. Your meditation practice is part of the machine. Your art installation is part of the machine. Your protest is part of the machine.

A select few will escape the machine. That is part of the machine. The machine will simplify, the machine will distort, the machine will politicize, the machine will consumerize.

Jesus is part of the machine. Buddha is part of the machine. Elijah is part of the machine. Zeus is part of the machine.

Your Kegan-5 ability to see outside the machine is part of the machine. Your mental models are part of the machine. Your bayesianism is part of the machine. Your shitposts are part of the machine.

The machine devours. The machine creates. Your attempts to protect your ideas from the machine are part of the machine.

Your attempts to fix the machine are part of the machine. Your attempts to see that the machine is an illusion are part of the machine. Your attempts to use the machine for your own purposes are part of the machine.

The machine's goal is to grow the machine. The machine does not have a goal. The machine is designed to be anti-fragile. The machine is not designed.

This post is part of the machine.
 

comment by Matt Goldenberg (mr-hire) · 2020-08-12T18:56:10.471Z · LW(p) · GW(p)

Recently went on a quest to find the best way to minimize the cord clutter, cord-management hassle, and charging anxiety that create a dozen trivial inconveniences throughout the day.
 

Here's what worked for me:

1. For each area that is a wire maze, I get one of these surge protectors with 18 outlets and 3 USB slots: https://amzn.to/33UfY7i

2. For everywhere that I am likely to want to charge something, I fill 1-3 of the slots with these 6ft multi-charging USB cables (more slots if I'm likely to want to charge multiple things). I get a couple extras for travel so that I can simply leave them in my travel bag: https://amzn.to/33RV48T

3. For everywhere that I am likely to want to plug in my laptop, I get one of these universal laptop chargers. I save the attachments somewhere safe for future laptops, and leave the attachment that works for my laptop plugged in at each place. I get an extra to keep and put into my travel bag: https://amzn.to/3iwHjkf

4. I run the USB cords and laptop cord through these nifty little cord clips, so they stay in place: https://amzn.to/31KdcPA

5. All the excess wiring, along with the surge protector, is put into this cord box. I use the twisty ties with that to secure wires from dangling, and ensure they go into the box neatly. Suddenly, the wires are super clean: https://amzn.to/2PIGbxA

6. (Bonus Round) I have a charging case for my phone, so the only time I have to worry about charging it is at night. I use this one for my Pixel 3A, but you'll have to find one that works for your phone: https://amzn.to/31MuxHn

7. (Bonus Round 2): Work to go wireless for things that have that option, like headphones.

This will set you back $200 - $500 (depending on how much of each thing you need) but man is it nice to not ever have to worry about finding a charging cord, moving a cord around, remembering to pack your charger, tripping over wires, or having the wire jungle distract you, etc.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-08-12T18:58:57.524Z · LW(p) · GW(p)

^ Affiliate links.  Feel free to search them on your own if you don't want some of the money to go to me.  If affiliate links are against the rules, let me know, mods!

Replies from: Dagon
comment by Dagon · 2020-08-12T21:08:53.627Z · LW(p) · GW(p)

Not a mod, but personally, I'm happy to have links to products that long-term members personally use and recommend. I'd mildly prefer smile.amazon.com links over affiliate or normal links, but not enough to worry about it.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2020-08-13T01:14:59.545Z · LW(p) · GW(p)

A link can be both affiliate and smile, they stack.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-08-13T01:22:44.455Z · LW(p) · GW(p)

But I'm not sure how to do it with their affiliate link creator. The default link they give me is not smile.

comment by Matt Goldenberg (mr-hire) · 2019-11-22T23:03:45.752Z · LW(p) · GW(p)

In response to a "sell LW to me" post:

I think that the thing LW is trying to do is hard. I think that there's a legitimate split in the community, around the things you're calling "cyber-bullying" - I think there should be a place for Crocker's-rules-style combat culture reasoning, but I also want a community that is charitable and respectful and kind while maintaining good epistemics.

I also think there's a legitimate split in the community around the things you're calling "epistemically sketchy" - I think there should be a place for post-rational ponderings, but I also think there should be a place for not catering to them.

I have an impression that LW is trying to cater to both sides of the splits, and basically ending up in a middle ground that no one wants, driving a lot of the most interesting posters away.

That being said, I'm quite impressed by the team running LW. I'm quite impressed by the product that is LW. I'm also quite impressed by the experiments and direction of LW - I perceive it as actively getting better over time, and grappling with hard questions. I don't know a better place to put things to create common knowledge about things I wish were common knowledge in the rationalist community, and I expect that things I put there will benefit from the improvements over time.

I think that the mods are justifiably being very careful about imposing norms, because splitting the community is very dangerous, but I do have a small amount of faith they'll navigate it correctly - enough to make posting on there worth it.

comment by Matt Goldenberg (mr-hire) · 2019-08-22T21:13:05.464Z · LW(p) · GW(p)

A frequent failure mode that I have as a leader:

  • Someone comes on to a new project, and makes a few suggestions.
  • All of those suggestions are things we/I have thought about and discussed in detail, and we have detailed reasons why we've made the decisions we have.
  • I tell the person those reasons.
  • The person comes away feeling like the project isn't really open to criticism or feedback, and their ideas won't be heard.

I think a good policy is to just say yes to WHATEVER experiment someone who is new to the project proposes, and let them take their own lumps, or be pleasantly surprised.

But, despite having known this for a bit, I always seem to forget to do this when it matters. I wonder if I can add this to our onboarding checklists.

Replies from: None, mr-hire
comment by [deleted] · 2019-08-22T22:29:26.015Z · LW(p) · GW(p)

I've rarely seen teams do this well and agree that your proposed approach is much better than the alternative in many cases. I've definitely seen cases where insiders thought something was impossible and then a new person went and did it. (I've been the insider, the new person who ignored the advice and succeeded, and the new person who ignored the advice and failed.)

That said, I think there's a middle ground where you convey why you chose not to do something but also leave it open for the person to try anyway. The downside of just letting them do it without giving context is they may fail for a silly rather than genuine reason.

What I'm suggesting could look something like the following.

That's an awesome idea! This is something some of us explored a bit previously and decided not to pursue at the time for X, Y, and Z reasons. However, as insiders, we are probably biased towards viewing things as hard, so it's important for team health to have new people re-try and re-explore things we may have already thought about. You should definitely not take our reasons as final and feel free to try The Thing if you still feel like it might work or you'll learn something by doing so.

comment by Matt Goldenberg (mr-hire) · 2019-08-24T15:10:54.076Z · LW(p) · GW(p)

Some concrete updates I had around this idea, based on discussion on Facebook.

  • One really relevant factor is the criticism coming from a person in authority, and leaders should be extra careful of criticizing ideas. By steering them towards other, less authoritative figures that you think will give valid critiques, you can avoid this failure.
  • Another potential obvious pitfall here is people feeling like they were set up to fail by not having all the relevant information. The idea here is to make people feel like they have agency, obviously not to hide information.
  • Even if you do the above, people can feel patronized if it seems like you're doing this as a tactic because you think they can't take criticism. This can be true even if giving them criticism would indeed be harmful for the team dynamic. Thus, emphasizing ways to increase agency, rather than merely avoiding criticism, is the key here.
Replies from: Raemon
comment by Raemon · 2019-08-24T17:29:58.443Z · LW(p) · GW(p)

This combination of failure modes seems pretty dicey.

I think I've encountered something similar in relationships, where my naive thought was "they're doing something wrong/harmful and I should help them avoid it" but I eventually realized "them having an internal locus of control and not feeling like I'm out to micromanage them is way more important than any given suboptimal thing they're doing."

comment by Matt Goldenberg (mr-hire) · 2019-07-22T20:50:38.309Z · LW(p) · GW(p)

I think philosophical bullet biting is usually wrong. It can be useful to make a theory that you KNOW is wrong, and bite a bullet in order to make progress on a philosophical problem. However, I think it can be quite damaging to accept a practical theory of ethics that feels practical and consistent to you, but breaks some of your major moral intuitions. In this case I think it's better to go "I don't know how to come up with a consistent theory for this part of my actions, but I'll follow my gut instead."

Note that this is the opposite of becoming a robust agent. [LW · GW] However, the alternative is CREATING a robust agent that is not in fact aligned with its creator. I've seen people who adopted a moral view for consistency, and now make choices that they NEVER would have endorsed before they chose to bite bullets for consistency.

I think this is one of my major disagreements with Raemon's view of becoming a robust agent.

Replies from: SaidAchmiz, Raemon, Pattern
comment by Said Achmiz (SaidAchmiz) · 2019-07-22T22:23:27.507Z · LW(p) · GW(p)

Related: this old comment of mine about rules and exceptions [LW(p) · GW(p)].

Replies from: Raemon
comment by Raemon · 2019-07-22T22:48:05.018Z · LW(p) · GW(p)

FYI I think that'd make a good post with a handy title that'd make it easier to refer to

Replies from: SaidAchmiz, Pattern
comment by Pattern · 2019-07-23T02:29:58.042Z · LW(p) · GW(p)

"There are no exceptions." "Rules contain exceptions." "How to make Rules." "How to make Exceptions."

comment by Raemon · 2019-07-22T21:33:44.380Z · LW(p) · GW(p)

Thanks for the crisp articulation.

One short answer is: "I, Raemon, do not really bite bullets. What I do is something more like "flag where there were bullets I didn't bite, or areas that I am confused about, and mark those on my Internal Map with a giant red-pen 'PLEASE EVALUATE LATER WHEN YOU HAVE TIME AND/OR ARE WISER' label."

One example of this: I describe my moral intuitions as "Sort of like median-preference utilitarianism, but not really. Median-preference-utilitarianism seems to break slightly less often in ways slightly more forgiveable than other moral theories, but not by much."

Meanwhile, my decision-making is something like "95% selfish, 5% altruistic within the 'sort of but not really median-preference-utilitarian-lens', but I look for ways for the 95% selfish part to get what it wants while generating positive externalities for the 5% altruistic part." And I endorse people doing a similarly hacky system as they figure themselves out.

(Also, while I don't remember exactly how I phrased things, I don't actually think robust agency is a thing people should pursue by default. It's something that's useful for certain types of people who have certain precursor properties. I tried to phrase my posts like 'here are some reasons it might be better to be more robustly-agentic, where you'll be experiencing a tradeoff if you don't do it', but not making the claim that the RA tradeoffs are correct for everyone)

Replies from: Raemon
comment by Raemon · 2019-07-22T21:35:36.002Z · LW(p) · GW(p)

On the flipside, I think a disagreement I have with habryka (or did, a year or two ago), was something like habryka saying: "It's better to build an explicit model, try to use the model for real, and then notice when it breaks, and then build a new model. This will cause you to underperform initially but eventually outclass those who were trying to hack together various bits of cultural wisdom without understanding them."

I think I roughly agree with that statement of his, I just think that the cost of lots of people doing this at once are fairly high and that you should instead do something like 'start with vague cultural wisdom that seems to work and slowly replace it with more robust things as you gain skills that enable you to do so.'

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-22T21:51:26.590Z · LW(p) · GW(p)
start with vague cultural wisdom that seems to work and slowly replace it with more robust things as you gain skills that enable you to do so.'

I think the thing I actually do here most often is start with a bunch of incompatible models that I learned elsewhere, then try to randomly apply them and see my results. Over time I notice that certain parts work and don't, and that certain models tend to work in certain situations. Eventually, I examine my actual beliefs on the situation and find something like "Oh, I've actually developed my own theory of this that ties together the best parts of all of these models and my own observations." Sometimes I help this along explicitly by introspecting on the switching rules/similarities and differences between models, etc.

This feels related to the thing that happens with my moral intuitions, except that there are internal models that didn't seem to come from outside or my own experiences at all, basic things I like and dislike, and so sometimes all these models converge and I still have a separate thing that's like NOPE, still not there yet.

Replies from: Raemon
comment by Raemon · 2019-07-22T22:03:59.881Z · LW(p) · GW(p)
I think the thing I actually do here most often is start with a bunch of incompatible models that I learned elsewhere, then try to randomly apply them and see my results.

This seems basically fine, but I mean my advice to apply to, like, 4 and 12 year olds who don't really understand what a model is. Anything model-shaped or robust-shaped has to bootstrap from something that's more Cultural wisdom shaped. (but, I probably agree that you can have cultural wisdom that more directly bootstraps you into 'learn to build models')

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-22T22:14:31.961Z · LW(p) · GW(p)

I think I was viewing "cultural wisdom" as basically its own blackbox model, and in practice I think this is basically how I treat it.

Nitpick: Humans are definitely creating models at 12, and able to understand that what they're creating are models.

comment by Pattern · 2019-07-23T02:12:09.635Z · LW(p) · GW(p)

How does this compare with empiricism - specifically saying "This is testable, so let's test it."?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-23T18:36:25.509Z · LW(p) · GW(p)

I think there's an inferential distance step I'm missing here, because I'm actually a bit at a loss as to how to relate my post to empiricism.

comment by Matt Goldenberg (mr-hire) · 2020-06-15T15:34:24.456Z · LW(p) · GW(p)

Something I've been thinking about lately is the concept of Aesthetic Pathology: the idea that our traumas and beliefs can shape what we allow ourselves to see as beautiful or ugly.

Take for instance the broad aesthetic of order, or chaos. Depending on what we've been punished or admired for, we may find one or the other aesthetic beautiful.

This can then bleed into influencing our actual beliefs, we may think that someone who keeps order is "good" if we have the order aesthetic, or have the belief that "in order to get things done we must maintain order".

The counter to this is to begin to develop what you could call Aesthetic Nuance - Recognizing that different things can be beautiful or ugly for different situations.

Chaos can in fact have its own beauty. Once we realize that, it can bleed through into our beliefs, and we can realize that in this situation, in order to act fast enough to get things done, we must embrace the beauty of chaos.

I've seen this show up in the Postrationality community - many were traumatized by the rationality aesthetic. They develop an Aesthetic Pathology for the unexplainable.

The aesthetic nuance here is - the ineffable is beautiful, as is the explained, from different perspectives in different situations.

Similarly, for a long time I've had an Aesthetic Pathology related to growth. I find stagnation abhorrent. However, as I begin to develop Aesthetic Nuance for stagnation, I can see the beauty in the eternal and unchanging.

Replies from: Dagon
comment by Dagon · 2020-07-16T15:55:54.657Z · LW(p) · GW(p)

I tend to model aesthetics as more deeply entwined with other preferences and heuristics. Whether caused by trauma, early or late training, genetic or environmental predilection, or whatever, there are many elements of each individual's utility function that are somewhat resistant to introspection.

Your proposed causality (trauma, and punished/rewarded framework) is generally applicable - not only to things in the aesthetic realm, but also to policy preferences, social interactions, and many other topics where "belief" mostly means "more trusted models" rather than "concrete probabilities of propositional future experiences".

As you note, it's not fully resistant to introspection - you can train yourself to notice and enjoy (or to notice and disprefer) things differently than your past. Sometimes a partial explanation of causality for your belief can help. Sometimes it's a non-explanation just-so story, giving you permission to change. And sometimes you can change just by deciding that you'll meet your considered goals more easily if you let go of those particular heuristics.

comment by Matt Goldenberg (mr-hire) · 2020-03-28T01:47:59.386Z · LW(p) · GW(p)

Something else in the vein of "things EAs and rationalists should be paying attention to in regards to Corona."

There's a common failure mode in large human systems where one outlier causes us to create a rule that is a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a "buy any book you want" rule that a company has - so the company makes it so that no one can get any free books anymore.

This same pattern has happened before in the US, after 9/11 - we created a whole bunch of security theater that caused more suffering for everyone, and gave the government way more power and way less oversight than is safe, because we over-reacted to prevent one bad event, not considering the counterfactual invisible things we would be losing.

This will happen again with Corona: things will be put in place that are maybe good at preventing pandemics (or worse, making people think they're safe from pandemics), but create a million trivial inconveniences every day that add up to more strife than they're worth.

These types of rules are very hard to repeal after the fact because of absence blindness - someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, then build a narrative convincing enough to push back against what seem like obvious, common-sense measures given the climate/devastation.

comment by Matt Goldenberg (mr-hire) · 2020-03-25T15:01:33.060Z · LW(p) · GW(p)

Was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like Coronavirus does.

Was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a "flattening the curve" graphic that shows how many deaths we would save by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.

On the other hand, I think it's quite plausible that this particular problem will take care of itself. When people begin to experience depression, will the young people who are the economic engine of the country really continue to stay home and quarantine themselves? It seems quite likely that we'll simply become stratified for a while where young healthy people break quarantine, and the older and immuno-compromised stay home.

But getting the timing of this right is everything. Striking the right balance of "deaths from economic freefall" and "deaths from an overloaded medical system" is a balancing act; going too far in either direction results in hundreds of thousands of unnecessary deaths.

Then I got to thinking about the effect of a depressed economy on x-risks from AI. Because the funding for AI safety is

1. Mostly in non-profits

and

2. Orders of magnitude smaller than funding for AI capabilities

It's quite likely that the funding for AI safety is more inelastic in depressions than the funding for AI capabilities. This may answer the puzzle of why more EAs and rationalists aren't speaking cogently about the tradeoffs between depression and lives saved from Corona - they have gone through this same train of thought, and decided that preventing a depression is an information hazard.

Replies from: MiroFurtado, mr-hire
comment by miro (MiroFurtado) · 2020-03-25T15:26:38.707Z · LW(p) · GW(p)

It's interesting because you would intuitively think this, but there is actually not terrible evidence linking periods of economic growth to increased mortality.

Here is the article in Nature.

Is non-profit funding really that inelastic in depression?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-03-25T16:02:57.337Z · LW(p) · GW(p)
It's interesting because you would intuitively think this, but there is actually not terrible evidence linking periods of economic growth to increased mortality.

Wow that is fascinating. It does make the case harder to make, because you have to start quantifying happiness/depression, etc., and trade that off against lives. Much, much harder to simplify enough to make it viral. Updates towards capitalism being horrible.

Is non-profit funding really that inelastic in depression?

It probably varies quite a bit by sector, and where funding comes from for different non-profits. In the case of AI safety I think it's likely more inelastic than AI capability.

comment by Matt Goldenberg (mr-hire) · 2020-03-25T17:05:43.678Z · LW(p) · GW(p)

It was brought to my attention on LessWrong that depressions actually save lives.

Which would make it much harder to build a simple "two curves to flatten" narrative out of.

Replies from: Dagon
comment by Dagon · 2020-03-25T18:56:20.861Z · LW(p) · GW(p)

Wait, you received evidence that didn't just refute your hypothesis, it reversed it. If you accept that, shouldn't you also reverse your proposed remedy? Shouldn't you now argue _IN FAVOR_ of shutting down more completely - it saves lives both directly by limiting the spread of the virus AND indirectly by slowing the economy.

(note: this is intended to be semi-humorous - my base position is that the economic causes and effects are far too complex and distributed to really predict impact on that level, or to predict what policies might improve what outcomes).

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-03-25T20:46:40.729Z · LW(p) · GW(p)

I did update from this quite significantly.

comment by Matt Goldenberg (mr-hire) · 2020-03-09T18:35:04.508Z · LW(p) · GW(p)

When trying to browse LW keyboard-only using Vimium, there are some tasks I get blocked on because the relevant elements aren't marked up as links or buttons. E.g. the "Read More" button is not recognized as clickable by Vimium so I have to use the mouse.

I suspect this means that the "Read More" button is also not picked up by many accessibility tools. Something for the LW team to look at; it may also be worth doing a general accessibility audit.
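To illustrate the kind of fix, here's a minimal sketch - the ".read-more" class name is just a placeholder, since I don't know LessWrong's actual markup:

```typescript
// Expose a clickable div as a real control so keyboard tools and screen readers can use it.
const readMore = document.querySelector<HTMLElement>('.read-more');
if (readMore) {
  readMore.setAttribute('role', 'button'); // announce it as a button to assistive tech
  readMore.tabIndex = 0;                   // make it focusable, so Vimium and Tab can reach it
  readMore.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      readMore.click();                    // mirror the existing mouse behavior
    }
  });
}
```

Using a native `<button>` element instead would get the focusability and keyboard handling for free, which is usually the cleaner fix.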

Replies from: habryka4
comment by habryka (habryka4) · 2020-03-09T19:16:57.686Z · LW(p) · GW(p)

Oh, interesting. That's a fair point.

comment by Matt Goldenberg (mr-hire) · 2020-03-05T21:44:20.495Z · LW(p) · GW(p)

# HOW TO CONSISTENTLY USE BLOCKING SOFTWARE
One of my favorite life hacks to stop procrastinating is to install website/app blocking software on your phone and computer.

However, many people have tried this method, and found that they can't do it consistently. They inevitably end up uninstalling or disabling the software a few months into using it.

In a moment of "weakness", they uninstall/disable/remove the software, and then never end up reinstalling/enabling it for months.

The truth is, this moment of "weakness" isn't weakness at all. It's a natural human response to lack of autonomy, which Self-Determination Theory posits as one of the three basic human needs.

When a wall is getting in the way of our basic autonomy, our natural response is to knock down the wall.

Solutions to procrastination should never feel like you're **coercing** yourself into doing the "right behavior" as decided by you at a particular point in time - these are unsustainable and actually create more procrastination in the long run because we're taking away the autonomy of our present selves.

Rather, environmental solutions to procrastination should feel more like you're **cooperating** between your past, present, and future selves, taking input from all 3 selves to decide what makes the most sense in the moment.

## TURN WALLS INTO GATES
For blocking software, the solution to this issue is to turn walls into gates. Instead of making it impossible to get to the other side, you want to make a series of gates, which take some effort to get through, but allow you increasingly more freedom as you go through each successive gate.

This way you're not limiting your freedom, but instead just allowing a short reminder from your past self saying "Hey, just a reminder I wasn't so thrilled about what's on the other side of this gate," while allowing your present self to say "I hear, and this one time I'm deciding that it's important for us to go on the other side of the gate now."

In addition to turning walls into gates, you need to make sure your gates are robust enough that it's not easier to just knock them down than to go through them.

If you build your gates really flimsy, it's too easy for your present self to say "Oh, I just want to get onto the other side of the gate the fastest way possible" while forgetting to cooperate with your past and future selves. The path of least resistance has to be to pass through the security you've set up at the gate.

## HOW TO CREATE ROBUST GATES WITH BLOCKING SOFTWARE

So the first way to make sure you use blocking software is to make sure it's hard to just knock down. Your blocking software should have robust protection against all the easy ways to knock down the gate like:
- Removing it from startup
- Uninstalling or disabling it
- Closing it using the task manager
- Using a different browser
- Switching computer users

In addition, the software should make it easy to install various levels of gates with differing security to get through various blocking plans, like:
- Having a way to pause the plan for a short time, which you can only access by entering a random set of characters.
- Having a way to enter a few random characters to whitelist a particular site, so that for instance you can whitelist a particular youtube video you need without allowing all of youtube.
- Having a setting that will automatically re-enable plans at the beginning of a new day, so that even if you've decided to enter your random password and take a day to just lounge and watch Netflix, it doesn't require any intervention to re-erect the gate.
- Having Pomodoro style blocking plans that can continually block then allow short breaks on a schedule.

## WHAT SOFTWARE ALLOWS THE CREATION OF ROBUST GATES?
The only software I know of that has these features (having tried between half a dozen and a dozen different blocking apps and tools) is FocusMe. It's not the most user-friendly blocking software out there, but it's incredibly good at creating robust gates that allow you to cooperate between your past/present/future selves.

Unfortunately the Android version isn't yet that great at creating gates, but the Mac/Windows version is incredible.

I highly recommend this blocking software if you're working on overcoming procrastination, along with learning which settings to use to create a system of gates.

It also has excellent customer service, and a "lifetime plan" which prevents you from having to subscribe.

If you're interested in the software, you can check it out using my affiliate link here: https://focusme.com/?ref=102&campaign=LW

Or, if you're not down with the affiliate thing, use a non-affiliate link here: https://focusme.com/

I'm also interested if anyone knows any Android blocking software that allows for the creation of robust gates!

comment by Matt Goldenberg (mr-hire) · 2019-12-20T19:34:25.202Z · LW(p) · GW(p)

I think one of the biggest problems with double crux is that by finding double cruxes, it implicitly encourages us to look at the most mutually legible parts of our maps.

However, the biggest differences in frames aren't where you think X and I think not X; they're where you think X and I think "What the hell do you mean by X?" or "Why do you even care about X anyway, it seems irrelevant?"

In my previous startup, this led to a situation where we were agreeing on what to do, but there were deep unaddressed differences in why we were doing it, leading to a million different decisions at the level of "how it was done."

One of the things that excites me about Frame double crux and Aesthetic double crux is that they seem to be getting at some of these deeper issues. However, I think the entire frame of double crux is slightly broken for getting to these deeper issues, because again it's always focused on mutual legibility as "what parts of your map are also important in my map" and not "How can I understand which parts of your map are most important to you?"


comment by Matt Goldenberg (mr-hire) · 2019-09-02T16:40:15.229Z · LW(p) · GW(p)

Does anyone here struggle with perfectionism? I'd love to talk to you and get an understanding of your experience.

comment by Matt Goldenberg (mr-hire) · 2019-08-27T14:47:14.125Z · LW(p) · GW(p)

One of the enduring insights I've gotten from elityre is that different world models are often about the weight and importance of different ideas, not about how likely those things are to be true. For instance, The Elephant in the Brain isn't about whether or not signalling exists, it's about how central signalling is to the worldview of Simler and Hanson. Similarly with Antifragility and Nassim Taleb.

One way to say this is that disagreement is often about the importance of an idea, not its truth.

Another way to say this is that worldview differences are often about the centrality and interconnectedness of a node within a graph, and not its existence.

A third way to say this is that disagreements are often about tradeoffs, not truths.

I've used all of these when trying to point to this idea, but I'd like a single, catchy word or phrase to use and a blog post I can point to so that this idea can enter the rationalist lexicon. Does this blogpost already exist? If not, any ideas for what to name this?

Replies from: Pattern, cousin_it, Slider
comment by Pattern · 2019-08-27T19:16:08.165Z · LW(p) · GW(p)
any ideas for what to name this?

A Matter of Degree

comment by cousin_it · 2019-08-27T15:06:35.167Z · LW(p) · GW(p)

Yeah. This problem is especially bad in politics. I've been calling it "importance disagreements", e.g. here [LW(p) · GW(p)] and here [LW(p) · GW(p)]. There's no definitive blogpost, you're welcome to write one :-)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-27T15:43:32.534Z · LW(p) · GW(p)

Note that I think we're talking about similar things, but have slightly different framings. For instance, you say:

I've had similar thoughts but formulated them a bit differently. It seems to me that most people have the same bedrock values, like "pain is bad". Some moral disagreements are based on conflicts of interest, but most are importance disagreements instead. Basically people argue like "X! - No, Y!" when X and Y are both true, but they disagree on which is more important, all the while imagining that they're arguing about facts. You can see it over and over on the internet.

I think "Value Importance" disagreements definitely do happen, and Ruby talks about them in "The Rock and the Hard Place [LW · GW]".

However, I'm also trying to point at "Fact Importance" as a thing that people often assume away when trying to model each other. I'd even go as far as to say that what seem like intractable "value importance" debates are often "hidden assumption fact importance" debates.

For instance, we might both have the belief that signalling affects people's behaviors, and the belief that people are trying to achieve happiness, and we both assign moderately high probability to each of these factors. However, unless I understand, in their world model, how MUCH they think signalling affects behaviors in comparison to seeking happiness, I've probably just unknowingly imported my own importance weights onto those items.

Any time you're using heuristics (which most good thinkers are), it's important to go up and model the meta-heuristics that allow you to choose how much a given heuristic affects a given situation.

Replies from: cousin_it, Ruby, mr-hire
comment by cousin_it · 2019-08-27T16:19:28.203Z · LW(p) · GW(p)

Yeah, I guess I wasn't separating these things. A belief like "capitalists take X% of the value created by workers" can feel important both for its moral urgency and for its explanatory power - in politics that's pretty typical.

Replies from: Pattern
comment by Pattern · 2019-08-27T19:15:14.396Z · LW(p) · GW(p)

Depends on the value of X.

comment by Ruby · 2019-08-30T06:13:38.989Z · LW(p) · GW(p)

Just wanted to quickly assert strongly that I wouldn't characterize my post cited above as being only about value disagreements (value disagreements might even be a minority of applicable cases).

Consider Alice and Bob who are aligned on the value of not dying. They are arguing heatedly over whether to stay where they are vs run into the forest.

Alice: "If we stay here the axe murderer will catch us!" Bob: "If we go into the forest the wolves will eat us!!" Alice: "But don't you see, the axe murderer is nearly here!!!"

Same value, still a rock and hard place situation.

comment by Matt Goldenberg (mr-hire) · 2019-08-27T15:48:47.982Z · LW(p) · GW(p)

Similarly, we might both agree on the meta-heuristics in a specific situation, but I have models that apply a heuristic to 50x as many situations as you do, so even though you agree that the heuristic is true, you disagree on how important it is because you don't have the models to apply it to all the situations that I can.

comment by Slider · 2019-08-30T14:57:34.780Z · LW(p) · GW(p)

If you make it explicit, like "X is important" vs "X is not important", I have a hard time using the word "disagree" for it. Like, if A and B both emphasize signaling and have it as similarly central in their worldviews, saying "we agree on signaling" sounds wrong. Also, saying stuff like "I disagree with racism" sounds like a funky way to get that point across.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-30T15:07:55.341Z · LW(p) · GW(p)

I think "disagree" is not semantically accurate for the thing I'm trying to point at, but it still often internally feels like "We have a fundamental disagreement about how to view this situation"; it makes more sense to talk about "our models being in agreement" than about us being in agreement.

comment by Matt Goldenberg (mr-hire) · 2019-08-18T15:16:21.541Z · LW(p) · GW(p)

I've had a draft sitting in my posts section for months about shallow, deep, and transfer learning. Just made a Twitter thread that gets at the basics. And figured I'd post here to gauge interest in a longer post with examples.

Love Kindle, love Evernote. But never highlight good ideas. It's level one reading. Instead use written notes and link important ideas to previous concepts you know.

Level 1: What's important? What does this mean?

Level 2: How does this link to compare/contrast to previous concepts or experiences? Do I believe this?

Level 3: How is this a metaphor for seemingly unrelated concepts? How can this frame my thinking?

4 questions to get to level 2:

  • How is this similar to other things I know?
  • How is this different from other things I know?
  • What previous experiences can I relate this to?
  • In what circumstances would I use this knowledge? How would I use it?

3 Questions to ask to get to level 3:

  • How does it feel to view the world through this lens?
  • How does this explain everything?
  • What is this a metaphor for?
Replies from: Raemon
comment by Raemon · 2019-08-18T17:38:22.843Z · LW(p) · GW(p)

I notice that this all makes perfect sense but that I don't expect to use it that much.

Which I think is more of a failure on my part to set up my life such that I can be using my "deliberate effort" brain while reading. I mostly do reading in the evening when I'm tired (where the base-situation was "using Facebook or something", and I was trying to at least get extra value out of my dead brain state).

Currently my "deliberate effort" hours go into coding, and writing. This seems probably bad, but it feels like a significant sacrifice to do less of either. Mrr.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-18T22:36:36.438Z · LW(p) · GW(p)

Note that this mostly doesn't feel like deliberate effort anymore now that it's a habit for me. It took maybe 3 months of it being deliberate effort, but now my mind just automatically notices something important while I'm learning and asks "what is this related to?"

I haven't checked if reading is more tiring than before, but I also haven't noticed anything to that effect.

Replies from: Raemon
comment by Raemon · 2019-08-18T22:41:01.396Z · LW(p) · GW(p)

That all makes sense – once the habit is ingrained I wouldn't expect it to be deliberate effort per se (but, would still require me to make time for this that isn't 'right before I go to sleep while lying in bed')

comment by Matt Goldenberg (mr-hire) · 2024-06-16T12:16:26.233Z · LW(p) · GW(p)

In the early 2000s, we all thought the next productivity system would save us. If we could just follow Tim Ferriss's system and achieve a four-hour workweek, or adopt David Allen's "Getting Things Done" (GTD) methodology, everything would be better. We believed the grind would end.

In retrospect, this was our generation's first attempt at addressing the growing sacredness deficit disorder that was, and still is, ravaging our souls. It was a good distraction for a time—a psyop that convinced us that with the perfect productivity system, we could design the perfect lifestyle and achieve perfection.

However, the edges started to fray when put into action. Location-independent digital nomads turned out to be just as lonely as everyone else. The hyper-productive GTD enthusiasts still burned out.

For me, this era truly ended when Merlin Mann, the author of popular GTD innovations like the "hipster PDA" and "inbox zero," failed to publish his book. He had all the tools in the world and knew all the systems. But when it mattered—when it came to building something from his soul that would stand the test of time—it didn't make a difference.

Merlin wrote a beautiful essay about this failure called "Cranking" (https://43folders.com/2011/04/22/cranking). He mused on the sterile, machine-like crank that would move his father's bed when he could no longer walk. He compared this to the sterile, machine-like systems he used to get himself to write, not knowing what he was writing or why, just turning the crank.

No amount of cranking could reconnect him to the sacred. No system or steps could ensure that the book he was writing would touch your soul, or his. So instead of sending his book draft to the editor, he sent the essay.

Reading that essay did something to me, and I think it marked a shift that many others who grew up in the "productivity systems" era experienced. It's a shift that many caught up in the current crop of "protocols" from the likes of Andrew Huberman and Bryan Johnson will go through in the next few years—a realization that the sacred can't be reached through a set of steps, systems, lists, or protocols.

At best, those systems can point towards something that must be surrendered to in mystery and faith. No amount of cranking will ever get you there, and no productivity system will save you. Only through complete devotion or complete surrender to forces beyond yourself will you find it.

Replies from: Viliam, Nate Showell
comment by Viliam · 2024-06-17T07:45:49.959Z · LW(p) · GW(p)

The "productivity tools" are often solving the wrong problem. They teach you how to better organize the paperwork for solving problems that you ultimately don't care about. When the actual problem is that you lack motivation to do those steps, because on the emotional level you perceive the entire thing as meaningless.

(That doesn't necessarily mean that the thing is meaningless; it could be, e.g., that the meaning in far mode fails to translate to emotional motivation in near mode. Or maybe the thing is meaningless, and you should be doing something else instead. Or maybe the thing is meaningful, but there is something more important in your life that you currently ignore, and your brain is sending you signals that this is not the thing you should be focusing on.)

comment by Nate Showell · 2024-06-16T19:54:00.753Z · LW(p) · GW(p)

There are more than two options for how to choose a lifestyle. Just because the 2000s productivity books had an unrealistic model of motivation doesn't mean that you have to deceive yourself into believing in gods and souls and hand over control of your life to other people.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-06-16T20:10:06.066Z · LW(p) · GW(p)

It's precisely when handing your life over to forces beyond yourself (not Gods - that's just handing your life over to someone else) that you can avoid giving your life over to others/society.

"Souls" is metaphorical, of course - not some essential unchanging part of yourself, just a thing that actually matters, that moves you.

Replies from: Nate Showell
comment by Nate Showell · 2024-06-18T06:33:54.968Z · LW(p) · GW(p)

Then what do you mean by "forces beyond yourself?" In your original shortform it sounded to me like you meant a movement, an ideology, a religion, or a charismatic leader. Creative inspiration and ideas that you're excited about aren't from "beyond yourself" unless you believe in a supernatural explanation, so what does the term actually refer to? I would appreciate some concrete examples.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-06-18T11:30:11.088Z · LW(p) · GW(p)

One way that I think about "forces beyond yourself" is as pointing to what it feels like to operate from a right-hemisphere-dominant mode, as defined by Iain McGilchrist.

The language is deliberately designed to evoke that mode - so while I'll get more specific here, know that to experience the thing I'm talking about, you need to let go of the mind that wants this type of explanation.

When I'm talking about "Higher Forces" I'm talking about states of being that feel like something is moving through you - you're not a head controlling a body but rather you're first connecting to, then channeling, then becoming part of a larger universal force.

In my coaching work, I like to use Phil Stutz's idea of "Higher forces" like Infinite Love, Forward Motion, Self-Expression, etc, as they're particularly suited for the modern Western Mind.

Here's how Stutz defines the higher force of Self-Expression on his website:

"The Higher Force You’re Invoking: Self-Expression The force of Self-Expression allows us to reveal ourselves in a truthful, genuine way—without caring about others' approval. It speaks through us with unusual clarity and authority, but it also expresses itself nonverbally, like when an athlete is "in the zone." In adults, this force gets buried in the Shadow. Inner Authority, by connecting you to the Shadow, enables you to resurrect the force and have it flow through you."

Of course, religions also have names for these types of special states, calling them Muses, Jhanas, or Direct Connection to God.

All of these states (while I can and do teach techniques, steps, and systems to invoke them) ultimately can only be accessed through surrender to the moment, faith in what's there, and letting go of a need for knowing.

comment by Matt Goldenberg (mr-hire) · 2021-01-06T12:32:06.796Z · LW(p) · GW(p)

CW: Don't recommend reading this post if you're prone to disordered eating.

Am I being too incautious by doing an 88-hour fast once a week? It seems pretty unstudied in the long term; there are mostly studies on 48-hour fasts, and then like 30-day fasts.

The few studies on people with cancer or arthritis seem to indicate only good things, and the animal studies point to some really good things, like resetting parts of your immune system in great ways.

It also seems to be the most consistent way for me to get the type of calorie restriction that has been shown to increase longevity.

Subjectively, I'm near the most productive I've ever been, feel close to the healthiest I ever have, and am energetic and happy.

What say you?

Replies from: jimrandomh, wunan
comment by jimrandomh · 2021-01-07T03:04:17.997Z · LW(p) · GW(p)

If you're going to do this, I would suggest getting a few DEXA scans to make sure you aren't losing muscle mass. Also, you may need to replenish salt during the fast, and your salt needs may change with the weather, so watch out if heat or exercise makes you sweat.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-01-07T23:02:19.079Z · LW(p) · GW(p)

Yeah. Currently I can just tell from body makeup and strength gains when weightlifting that I've actually been gaining muscle mass, but this may just be "regaining", and I'm not sure if I'll begin losing when I hit my previous point.

 

I have been trying to drink water throughout the day, and in every other glass I include a bit of salt. One thing I wondered about was electrolytes; do you know if I should be adding those?

comment by wunan · 2021-01-06T17:44:45.359Z · LW(p) · GW(p)

Is losing weight one of your goals with this?

 

Like you said, since it hasn't been studied you're not going to find anything conclusive about it, but it may be a good idea to skip the fast once a month (i.e. 3 weeks where you do 88 hour fasts, then 1 week where you don't fast at all).

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-01-06T17:53:11.471Z · LW(p) · GW(p)

Yes, it's definitely one of the goals here, although it's equally about longevity, helping my acid reflux, and other immune system benefits.

comment by Matt Goldenberg (mr-hire) · 2020-10-06T20:04:36.914Z · LW(p) · GW(p)

I can't wrap my brain around the computational theory of consciousness.

Who decides how to interpret the computations?  If I have a beach, are the lighter grains 0 and darker grains 1?  What about the smaller and bigger grains? What if I decide to use the motion of the planets to switch between these 4 interpretations?

Surely under infinite definitions of computation, there are infinite consciousnesses experiencing infinite states at any given time, just from pure chance.

Replies from: ESRogs, Dagon, Chris_Leong, riceissa, interstice
comment by ESRogs · 2020-10-06T23:55:13.045Z · LW(p) · GW(p)

Suppose that consciousness were not a no-place function [LW · GW], but rather a one-place function. Specifically, whether something is conscious or not is relative to some reality. (A bit like movement relative to reference frames in physics.)

Would that help?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-07T16:13:49.705Z · LW(p) · GW(p)

Specifically, whether something is conscious or not is relative to some reality

 

How does this relate back to the example with the sand?  Is there a sand-planet reality that's just like ours, but in that reality the sand is conscious and we're not?

I don't think I quite get what a reality is in the function.

Replies from: ESRogs
comment by ESRogs · 2020-10-07T18:50:44.443Z · LW(p) · GW(p)

I was thinking of the computational theory of consciousness as basically being the same thing as saying that consciousness could be substrate independent. (E.g. you could have conscious uploads.)

I think this then leads you to ask, "If consciousness is not specific to a substrate, and it's just a pattern, how can we ever say that something does or does not exhibit the pattern? Can't I arbitrarily map between objects and parts of the pattern, and say that something is isomorphic to consciousness, and therefore is conscious?"

And my proposal is that maybe it makes sense to talk in terms of something like reference frames. Sure, there's some reference frame where you could map between grains of sand and neurons, but it's a crazy reference frame and not one that we care about.

Replies from: ESRogs, mr-hire
comment by ESRogs · 2020-10-07T18:56:58.130Z · LW(p) · GW(p)

I don't have a well-developed theory here. But a few related ideas:

  • simplicity matters
  • evolution over time matters -- maybe you can map all the neurons in my head and their activations at a given moment in time to a bunch of grains of sand, but the mapping is going to fall apart at the next moment (unless you include some crazy updating rule, but that violates the simplicity requirement)
  • accessibility matters -- I'm a bit hesitant on this one. I don't want to say that someone with locked in syndrome is not conscious. But if some mathematical object that only exists in Tegmark V is conscious (according to the previous definitions), but there's no way for us to interact with it, then maybe that's less relevant.
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-07T19:07:58.654Z · LW(p) · GW(p)

Ahh I see. Yeah, I think that assigning moral weight to different properties of consciousness might be a good way forward here. But it still seems really weird that there are infinite consciousnesses operating at any given time, and makes me a bit suspicious of the computational theory of consciousness.

comment by Matt Goldenberg (mr-hire) · 2020-10-07T19:01:29.357Z · LW(p) · GW(p)

And my proposal is that maybe it makes sense to talk in terms of something like reference frames. Sure, there's some reference frame where you could map between grains of sand and neurons, but it's a crazy reference frame and not one that we care about.

I mean, from that reference frame, does that consciousness feel pain?  If so, why do we not care about it?  It seems to me like when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think some similar argument applies to longtermism). Maybe we want to tile the universe in such a way that there are more infinitely countable pleasure patterns than pain patterns, or something.

And how does this relate back to realities? Are we saying that the sand operates in separate reality?

Replies from: ESRogs
comment by ESRogs · 2020-10-07T21:16:30.195Z · LW(p) · GW(p)

It seems to me like when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think some similar argument applies to longtermism).

For the way I mean reference frame, I only care about my reference frame. (Or maybe I care about other frames in proportion to how much they align with mine.) Note that this is not the same thing as egoism.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-08T00:00:48.269Z · LW(p) · GW(p)

For the way I mean reference frame, I only care about my reference frame. 

 

How do you define reference frame?  

Replies from: ESRogs
comment by ESRogs · 2020-10-08T18:26:40.275Z · LW(p) · GW(p)

I don't have a good answer for this. I'm kinda still at the vague intuition stage rather than clear theory stage.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-08T20:43:34.262Z · LW(p) · GW(p)

My sense is that reference frame for you is something like "how externally similar is this entity to me" whereas for me the thing that matters is "How similar internally is this consciousness to my consciousness."  Which, if the computational theory of consciousness is true, the answer is "many consciousnesses are very similar."

Obviously this is at the level of "not even a straw man" since you're gesturing at vague intuitions, but based on our discussion so far this is as close as I can point to a crux.

Replies from: ESRogs
comment by ESRogs · 2020-10-09T04:14:29.596Z · LW(p) · GW(p)

Hmm, it's not so much about how similar it is to me as it is like, whether it's on the same plane of existence.

I mean, I guess that's a certain kind of similarity. But I'm willing to impute moral worth to very alien kinds of consciousness, as long as it actually "makes sense" to call them a consciousness. The making sense part is the key issue though, and a bit underspecified.

Replies from: ESRogs
comment by ESRogs · 2020-10-09T04:16:17.534Z · LW(p) · GW(p)

Here's an analogy -- is Hamlet conscious?

Well, Hamlet doesn't really exist in our universe, so my plan for now is to not consider him a consciousness worth caring about. But if you start to deal with harder cases, whether it exists in our universe becomes a trickier question.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-15T20:50:57.421Z · LW(p) · GW(p)

But if you start to deal with harder cases, whether it exists in our universe becomes a trickier question.

To me this is simply empirical.  Is the computational theory of consciousness true without reservation?  Then if the computation exists in our universe, the consciousness exists.  Perhaps it's only partially true, and more complex computations, or computations that take longer to run, have less of a sense of consciousness, and therefore it exists, but to a lesser degree.

comment by Chris_Leong · 2020-10-06T21:54:46.543Z · LW(p) · GW(p)

Yeah, this has always been my worry as well

comment by riceissa · 2020-12-01T10:15:45.172Z · LW(p) · GW(p)

Have you seen Brian Tomasik's page about this? If so what do you find unconvincing, and if not what do you think of it?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-02T02:58:11.415Z · LW(p) · GW(p)

He seems to be trying to formalize the intuition about what types of computational consciousness we already intuitively give moral weight to, but the very thing I'm worried about is that our intuitions are wrong (in the same way that our intuitions about physics don't hold when we think about environments much bigger or smaller than our own).

That is, if the computational consciousness theory is true, and computations with higher complexity feel just as much pain and pleasure, and have just as many dreams and goals, etc., as things we normally define as conscious, why should we lower their moral weight?

Replies from: riceissa
comment by riceissa · 2020-12-05T20:48:04.866Z · LW(p) · GW(p)

That makes sense, thanks for clarifying. What I've seen most often on LessWrong is to come up with reasons for preferring simple interpretations in the course of trying to solve other philosophical problems such as anthropics, the problem of induction, and infinite ethics. For example, if we try to explain why our world seems to be simple we might end up with something like UDASSA [LW · GW] or Scott Garrabrant's idea of preferring simple worlds [LW · GW] (this section is also relevant). Once we have something like UDASSA, we can say that joke interpretations do not have much weight since it takes many more bits to specify how to "extract" the observer moments given a description of our physical world.
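
Roughly, the weighting can be sketched like this (a simplified rendering, not an exact statement of UDASSA): an observer-moment $o$ gets measure

$$m(o) \;\propto\; \sum_{p \,:\, U(p) = o} 2^{-|p|},$$

where $U$ is a universal machine and each program $p$ typically splits into a description of the physical world plus an "extraction" procedure that locates $o$ within that world. A joke interpretation (e.g. decoding observer-moments out of grains of sand) needs a very long extraction procedure, so its terms contribute almost nothing to the sum.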

comment by interstice · 2020-10-15T17:37:03.377Z · LW(p) · GW(p)

That's why you need to use some sort of complexity-weighting for theories like this, so that minds that are very hard to specify(given some fixed encoding of 'the world') are considered 'less real' than easy-to-specify ones.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-15T19:13:00.447Z · LW(p) · GW(p)

I think that only makes sense to do if those minds are literally "less conscious" than other minds though. Otherwise why would I care less about them because they're more complex? 

It does make sense to me to talk about "speed" and "number of observer moments" as part of moral weight, but "complexity of definition" to me only makes sense if those minds experience things differently than I do.

Replies from: interstice
comment by interstice · 2020-10-16T02:55:49.477Z · LW(p) · GW(p)

Description complexity is the natural generalization of "speed" and "number of observer moments" to infinite universes/arbitrary embeddings of minds in those universes. It manages to scale as (the log of) the density of copies of an entity, while avoiding giving all the measure to Boltzmann brains.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-16T05:57:53.289Z · LW(p) · GW(p)

Description complexity is the natural generalization of "speed" and "number of observer moments".

Again this seems to be an empirical question that you can't just assume.

Replies from: interstice
comment by interstice · 2020-10-16T16:25:34.906Z · LW(p) · GW(p)

Is it an empirical question? It seems more like a philosophical question (what evidence could we see that would change our minds?)

Here's a (not particularly rigorous) philosophical argument in favour. The substrate on which a mind is running shouldn't affect its moral status. So we should consider all computable mappings from the world to a mind as being 'real'. On the other hand, we want the total "number" of observer-moments in a given world to be finite(otherwise we can't compare the values of different worlds). This suggests that we should assign a 'weight' to different experiences, which must be exponentially decreasing in program length for the sum to converge.
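
One standard way to see why the decay has to be exponential (a sketch, not a rigorous argument): if programs are drawn from a prefix-free set, Kraft's inequality gives

$$\sum_{p} 2^{-|p|} \le 1,$$

so weighting each world-to-mind mapping $p$ by $2^{-|p|}$ keeps the total measure finite even when summing over all computable mappings. A weight that shrinks more slowly than exponentially in $|p|$ can't work in general, since there are up to $2^{\ell}$ candidate programs of each length $\ell$.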

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-16T17:00:41.553Z · LW(p) · GW(p)

Is it an empirical question? It seems more like a philosophical question (what evidence could we see that would change our minds?)

We could talk to different minds and have them describe their experience, and then compare the number of observer moments to their complexity.

Replies from: interstice
comment by interstice · 2020-10-16T20:43:41.870Z · LW(p) · GW(p)

But the question then becomes how you sample these minds you are talking to. Do you just go around literally speaking to them? Clearly this will miss a lot of minds. But you can't use completely arbitrary ways of accessing them either, because then you might end up packing most of the 'mind' into your way of interfacing with them. Weighting by complexity is meant to provide a good way of sampling minds, that includes all computable patterns without attributing mind-fulness to noise.

(Just to clarify a bit, 'complexity' here is referring to the complexity of selecting a mind given the world, not the complexity of the mind itself. It's meant to be a generalization of 'number of copies' and 'exists/does not exist', not a property inherent to the mind)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-16T21:36:45.538Z · LW(p) · GW(p)

It seems like you can get quite a bit of data with minds that you can interface with?  I think it's true that you can't sample the space of all possible minds, but testing this hypothesis on just a few seems like high VoI.

Replies from: interstice
comment by interstice · 2020-10-16T23:04:31.051Z · LW(p) · GW(p)

What hypothesis would you be "testing"? What I'm proposing is an idealized version of a sampling procedure that could be used to run tests, namely, sampling mind-like things according to their description complexity.

If you mean that we should check if the minds we usually see in the world have low complexity, I think that already seems to be the case, in that we're the end-result of a low-complexity process starting from simple conditions, and can be pinpointed in the world relatively simply.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-17T00:51:19.780Z · LW(p) · GW(p)

What hypothesis would you be "testing"? What I'm proposing is an idealized version of a sampling procedure that could be used to run tests, namely, sampling mind-like things according to their description complexity.

 

I mean, I'm saying get minds with many different complexities, figure out a way to communicate with them, and ask them about their experience.  

That would help to figure out if complexity is indeed correlated with observer moments.

But how you test this feels different from the question of whether or not it's true.

Replies from: interstice
comment by interstice · 2020-10-17T02:01:56.352Z · LW(p) · GW(p)

I think we're talking about different things. I'm talking about how you would locate minds in an arbitrary computational structure(and how to count them), you're talking about determining what's valuable about a mind once we've found it.

comment by Matt Goldenberg (mr-hire) · 2019-08-21T20:39:14.231Z · LW(p) · GW(p)

Here are some of the common criticisms I get of myself. If you know me, either in person or through secondhand accounts, feel free to comment with your thoughts on which ones feel correct to you and any nuance or comments you'd like to make. Full license for this particular thread to operate on Crocker's rules and not take my feelings into account. If you don't feel comfortable commenting publicly, also feel free to message me with your thoughts.


  • I have too low epistemic rigor.
  • Too confident in myself
  • Not confident enough in myself.
  • Too focused on status.
  • I don't keep good company.
  • I'm too impulsive.
  • Too risk seeking.
comment by Matt Goldenberg (mr-hire) · 2019-08-04T17:51:46.309Z · LW(p) · GW(p)

I've had a similar conversation many times recently related to Kegan's levels of development and Constructive-developmental theory:

X: Okay, but isn't this just pseudoscience like Myers-Briggs?

Me: No, there's been a lot of scientific research into constructive-developmental theory.

X: Yeah, but does it have strong inter-rater reliability?

Me: Yes, it has both strong inter-rater reliability and test-retest reliability. In addition, it has strong correlation with other measures of adult development that themselves have a strong evidence base.

X: Sure, but it seems so culturally biased.

Me: There are also strong preliminary reports on cross-cultural validity.


It makes me want to make a post summarizing the evidence for Constructive-Developmental theory so people don't keep pattern-matching it to less-valid psychometrics like Myers-Briggs.

Replies from: Raemon
comment by Raemon · 2019-08-04T19:04:06.172Z · LW(p) · GW(p)

I'd be interested in a post that was just focused on laying out what the empirical evidence was (preferably decoupled from trying to sell me on the theory too hard)

Replies from: Raemon, mr-hire, mr-hire
comment by Raemon · 2019-08-04T19:22:42.163Z · LW(p) · GW(p)

(a bit more details on how I'm thinking about this. Note that this is just my own opinion, not necessarily representing any LW team consensus)

I'm generally interested in getting LW to a state where

  • it's possible to bring up psych theories that seem wooey at first glance, but
  • it's also clearer:
    • what the epistemic status of those theories are
    • what timeframes are reasonable to expect that epistemic status to reach a state where we have a better sense of how true/useful the theory is
    • have some kind of plan to deprecate weird theories if they turn out to be BS

I think there are some additional constraints on developmental theories [LW(p) · GW(p)], where for social reasons I think it makes sense to lean harder in the "strong standards of evidence" direction. I think Dan Speyer's suspicions (articulated on FB) are pretty reasonable, and whether they're reasonable or not they also seem to a fact-of-the-matter that needs to be addressed anyhow.

I've recently updated that developmental theories might be pretty important, but I think there's a lot of ways to use them poorly and I wanna get it right.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-08-10T07:11:40.918Z · LW(p) · GW(p)

I have seen much talk on Less Wrong lately of “development stages” and “Kegan” and so forth. Naturally I am skeptical; so I do endorse any attempt to figure out if any of this stuff is worth anything. To aid in our efforts, I’d like to say a bit about what might convince me be a little less skeptical.

A theory should explain facts; and so the very first thing we’d have to do, as investigators, is figure out if there’s anything to explain. Specifically: we would have to look at the world, observe people, examine their behavior, their patterns of thinking and interacting with other people, their professed beliefs and principles, etc., etc., and see if these fall into any sorts of patterns or clusters, such that they may be categorized according to some scheme, where some people act like this [and here we might give some broad description], while other people act like that.

(Clearly, the answer to this question would be: yes, people’s behavior obviously falls into predictable, clustered patterns. But what sort, exactly? Some work would need to be done, at least, to enumerate and describe them.)

Second, we would have to see whether these patterns that we observe may be separated, or factored, by “domain”, whereby there is one sort of pattern of clusters in how people think and act and speak, which pertains to matters of religion; and another pattern, which pertains to relationship to family; and another pattern, which pertains to preferences of consumption; etc. We would be looking for such “domains” which may be conceptually separated—regardless of whether there were any correlation between clustering patterns in one domain or another.

(Here again, the answer seems clearly to be that yes, such domains may be defined without too much difficulty. However, the intuition is weaker than for the previous question; and we are less sure that we know what it is we’re talking about; and it becomes even more important to be specific and explicit.)

Now we would ask two further questions (which might be asked in parallel). Third: does categorization of an individual into one cluster or another, in any of these domains, correlate with that individual’s category membership in categories pertaining to any observable aspect of human variation? (Such observable aspects might be: cultural groupings; gender; weight; height; age; ethnicity; socioeconomic status; hair color; various matters of physical health; or any of a variety of other ways in which people demonstrably differ.) And fourth: may the clusters in any of these domains sensibly be given a total ordering (and the domain thereby be mapped onto a linear axis of variation)?

Note the special import of this latter question. Prior to answering it, we are dealing exclusively with nominal data values. We now ask whether any of the data we have might actually be ordinal data. The answer might be “no” (for instance, you prefer apples, and I prefer oranges; this puts us in different clusters within the “fruit preferences” domain of human psychology, but in no sense may these clusters be arranged linearly).

Our fifth question (conditional on answering yes to all four of the previous question) is this: among our observed domains of clustering, and looking in particular at those for which the data is of an ordinal nature, are there any such that the dimension of variation has any normative aspect? That is: is there a domain such that we might sensibly say that it is better to belong to clusters closer to one end of its spectrum of variation, than to belong to clusters closer to the other end? (Once more note that the answer might be “no”: for example, suppose that some people fidget a lot, while others do not fidget very much. Is it better to be a much-fidgeter than a not-much-fidgeter? Well… not really; nor the reverse; at least, not in any general way. Maybe fidgeting has some advantages, and not fidgeting has others, etc.; who knows? But overall the answer is “no, neither of these is clearly superior to the other; they’re just one of those ways in which people differ, in a normatively neutral way”.)

Finally, our sixth question is: does there exist any domain of clustering in human behavioral/psychological variation for which all of these are true:

  • That its clusters may naturally be given a total order (i.e., arranged linearly);
  • That this linear dimension has normative significance;
  • That membership in its categories is correlated primarily with category membership pertaining to one aspect of human variation (rather than being correlated comparably with multiple such aspects);
  • That in particular, membership in this domain’s clusters is correlated primarily with age.

Note that we have asked six (mostly[1]) empirical questions about humanity. And we have had six chances to answer in the negative.

And note also that if we answer any of these questions in the negative, then any and all theories of “moral development” (or any similar notion) are necessarily nonsense—because they purport to explain facts which (in this hypothetical scenario) we simply do not observe. Without any further investigation, we can dispose of the lot of them with extreme prejudice, because they are entirely unmotivated by the pre-theoretical facts.

So, this is what I would like to see from any proponents of Kegan’s theory, or any similar ones: a detailed, thorough, and specific examination (with plenty of examples!) of the questions I give in this comment—discussed with utter agnosticism about even the concept of “moral development”, “adult development” or any similar thing. In short: before I consider any defense of any theory of “adult development”, I should like to be convinced of such a theory’s motivation.


  1. The question of normative import is not quite empirical, but it may be operationalized by considering intersubjective judgments of normative import; that is, in any case, more or less what we are talking about in the first place. ↩︎

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-10T13:33:51.189Z · LW(p) · GW(p)

Why must a developmental theory be normative? A descriptive theory that says all humans go through stages where they get less moral over time still works as an interesting descriptive theory. Similarly, there are certain developmental stages that probably aren't normative if everyone around you is in a lower developmental stage, but it can still be descriptive as the next stage most humans go through if they indeed progress.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-08-10T13:49:47.800Z · LW(p) · GW(p)

I did not say anything about the theory being normative. “A descriptive theory that says all humans go through stages where they get less moral over time” is entirely consistent with what I described. Note that “moral” is a quality with normative significance—compare “get less extraverted over time” or “get less risk-seeking over time”.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-10T14:26:15.753Z · LW(p) · GW(p)

Ahh, so is the idea just that you don't care about a specific type of development if it doesn't have consequences that matter?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-08-10T22:43:43.832Z · LW(p) · GW(p)

Whether I care is hardly at issue; all the theories of “adult development” and similar clearly deal with variation along normatively significant dimensions.

If, for some reason, you propose to defend a theory of development that has no such normative aspect, then by all means remove that requirement from my list. (Kegan’s theory, however, clearly falls into the “normatively significant variation” category.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-11T00:25:58.800Z · LW(p) · GW(p)

I think that, e.g., constructive-developmental theory studiously avoids normative claims. The level that fits best is context-dependent on the surrounding culture.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-08-11T00:42:49.111Z · LW(p) · GW(p)

Fair enough. Assuming that’s the case, then anyone proposing to defend that particular theory is exempt from that particular question.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-11T01:33:50.015Z · LW(p) · GW(p)

Just in case it isn't clear, constructive-developmental theory and "kegan's levels of development" are two names for the same thing.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-08-11T06:26:47.180Z · LW(p) · GW(p)

Ah, my mistake.

However, in that case I don’t really understand what you mean. But, in any case, the rest of my original comment stands.

I look forward to any such detailed commentary on the fact-based motivation for any sort of developmental theory, from anyone who feels up to the task of providing such.

comment by Matt Goldenberg (mr-hire) · 2019-08-07T03:46:35.233Z · LW(p) · GW(p)

Looks like Sarah Constantin beat me to it, although I think her lit review missed a few studies I've seen.

https://srconstantin.wordpress.com/2017/04/06/are-adult-developmental-stages-real/

Replies from: ChristianKl, habryka4, Raemon
comment by ChristianKl · 2019-08-08T15:25:20.783Z · LW(p) · GW(p)

From her post:

In a study of West Point students, average inter-rater agreement on the Subject-Object Interview was 63%, and students developed from stage 2 to stage 3 and from stage 3 to stage 4 over their years in school.

Are you calling those 63% strong inter-rater reliability, or are you referring to other studies?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-08T20:21:48.762Z · LW(p) · GW(p)

There are, as far as I know, 3 studies on this. She found the one with 63% agreement, whereas the previous two studies had about 80% agreement.

comment by habryka (habryka4) · 2019-08-08T20:23:35.246Z · LW(p) · GW(p)

My general takeaway from that post was that in terms of psychometric validity, most developmental psychology is quite bad. Did I miss something?

This doesn't necessarily mean the underlying concepts aren't real, but I do think that in terms of the quality metrics that psychometrics tends to assess things on, I don't think the evidence base is very good.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-09T00:32:43.546Z · LW(p) · GW(p)

I haven't looked into developmental theories in general like Sarah Constantin did, but I have looked into the studies on constructive-developmental theory.

My takeaway (mostly supported by her research, although she misses a lot) is that basically all the data points towards confirming the theory, with high information value on further research:

  • high interrater reliability
  • high test-retest reliability
  • good correlation with age
  • good correlations with age in multiple cultures
  • good correlation with measures of certain types of achievement, like leadership

As Sarah points out, the biggest thing missing is evidence that the steps proceed in order with no skipping, but as far as I can tell there's no counterevidence for that either. Also, replications of the other things.

Perhaps if I had gone into this looking at a bunch of other failed developmental theories, my priors would have been such that I would have described it as "not enough evidence to confirm the theory". However, given this is the only developmental theory I looked into, my takeaway was "promising theory with preliminary support, needs more confirming research".

comment by Raemon · 2019-08-07T04:09:40.430Z · LW(p) · GW(p)

Oh, I was looking for that recently. Apparently it predates LessWrong's integration with her blog.

comment by Matt Goldenberg (mr-hire) · 2019-08-04T21:03:31.188Z · LW(p) · GW(p)

Yes, this is what I'm imagining. A simple post that just summarizes the epistemic status, potentially as the start of a sequence for later posts that use it as a building block for other ideas.

comment by Matt Goldenberg (mr-hire) · 2019-07-03T16:14:46.376Z · LW(p) · GW(p)

RUNNING GOOD ORGANIZATIONS

Framing the Gervais principle in terms of Kegan:

Losers - Kegan 3

Clueless - Kegan 4

Sociopaths - Kegan 4.5

To run a great organization, the first thing you need is to be led not by a sociopath but by someone who is Kegan 5. Then you need sociopath repellent.

The Gervais principle works on the fact that at the bottom, the losers see what the sociopaths are doing and opt-out, finding enjoyment elsewhere. The clueless, in the middle, believe the stories the sociopaths are telling them and hold the party line. The sociopaths, at the top, are infighting and trying to use the organization to get their own needs met.

In a good organization, the people at the top are Kegan 5. They have varying rules and models in their head for how the organization should act, and they use this as a best guess for the VALUES the organization should have, given the current environment - IE, they do their best to synthesize their varying models into a legible set of rules that will achieve their terminal goals (which, because they're Kegan 5, aren't pure solipsism)

The reason that they need to do this distillation process is that they need something that works for the Kegan 3's and Kegan 4's. The Kegan 4's SHARE the terminal goal of the Kegan 5 (or some more simplified version of it), and believe in the values and mission of the organization as the ONE TRUE WAY to achieve that goal.

Because the rules of the organization are set up to be legible and reward actions that actually help the terminal goal, the Kegan 3's can get their belonging and good vibes in highly legible, easy ways that are simple for them to understand. Notice now that the 3's, 4's, and 5's are all aligned, working towards the same ends instead of fighting each other.

Two important things about the values, mission, and rules of the organization.

1. The values must have sincere opposites that you could plausibly use for real decision making, otherwise they don't help the Kegan 3's and disillusion the Kegan 4s. You can't run an organization or make decisions based on "being unproductive" so "productivity" isn't a valid goal. You can make decisions that tradeoff short term productivity for long term productivity, so "move fast and break things" is a valid value, as is "Move slowly and plan carefully."

2. Anyone should be able to apply the values to anyone else. If "Give critical feedback ASAP, and receive it well" is a value, then the CEO should be willing to take feedback from the new mail clerk. As soon as this stops being the case, the 3's go looking for their validation elsewhere, and the 4's get disillusioned.

Two good examples of values: Principles by Ray Dalio, The Scribe Culture Bible

The role of the Kegan 5 in this organization is twofold:

1. Reinvent the rules and mission of the organization as the landscape changes, and frame them in a way that makes sense to the Kegan 3's and 4's.

2. Notice when sociopaths are arbitraging the difference between the rules and the terminal goals, and shut it down.

Short Form Feed is getting too long. Next time, I'll write more about sociopath repellent.

comment by Matt Goldenberg (mr-hire) · 2019-06-21T18:31:23.983Z · LW(p) · GW(p)

CHANGE IS GOOD

Something I've been noticing lately in a lot of places is that many people have the intuition that change is bad, and the default should be to maintain the status quo. This is epitomized by the Zvi article Change is Bad.

I tend to have the exact opposite intuition, and feel a sense of dread or foreboding when I see a lack of change in institutions or individuals I care about, and work to create that change when possible. Here's some of the models that seem to be behind this:

  • Change is inevitable. The broader systems in which the systems I care about exist are always changing (the culture, the economic system, etc). Trying to keep things static is MORE effort than going with the flow, so I don't buy the "conserve your energy" argument. Anyone who has ever TRIED to fight the flow of broader systems within their local system knows this to be true.
  • By default, change is inevitable but usually non-directed. What I mean by that is that, as stated above, the systems are always changing. However, many times this is a result of local actors following local incentives and acting within local constraints. Rarely are the trickle-down effects on the things you care about in any way, shape, or form directed towards making that thing better for human flourishing. This means that there's much gain to be had by simply working to direct and shape the change that will be happening anyway to make it actually GOOD. This is also an argument for EAs being less shy about systemic change.
  • Even if change isn't inevitable, entropy is. Even in a relatively stable system, the default is not for things to stay the same, but for them to fall or drift apart. I've found that change injects a NEWNESS into the system that provides its own momentum. This is all metaphorical, but will probably hit for anyone who has run an organization that meets regularly. If you keep doing the same thing, there's a staleness that causes people to drift away. Trying to rally the troops and prevent this drifting in the face of the staleness is like pulling teeth. However, doing something NEW in the organization - organizing a new event, a new initiative, anything - provides new energy that makes people excited to continue, and is actually EASIER than simply struggling against the staleness.
comment by Matt Goldenberg (mr-hire) · 2024-02-29T16:44:08.476Z · LW(p) · GW(p)

I can hover over quick takes to get the full comment, but not popular comments.

comment by Matt Goldenberg (mr-hire) · 2020-08-08T17:48:53.810Z · LW(p) · GW(p)

Had an excellent interview with Hazard [LW · GW] yesterday breaking down his felt sense of dealing with fear.

As someone who does parkour and tricking, he's had to develop unique models that navigate the tension between ignoring his fear (which can lead to injury or death) and being consumed by fear (meaning he could never practice his craft).

He implicitly breaks down fear into four categories, each with their own steps:

1. Fear Alarm Bells

2. Surfacing From Water

3. Listening

4. Transmuting to Resolve (or Backing off)

At each step, he has tools and techniques (again, that were implicit before we chatted) telling you how to move forward. Just over the past day, I've already had a felt shift in how I relate to fear, and navigated a couple of situations differently.

If you're interested in learning this model, I'd love to teach you! All I ask is that you let me use some of the clips from our teaching session in my podcast on the framework!

Let me know if you're interested!

Replies from: Pattern
comment by Pattern · 2020-08-10T15:55:22.493Z · LW(p) · GW(p)

You have a podcast?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-08-10T18:31:36.838Z · LW(p) · GW(p)

In development right now.

comment by Matt Goldenberg (mr-hire) · 2020-08-07T02:49:16.286Z · LW(p) · GW(p)

Couldn't Eliezer just remove every reference to Harry Potter and publish it separately? It worked for E. L. James.

Replies from: Raemon, Dagon
comment by Raemon · 2020-08-07T02:56:32.474Z · LW(p) · GW(p)

A lot of what makes it neat is the deliberate contrast. Maybe not more than 50% of what made it neat, but it'd be a nontrivial hit. Some story beats I think were sort of dependent on the deliberate contrast for their narrative heft, so you'd need to redo them, which would require some craftsmanship.

So, like, sure, it's doable. But the whole point of HPMOR was also to be something he could do for fun in his off hours with no willpower (which it eventually failed at anyhow).

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-08-07T03:04:38.205Z · LW(p) · GW(p)

A few years ago I remember him talking about how he was thinking about writing a thriller to get money but couldn't muster the motivation.  It feels like if that's still a possibility it at least makes sense to try to hire an editor to do this for a few key chapters and see how it turns out.

comment by Dagon · 2020-08-07T23:10:10.520Z · LW(p) · GW(p)

Is this question based on some intent or plan that Eliezer has?

It's perhaps possible to make it technically compliant with US and UK copyright law. Change the names, acknowledge the thematic (non-protected) inspiration, rewrite maybe 1/10 of scenes that are based too closely on HP books and films.

It's almost certainly impossible to do so without violating the wishes and goodwill of J.K. Rowling, who gives her blessing to create non-commercial derivative works. Making such a derivative work, then when it becomes popular due to the nature of the derivation, to skirt the law to sell it, would be fairly evil.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-08-07T23:26:23.347Z · LW(p) · GW(p)

Context for "How does this relate to Eliezer's plans?" is basically he was at one point talking on Facebook about writing a thriller similar to The Davinci code to make a ton of money and get connections(my memory about his post, don't quote it) but had trouble motivating himself to write a thriller.

Making such a derivative work, then when it becomes popular due to the nature of the derivation, to skirt the law to sell it, would be fairly evil.

I don't feel like you have to do this? Like, Fifty Shades of Grey doesn't feel like it's skirting the law with regard to Twilight; it's a story in its own right that has thematic elements and characters derived as inspiration.  I feel like blocking an edit on the basis that it was originally using Harry Potter as inspiration would be fairly evil in itself.

Replies from: Dagon
comment by Dagon · 2020-08-08T00:06:57.023Z · LW(p) · GW(p)

I don't actually know many specifics about 50SoG - I tried to read it at the height of its popularity, and gave up a few chapters in. I did read the first Twilight book, and didn't see that much similarity in the parts of 50SoG I got through. I never looked at the fanfic version of 50SoG. As such, I don't know how clearly derivative the fanfic was, nor how much was changed for the published novel. My guesses about these factors are that they point to 50SoG being vaguely inspired by Twilight, whereas HPMoR is clearly derived from the HP books and films.

Note that my moral view is not binding - I think it'd be wrong to use someone's permission to make noncommercial derivations, then change the minimal amount to make money. That's based on the suggestion of fairly minimal rewriting to change names and replace too-obvious references, and my interpretation of J.K.R.'s wishes.

If it's a much deeper rewrite, including changing the basic plot to something other than a dark lord returning based on prophecies about a connection to a young boy hero, who turns out to be possessing a teacher at a school that's silly and amusing in some very specific ways, it's not problematic at all. And it's not HPMoR at that point either - it's some other possibly-magical story that uses some of the non-Rowling concepts from HPMoR.

comment by Matt Goldenberg (mr-hire) · 2020-07-31T16:03:40.426Z · LW(p) · GW(p)

"Medium Engagement Activities" are the death of culture creation.

Expecting someone to show up for a ~1-hour or longer event every week that helps shape your culture is great for culture creation, as is requiring them to follow a dress code - large commitments are good in the early stages.

Removing trivial inconveniences to following your values and rules is great for building culture, as is doing things that require no or low engagement but help shape group cohesion.  Design does a lot here - no-commitment tools to shape culture are great during early stages.

But medium-commitment tools are awful: a series of little things that take 5-50 minutes a week to work on. These things are death to early-stage cultures.  It's death by a thousand cuts of things they can't see clear immediate benefit from, and which they can see clear immediate cost for.

I don't know why exactly this is, and haven't really mapped out what's behind this intuition. It's something about the benefits of building identity vs. the time required - it's U-shaped, and the tails are a much more effective tradeoff than the middle.

comment by Matt Goldenberg (mr-hire) · 2020-07-24T16:01:58.509Z · LW(p) · GW(p)

 A strong vision can cover for a lot of internal tension - the external tension between your vision and what you want can hide internal tension related to not meeting all your needs.

But, it can't cover forever - eventually, your other needs get louder and louder until they drown out your vision, leading to a crash in productivity.

It can help to know what your leading indicators are for ignoring your needs... that way, you can catch a crash before it happens, and make sure you resolve that internal tension.  For me, it's my weight creeping up.  I use food as a way to ignore negative emotions.

So, when I see my weight creeping up over the course of a few days, I take time out to process emotions, take care of myself, and see what needs I've been ignoring.  1 hour of attending to other needs can save me weeks of total burnout.

comment by Matt Goldenberg (mr-hire) · 2020-07-16T14:21:28.367Z · LW(p) · GW(p)

3 Possibilities for a Lesswrong talk:

1. In this shortform [LW(p) · GW(p)],  I show how the attractor for a cult (Kegan 4.5 leaders) is very easy to confuse with the attractor for a great culture (Kegan 5 leaders).   This is a pattern I've noticed a bunch when looking at good cultures, and I'd love to do a talk called "Cult is the root of culture"  where I show a bunch of instances of this.

2. I've been continuing to explore the idea of aesthetic bias in beliefs and the concept of aesthetic pathology [LW(p) · GW(p)].  I'd love to do a talk exploring some of those ideas and giving examples.

3. The thing I've been spending most of my time on is teaching how to overcome akrasia.  I have a workshop that shows experientially what it's like to act from a non-coercive place (that goes over much of the material in this comment [LW(p) · GW(p)]) and I'd love to lesswrongify it and run some exercises during the talk.

 

Which of these would you be most interested in as a talk?

Replies from: mr-hire, Raemon
comment by Matt Goldenberg (mr-hire) · 2020-07-16T22:08:08.670Z · LW(p) · GW(p)

Ehh, I realized that I don't understand the first two well enough to give a good 5 minute talk, and the last one can't be given experientially in 5 minutes. Will instead choose a topic that's more transparent to me and conceptual in nature.

comment by Raemon · 2020-07-16T17:53:02.778Z · LW(p) · GW(p)

I’m most interested in number 1

comment by Matt Goldenberg (mr-hire) · 2020-07-14T13:20:31.917Z · LW(p) · GW(p)

Grudgingness is the productivity killer.

We've noticed all our choices. We've brainstormed better options. We've decided that this is the best course of action.

And yet, it's an awful choice. Reality forced us into a bad situation, and we hold a grudge against it.

So we do our task.

But we kick, and scream, and moan about having to do it. We can do it, but we're not gonna like it! We can do it, but by god are we gonna expend energy showing ourselves how much we don't like it.

And so we sit there, pushing against that which can not be moved.

Holding on to our grudge against reality.

Shoulding ourselves in the foot.

And it's at this point we can ask ourselves... is this serving us?

Sometimes, the answer is yes. This grudge connects us to our values, or protects us from a truth we're not equipped to handle.

But often... far far more often, the answer is no.

All that kicking against a brick wall has done for us is to give us a stubbed toe.

So we stare at this grudge, and we thank this grudge for connecting us to our values. And we ask ourselves, with an open heart:

Is it time to let this go?

Replies from: Dagon
comment by Dagon · 2020-07-14T15:04:27.435Z · LW(p) · GW(p)

One step deeper into the maze - why fight it? Why bother to remember that this is currently necessary to meet our immediate goals, but also contradicts our overall preferences?

(note: I generally agree, just giving a counterpoint. I think the key is that letting-go is temporary. You can accept it and move on, but you should have a trigger or date to re-examine the grudge and determine if it's time to do something about it.)

comment by Matt Goldenberg (mr-hire) · 2019-10-11T22:44:03.500Z · LW(p) · GW(p)

*Virtual Procrastination Coach*

For the past few months I've been doing a deep dive into Procrastination, trying to find the cognitive strategies that people who have no trouble with procrastination use to overcome their procrastination.
--------------
This deep dive has involved:

* Introspecting on my own cognitive strategies
* Reading the self help literature and mining cognitive strategies
* Scouring the scientific literature for reviews and meta studies related to overcoming procrastination, and mining the cognitive strategies.
* Interviewing people who have trouble with procrastination, and people who have overcome it, and modelling their cognitive strategies.

I then took these ~18 cognitive strategies, split them into 7 lessons, and spent ~50 hours taking people individually through the lessons and seeing what worked, what didn't and what was missing.

This resulted in me doing another round of research, adding a whole new set of cognitive strategies, (for a grand total of 25 cognitive strategies taught over the course of 10 lessons) and testing for another round of ~50 hours to again test these cognitive strategies with 1-on-1 lessons to see what worked for people.
-------------------------------------
The first piece of more scalable testing is now ready. I used Spencer Greenberg's GuidedTrack tool to create a "virtual coach" for overcoming procrastination. I suspect it won't be very useful without the lessons (I'm writing up a LW sequence with those), but nevertheless am still looking for a few people who haven't taken the lessons to test it out and see if it's helpful.

The virtual coach walks you through all the parts of a work session and holds your hand. If you feel unmotivated, indecisive, or overwhelmed, it's there to help. If you feel ambiguity, perfectionism, or fear of failure, it's there to help.

If you're interested in alpha testing, let me know!

comment by Matt Goldenberg (mr-hire) · 2019-06-22T19:56:32.906Z · LW(p) · GW(p)

POST-RATIONALITY IS SYSTEMATIZED WINNING

John is a Greenblot, a member of the species that KNOWS that the ultimate goal, the way to win, is to minimize the amount of blue in the world [LW · GW], and maximize the amount of green.

The Greenblots have developed theories of cooperation, that allow them to work together to make more green. And complicated theories of light to explain the true nature of green, and several competing systems of ethics that describe the greenness or blueness of various actions, in a very complicated sense that actually clearly leads to the color.

One day, John meets Ted. Ted is a member of the Lovelots. John is aghast when he finds out that Lovelots can't perceive the difference between Blue and Green. Ted is aghast that John can't perceive the difference between love and hate. They both go on their merry way.

The next day, John is doing his daily meditation, imagining the cessation of endless blue and the ascendance of endless green, but thoughts of Ted and his inability to perceive this situation keep intruding. Suddenly, John experiences a subject-object shift [LW · GW]. He is able to perceive his meditation as Ted perceives it, with both colors being the same. In the next moment, he has a flash of the Greenblots celebrating when they've achieved their goal, and John now knows what it's like to experience the thing Ted called love.

John is confused; he thought the Greenblots had built a foolproof theory of winning, of how to maximize the green and minimize the blue. But then he experienced endless green, and knew what it was for that to not be winning at all. And he experienced the thing Ted was describing, and the sensation of winning felt the same. John thought he knew everything about winning, but in fact he knew nothing.

John vows to understand the true nature of winning, and develop the discipline of being able to work with the sensation just like he previously was able to work with beliefs about making things greener. John will become the Greenblots' first post-rationalist.

comment by Matt Goldenberg (mr-hire) · 2024-11-04T18:42:55.762Z · LW(p) · GW(p)

Enjoyed this video by Veritasium with data showing how Politics is the Mind Killer

 

Replies from: brambleboy, Dana, localdeity
comment by brambleboy · 2024-11-05T07:40:08.790Z · LW(p) · GW(p)

While the broader message might be good, the study the video is about didn't replicate.

Replies from: D0TheMath, mr-hire, Cipolla
comment by Garrett Baker (D0TheMath) · 2024-11-05T23:32:18.963Z · LW(p) · GW(p)

Kicking myself for not making a fatebook about this. It definitely sounded like the kind of thing that wouldn't replicate.

comment by Matt Goldenberg (mr-hire) · 2024-11-05T19:01:44.116Z · LW(p) · GW(p)

They replicated it within the video itself?

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-05T19:14:07.990Z · LW(p) · GW(p)

I watched the video and didn't see any stats from their own experiment. Do you have a frame or a section?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-11-05T21:32:52.706Z · LW(p) · GW(p)

I don't remember them showing the actual stats, though I'm not going to watch it again to check. I wonder if they published those elsewhere.

comment by Cipolla · 2024-11-05T21:07:17.590Z · LW(p) · GW(p)

Do you know why the error bars in the replication are smaller than in the original? (Just more people?) And at what confidence is the null hypothesis (difference = 0) rejected in both cases?

comment by Dana · 2024-11-05T17:33:16.171Z · LW(p) · GW(p)

A few glaring issues here:
1) Does the question imply causation or not? It shouldn't.
2) Are these stats intended to be realistic such that I need to consider potential flaws and take a holistic view or just a toy scenario to test my numerical skills? If I believe it's the former and I'm confident X and Y are positively correlated, a 2x2 grid showing X and Y negatively correlated should of course make me question the quality of your data proportionally.
3) Is this an adversarial question such that my response may be taken out of context or otherwise misused?

The sample interviews from Veritasium did not seem to address any of these issues: 
(1) They seemed to cut out the gun question, but the skin cream question implied causation, "Did the skin cream make the rash better or worse?"
(2) One person mentioned "I wouldn't have expected that..." which implies he thought it was real data, 
(3) the last person clearly interpreted it adversarially.

In the original study, the question was stated as "cities that enacted a ban on carrying concealed handguns were more likely to have a decrease in crime." This framing is not as bad, but still too close to implying causation in my opinion.
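
(To make the 2x2 trap in point (2) concrete, with made-up numbers of the kind these questions use rather than the study's actual figures: suppose 223 rashes improved and 75 got worse with the cream, versus 107 improved and 21 got worse without it. The cream column has more raw improvements, 223 vs. 107, but its improvement rate is 223/298 ≈ 75% versus 107/128 ≈ 84% without the cream, so the correct read is that the cream group did worse. Comparing raw counts instead of rates gives the wrong answer.)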

comment by localdeity · 2024-11-05T12:22:16.570Z · LW(p) · GW(p)

The political version of the question isn't functionally the same as the skin cream version, because the former isn't a randomized intervention—cities that decided to add gun control laws seem likely to have other crime-related events and law changes at the same time, which could produce a spurious result in either direction.  So it's quite reasonable to say "My opinion is determined by my priors and the evidence didn't appreciably affect my position."

comment by Matt Goldenberg (mr-hire) · 2024-11-03T21:50:28.429Z · LW(p) · GW(p)

Want to help me out?

Vote on the book cover for my new book!

It'll be up for a couple of days. The contest website only gives me a few days before I have to pick finalists.

https://form.jotform.com/243066790448060

Replies from: gwern
comment by gwern · 2024-11-04T00:23:53.504Z · LW(p) · GW(p)

There's no way I can meaningfully pick from like 100 covers. Pick 5 or 10, max, if you expect meaningful votes from people.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-11-04T00:32:04.872Z · LW(p) · GW(p)

I'll send round 2 out to you when I've narrowed things down. Right now I'm looking for gut-check System 1 decisions, and if you have trouble making those I'd recommend waiting.

comment by Matt Goldenberg (mr-hire) · 2020-08-18T03:53:20.450Z · LW(p) · GW(p)

Are there big takeaways from Moral Mazes that you don't get from The Gervais Principle?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-08-18T19:25:09.941Z · LW(p) · GW(p)

My memory of The Gervais Principle is that it gets wrapped up in lots of fairly specific models of how people interact, whereas Moral Mazes has a more diffuse "you are contaminated by interacting with the system" vibe. So in the end maybe pretty similar, but with different emphases.

comment by Matt Goldenberg (mr-hire) · 2020-06-12T20:15:28.339Z · LW(p) · GW(p)

Having trouble being decisive? Turns out there are only two simple mindset shifts that separate decisive people from indecisive people.

Indecisive people view decisions as a fork in the road. They can stand there forever, trying to decide which way to go.

Decisive people view decisions more like a train switch, that will change the direction of the train they're already inside. If they don't pull the lever in time, the decision to stay on their current path is made for them.

When indecisive people try out this metaphor, sometimes they discover something... thinking of decisions like this is really stressful!

This brings us to the second big mindset shift. Indecisive people view all decisions as the same! Decisive people don't do that.

Instead, they bucket their decisions. They have in mind a clear picture of the things they value, and their vision for the future. If a decision doesn't affect those things, they make it quickly and intuitively. Only if it does do they put more time into the decision.

This allows them to not sweat most decisions, and makes decision making much less stressful. It also allows them to put more time and effort into the truly important decisions, because they're not wasting time on the decisions that don't matter.

comment by Matt Goldenberg (mr-hire) · 2020-02-11T11:21:39.656Z · LW(p) · GW(p)

The things that I'm most qualified to teach are the things that I'm worst at.

Take procrastination for example. My particular genetic and cultural makeup ensured that focus would never be a strong suit. As a result, I went through basically every problem that someone who struggles with procrastination goes through. I ran into a ton of issues surrounding it, attacked it from a variety of angles, and got to a point where I can ship cool projects and do great work. Probably average or slightly above in productivity, but functional.

Meanwhile, when I teach overcoming procrastination, I can truly talk about the path you need to take to learn the material. When a student runs into an issue, it's rare that it's an issue I haven't overcome myself (usually multiple times in different forms), and I can give excellent advice on a path to success.

Meanwhile, the things that I'm best at are the things I'm worst at teaching.

Take constructing conceptual models. It's something that has always come naturally to me. Upon realizing that it was a particular strength of mine, I worked to hone it and understand it and push it to the limits. However, even with this deep understanding, I'm still not great at teaching it. I can tell people what it feels like, and my introspection on the parts of it, and all of the systems I've built to enhance it and the reasoning behind them.

But, I cannot tell them the path to go from not having the skill of conceptual model building to having it. It's like breathing to me. If they run into a problem in acquiring the skill, I cannot help them overcome it because I never ran into it myself. It's much harder for me to truly understand what it's like to be someone who struggles with the skill.

Replies from: jimrandomh, Pattern
comment by jimrandomh · 2020-02-11T21:21:33.784Z · LW(p) · GW(p)

While this seems accurate in these cases, I'm not sure how far this model generalizes. In domains where teaching mostly means debugging, having encountered and overcome a sufficiently wide variety of problems may be important. But there are also domains where people start out blank, rather than starting out with a broken version of the skill; in those cases, it may be that only the most skilled people know what the skill even looks like. I expect programming, for example, to fall in this category.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-02-12T00:13:02.445Z · LW(p) · GW(p)

Agree, the model doesn't fully generalize and lacks nuance. I think programming is a plausible counterexample.

comment by Pattern · 2020-02-11T18:04:22.524Z · LW(p) · GW(p)

Are you good at teaching people (your) existing conceptual models? (As opposed to how to make their own.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-02-11T19:27:04.566Z · LW(p) · GW(p)

I think I'm decent at it. I suppose you could answer this question better than I.

comment by Matt Goldenberg (mr-hire) · 2019-07-12T17:30:38.164Z · LW(p) · GW(p)

INSTRUMENTAL RATIONALITY CURRICULUM

A few weeks ago I ran a workshop at the EA hotel that taught my Framework for internal debugging [LW · GW]. It went well, but there was obviously too much content, and I have doubts about its ability to consistently affect people in the real world.

I've started planning for the next workshop, and creating test content. The idea is to teach the material as a series of habits where specific mental sensations/smells are associated with specific mental moves. These implementation intentions can be practiced through focused meditations. There are 3 "sets" of habits that each have seven or eight meditations attached to them.

The idea is that the first course, the way of focus, teaches people the basic skills of working with intentions and focusing that are needed to not procrastinate. That is, there are basic skills to focusing that you need in order to get things done, even if you don't have any internal conflict or trauma. The first course starts with that.

THE WAY OF FOCUS (Overcoming Akrasia).

1. Noticing dropped intention -> Restabilizing intention

2. Noticing Competing Intention -> Loving Snooze (+ Setting Up Pomodoros or Consistent Break Schedule)

3. Noticing Potential Intention -> Mental Contrasting

4. Noticing Coercive Intention -> Switching to Non-coercive Possibility

5. Noticing Ambiguous/Overwhelming Intention -> Generating Specific Next Action

6. Noticing Context Switch -> Intention Clearing (+ Habits for Removing Distractions)

7. Noticing Productivity -> Reinforcing Self-Concept as Productive Person (+ Changing Environment to That of Productive Person)

THE WAY OF LETTING GO (Overcoming Trauma)

Sometimes, you'll have competing intentions come up that are very persistent, because they're related to deep emotional issues/trauma. You can find them by looking for feelings of avoidance or the inability to avoid, and then use the following techniques to dispel them.

1. Noticing Avoidance -> Fuse with the Feeling

2. Noticing Magnetism -> Dissociate from Feeling

3. Inhabiting Feeling -> Finding Emotional Core

4. Finding Emotional Core -> Re-experience Memories

5. Sticky Belief -> Question Belief Via Work of Byron Katie

6. Sticky Feeling -> Let Go of Feeling Via Sedona Method

7. Sticky Memories -> Reframe Memories Via Lefkoe Belief Process

8. Process Fails -> Find Second Layer Emotion

THE WAY OF ALIGNMENT (Overcoming Internal Conflict)

Sometimes, you'll notice competing intentions that aren't unambiguously negative or positive, and it's hard to know what to do. In those cases, you can notice the "conflicted" feeling, and use the following habits to deal with them over a period of time.

0. Noticing Conflict -> Fuse/Dissociate With Feeling (Already Taught)

0. Easy to Fuse/Dissociate -> Find Emotional Core (Already Taught)

1. Familiar Conflict-> Alternate Fusing/Dissociating (practice switching perspectives)

2. Easy to shift perspectives -> Practice holding both at once

3. Easy to hold both at once -> Internal Double Crux

4. Memory Reconsolidated -> Stack Attitudes

5. Attitudes Stacked -> Core Transformation

6. Core Transformed -> Parental Timeline Reimprinting

7. Timeline Reimprinted -> Modality Mind Palace

ASK:

I'm just finishing up the content for THE WAY OF FOCUS, and I'm looking for people to help test the material. It will involve committing 30 minutes a day over the internet for 7 days: 10 minutes to practice previous meditations, 10 minutes to teach the new material, and 10 minutes to practice the new material via a new type of meditation.

comment by Matt Goldenberg (mr-hire) · 2020-12-12T17:00:30.046Z · LW(p) · GW(p)

(Taken from a comment)

One of the problems with Rao's Gervais Principle that I later realized (and that I think Zvi's sequence shares to some degree) is that it doesn't distinguish between Kegan 4.5 Sociopaths and Kegan 5 leaders. This creates the impossible choice between having freedom as a loser, meaning as a clueless, or influence as a sociopath - pick one.

Similarly, Zvi's sequence gives the choice of truth at Simulacra 1, belonging at Simulacra 3, and influence at Simulacra 4.

Neither framing admits that it's possible to get to a stage of leadership in which you can fluidly cycle between variations of the 3 modes.

Replies from: gworley, Vladimir_Nesov, Dagon, ckai
comment by Gordon Seidoh Worley (gworley) · 2020-12-12T20:59:34.927Z · LW(p) · GW(p)

Thanks, I think this helps me see what I find slightly off about both, and also Zvi's writing on "moral mazes".

In all three cases, it's acting as if the frames and roles people feel themselves to be trapped in are the ground reality, rather than a way of being those people are choosing to take on. They present models that seem to claim a complete description, but fail to realize that even if they are complete descriptions, it's possible to pull back and see people and statements and roles as being in multiple states at once, or for parts of the model to be under- or over-specified such that stuff gets lumped together that should be split apart.

comment by Vladimir_Nesov · 2020-12-12T17:57:40.870Z · LW(p) · GW(p)

The simulacra levels are not mutually exclusive, a given statement should be interpreted at all four levels simultaneously:

  • Level 1 (facts): What does the statement claim about the world?
  • Level 2 (deception): What actions does belief in the statement's truth incite?
  • Level 3 (identity): Which groups does uttering this statement serve as evidence for belonging to?
  • Level 4 (consequences): What goals does uttering this statement serve?
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-12T18:53:27.902Z · LW(p) · GW(p)

Yes, and I think this is largely missing or distorted in the sequence.

I think the post that gets closest to really truly recognizing this is "Simulacra levels and their interactions"

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2020-12-12T21:00:00.564Z · LW(p) · GW(p)

My takeaway was that awareness of all levels is necessary if you want to reliably remain on level 1 (make sure that you don't trigger responses for levels 2-4 by crafting statements that have no salient interpretations at levels 2-4). So both the problem and the solution involve reading statements at multiple levels.

(The innovation is in how this heuristic is more principled/general than things like "don't talk about politics or religion". You might even manage to talk about politics and religion without triggering levels 2-4.)

comment by Dagon · 2020-12-12T17:48:35.907Z · LW(p) · GW(p)

Is this a problem for the theory, or a problem for human participants in society that the theory exposes?  I suspect that people of varying capability do have this conundrum - it may not be a pure choice they make, but the paths they take will lead them to less-than-perfect situations and interactions.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-12T17:54:08.819Z · LW(p) · GW(p)

It reveals an incompleteness in the theory.  

comment by ckai · 2020-12-12T20:26:35.920Z · LW(p) · GW(p)

But The Gervais Principle is a model of a tv show, not directly of reality.  I haven't seen the particular show, but most tv shows are not trying to model reality, but reflect it, and distorting it is fair and even expected.  There's an argument to be made that the distortions are what makes it interesting.

Do you see this differently?

Replies from: gworley, mr-hire
comment by Gordon Seidoh Worley (gworley) · 2020-12-12T20:56:11.769Z · LW(p) · GW(p)

I think Rao is clearly trying to take a model from the show and present it as saying something meaningful about the world we live in and not just the world of the show.

Replies from: ckai
comment by ckai · 2020-12-13T03:22:25.025Z · LW(p) · GW(p)

Yes, I agree with that.  Of course it's meaningful!  It wouldn't be a reflection of reality if it wasn't.  But meaningful isn't the same as complete or undistorted.

For example, I think it's meaningful (maybe not the most insightful thing that could possibly be said, but meaningful) to talk about the original Star Trek in terms of head, heart, and gut as reflected in the characters of Spock, McCoy, and Kirk.  I don't think this covers everything that Star Trek is, or everything that those characters are, or everything that real people can be, but it's an interesting pattern (and from there one can have some fun considering felt senses and gut feelings, because so often people use an even simpler model and just contrast head and heart, so I think it's fun to consider the gut as Captain).

I saw The Gervais Principle as a way of looking at the show and at those aspects of reality that are reflected in the show (I read the whole thing for the reflections of reality, not the show analysis), and an interesting one, but not necessarily intended to be complete to every possibility (especially possibilities not explored in the show) or even...I mean, I'd have to read it again, but just as real people aren't only one of head heart gut, in terms of The Gervais Principle, I thought there was some simplification going on, but I can't actually remember if I thought the categories were more like personality types (which are usually a continuum), or like cultures, or like roles that one is forced into and then forced to act according to.  I remember aspects of all of these, actually.

comment by Matt Goldenberg (mr-hire) · 2020-12-12T20:47:41.654Z · LW(p) · GW(p)

Yeah, I think that Rao is using the Office to illustrate what he sees as a real world pattern.

comment by Matt Goldenberg (mr-hire) · 2020-09-22T20:23:11.672Z · LW(p) · GW(p)

It seems like the spirit of the Litany of Gendlin is basically false?

Owning up to what's true makes things way worse if you don't have the psychological immune system to handle the negative news/deal with the trauma or whatever.

And it's precisely the things that you are avoiding looking at that are  most likely to be those things you can't handle, as that's WHY you developed the response of not looking at them.

Replies from: jimrandomh, Raemon, Pongo, AllAmericanBreakfast
comment by jimrandomh · 2020-09-22T22:09:28.323Z · LW(p) · GW(p)

Pedantically speaking, whether this is true or not depends on what you mean by "it"; owning up to it [a fact about the world external to oneself] does not make it [that fact] worse, but if your psychology can't handle unpleasant truths, then owning up to it [a specific fact about the external world] may make it [the world as a whole] worse.

But this is a bit of a dodge; I think the right way to look at it is that, in most cases, a false belief is a form of debt; you'll probably have to own up to it eventually, and there's a cost to be paid when you do, but time-shifting that cost further into the future creates additional costs, because you make worse decisions and form other incorrect beliefs in the mean time.

comment by Raemon · 2020-09-24T01:02:51.882Z · LW(p) · GW(p)

Habryka framed the Gendlin litany as a stoic meditation, which made me dislike it a bit less. i.e, it's something you say to yourself to help make it true that you can endure the truth, by choosing to adopt a frame where the truth is already out there. (not sure if habryka exactly endorses this summary)

The main issue I then have with it (through this frame) is it says "people can endure what is true", rather than "I can endure what's true" – "people" sounds like it's making a claim about the external world, rather than a mantra I'm repeating to myself. (Although I can imagine a reading where the "people" is still directed inward rather than outward)

I guess put another way, further steelmanning the original version: the fact that people can stand what's true doesn't mean that they do stand what's true. You can be reminding yourself of what's possible, and committing to cleave towards the truth and be the sort of person who will stand what's true, by framing it as something you're already enduring.

comment by Pongo · 2020-09-24T00:46:29.775Z · LW(p) · GW(p)

I think it's probably true that the Litany of Gendlin is irrecoverably false, but I feel drawn to apologia anyway.

I think the central point of the litany is its equivocation between "you can stand what is true (because, whether you know it or not, you already are standing what is true)" and "you can stand to know what is true".

When someone thinks, "I can't have wasted my time on this startup. If I have I'll just die", they must really mean "If I find out I have I'll just die". Otherwise presumably they can conclude from their continued aliveness that they didn't waste their life, and move on. The litany is an invitation to allow yourself to have less fallout from acknowledging or finding out the truth because you finding it out isn't what causes it to be true, however bad the world might be because it's true. A local frame might be "whatever additional terrible ways it feels like the world must be now if X is true are bucket errors [LW · GW]".

So when you say "Owning up to what's true makes things way worse if you don't have the psychological immune system to handle the negative news/deal with the trauma or whatever", you're not responding to the litany as I see it. The litany says (emphasis added) "Owning up to it doesn't make it worse". Owning up to what's true doesn't make the true thing worse. It might make things worse, but it doesn't make the true thing worse (though I'm sure there are, in fact, tricky counterexamples here)

(The Litany of Gendlin is important to me, so I wanted to defend it!)

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-09-23T17:08:14.059Z · LW(p) · GW(p)

We obviously can’t give our attention to every truth. The LoG has to be contextual. If you’re spending a lot of resources pursuing an impossible goal because you’re willfully ignoring an uncomfortable fact, stop denying the truth. Build the emotional skills to work through disappointment in a healthy way and move on with your life.

My issue with the LoG is its tone. It seems to frame the process of coping with disappointment as a dispassionate one. Like we’re supposed to be a computer. I think that’s unhelpful on the margin for most people most of the time.

Replies from: Pongo
comment by Pongo · 2020-09-24T00:35:49.019Z · LW(p) · GW(p)

I wonder why it seems like it suggests dispassion to you, but to me it suggests grace in the presence of pain. The grace for me I think comes from the outward- and upward-reaching (to me) "to be interacted with" and "to be lived", and grace with acknowledgement of pain comes from "they are already enduring it"

comment by Matt Goldenberg (mr-hire) · 2020-08-07T19:17:08.258Z · LW(p) · GW(p)

Just had an excellent chat with CFAR Cofounder (although no longer a part of CFAR) Michael Smith breaking down in excruciating detail a skill he calls "Breaking Free."

A step by step process to:

1. Notice auto-pilot scripts you are running that are causing you pain.

2. Dissolve them so you can see what actions will lead to what you truly want.

Now, I'm looking for people to teach this skill to! It would involve a ~2 hour session where I ask you why you want the skill, and teach it to you, then a ~30 minute followup session a couple weeks later where we talk about what the skill has done for you.

I'm happy to give free coaching on the skill to anyone who asks, all I ask is that I can use the recordings of your session in the podcast about the skill.

Anyone interested?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-11T05:09:36.007Z · LW(p) · GW(p)

I may be interested. DM me

comment by Matt Goldenberg (mr-hire) · 2020-08-05T01:23:01.238Z · LW(p) · GW(p)

CFAR's "Adjust Your Seat" principle and associated story is probably one of my most frequently referenced concepts when teaching rationality techniques.

I wish there was a LW post about it.

comment by Matt Goldenberg (mr-hire) · 2019-12-11T19:19:38.337Z · LW(p) · GW(p)

My biggest win lately (Courtesy of Elliot Teperman) in regards to self love is to get in the habit of thinking of myself as the parent of a child (myself) who I have unconditional love for, and saying what that parent would say.

An unexpected benefit of this is that I've started talking like this to others.

Like, sometimes my friends just need to hear that I appreciate them as a human being, and am proud of them for what they accomplished, and it's not the type of thing I used to say at all.

And so do I, I didn't realize how much I needed to hear that sort of thing from myself until I started saying it regularly.

One could call this Internal Parent Systems. Not to be confused with the default-installed one that many of us have that judges, criticizes, or blames in our parents' voice :). A close cousin of Qiaochu Yuan's Internal Puppy Systems.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2019-12-11T21:18:12.640Z · LW(p) · GW(p)

I think this has some interesting parallels to transactional analysis. In that model you could think of it as exercising your parent part to talk to your child part and to talk to the child part of others.

comment by Matt Goldenberg (mr-hire) · 2019-10-21T19:14:18.375Z · LW(p) · GW(p)
  • Today I had a great chat with a friend on the difference between #Fluidity and #Congruency
  • For the past decade+ my goal has been #Congruency (also often called #Alignment), the idea that there should be no difference between who I am internally, what I do externally, and how I represent myself to others
  • This worked well for quite a long time, and led me great places, but the problems with #Congruency started to show more obviously recently.
  • Firstly, my internal sense of "rightness" wasn't easily encapsulated in a single set of consistent principles; it's very fuzzy and context specific. And furthermore, what I can even define as "right" shifts as my #Ontology shifts.
  • Secondly, and in parallel, as the idea of #Self starts to appear less and less coherent to me, the whole base that the house is built on starts to collapse.
  • This had led me to begin a shift from #Congruency to #Fluidity. #Fluidity is NOT about behaving by an internally and externally consistent set of principles, rather it's being able to find that sense of "Rightness" - the right way forward - in increasingly complex and nuanced situations.
  • This "rightness" in any given situation is influenced by the #Ontology's that I'm operating under at any given time, and the #Ontologies are influenced by the sense of "rightness".
  • But as I hone my ability to fluidly shift ontologies, and my ability to have enough awareness to be in touch with that sense of rightness, it becomes easier to find that sense of rightness/wrongness in a given situation. This is as close as I can come to describing what is sometimes called #SenseMaking.
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-10-21T19:22:31.815Z · LW(p) · GW(p)

Sorry for all the hashtags, this was originally written in Roam.

Replies from: Pattern
comment by Pattern · 2019-10-22T05:11:56.367Z · LW(p) · GW(p)

Is Roam as useful a medium for you to read in, as it is for you to write in?

comment by Matt Goldenberg (mr-hire) · 2019-06-25T20:01:36.061Z · LW(p) · GW(p)

STEELMANNING KEGAN 3 (OR, KEGAN 3, TO THE TUNE OF KEGAN 4)

Ruby recently made an excellent post called Causal Reality vs. Social Reality [LW · GW]. One way to frame what he was writing: he was trying to point out that 58% of the population is at Kegan's stage 3, and a lot of what rationality is doing is trying to move people to stage 4.

I made a reply to that (knowing it might not be that well received), essentially trying to steelman Kegan 3 from a Kegan 4 perspective - that is, asking whether there is a valid systemic reason, based on long-term goals, to act as if all you care about is how you make yourself and others feel.

Here's my slightly edited attempt:

The thing we actually care about... Is it how everyone feels? People being happy and content and getting along, love and meaning - it seems to be based in large part on the fundamental question of how people feel about other people, how we get along - the questions that are asked in Kegan 3.

It might be understandable if you're a person who cares about a world where people love and cherish each other, and are able to pursue meaning - you might think that the near-term effects on how people think and feel and relate also affect how people think and feel and relate in the long term. If you don't have a lot of power, you might even subconsciously think that the flow-through effects from your ability to affect how people around you feel are your best chance at affecting the "ultimate goal" of everyone getting along.

And when you run into someone who (in your mind) doesn't care about that reality of how their actions affect the harmony of the group, and instead is focused on weird rules that discard those obvious effects, you might think them cold and calculating and, importantly, in opposition to that ultimate goal.

Then you might write up a post about how sure, rules and Kegan 4 and principles of action are important sometimes, but the important thing is just being good and kind to other people, and things will work themselves out - that Kegan 3 actions are actually the best way to achieve Kegan 4 goals.

Replies from: Raemon, Ruby
comment by Raemon · 2019-06-25T20:05:47.789Z · LW(p) · GW(p)
The thing we actually care about... Is it how everyone feels?

I happen to roughly agree with this but be warned that there are people who get off this train right about here.

Replies from: habryka4, mr-hire
comment by habryka (habryka4) · 2019-06-25T21:10:04.952Z · LW(p) · GW(p)

*raises hand and gets off the train*

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-06-25T21:59:13.708Z · LW(p) · GW(p)

You strike me as someone very heaven-focused, so I am surprised you got off the train right about here.

I wonder, if you expand the concept of "how everyone feels" to include eudaimonic happiness - that is, it's not just about how they feel, but second-order ideas of how they would feel about the meaningfulness/rightness of their own feelings (and how you feel about the meaningfulness/rightness of their actions) - do you still get off the train?

Replies from: habryka4
comment by habryka (habryka4) · 2019-06-25T23:22:14.114Z · LW(p) · GW(p)

Yeah, it seems pretty plausible that I care about things that don't have any experience. It seems likely that I prefer a universe tiled with amazing beautiful paintings but no conscious observers to a universe filled with literal mountains of feces but no conscious observers. I don't really know how much I prefer one over the other, but if you give me the choice between the two I would definitely choose the first one.

comment by Matt Goldenberg (mr-hire) · 2019-06-25T20:16:06.959Z · LW(p) · GW(p)

There are a lot of underlying models here around the "Heaven and Enlightenment" dichotomy that I've been playing with. That is, it seems like when introspecting, people either seem to want to get to a point where everyone feels great, or get to a point where they can feel great/ok/at peace with everyone not feeling great. (Some people are in the middle, and for instance want to create heaven with their proximate tribe or family, and enlightenment around the suffering of the broader world.)

One of the things I found out recently that makes me put more weight on the heaven and enlightenment dichotomy is that research into Kegan stage 5 has found there are two types of Kegan stage 5 - people who get really interested in other people and how they feel and how to make them do better (Heaven), and people who get really interested in their own experience and their own body and what's going on internally (Enlightenment). That is, when you've discarded all your instrumental values and ontologies as fluid and contextual and open to change and growth, what's left are your terminal values - either heaven, or enlightenment.

comment by Ruby · 2019-06-25T20:19:52.676Z · LW(p) · GW(p)

I responded to your original comment here [LW(p) · GW(p)]. I don't know the Kegan types well enough (perhaps I should) to say whether that's a framing I agree with or not.

comment by Matt Goldenberg (mr-hire) · 2019-06-21T18:15:33.965Z · LW(p) · GW(p)

WHY VIBING IS IMPORTANT

Vibing is a type of communication where the content is a medium through which you can play with the emotional rhythm. I've said before that the Berkeley rationalist community is missing this, and that that's important, but have never really explained why vibing is important.

Firstly, vibing is one of the purest forms of play - if you're playing with others, but you're not vibing, there's an important emotional connection component missing from your play.

Secondly, vibing is a way to screen for people whose emotional rhythm can sync up with a group. It's a vital screening mechanism to figure out if you can brainstorm well together, work well together, and get along.

Finally, the speed at which you communicate when vibing means you're communicating almost purely from System 1, expressing your actual felt beliefs. It makes deception, both of yourself and others, much harder. It's much more likely to reveal your true colors. This allows it to act as a values screening mechanism as well.

Replies from: moses
comment by moses · 2019-06-22T04:15:00.241Z · LW(p) · GW(p)

I'm so curious about this. I presume there isn't, like, a video example of "vibing"? I'd love to see that

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-06-22T05:17:52.583Z · LW(p) · GW(p)

I don't think vibing is that unusual a method of communication; most people have seen it and participated in it... rationalists in Berkeley just happen to be really bad at it.

Unfortunately I can't find a video example (don't know what to search for) but I did write up a post that was trying to explain it from the inside. https://www.lesswrong.com/posts/jXHwYYnqynhB3TAsc/what-vibing-feels-like [LW · GW]

Replies from: moses
comment by moses · 2019-06-22T14:45:46.619Z · LW(p) · GW(p)

Yeah, I've read that one, and I guess that would let someone who's had the same experience understand what you mean, but not someone who hasn't had the experience.

I feel similarly to when I read Valentine's post on kensho—there is clearly something valuable, but I don't have the slightest idea of what it is. (At least unlike with kensho, in this example it is possible to eventually have an objective account to point to, e.g. video.)

comment by Matt Goldenberg (mr-hire) · 2024-02-24T21:36:28.194Z · LW(p) · GW(p)

Surprising thing I've found as I begin to study and integrate skillful coercive motivation is how central belief in providence and faith are to this way of motivating yourself. Here are some central examples: the first from War of Art, the second from The Tools, the third from David Goggins. These aren't cherry-picked (this is a whole section of War of Art and a whole chapter of The Tools).

[Three images: excerpts from War of Art, The Tools, and David Goggins.]

This has interesting implications given that as a society (at least in America) we've historically been motivated by this type of masculine, apollonian motivation - but have increasingly let go of faith in higher powers as a tenet of our central religion, secular humanism. This means the core motivation that drives us to build, create, transcend our nature... is running on fumes. We are motivated by gratitude, without a sense of to what or whom we should be grateful, told to follow our calling without a sense of who is calling.

We've tried to hide this contradiction. Our seminaries separate our twin Religions (Secular Humanism and Scientific Materialism) into STEM and humanities tracks to hide that what motivates The Humanities to create is invalidated by the philosophy that allows STEM to discover. But this is crumbling, the cold philosophy of scientific materialism is eroding the shaky foundations that allow secular humanists to connect to these higher forces - this is one of the drivers of the meaning crisis.

I don't really see any way we can make it through the challenges we're facing with these powerful new technologies w/o a new religion that connects us to the mystical truly wise core that allows us to be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism, and what Monastic Academy is trying to do with a new, mystical form of dataism - but both these projects are moonshots to massively change the direction of culture.

Replies from: Viliam, mr-hire
comment by Viliam · 2024-02-25T19:19:45.166Z · LW(p) · GW(p)

We need research on whether atheists are more likely to suffer from akrasia.

If we take Julian Jaynes seriously, the human brain has a rational hemisphere and a motivating hemisphere. Religion connects these hemispheres, allowing them to work in synergy. Skepticism seems to split them.

Effective atheists are probably the ones who despite being atheists still believe in some kind of "higher power", such as fate or destiny or the spirit of history or some bullshit like that. Probably still activates the motivating hemisphere to some degree, only now instead of hearing a clear voice, only some nonverbal guidance is provided. Deep atheism probably silences the motivating hemisphere completely.

The question is, how to harness the power of the religious hemisphere without being religious (or believing some nominally non-religious bullshit). How to be fully rational and fully motivated at the same time.

Can we say something like "I know this is pure bullshit, but God please give me the power to accomplish my goals and smite my enemies!" and actually mean it? Is this what will unleash the true era of rationalist world optimization?

comment by Matt Goldenberg (mr-hire) · 2024-02-24T21:37:27.518Z · LW(p) · GW(p)

Request for feedback: Do I sound like a raving lunatic above?

Replies from: maxwell-peterson
comment by Maxwell Peterson (maxwell-peterson) · 2024-02-24T22:30:07.537Z · LW(p) · GW(p)

I do think it has some of that feeling to me, yeah. I had to re-read the entire thing 3 or 4 times to understand what it meant. My best guesses as to why:

I felt whiplashed on transitions like “be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism”, since I don’t know him or that type of Humanism, but the sentence structure suggests to me that I am expected to know these. A possible rewrite could perhaps be “There are two projects I know of that aim to create a belief system that works with, instead of against, technology. The first is Marc Gafni; he calls his ‘Cosmo-Erotic Humanism’…”

There are some places I feel a colon would be better than a comma. Though I’m not sure how important these are, it would help slow down the pace of the writing:

“increasingly let go of faith in higher powers as a tenet of our central religion: secular humanism.” “But this is crumbling: the cold philosophy”

While minor punctuation differences like this are usually not too important, the way you wrote gives me a sense of, like, too much happening too fast: “wow, this is a ton of information delivered extremely quickly, and I don’t know what appolonian means, I don’t know who Gafni is, or what dataism is…” So maybe slowing down the pace with stronger punctuation like colons is more important than it would otherwise be?

Also, phrases like “our central religion is secular humanism” and “mystical true wise core” read as very Woo. I can see where both are coming from, since I’ve read a lot of Woo, but I think many readers would bounce off these phrases. They can still be communicated, but perhaps something like “in place of religion, many have turned to Secular Humanism. Secular humanism says that X, Y, Z, but has no concept of a higher power. That means the core motivation that…”

(To be honest I’ve forgotten what secular humanism is, so this was another phrase that added to my feeling of everything moving too fast, and me being lost).

There are some typos too.

So maybe I’d advise making the overall piece of writing slower, by giving more set-up each time you introduce a term readers are likely to be unfamiliar with. On the other hand, that’s a hassle, and probably annoying to do in every note, if you write on this topic often. But it’s the best I’ve got!

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-02-24T22:48:54.277Z · LW(p) · GW(p)

Thanks. Appreciate this. I'm going to give another shot at writing this

comment by Matt Goldenberg (mr-hire) · 2023-11-10T16:10:43.954Z · LW(p) · GW(p)

My sense is that most people who haven't done one in the last 6 months or so would benefit from at least a week long silent retreat without phone, computer, or books.

comment by Matt Goldenberg (mr-hire) · 2021-11-27T16:14:05.978Z · LW(p) · GW(p)

I just realized that humans are misaligned mesa-optimizers. Evolution "wanted" us to be pure reproduction maximizers, but because of our training distribution we ended up valuing things like love, truth, and beauty as terminal values. We're simply misaligned AIs run amok.

comment by Matt Goldenberg (mr-hire) · 2020-12-03T14:06:30.085Z · LW(p) · GW(p)

How do you nominate a post for the 2019 review? When I click on "Nominations" I only see posts that were already nominated. When I go to a post's page to make a comment, I don't see any obvious way to make it a nomination.

Edit: Found it! Click the 3 dots menu at the top of a post.

comment by Matt Goldenberg (mr-hire) · 2020-11-25T19:12:47.711Z · LW(p) · GW(p)

Alright, now somebody needs to write the "Pain is a contextually useful unit of effort whose value varies depending on your situation, genetics, and upbringing" post.

I sort of want to create a gpt-3 bot that automatically does this for any X is Good or X is Bad post.

comment by Matt Goldenberg (mr-hire) · 2020-09-26T18:21:05.019Z · LW(p) · GW(p)

Mods are asleep, post pictures of mushroom clouds.

comment by Matt Goldenberg (mr-hire) · 2020-09-23T22:36:44.186Z · LW(p) · GW(p)

Is there much EA work into tail risk from GMOs ruining crops or ecosystems?

If not, why not?

comment by Matt Goldenberg (mr-hire) · 2020-07-13T21:01:18.904Z · LW(p) · GW(p)

When I interviewed people who were both very productive and enjoyed work immensely, they turned out to be remarkably similar in terms of the emotional content of how they related to tasks. Here are the 5 emotions that can make work productive and enjoyable:

  1. Unqualified Desire
    • Definition: Wanting the outcome of your task without reservation. Wanting to do the task without reservation.
    • Questions:
      • What's bad about this outcome or task?
      • How can I remove the bad aspects?
  2. Resolve
    • Definition: A sense that "I will do this task". As opposed to Unqualified Desire, which is a sense that "I want this outcome."
    • Questions:
      • How do I feel currently about the project or task?
      • How will I feel once the project or task is done?
      • How can I make that difference real to myself?
  3. Playfulness
    • Definition: A sense of intrinsic enjoyment for your task.
    • Questions:
      • What is the nearest state to what I'm currently feeling, that includes a sense of enjoyment?
      • What does a task need for it to be intrinsically enjoyable to me?
      • Which of those things can I add to this task to most quickly get to that Nearest Playful State?
  4. Meaning
    • Definition: A sense that you're connected to your deepest values when doing a task.
    • Questions:
      • What is the nearest state to what I'm currently feeling, that includes a sense of Meaning?
      • What are my values?
      • Which of those values can I tie to this task, to more quickly get to that Nearest Meaningful State
  5. Intentionality
    • Definition: A state where you feel as if you are choosing to do what you want to do, when you want to do it.
    • Questions:
      • What is the nearest state to what I'm currently feeling, that includes a sense of Intentionality?
      • What can I do to move myself into that state?
      • Is that something I'll actually do from my current state?
comment by Matt Goldenberg (mr-hire) · 2020-07-02T14:29:41.434Z · LW(p) · GW(p)

One of the things I've been working on in the background over the past ~year is changing my relationship to money. This has allowed me to make more of it while feeling great about it.

Here are the 2 biggest shifts I made:

1. I had a deep-rooted subconscious belief that if I got money, it would corrupt me, amplify the worst parts of me. Then I realized that having money will allow me to hire coaches and advisors whose sole purpose is to help me reach my deepest values. I spent lots of time consciously visualizing this, and recognizing on a deep level that I could consciously direct my money to amplify the best parts of me.

2. I used to view money as a transaction, a fair trade between giving money, and getting something back of equal or greater value. But, that caused me to miss out on the human component of money - it caused me to focus on the money and the product, rather than the people behind them.

Another parallel perspective I've adopted is that money is a gift. A gift of trust in the person being bought from, a gift of freedom in the sense of what the money means. When someone gifts me money, I've gotten in the habit of consciously "receiving" that money, with gratitude and love. This has changed how I approach my products, and how I approach my "customers".

These two shifts have allowed me to be more comfortable with money, even develop a powerful, mutually beneficial relationship with it :).

comment by Matt Goldenberg (mr-hire) · 2019-08-11T22:54:50.899Z · LW(p) · GW(p)

I had one of my pilot students for the akrasia course I'm working on point out today that something I don't cover in my course is indecision. I used to have a bit of a problem with that, but not enough to have sunk a lot of time into determining the qualia and mental moves related to defeating it.

Has anyone reading this gone from being really indecisive (and procrastinating because of it) to much more decisive? Or are you currently working on making the switch? I'd love to talk to you/model you.

As a bonus thank you, you'll of course get a free version of the course (along with all the guided meditations and audios) when it's complete.

comment by Matt Goldenberg (mr-hire) · 2019-06-21T18:17:34.807Z · LW(p) · GW(p)

ON HEAVEN AND ENLIGHTENMENT

https://scontent-sjc3-1.xx.fbcdn.net/v/t1.0-9/56656099_10220056198495676_9079758874621247488_n.jpg?_nc_cat=107&_nc_oc=AQm42c-keDXguTwDHsVQz7hGt5AK-DkYK_eG13XXmHcybXql4JvgoYZC4r0Uy4LvMAU&_nc_ht=scontent-sjc3-1.xx&oh=bb4a1f996cfde07165c9e22fdfe7c06d&oe=5D901596

At the extremes, people have one of four life goals: To achieve a state of nothingness (hinayana enlightenment), to achieve a state of oneness (mahayana enlightenment), to achieve a utopia of meaning (Galt's Gulch), or to achieve a utopia of togetherness (hivemind).

In practice, most people exist somewhere in the middle, depending on how much they want to change their conception of the world (enlightenment) vs. changing the world itself (heaven), and depending on how much they view their identity as separate from other things (individualism) or the same as other things (collectivism).

I think I'm already past stream entry, and this is why the above diagram scares the shit out of me:

It seems like hinayana enlightenment may be an attractor state even if I have a significant amount of values that would want to create a utopia of meaning.

If I was confident that I could go the mahayana path, there's the "Bodhisattva option" - stepping back from your enlightenment to bring others in, thus creating heaven.

But it's not clear to me that I won't end up at nothingness instead of oneness, and I'm not aware of a path to step back from nothingness and create a utopia of meaning; in fact, they feel almost diametrically opposed.

Hence 'Stream entry considered harmful.'

Replies from: Raemon, aleksi-liimatainen
comment by Raemon · 2019-06-22T07:42:22.893Z · LW(p) · GW(p)

I'm interested in a medium-fleshed-out version of this comment that holds my hand more than the current one does. (Not sure whether I'd want the full fledged post version yet)

(In general, happy to see more people using shortform feeds)

((also, you probably didn't mean to call it a short-term feed))

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-06-22T23:34:24.384Z · LW(p) · GW(p)

Will do.

Replies from: Elo
comment by Elo · 2019-06-23T00:02:06.935Z · LW(p) · GW(p)

You should add integral's interior and exterior to the diagram.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-10T21:04:16.164Z · LW(p) · GW(p)

Interior and exterior is one component of heaven and enlightenment. It's possible to break up that one axis into several axes, but it's usually correlated enough to not have to do that for the vast majority of people and organizations.

comment by Aleksi Liimatainen (aleksi-liimatainen) · 2019-07-11T07:16:33.616Z · LW(p) · GW(p)
At the extremes, people have one of four life goals: To achieve a state of nothingness (hinayana enlightenment), to achieve a state of oneness (mahayana enlightenment), to achieve a utopia of meaning (Galt's Gulch), or to achieve a utopia of togetherness (hivemind).

These are not distinct things - they're alternative ways to frame one thing. All roads lead to Rome, so to speak. The way I see it, full enlightenment entails attaining all four at once. Just don't get distracted by the taste of lotus on the way.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-11T17:37:52.107Z · LW(p) · GW(p)

This is a common belief and it may in fact be true, but it's at odds with the ontology as presented. There are tradeoffs between which one you choose in this ontology.

Replies from: aleksi-liimatainen
comment by Aleksi Liimatainen (aleksi-liimatainen) · 2019-07-12T10:06:21.934Z · LW(p) · GW(p)

Ontologically distinct enlightenments suggest path dependence. That seems correct on reflection; updating and reframing.

Enlightenment is caused by a certain observation about mind/reality that is salient, obvious in retrospect and reliably triggers major updates. The referent of this observation is universal and invariant but its interpretation and the resulting updates may not be; the mind can only work with what it has.

In other words, enlightenment has one referent in the territory but the resulting maps are path dependent. This seems consistent with what I know about spirituality-related failure modes and doctrinal disagreements. Also, the sixties.

So yeah. Caution is warranted. Just keep in mind that your skull is an information bottleneck, not an ontological boundary.

comment by Matt Goldenberg (mr-hire) · 2024-11-18T14:11:18.939Z · LW(p) · GW(p)

A lot of people are looking at the implications of o1's training process as a future scaling paradigm, but it seems to me that this implementation of applying inference-time compute to fine-tune the model just in time for hard questions is equally promising. It may have equally impressive results if it scales with compute, and it has equal potential in terms of low-hanging fruit to be picked to improve it.

Don't sleep on test time training as a potential future scaling paradigm.

comment by Matt Goldenberg (mr-hire) · 2024-09-13T16:48:34.357Z · LW(p) · GW(p)

It seems like the obvious thing to do with a model like o1 trained on reasoning through problems would be to train it to write code that helps it solve reasoning problems.

Perhaps the idea was to not give it this crutch so it could learn those reasoning skills without the help of code.

But it seems from the examples that while it's great at high-level reasoning and figuring out where it went wrong, it still struggles with basic things like counting - which would be easily solved if it had the instinct to write code in the areas where it's likely to get tripped up.
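
As a rough sketch of what I mean (purely illustrative of the kind of throwaway tool-use code that would sidestep a counting failure, not a claim about how o1 actually works internally):

```python
# Hypothetical example: instead of "eyeballing" a count token-by-token,
# the model emits and runs a trivial helper.
def count_letter(text: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a string."""
    return sum(1 for ch in text.lower() if ch == letter.lower())

print(count_letter("strawberry", "r"))  # 3
```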

comment by Matt Goldenberg (mr-hire) · 2024-04-24T14:10:36.848Z · LW(p) · GW(p)

Zuck and Musk point to energy as a quickly approaching deep learning bottleneck over and above compute.

This to me seems like it could slow takeoff substantially and effectively create a wall for a long time.

Best arguments against this?

Replies from: hauke-hillebrandt
comment by Hauke Hillebrandt (hauke-hillebrandt) · 2024-04-24T14:38:09.910Z · LW(p) · GW(p)

You can compute where energy is cheap, then send the results (e.g. weights, inference) on to wherever they're needed.

But Amazon just bought/rented half a nuclear power plant (1GW) in Pennsylvania, so maybe it doesn't make sense now.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-04-24T20:31:40.843Z · LW(p) · GW(p)

I don't think the constraint is that energy is too expensive? I think we just literally don't have enough of it concentrated in one place.

But I have no idea, actually.

comment by Matt Goldenberg (mr-hire) · 2021-11-28T23:36:40.644Z · LW(p) · GW(p)

Yes, but people also constantly exchange increased reproductive capacity for love, truth, and beauty (the world would look very different if reproductive capacity were the only terminal value people were optimizing for). It's not that reproductive capacity isn't a terminal value of humans, it's that it's not the only one, and people make tradeoffs for other terminal values all the time.

comment by Matt Goldenberg (mr-hire) · 2021-01-29T16:20:09.659Z · LW(p) · GW(p)

It sort of seems like Predictive Processing provides a grounded foundation for the simulation argument.

comment by Matt Goldenberg (mr-hire) · 2021-01-20T23:46:41.596Z · LW(p) · GW(p)

Random question for traders:

What percent of "gains" from trading do you think currently come from algorithms and AI vs. human traders?

Replies from: ChristianKl
comment by ChristianKl · 2021-01-21T21:30:41.670Z · LW(p) · GW(p)

If any trader answers it, I would also be very interested in their error bars. How much uncertainty is there?

comment by Matt Goldenberg (mr-hire) · 2020-12-12T12:58:02.263Z · LW(p) · GW(p)

Is society just a tool to get Kegan 3 frames to want to LARP Kegan 4 and Kegan 5 frames?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-12-12T20:54:40.422Z · LW(p) · GW(p)

I mean, this is a weird way to put it, but kinda.

At Kegan 3 the ground truth is taken for granted, and is heavily constructed via social reality. You can have a traditional society that isn't trying to do anything other than maintain the existing reality that works for people at this stage of development.

On the other hand, modern civilization (as in, modern industrial civilization with loose family ties and trusting strangers and impersonal organizations that function like machinery) basically demands people at least come up to Kegan 4 to really succeed, and historically put lots of systems in place to help people get there. It does end up asking people to try their best and fake it until they actually develop, with people playing at Kegan 4 without actually being there.

A classic example I can think of is the way modern society, and especially modern organizations, expect people to function in compartmentalized ways. Like, say you work at a company, and you, Alice, have beef with your coworker, Bob. The expectation is that you'll act "professionally", which is essentially the LARPing thing you're getting at, where there are rules around how you are supposed to behave in the workplace, and one of those is engaging with people in the workplace only on limited terms. The whole person doesn't come to work, only their work "mask". So your beef with Bob must be kept out of the workplace, lest you be fired yourself, and Bob can readily get himself out of trouble if you break the rules and bring the beef to work by saying "hey, Alice isn't acting professionally!".

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-12-14T02:00:57.429Z · LW(p) · GW(p)

I feel like I only wrote half that comment. Here's the rest.

That kind of compartmentalization is not something that comes naturally to people without systems in place to push them to it. In a traditional society, there's just sort of one social sphere (attempts at secret groups for ritual purposes notwithstanding) that overlaps with everything and you can bring your whole self all the time everywhere and people will expect you to do that. It's only that we ask more of people in our modern world because compartmentalization works well as a bridge to help people at stage 3 get by in a world that expects them to be at stage 4 or higher: it keeps complex social interactions functioning when the people involved would otherwise interact in ways that would eventually, compounded over many interactions, tear modern society apart.

I think much of the difficulty and dissatisfaction people find in the modern world comes from some critical mass of people developing to Kegan 5, reshaping society, and then tearing apart some of the things that helped people bridge into the Kegan 4 level and modern society. To go back to professionalism, lots of aspects of traditional professionalism have broken down. The people this works for really enjoy the more casual atmosphere, but it makes it harder for people who are at Kegan 3 to fully join the party, because it requires stepping out into a groundless space that's difficult to navigate without solid frames provided to them. For all its stifling qualities, traditional professionalism at least created a level playing field with known rules that made it easier for people to step up into. I think we underappreciate how difficult the erosion of these norms makes it for people who are at Kegan 3.

And I want to emphasize that we need to make a world that's accessible to folks at Kegan 3, because this is the natural developmental level for most adult humans. So, in fact, I'd say, to use your metaphor, we're asking people to LARP in a situation where the rules are vague and you don't necessarily understand why the DM (or whatever this is called in LARPing) punished you, or even the genre of LARP you're doing. We used to have clearer rules, but we cleared them away because the people who stopped LARPing and started living in game (out of game?) full time thought it would be more fun that way, and it is for them, but not for everyone else.

comment by Matt Goldenberg (mr-hire) · 2020-10-01T00:46:59.658Z · LW(p) · GW(p)

I have a visceral negative reaction to the comments on this post.

It really annoys me that rationalists are so bad at understanding and using analogy.

https://www.lesswrong.com/posts/HzDcLf2LJg4x66fcH/not-all-communication-is-manipulation-chaperones-don-t [LW · GW]

comment by Matt Goldenberg (mr-hire) · 2020-08-15T03:25:25.118Z · LW(p) · GW(p)

What can I do to get an intuitive grasp of Kelly betting? Are there apps I can play or exercises I can try?
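
(One exercise I can imagine, sketched below with made-up numbers, is simulating a repeated favorable bet at different fixed fractions of your bankroll and watching which fraction wins in the long run. For a bet that pays b-to-1 and wins with probability p, the Kelly fraction is f* = p - (1-p)/b.)

```python
# A small simulation to build Kelly intuition: bet a fixed fraction of your
# bankroll on a repeated favorable coin flip and compare long-run outcomes.
# The parameters (p, b, number of bets) are made up for illustration.
import random

p = 0.6                   # chance of winning each bet
b = 1.0                   # net odds: win b per 1 staked
kelly = p - (1 - p) / b   # = 0.2 for these numbers

def final_bankroll(fraction, n_bets=1_000, start=1.0, seed=0):
    rng = random.Random(seed)  # same random sequence for every fraction
    bankroll = start
    for _ in range(n_bets):
        stake = fraction * bankroll
        bankroll += stake * b if rng.random() < p else -stake
    return bankroll

for f in [0.05, 0.1, kelly, 0.4, 0.8]:
    print(f"fraction {f:.2f}: final bankroll {final_bankroll(f):.3g}")
```

With these made-up parameters the Kelly fraction is 0.2; smaller fractions grow more slowly, and larger ones grow slower still or shrink toward nothing, which is the intuition the formula is trying to give you.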

comment by Matt Goldenberg (mr-hire) · 2020-07-24T13:05:01.442Z · LW(p) · GW(p)

But can't you just believe in Roko's anti-basilisk, the aligned AI that will punish you if you bring a malevolent AI into existence?

Replies from: ChristianKl
comment by ChristianKl · 2020-07-25T19:35:24.685Z · LW(p) · GW(p)

There's no feedback loop that results in that AI being created. 

Replies from: romeostevensit, mr-hire
comment by romeostevensit · 2020-07-30T04:39:10.352Z · LW(p) · GW(p)

You do if the super benevolent AI isn't dumber than a defectbot.

comment by Matt Goldenberg (mr-hire) · 2020-07-25T20:11:28.745Z · LW(p) · GW(p)

I mean, you get the standard utopia that the aligned AI gives you.  And you're more likely to end up in worlds with aligned AIs that disincentivize unaligned AIs from being created, so maybe there's an anthropic feedback loop? 

Replies from: ChristianKl
comment by ChristianKl · 2020-07-25T21:40:33.715Z · LW(p) · GW(p)

I'm not sure that most people who seek to create aligned AIs want an AI that starts doing the Last Judgment and punishes people for their misdeeds for acausal trade reasons.

It's been a while since I read Roko's post, but I don't think it makes any argument for the resulting AI being non-aligned. Being aligned doesn't prevent the AI from assuming that its existence is very high utility and doing acausal trade to further its chances of existing.

comment by Matt Goldenberg (mr-hire) · 2020-06-30T17:08:25.371Z · LW(p) · GW(p)

I've been thinking a bit about the relationship between Perfectionism, Fear-of-Failure, and Fear-of-Success, as I've been teaching them this week in my course.

They all have a very similar structure, where each has a component of a "shadow value" - something that's important to us that we tend not to acknowledge - as well as an "acknowledged value" - something that we allow ourselves to acknowledge as important.

The solution for all 3 is similar - separate the shadow value from the acknowledged value, then figure out whether each value (both shadow and acknowledged) actually applies to the situation, and how best to apply it.

For Perfectionism, the Shadow Value is pleasing/being loved by/being accepted by others. The acknowledged value is having high standards for ourselves and our work.

For Fear-of-Failure, the Shadow Value is protecting our identity. The acknowledged value is dealing with the negative external consequences of failure.

For Fear-of-Success, the Shadow Value is being deserving of what we receive. The acknowledged value is dealing with the negative external consequences of success.

What bugs me is... I don't know why all 3 of these happen to develop this very similar structure. It could just be a coincidence, but my gut tells me there is something unifying all 3 of them that I'm not seeing, and that understanding what it is would give me a more complete understanding of Procrastination.

They all seem to somehow be related to "Standards" - but I'm still not seeing the underlying system.

Replies from: Pattern
comment by Pattern · 2020-06-30T21:49:24.099Z · LW(p) · GW(p)

Is the shadow value always identity related? (You are good/[identity X which is good]/not? Perception/model of self worth?)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-07-01T19:23:02.594Z · LW(p) · GW(p)

I'm not sure if the perfectionism case (being perfect to please others) fits the identity pattern. Although admittedly, in some people the shadow/acknowledged value is flipped - some people will acknowledge being perfect to please others, but won't acknowledge the part of themselves that wants to do it for themselves.

Replies from: Pattern
comment by Pattern · 2020-07-02T15:48:28.003Z · LW(p) · GW(p)

Thinking that some things aren't all right to acknowledge might be more fundamental.

I was guessing that all of the shadow stuff is about how people think of themselves (i.e. identity: "I am _, I am not _") because it's something people get tied up in, and it's a reason someone might want to deny something.

I also think of Perfectionism (and its opposite, not trying (if the standard is unobtainable*)) as being (related to) fear of failure.


*This might cash out as:

"I'm good at X" -> does well, puts in a lot of effort (Maybe judges people for having low standards, or has different personal standards, whether high, nonjudgemental, distributed, etc.), may seek it out + challenges in domain

"I'm bad at Y" -> doesn't try, scrapes by, avoids/ugh field/procrastinates, says 'it doesn't matter'/'i don't care', judges self, maybe dirty pain

(It's not super easy to delineate 'enjoys/seeks out thing' from (consistently) 'works to get better at it'.)