Posts

Dishonorable Gossip and Going Crazy 2023-10-14T04:00:35.591Z
Frame Bridging v0.8 - an inquiry and a technique 2023-06-20T19:46:39.502Z
Post-COVID Integration Rituals 2021-04-12T16:54:53.557Z
3 Cultural Infrastructure Ideas from MAPLE 2019-11-26T18:56:48.921Z
Unreal's Shortform 2019-08-03T21:11:22.475Z
Dependability 2019-03-26T22:49:37.402Z
Rest Days vs Recovery Days 2019-03-19T22:37:09.194Z
Active Curiosity vs Open Curiosity 2019-03-15T16:54:45.389Z
Policy-Based vs Willpower-Based Intentions 2019-02-28T05:17:55.302Z
Moderating LessWrong: A Different Take 2018-05-26T05:51:40.928Z
Circling 2018-02-16T23:26:54.955Z
Slack for your belief system 2017-10-26T08:19:27.502Z
Being Correct as Attire 2017-10-24T10:04:10.703Z
Typical Minding Guilt/Shame 2017-10-24T09:39:35.498Z

Comments

Comment by Unreal on CFAR Takeaways: Andrew Critch · 2024-02-27T20:08:37.510Z · LW · GW

Rationality seems to be missing an entire curriculum on "Eros" or True Desire.

I got this curriculum from other trainings, though. There are places where it's hugely emphasized and well-taught. 

I think maybe Rationality should be more open to sending people to different places for different trainings and stop trying to do everything on its own terms. 

It has been way better for me to learn how to enter/exit different frames and worldviews than to try to make everything fit into one worldview / frame. I think some Rationalists believe everything is supposed to fit into one frame, but Frames != The Truth. 

The world is extremely complex, and if we want to be good at meeting the world, we should be able to pick up and drop frames as needed, at will. 

Anyway, there are three main curricula: 

  1. Eros (Embodied Desire) 
  2. Intelligence (Rationality)
  3. Wisdom (Awakening) 

Maybe you guys should work on 2, but I don't think you are advantaged at 1 or 3. But you could give intros to 1 and 3. CFAR opened me up by introducing me to Focusing and Circling, but I took non-rationalist trainings for both of those. As well as many other things that ended up being important. 

Comment by Unreal on If Clarity Seems Like Death to Them · 2023-12-31T00:17:00.948Z · LW · GW

I was bouncing around LessWrong and ran into this. I started reading it as though it were a normal post, but then I slowly realized ... 

I think according to typical LessWrong norms, it would be appropriate to try to engage you on the object-level claims, or to talk about the meta-presentation, as though you and I were trying to collaborate on figuring things out and how to communicate them.

But according to my personal norms and integrity, if I detect that something is actually quite off (like alarm bells going off), then it would be kind of sick to ignore that, and we should actually treat this like a triage situation. Or at least a call to some kind of intervention. And it would be sick to treat this like everything is normal, and you are sane, and I am sane, and we're just chatting about stuff and oh isn't the weather nice today. 

LessWrong is the wrong place for this to happen. This kind of "prioritization" sanity does not flourish here. 

Not-sane people get stuck on LessWrong in order to stay not-sane because LW actually reinforces a kind of mental unwellness and does not provide good escape routes. 

If you're going to write stuff on LW, it might be better to write a journal about the various personal, lifestyle interventions you are making to get out of the personal, unwell hole you are in. A kind of way to track your progress, get accountability, and celebrate wins. 

Comment by Unreal on Here's the exit. · 2023-12-30T23:03:26.238Z · LW · GW

Musings: 

COVID was one of the MMA-style arenas for different egregores to see which might come out 'on top' in an epistemically unfriendly environment. 

I have a lot of opinions on this that are more controversial than I'm willing to go into right now. But I wonder what else will work as one of these "testing arenas." 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T20:45:29.088Z · LW · GW

I don't interpret that statement in the same way. 

You interpreted it as 'lied to the board about something material'. But to me, it also might mean 'wasn't forthcoming enough for us to trust him' or 'speaks in misleading ways (but not necessarily on purpose)' or it might even just be somewhat coded language for 'difficult to work with + we're tired of trying to work with him'. 

I don't know why you latch onto the interpretation that he definitely lied about something specific. 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T16:57:23.135Z · LW · GW

I was asked to clarify my position about why I voted 'disagree' with "I assign >50% to this claim: The board should be straightforward with its employees about why they fired the CEO." 

I'm putting a maybe-unjustified high amount of trust in all the people involved, and from that, my prior is very high on "for some reason, it would be really bad, inappropriate, or wrong to discuss this in a public way." And given that OpenAI has ~800 employees, telling them would basically count as a 'public' announcement. (I would update significantly on the claim if it was only a select group of trusted employees, rather than all of them.)

To me, people seem too biased in the direction of "this info should be public"—maybe with the assumption that "well I am personally trustworthy, and I want to know, and in fact, I should know in order to be able to assess the situation for myself." Or maybe with the assumption that the 'public' is good for keeping people accountable and ethical. Meaning that informing the public would be net helpful. 

I am maybe biased in the direction of: The general public overestimates its own trustworthiness and ability to evaluate complex situations, especially without most of the relevant context. 

My overall experience is that the involvement of the public makes situations worse, as a general rule. 

And I think the public also overestimates their own helpfulness, post-hoc. So when things are handled in a public way, the public assesses their role in a positive light, but they rarely have ANY way to judge the counterfactual. And in fact, I basically NEVER see them even ACKNOWLEDGE the counterfactual. Which makes sense, because that counterfactual is almost beyond imagining. The public doesn't have ANY of the relevant information that would make it possible to evaluate the counterfactual. 

So in the end, they just default to believing that it had to play out in the way it did, and that the public's involvement was either inevitable or good. And I do not understand where this assessment comes from, other than availability bias?

The involvement of the public, in my view, incentivizes more dishonesty, hiding, and various forms of deception. Because the public is usually NOT in a position to judge complex situations and lacks much of the relevant context (and also often isn't particularly clear about ethics, IMO), people who ARE extremely thoughtful, ethically minded, high-integrity, etc. are often put in very awkward binds when it comes to trying to interface with the public. And so I believe it's better for the public not to be involved if they don't have to be.

I am a strong proponent of keeping things close to the chest, keeping things within more trusted, high-context, in-person circles, and avoiding online involvement as much as possible for highly complex, high-touch situations. Does this mean OpenAI should keep it purely internal? No, they should have outside advisors, etc. Does this mean no employees should know what's going on? No, some of them should—the ones who are high-level, responsible, and trustworthy, and they can then share what needs to be shared with the people under them.

Maybe some people believe that all ~800 employees deserve to know why their CEO was fired. Like, as a courtesy or general good policy or something. I think it depends on the actual reason. I can envision certain reasons that don't need to be shared, and I can envision reasons that ought to be shared. 

I can envision situations where sharing the reasons could potentially damage AI Safety efforts in the future. Or disable similar groups from being able to make really difficult but ethically sound choices—such as shutting down an entire company. I do not want to disable groups from being able to make extremely unpopular choices that ARE, in fact, the right thing to do. 

"Well if it's the right thing to do, we, the public, would understand and not retaliate against those decision-makers or generally cause havoc" is a terrible assumption, in my view. 

I am interested in brainstorming, developing, and setting up really strong and effective accountability structures for orgs like OpenAI, and I do not believe most of those effective structures will include 'keep the public informed' as a policy. More often the opposite.

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T03:06:22.108Z · LW · GW

Media & Twitter reactions to OpenAI developments were largely unhelpful, specious, or net-negative for overall discourse around AI and AI Safety. We should reflect on how we can do better in the future and possibly even consider how to restructure media/Twitter/etc to lessen the issues going forward.

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T03:01:29.727Z · LW · GW

The OpenAI Charter, if fully & faithfully followed and effectively stood behind, including possibly shutting the whole project down if it came down to it, would prevent OpenAI from being a major contributor to AI x-risk. In other words, as long as people actually followed this particular Charter to the letter, it would be sufficient for curtailing AI risk, at least from this one org. 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T02:41:56.933Z · LW · GW

The partnership between Microsoft and OpenAI is a net negative for AI safety. And: What can we do about that? 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T02:40:29.878Z · LW · GW

We should consider other accountability structures than the one OpenAI tried (i.e. the non-profit / BoD). Also: What should they be?

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-19T17:39:48.933Z · LW · GW

I would never have put it as either of these, but the second one is closer. 

For me personally, I try to always have an internal sense of my inner motivation before/during doing things. I don't expect most people do, but I've developed this as a practice, and I am guessing most people can, with some effort or practice. 

I can pretty much generally tell whether my motivation has these qualities: wanting to avoid, wanting to get away with something, craving a sensation, intention to deceive or hide, etc. And when it comes to speech actions, this includes things like "I'm just saying something to say something" or "I just said something off/false/inauthentic" or "I didn't quite mean what I just said or am saying". 

Although, the motivations to really look out for are like "I want someone else to hurt" or "I want to hurt myself" or "I hate" or "I'm doing this out of fear" or "I covet" or "I feel entitled to this / they don't deserve this" or a whole host of things that tend to hide from our conscious minds. Or in IFS terms, we can get 'blended' with these without realizing we're blended, and then act out of them. 

Sometimes, I could be in the middle of asking a question and notice that the initial motivation for asking it wasn't noble or clean, and then by the end of asking the question, I change my inner resolve or motive to be something more noble and clean. This is NOT some kind of verbal sentence like going from "I wanted to just gossip" to "Now I want to do what I can to help." It does not work like that. It's more like changing a martial arts stance. And then I am more properly balanced and landed on my feet, ready to engage more appropriately in the conversation. 

What does it mean to take personal responsibility? 

I mean, for one example, if I later find out something I did caused harm, I would try to 'take responsibility' for that thing in some way. That can include a whole host of possible actions, including just resolving not to do that in the future. Or apologizing. Or fixing a broken thing. 

And for another thing, I try to realize that my actions have consequences and that it's my responsibility to improve my actions. Including getting more clear on the true motives behind my actions. And learning how to do more wholesome actions and fewer unwholesome actions, over time. 

I almost never use a calculating frame to try to think about this. I think that's inadvisable and can drive people onto a dark or deluded path 😅

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-18T23:08:50.908Z · LW · GW

I'm fine with drilling deeper but I currently don't know where your confusion is. 

I assume we exist in different frames, but it's hard for me to locate your assumptions. 

I don't like meandering in a disagreement without very specific examples to work with. So maybe this is as far as it is reasonable to go for now. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-16T04:50:41.772Z · LW · GW

Hm, neither of the motives I named includes any specific concern for the person. Or any specific concern at all. Although I do think having a specific concern is a good bonus? Somehow you interpreted what I said as though there need to be specific concerns. 

RE: The bullet point on compassion... maybe just strike that bullet point.  It doesn't really affect the rest of the points. 

"It's good if people ultimately use their models to help themselves and others, but I think it's bad to make specific questions or models justify their usefulness before they can be asked."

I think I get what you're getting at. And I feel in agreement with this sentiment. I don't want well-intentioned people to hamstring themselves. 

I certainly am not claiming ppl should make a model justify its usefulness in a specific way. 

I'm more saying ppl should be responsible for their info-gathering and treat that with a certain weight. Like a moral responsibility comes with information. So they shouldn't be cavalier about it... but especially they should not delude themselves into believing they have good intentions for info when they do not. 

And so to casually ask about Alice's sanity, without taking responsibility for the impact of speech actions and without acknowledging the potential damage to relationships (Alice's or others'), is irresponsible. Even if Alice never hears about this exchange, it can nonetheless cause a bunch of damage, and a person should speak about these things with eyes open to that. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T03:48:08.421Z · LW · GW

Oh, okay, I found that a confusing way to communicate that? But thanks for clarifying. I will update my comment so that it doesn't make you sound like you did something very dismissive. 

I feel embarrassed by this misinterpretation, and the implied state of mind I was in. But I believe it is an honest reflection of something in my state of mind around this subject. Sigh. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T03:32:54.140Z · LW · GW

"But I think it's pretty important that people be able to do these kind of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of. There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals."

There is a chance we don't have a disagreement, and there is a chance we do. 

In brief, to see if there's a crux anywhere in here:

  • Don't need ppl to boot up 'care as a friend' module. 
  • Do believe compassion should be the motivation behind these conversations, even if not friends, where compassion = treats people as real and relationships as real. 
  • So it matters if the convo is like (A) "I care about the world, and doing good in the world, and knowing about Renshin's sanity is about that, at the base. I will use this information for good, not for evil." Ideally the info is relevant to something they're responsible for, so that it's somewhat plausible the info would be useful and beneficial. 
  • Versus (B) "I'm just idly curious about it, but I don't need to know and if it required real effort to know, I wouldn't bother. It doesn't help me or anyone to know it. I just want to eat it like I crave a potato chip. I want satisfaction, stimulation, or to feel 'I'm being productive' even if it's not truly so, and I am entitled to feel that just b/c I want to. I might use the info in a harmful way later, but I don't care. I am not really responsible for info I take in or how I use info." 
  • And I personally think the whole endeavor of modeling the world should be for the (A) motive and not the (B) motive, and that taking in any-and-all information isn't, like, neutral or net-positive by default. People should endeavor to use their intelligence, their models, and their knowledge for good, not for evil or selfish gain or to feed an addiction to feeling a certain way. 
  • I used a lot of 'should' but that doesn't mean I think people should be punished for going against a 'should'. It's more like healthy cultures, imo, reinforce such norms, and unhealthy cultures fail to see or acknowledge the difference between the two sets of actions. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T03:13:51.394Z · LW · GW

I had written a longer comment, illustrating how Oliver was basically committing the thing that I was complaining about and why this is frustrating. 

The shorter version:

His first paragraph is a strawman. I never said 'take me at my word' or anything close. All my previous statements, and anything known about my stances, would point to this being something I would never say, so this seems weirdly disingenuous. 

His second paragraph is weirdly flimsy, implying that ppl mostly use the literal words out of people's mouths to determine whether they're lying (either to others or to themselves). I would be surprised if Oliver actually found that Alice and Bob both saying "trust me i'm fine" was 'totally flat' data, given he probably has to discern deception on a regular basis.

Also I'm not exactly the 'trust me i'm fine' type, and anyone who knows me would know that about me, if they bothered trying to remember. I have both the skill of introspection and the character trait of frankness. I would reveal plenty about my motives, aliefs, the crazier parts of me, etc. So paragraph 2 sounds like a flimsy excuse to be avoidant? 

But the IMPORTANT thing is... I don't want to argue. I wasn't interested in that. I was hoping for something closer to perspective-taking, reconciliation, or reaching more clarity about our relational status. But I get that I was sounding argumentative. I was being openly frustrated and directing that in your general direction. Apologies for creating that tension. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T02:59:10.887Z · LW · GW

The 'endless list' comment wasn't about you; it was about a more 'general you'. Sorry that wasn't clear. I edited stuff out and then that became unclear. 

I mostly wanted to point at something frustrating for me, in the hopes that you or others would, like, get something about my experience here. To show how trapped this process is, on my end.

I don't need you to fix it for me. I don't need you to change. 

I don't need you to take me for my word. You are welcome to write me off, it's your choice. 

I just wanted to show how I am and why. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T02:05:37.990Z · LW · GW

FTR, the reason I am engaging with LW at all, like right now... 

I'm not that interested in preserving or saving MAPLE's shoddy reputation with you guys. 

But I remain deeply devoted to the rationalists, in my heart. And I'm impacted by what you guys do. A bunch of my close friends are among you. And... you're engaging in this world situation, which impacts all of us. And I care about this group of people in general. I really feel a kinship here I haven't felt anywhere else. I can relax around this group in a way I can't elsewhere. 

I concern myself with your norms, your ethical conduct, etc. I wish well for you, and wish you to do right by yourselves, each other, and the world. The way you conduct yourselves has big implications. Big implications for me, my friends, the world, and the future of the world. 

You've chosen a certain level of global-scale responsibility, and so I'm going to treat you like you're AT THAT LEVEL. The highest possible level, with a very high set of expectations. I hold myself AT LEAST to that high a standard, to be honest, so it's not hypocritical. 

And you can write me off, totally. No problem. 

But in my culture, friends concern themselves with their friends' conduct. And I see you as friends. More or less. 

If you write me off (and you know me personally), please do me the honor of letting me know. Ideally to my face. If you don't feel you are gonna do that / don't owe me that, then it would help me to know that also. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T01:32:45.384Z · LW · GW

so okay i'm actually annoyed by a thing... lemme see if i can articulate it. 

  1. I clearly have orders of magnitude more of the relevant evidence for ascertaining a claim about MAPLE's chances of producing 'crazy' ppl as you've defined them—and much more even than most MAPLE people (both current and former). 
  2. Plus I have much of the relevant evidence about my own ability to discern the truth (which includes all the feedback I've received, the way people generally treat me, who takes me seriously, how often people seem to want to back away from me or tune me out when I start talking, etc etc). 
  3. A bunch of speculators, with relatively little evidence about either, come out with very strong takes on both of the above, and don't seem to want to take into account EITHER of the above facts, but instead find it super easy to dismiss any of the evidence that comes from people with the relevant data. Because of said 'possibility they are crazy'. 

And so there is almost no way out of this stupid box; this does not incline me to try to share any evidence I have, and in general, reasonable people advise me against it. And I'm of the same opinion. It's a trap to try.

It is both easy and an attractor for ppl to take anything I say and twist it into more evidence for THEIR biased or speculative ideas, and to take things I say as somehow further evidence that I've just been brainwashed. And then they take me less seriously. Which then further disinclines me to share any of my evidence. And so forth. 

This is not a sane, productive, and working epistemic process? As far as I can tell? 

Literally I was like "I have strong evidence" and Ben's inclination was to say "strong evidence is easy to come by / is everywhere" and link to a relevant LW article, somehow dismissing everything I said previously and might say in the future in one swoop. It effectively shut me down.

And I'm like.... 

what is this "epistemic process" ya'll are engaged in

[Edit: I misinterpreted Ben's meaning. He was saying the opposite of what I thought he meant. Sorry, Ben. Another case of 'better wrong than vague' for me. 😅]

To me, it looks like [ya'll] are using a potentially endless list of back-pocket heuristics and 'points' to justify what is convenient for you to continue believing. And it really seems like it has a strong emotional / feeling component that is not being owned. 

[edit: you -> ya'll to make it clearer this isn't about Oliver] 

I sense a kind of self-protection or self-preservation thing. Like there's zero chance of getting access to the true Alief in there. That's why this is pointless for me.

Also, a lot of online talk about MAPLE is sooo far from realistic that it would, in fact, make me sound crazy to try to refute it. A totally nonsensical view is actually weirdly hard to counter, esp if the people aren't being very intellectually honest AND the people don't care enough to make an effort or stick through it all the way to the end. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-14T15:56:26.573Z · LW · GW

Anonymized paraphrase of a question someone asked about me (reported to me later, by the person who was being asked the question): 

I have a prior about people who go off to monasteries sometimes going nuts, is Renshin nuts?

The person being asked responded "nah" and the question-asker was like "cool" 

I think this sort of exchange might be somewhat commonplace or normal in the sphere. 

I personally didn't feel angry, offended, or sad to hear about this exchange, but I don't feel the person asking the question was asking out of concern or care for me, as a person. But rather to get a quick update for their world model or something. And my "taste" about this is a "bad" taste. I don't currently have time to elaborate but may later. 

Comment by Unreal on Announcing Dialogues · 2023-10-09T13:23:19.014Z · LW · GW

Ideas I'm interested in playing with:

  • experiment with using this feature for one-on-one coaching / debugging; I'd be happy to help anyone with their current bugs... (I suspect video call is the superior medium but shrug, maybe there will be benefits to this way)
  • talk about our practice together (if you have a 'practice' and know what that means) 

Topics I'd be interested in exploring:

  • Why meditation? Should you meditate? (I already meditate, a lot. I don't think everyone should "meditate". But everyone would benefit from something like "a practice" that they do regularly. Where 'practice' I'm using in an extremely broad sense and can include rationality practices.)
  • Is there anything I don't do that you think I should do? 
  • How to develop collective awakening through technology
  • How to promote, maintain, develop, etc. ethical thoughts, speech, and action through technology

Comment by Unreal on Closing Notes on Nonlinear Investigation · 2023-09-18T18:17:59.207Z · LW · GW

I think the thing I'm attempting to point out is:

If I hold myself to satisfying A&C's criterion here, I am basically:

a) strangleholding myself on how to share information about Nonlinear in public
b) possibly overcommitting myself to a certain level of work that may not be worth it or desirable
c) implicitly biasing the process towards coming out with a strong case against Nonlinear (with a lower-level quality of evidence, or evidence to the contrary, being biased against) 

I would update if it turns out A&C was actually fine with Ben coming to the (open, public) conclusion that A&C's claims were inaccurate, unfounded, or overblown, but it didn't sound like that was okay with them based on the article above, and they weren't open to that sort of conclusion. It sounded like they needed the outcome to be a pretty airtight case against Nonlinear. 

Anyway that's ... probably all I will say on this point. 

I am grateful for you, Ben, and the effort you put into this, as it shows your care, and I do think the community will benefit from the work. I am concerned about your well-being and health and time expenditure, but it seems like you have a sense for how to handle things going forward. 

I am into setting firm boundaries and believe it's a good skill to cultivate. I get that it is not always a popular option and may cause people to not like me. :P 

Comment by Unreal on Closing Notes on Nonlinear Investigation · 2023-09-16T19:55:58.604Z · LW · GW

"it seemed to me Alice and Chloe would be satisfied to share a post containing accusations that were received as credible."

This is a horrible constraint to put on an epistemic process. You cannot, ever, guarantee the reaction to these claims, right? Isn't this a little like writing the bottom line first? 

If it were me in this position, I would have been like: 

Sorry Alice & Chloe, but the goal of an investigation like this is not to guarantee a positive reaction for your POV, from the public. The goal is to reveal what is actually true about the situation. And if you aren't willing to share your story with the public in that case, then that is your choice, and I respect that. But know that this may have negative consequences as well, for instance, on future people who Nonlinear works with. But if it turns out that your side of the story is false or exaggerated or complicated by other factors (such as the quality of your character), then it would be best for everyone if I could make that clear as well. It would not serve the truth to go into this process by having already 'chosen a winner' or 'trying to make sure people care enough' or something like this. 

Comment by Unreal on Sharing Information About Nonlinear · 2023-09-16T16:23:15.304Z · LW · GW

Neither here nor there: 

I am sympathetic to "getting cancelled." I often feel like people are cancelled in some false way (or a way that leaves people with a false model), and it's not very fair. Mobs don't make good judges. Even well-meaning, rationalist ones. I feel this way about basically everyone who's been 'cancelled' by this community. Truth and compassion were never fully upheld as the highest virtue, in the end. Justice was never, imo, served, but often used as an excuse for victims to evade taking personal responsibility for something and for rescuers to have something to do. But I still see the value in going through a 'cancelling' process, for everyone involved, and so I'm not saying to avoid it either. It just sucks, and I get it.

That said, the people who are 'cancelled' tend to be stubborn hard-heads about it, and their own obstinacy tends to lead further to an even more extreme downfall. It's like some suicidal part of them kicks in, and drives the knife in deeper without anyone's particular help. 

I agree it's good to never just give in to mob justice, but for your own souls to not take damage, try not to clench. It's not worth protecting it, whatever it happens to be. 

Save your souls. Not your reputation. 

Comment by Unreal on Sharing Information About Nonlinear · 2023-09-16T16:09:50.246Z · LW · GW

After reading more of the article, I have a better sense of this context that you mention. It would be interesting to see Nonlinear's response to the accusations because they seem pretty shameful, as is. 

I would actively advise against anyone working with Kat / Emerson, not without serious demonstration of reformation and, like, values-level shifts. 

If Alice is willing to stretch the truth about her situation (for any reason) or outright lie in order to enact harsher punishment on others, even as a victim of abuse, I would be mistrustful of her story. And so far I am somewhat mistrustful of Alice and very mistrustful of Kat / Emerson. 

Also, even if TekhneMakre's take is what in fact happened, it doesn't give Alice a total pass in that particular situation, to me. I get that it's hard to be clear-headed and brave when faced with potentially hostile or adversarial people, but I think it's still worth trying to be. I don't expect anyone to be brave, but I also don't treat anyone as totally helpless, even if the cards are stacked against them. 

Comment by Unreal on Sharing Information About Nonlinear · 2023-09-08T00:22:44.567Z · LW · GW

These texts have weird vibes from both sides. Something is off all around.  

That said, what I'm seeing: A person failed to uphold their own boundaries or make clear their own needs. Instead of taking responsibility for that, they blame the other person for some sort of abuse. 

This is called playing the victim. I don't buy it. 

I think it would generally be helpful if people were informed by the Drama Triangle when judging cases like these. 

Comment by Unreal on Fundamental question: What determines a mind's effects? · 2023-09-03T21:22:42.766Z · LW · GW

this is a good question 

thanks for asking it 

curious how much you know about koan traditions and practices. this article is like an interesting mix of koan practice and analytical meditation. 

Comment by Unreal on Rest Days vs Recovery Days · 2023-08-31T18:22:54.571Z · LW · GW

I have way more to say on this subject now. It's actually a huge topic, with many parts. 

If I were going to teach a curriculum about this, that curriculum would include:

  1. Why do I not want to get out of bed in the mornings?
  2. Shoulding vs Serving
  3. Eros vs Ego
  4. What am I identifying with? How does that affect my life?
  5. Blue pill vs Red pill moment - continuity break
  6. Initiation into the unknown
  7. What am I here to serve? What am I here to live according to? Personal life missions
  8. Vow work
  9. Resting vs Collapsing
  10. Places where I don't honor myself 
  11. What's a boundary? 
  12. Forgiveness
  13. Internal vs External (Spiritual orientation vs material orientation)
  14. Am I willing to change my life? - continuity break
  15. "Having a practice" - My life as Training
  16. Possible paths forward - intro to different spiritual lineages and paths 
  17. Finding a teacher, if you want
  18. Using everyone and everything as a teacher 
  19. Creating the new vow

I am probably not fully qualified to teach this curriculum, but if you are interested, let me know. I think I could do one-on-one work with a few people. Everyone's gotta start somewhere. 

Comment by Unreal on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-21T15:53:05.524Z · LW · GW

I have different hypotheses / framings. I will offer them. If you wish to discuss any of them in more detail, please reach out to me via email or PM. Happy to converse! 

// 

Mythical/Archetypal take: 

There are large-scale, old, and powerful egregores fighting over the minds of individuals and collectives. They are not always very friendly to human interests or values. In some cases, they are downright evil. (I'd claim the Marxist egregore is a pretty destructive one.)

The damage done by these egregores is multigenerational. It didn't start with just THIS generation. Shit started before any of us was born. 

It's kind of like the Iliad. Petty, powerful gods fighting over some nonsense; humans get caught in the hurricane-sized effects; chaos ensues. Sometimes humans become willing to sacrifice their souls to these egregores in exchange for the promise of power, wealth, security, sex, etc. 

It's not just unusual people like Ziz who might do this. Pretty normal-seeming, happy-seeming people have sacrificed their souls to certain egregores (e.g. progressivism, humanism, etc) and become mouthpieces for the egregores' values and agendas. When you try to have conversations with these people, it's like they're not speaking from their true beliefs or actual internal experience. They've become living propaganda. 

The rationalist / EA community attracts plenty of 'egregore activity' because of their concentration of intelligent, resourceful, good-hearted people. They are valuable to control.  They're also near the center of the actual narrative, the fight for humanity's soul and evolutionary direction. They are particularly vulnerable to egregore activity because of high rates of trauma and disembodiment and strong ideological bent, making them relatively easy to manipulate. 

OK, but how does one properly defend against these huge forces, tossing humans around like rag dolls? 

The main trick is embodiment. Being totally in the body, basically all the time. An integrated person, integration between heart, mind, body, and soul. Resolving and overcoming any addictions to anything, including seemingly innocuous ones. Resolving and healing trauma as much as possible (which is an ongoing journey). Finding a deeper, more fundamental happiness and peace that can't be disturbed by any external circumstance (thus, no longer being subject to temptations for power, wealth, security, relationship, or sense pleasure). 

//

Main other story:

Some people made some really bad choices. 

Integrity isn't something that happens to you. You have to choose it and choose it again. If you fail to choose it, and then fail to recognize and reconcile the error... and keep failing to choose it... that path leads to more slippage and things can spiral out of control. 

Ziz made certain choices, and that had consequences. Ziz didn't reform. Ziz didn't apologize. Ziz kept digging that hole. That negative-spiral path erodes one's ability to see what's happening and can lead to deep insanity. 

A person's moral system does not thrive under a guilty conscience. It gets unbearable. And you just have to keep hurting more people to justify it to yourself, and to temporarily escape the pain. This is what happens when one isn't willing to feel remorse, grieve, and acknowledge the damage they've caused. It becomes hell on earth for you, and you create hell on earth for others. 

Whatever hypothesis you come up with, don't absolve the individuals of their moral responsibility to avoid evil actions. It gets this bad when people fuck up that badly. Not due merely to external circumstances or internal drives like 'wanting to belong'. Regardless of any of that, they made choices, and they didn't have to make those choices. 

//

I could probably find more hypotheses, but I will stop there! :0 Thanks for reading. 

Comment by Unreal on Frame Bridging v0.8 - an inquiry and a technique · 2023-06-21T02:34:09.505Z · LW · GW

This question points at the mystery of this phenomenon! That's part of my investigation. I'm not totally sure what exactly would be lost, but it sure seems like there is something. 

Hard to go into it further without a live, real-time example. But maybe you'll run into some examples in your own life. 

Try observing when two people seem to be stuck in an intractable communication breakdown. Or in a long-term estrangement. Why are they unwilling to take the perspective of the other person? Unwilling to even try to model them?

In some cases, it's just that they're holding too tightly onto their own frame (e.g. taking something as personal that wasn't personal), in a way that is obviously unhelpful and counterproductive. And you may see how their frame is wrong or misguided.

But in the more interesting cases, it's like a deeper philosophical or existential clash? Again they might be holding their frame more tightly than they strictly need to, but it seems like something critical would be lost if they let go and tried on the other person's frame, even for a second. What is this? This is my inquiry. 

Comment by Unreal on Frame Bridging v0.8 - an inquiry and a technique · 2023-06-21T02:25:25.288Z · LW · GW

Your attempt to break down the Circling skill is creating a good example of the original problem at hand. 

I notice a fracture between my frame of Circling and your frame of it. The gap is not all that bad, as far as I can tell. Like, bridging seems fairly plausible (rather than totally impossible, as it seems in certain situations). But also, it seems like it might take quite a bit of work to bridge with appropriate fidelity.

IMO a sloppy, imprecise, or agreeable person might be willing to interpret your version as a 'basically good enough' translation of Circling practice into rationalist terminology... and consider the two to be describing basically the same thing in the end. 

But I'm not willing to compromise the significant assumptions underneath my frame, and I imagine the other person shouldn't either... at least as long as we're both keeping track of something Real. Compromising on this seems good for coordination and harmony purposes but not good for getting to the bottom of what each of us really sees, believes, and acts in accord with. 

So here, you and I might actually be able to try the technique and see what happens! :0 

My own attempt to Taboo Circling... here is the skill breakdown: 

  • Staying at the level of sensation. This is one of the 5 Principles in the Circling Europe school. So, the ability to not automatically go into stories but to stay present with the arising and passing of phenomena, especially based in the physical, emotional, and energetic bodies. 
  • Sincerity. Or maybe meta-sincerity, if you like. Showing up without pretense, as much as possible. Authentic, real-time expression. (Obviously, some amount of insincerity or pretense is fine, esp if this can be held and revealed sincerely.) 
  • Holding one's own experiences without putting them on the other person, aka owning one's experience, and being responsible for your nervous system responses. Avoiding victim consciousness / getting swallowed by the drama triangle. You can hold your own wounded inner children, trauma responses, etc. You don't fall into total nervous system dysregulation when triggered. You can stay conscious and present even while activated. You take responsibility for your own reactions and don't resort to merely blaming external circumstances. 
  • Ability to go beyond ego. If the ego drives, then instead of just being with what is happening, the ego will attempt to do things like... turn everything that is happening into a story about the self. Or get something it wants, like validation or a sense of belonging or a pleasant experience or a novel idea. One of the main issues with the ego driving is that you will fail to see the other person as a person, instead trying to use them as a means to an end. While assuming that you ARE seeing them and treating them like a full person. 
  • Bonus skill: You are very perceptive about what's going on, at multiple levels. And you can turn that into articulate words! But if you yourself aren't good at this, it seems fine... a third party can do this part. 

Comment by Unreal on Frame Bridging v0.8 - an inquiry and a technique · 2023-06-21T01:18:50.008Z · LW · GW

"If you find yourself talking with someone who seems to have a very different worldview or way of looking at a situation, and your attempts to communicate seem to be falling completely flat (i.e. you don't understand the things they're trying to say and they don't seem to be understanding yours)...

...and both of you are fairly invested in actually bridging the communication gap 

(or, at least you're pretty invested, and they're at least willing to continue talking to you even if they don't seem super invested? It was unclear to me which people needed to have the prerequisites) 

...then here's a process you think will help bridge that gap. It is fairly skill-intensive. You think it'd work reasonably well for people who have the skill-prerequisites. The process doesn't seem perfect but it's the best one you've got."

Mostly accurate. 

A lot of the time, these 'fractures' result in major disconnections between people, such that they stop talking to each other for a year or more. Or if they need to work together, it becomes really difficult for them to do so in any functional or healthy way. This fracture can spread to others or cause group-wide fractures. 

A rationalist-relevant example might be... say, the inability of certain prominent rationalist leaders to coordinate or even have reasonable / good conversations with one another. Sometimes requiring extensive mediation. Or sometimes causing bigger community-wide conflicts. I'm sure you can come up with at least 3 examples. 

Sorry for not including this context in the post. I wasn't trying to make the post very good. :P 

OK, but let's say! YOU are experiencing this kind of fracture with someone and are at least willing to try to bridge. (However, unwilling / unable to drop your frame or try to adopt theirs.) This process is an attempt to find a way to start the conversation without either party needing to drop their frame. So the frames get to 'meet'. 

Comment by Unreal on Frame Bridging v0.8 - an inquiry and a technique · 2023-06-21T01:09:52.671Z · LW · GW

I haven't tried this technique explicitly yet. 😅 So more like 'best guess at what to try'. But I also have some evidence this will yield interesting results, based on previous pseudo-attempts at something like this process. 

My teacher and I totally failed to communicate in my opinion, but this wasn't made explicit between us. It's my own story in my head. 

Comment by Unreal on Frame Bridging v0.8 - an inquiry and a technique · 2023-06-21T01:05:21.930Z · LW · GW

No, you misunderstand. Part of the issue is that neither person is quite able to 'try on the other person's perspective'. But not because of a lack of cognitive empathy skill. But because something very important would be lost by doing so. 

Comment by Unreal on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-28T18:33:35.473Z · LW · GW

No, it's definitely not about being depressed. That's very far from it. But I also don't want to argue about the claims here. Seems maybe beside the point.

I think I could reword my original argument in a way that wouldn't be a problem. I just wasn't careful in my languaging, but I personally think it's fine? I think you might be reading a lot into my usage of the word "So". 

Comment by Unreal on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-27T19:29:01.288Z · LW · GW

I dunno if I was clear enough here about what it means to feel persecuted. 

So the way I'm using that phrase, 'feeling persecuted' is not desirable whether you are actually being persecuted or not. 

'Feeling persecuted' means feeling helpless, powerless, or otherwise victimized. Feeling like the universe is against you or your tribe, and that things are (in some sense) inherently bad and may forever be bad, and that nothing can be done. 

If, indeed, you are part of a group that has fewer rights and privileges than the dominant groups, you can acknowledge to yourself "my people don't have the same rights as other people" but you don't have to feel any sense of persecution around that. You can just see that it is true and happening, without feeling helpless and like something is inherently broken or that you are inherently broken. 

Seeing through the egregore would help a person realize that 'oh there is an egregore feeding on my beliefs about being persecuted but it's not actually a fundamental truth about the world; things can actually be different; and I'm not defined by my victimhood. maybe i should stop feeding this egregore with these thoughts and feelings that don't actually help anything or anyone and isn't really an accurate representation of reality anyway.' 

Comment by Unreal on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-10T11:58:51.855Z · LW · GW

"Learning to run workshops where people often "wake up" and are more conscious/alive/able-to-reflect-and-choose, for at least ~4 days or so and often also for a several-month aftermath to a lesser extent" 

I permanently upgraded my sense of agency as a result of CFAR workshops. Wouldn't be surprised if this happened to others too. Would be surprised if it happened to most CFAR participants. 

//

I think CFAR's effects are pretty difficult to see and measure. I think this is the case for most interventions? 

I feel like the best things CFAR did were more like... fertilizing the soil and creating an environment where lots of plants could start growing. What plants? CFAR didn't need to pre-determine that part. CFAR just needed to create a program, have some infrastructure, put out a particular call into the world, and wait for what shows up as a result of that particular call. And then we showed up. And things happened. And CFAR responded. And more things happened. Etc. 

CFAR can take partial credit for my life starting from 2015 and onwards, into the future. I'm not sure which parts of it. Shrug. 

Maybe I think most people try to slice the cause-effect pie in weird, false ways, and I'm objecting to that here.

Comment by Unreal on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-10T03:40:35.416Z · LW · GW

Right. 

I think a careful and non-naive reading of your post would avoid the issues I was trying to address. 

But I think a naive reading of your post might come across as something like, "Oh CFAR was just not that good at stuff I guess" / "These issues seem easy to resolve." 

So I felt it was important to acknowledge the magnitude of the ambition of CFAR and that such projects are actually quite difficult to pull off, especially in the post-modern information age. 

//

I wish I could say I was speaking from an interest in tackling the puzzle. I'm not coming from there. 

Comment by Unreal on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-10T02:32:26.238Z · LW · GW

The main ones are: 

  • modern capitalism / the global economy
    • So if we look at the egregore as having a flavor of agency and intention... this egregore demands constant extraction of resources from the earth. It demands that people want things they don't need (consumer culture). It disempowers or destroys anything that manages to avoid or escape it (e.g. self-sufficient villages, cultures that don't participate): there's an extinction of hunter-gatherer lifestyles going on; there's legally mandated taking of children from villages in order to indoctrinate them into civilization (in Malaysia anyway; China is doing a 'nicer' version). There are energy-company goons who go into rainforests and chase tribes out of their homes in order to take their land. This egregore does not care about life or the planet. 
    • You are welcome to disagree of course, this is just one perspective. 
  • I dunno what to call this one, but it's got Marxist roots
    • There's an egregore that feeds off class division. So right now, there's a bunch of these going on at once. The following are 'crudely defined' and I don't mean them super literally; I'm just trying to point at some of the dividing lines, as examples: Feminists vs privileged white men. Poor blacks vs white cops. The 99% vs the 1%. Rural vs urban. This egregore wants everyone to feel persecuted. All these different class divisions feed into the same egregore. 
    • Do the rationalists feel persecuted / victimized? Oh yeah. Like, not literally all of them, but I'd say a significant chunk of them. Maybe most of them. So they haven't successfully seen through this one.
  • power-granting religion, broadly construed
    • Christianity is historically the main example of a religious egregore. But a newer contender is 'scientism'. Scientism is not the true art of science and doesn't resemble it at all. Scientism has ordained priests that have special access to journals (knowledge) and special privileges that give them the ability to publish in those esoteric texts. Governments, corporations, and the egregores mentioned above want control over these priests. Sometimes buying their own. 
    • Obviously this egregore doesn't benefit from ordinary people having critical thinking skills and the ability to evaluate the truth for themselves. It dissuades people from trying by creating high barriers to entry and making its texts hard or time-consuming to comprehend. It gets away with a lot of shit by having a strong brand. The integrity behind that brand has significantly degraded, over the decades. 

These three egregores benefit from people feeling powerless, worthless, or apathetic (malware). Basically the opposite of heroic, worthy, and compassionate (liberated, loving sovereignty). Helping to start uninstalling the malware is, like, one of the things CFAR has to do in order to even start having conversations about AI with most people. 

And, unfortunately... like... often, buying into one of these egregores (usually this would be unconsciously done) actually makes a person more effective. Sometimes quite 'successful' according to the egregore's standards (rich, powerful, well-respected, etc). The egregores know how to churn out 'effective' people. But these people are 'effective' in service to the egregore. They're not necessarily effective outside of that context. 

So, any sincere and earnest movement has to contend with this eternal temptation: 

  • Do we sell out? By how much? 

The egregore tempts you with its multitude of resources. To some extent, I think you have to engage. Since you're trying to ultimately change the direction of history, right? 

Still, ahhh, tough. Tough call. Tricky. 

Comment by Unreal on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-09T21:43:56.194Z · LW · GW

I probably don't have the kinds of concepts you're interested in, but... 

Some significant conceptual pieces in my opinion are:

  • "As above, so below." Everything that happens in the world can be seen as a direct, fractal-like reflection of 'the mind' that is operating (both individual and collective). Basically, things like 'colonialism' and 'fascism' and all that are external representations of the internal. (So, when some organization is having 'a crisis' of some kind, this is like the Shakespeare play happening on stage... playing out something that's going on internal to the org, both at the group level and the individual level.) Egregores, therefore, are also linked inextricably to 'the mind', broadly construed. They're 'emergent' and not 'fixed'. (So whatever this 'rationality' thing is, could be important in a fundamental way, if it changes 'the mind'.) Circling makes this tangible on a small scale.
  • My teacher gave a talk on "AI" where he lists four kinds of processes (or algorithms, you could say) that all fit onto a spectrum. Artificial Intelligence > Culture > Emotions / Thoughts > Sense perception. Each of these 'algorithms' has 'agendas' or 'functions'. And these functions are not necessarily in service of truth. ('Sense perception' clearly evolved from natural selection, which is keyed into survival and reproduction. Not truth-seeking aims. In other words, it's 'not aligned'.) Humans 'buy in' to these algorithms and deeply believe they're serving our betterment, but 'fitness' (ability to survive and reproduce) is not necessarily the result of being more 'truth-aligned' or 'goodness-aligned'. So ... a deeper investigation may be needed to discern what's trustworthy. Why do we believe what we believe? Why do we believe the results of AI processes... and then why do we believe in our cultural ideologies? And why do I buy into my thoughts and feelings? Being able to see the nature of all four of these processes and seeing how they're the same phenomena on different scales / using different mediums is useful. 
  • Different people have different 'roles' with respect to the egregores. The obvious role I see is something like 'fundamentalist priest'? Rationality has 'fundamentalist priests' too. They use their religion as a tool for controlling others. "Wow you don't believe X? You must be stupid or insane." To be more charitable though, some people just 'want to move on' from debating things that they've already 'resolved' as 'true'. And so they reify certain doctrines as 'true doctrine' and then create platforms, organizations, and institutions where those doctrines are 'established truth'. From THERE, it becomes much easier to coordinate. And coordination is power. By aligning groups using doctrines, these groups 'get a lot done'. "Getting a lot done" here includes taking stuff over... ideological conquest, among other forms of conquest. This is the pattern that has played out for thousands of years. We have not broken free of this at all, and rationality (maybe moreso EA) has played right into this. And now there's a lot of incentive to maintain and prop up these 'doctrines' because a lot has been built on top of them. 
  • Why do humans keep getting captured? Well we're really easy to manipulate. I think the Sequences covers a lot of this... but also, things like 'fear of death, illness, and loss of livelihood' is a pretty reliable thing humans fall prey to. They readily give away their power when faced with these fears. See: COVID-19. 
  • Because we are afraid of various forms of loss, we desperately build and maintain castles on top of propped-up, false doctrines... so yeah, we're scheduling our own collapse. That shit is not gonna hold. Everything we see happening in this world, we ourselves created the conditions for.
Comment by Unreal on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-09T13:33:06.931Z · LW · GW

The hypotheses listed mostly focus on the internal aspects of CFAR.

This may be somewhat misleading to a naive reader. (I am speaking mainly to this hypothetical naive reader, not to Anna, who is non-naive.) 

What CFAR was trying to do was extremely ambitious, and it was very likely going to 'fail' in some way. It's good FOR CFAR to consider what the org could improve on (which is where its leverage is), but for a big-picture view, you should also think about the overall landscape and circumstances surrounding CFAR. Some of this was probably not obvious at the outset, and so CFAR may have had to discover where certain major roadblocks were as it tried to drive forward. This post doesn't seem to touch on those roadblocks in particular, maybe because they're not as interesting as considering the potential leverage points.

But if you're going to be realistic about this and want the big-picture sense, you should consider the following:

  • OK, so CFAR's mission under Pete's leadership was to find/train people who could be effective responders to x-risk, particularly AI risk. 
  • There is the possibility that most of the relevant 'action' on CFAR's part was in 'finding' the right people, with the right starting ingredients, whatever those may be. But maybe there just weren't that many good starting ingredients to be found. That limiting factor, if indeed it was a limiting factor, would have hampered CFAR's ability to succeed in its mission.
  • Hard problems around this whole thing also include: How do you know what the right starting ingredients even are? What do these 'right people' even look like? Are they going to be very similar to each other or very different? How much is the training supposed to be customized for the individual? What parts of the curriculum should be standardized? 
  • Additional possibility: Maybe the CFAR training wouldn't bear edible fruit until ten years after a person's initial exposure to CFAR? (And like, I'm leaning on this being somewhat true?) If this is the case, you're just stuck with slow feedback loops. (Additionally, consider the possibility that people who seem to be progressing 'quickly' might be doing so in a misleading way, or that your criteria for judging are quite wrong, causing you to make changes to your training that lead you astray.)
  • Less hard problem, but it adds complexity: How do you deal with the fact that people in this culture, especially rationalists, get all sensitive around being evaluated? You need to evaluate people, in the end, because you don't have the ability to train everyone who wants it, and not everyone is ready or worth the investment. But then people tend to get all fidgety and triggered when you start putting them in different buckets, especially when you're in a culture that believes strongly in individualism ("I am special, I have something to offer") and equality ("Things should be fair, everyone should have the same opportunities"). And also you're working with people who were socialized from a young age to identify with their own intelligence as a major part of their self-worth, and then they come into your community, feeling like they've finally found their people, only to be told: "Sorry, you're not actually cut out for this work. It's not about you."

Also:

  • The egregores that are dominating mainstream culture and the global world situation are not just sitting passively around while people try to train themselves to break free of their deeply ingrained patterns of mind. I think people don't appreciate just how hard it is to uninstall the malware most of us are born with / educated into (and which blocks people from original thinking). These egregores have been functioning for hundreds of years. Is the ground fertile for the art of rationality? My sense is that the ground is dry and salted, and yet we still make attempts to grow the art out of that soil.
  • IMO the effects that have led us to our current human-created global crises are the same ones that make it difficult to train people in rationality. So, y'all are up against a strong and powerful foe.

Honestly my sense is that CFAR was significantly crippled by one or more of these egregores (partially due to its own cowardice). But that's a longer conversation, and I'm not going to have it out here. 

//

All of this is just to give a taste of how difficult the original problems were that CFAR was trying to resolve. We're not in a world that's like, "Oh yeah, with your hearts and minds in the right place, you'll make it through!" Or even "If you just have the best thoughts compared to all the other people, you'll win!" Or even "If you have the best thoughts, a slick and effective team, lots of money, and a lot of personal agency and ability, you'll definitely find the answers you seek." 

And so the list of hypotheses + analyses above may make it sound like if CFAR had its shit more 'together', it would have done a better job. Maybe? How much better though? Realistically? 

As we move forward on this wild journey, it just seems to become clearer how hard this whole situation really is. The more collective clarity we have on the "actual ground-level situation" (versus internal ideas, hopes, wishes, and fears coloring our perspective of reality)... honestly, the more confronting it all is. The more existentially horrifying. And just touching THAT is hard (impossible?) for most people.

(Which is partially why I'm training at a place like MAPLE. I seem to be saner now about x-risk. And I get that we're rapidly running out of time without feeling anxious about that fact and without needing to reframe it in a more hopeful way. I don't have much need for hope, it seems. And it doesn't stop me from wanting to help.)

Comment by Unreal on Gracefully correcting uncalibrated shame · 2022-05-16T10:21:33.474Z · LW · GW

Just noting here that Elizabeth wasn't at one of MAPLE's retreats (from what I understand; I'd never set foot at MAPLE at the time of her visit). MAPLE hosts a silent meditation week about once a month. The rest of the weeks are called Responsibility Weeks. Residents are expected to meditate throughout the day during these Weeks (though it's really hard to, because they have to use computers and stuff), but guests are not. Guests can just experience a different way of living and being.

MAPLE has a handful of 'jock hippies'. Jock hippies believe things generally turn out all right. Their visceral experience is embodied. They often experience pleasurable sensations. They're happy despite a lot of turmoil. They like walking barefoot through nature, doing vigorous forms of exercise, and interacting with strangers.

Elizabeth was on the phone with one such person, who explained things to her in a way that failed to account for a more typical rationalist way of experiencing the world.

But it was good of Elizabeth to come and teach MAPLE something new, and MAPLE is always learning how to better engage with its guests. There are heated debates about this, where people get passionate about giving guests a more comfortable experience vs. a more monastic one. There is always a tension here, but I do think it's worth it for MAPLE to understand how to treat different people and to know where they're coming from.

MAPLE's 'demographic' is one of the most (culturally) diverse that I have seen (for something that is super niche and not mainstream or well-funded), and it brings up a lot of complex scenarios. Each cultural demographic uses language and communication in different ways, and so lots of communication errors are possible. I believe trial-and-error learning is needed to grow in this area.

But it would be nice if there were a way to feel more resolution with Elizabeth in particular. I will consider it myself, but, Elizabeth, if you wanted to let me know what would be beneficial for making things right, that would also be helpful. 

Comment by Unreal on Unreal's Shortform · 2022-01-10T21:37:28.436Z · LW · GW

I'm in favor of totally resolving human greed and hatred, but this doesn't seem tractable to me either. (It is literally possible to do it within an individual, through a particular path, but it's not the path most choose.) 

Instead, it seems more tractable to create and promote systems and cultures that heavily incentivize moderating greed and hate.

Comment by Unreal on Unreal's Shortform · 2022-01-05T23:16:57.664Z · LW · GW

Yeah, it seems like... the rationalization might be sort of a cover story for certain bad habits or patterns that they don't want to fix in themselves. shrug. I'm not a huge fan.

Comment by Unreal on Unreal's Shortform · 2022-01-05T23:09:10.380Z · LW · GW

We are also the closest we've ever been in the history of the world to potentially destroying all life. 😐 I view our current situation as much worse than it has ever been. 

Comment by Unreal on Unreal's Shortform · 2022-01-03T13:50:54.362Z · LW · GW

And we also had no system for handling the work that people didn’t want to do (I didn’t buy groceries because it was my job; I went to the store because the fridge was empty and I wanted to help) or how to handle people saying that they were going to do something and then not following through.

I feel vaguely frustrated that this is such a common problem. 

Lack of Hufflepuff virtues. Shakes head. 

Comment by Unreal on Unreal's Shortform · 2022-01-03T13:38:17.702Z · LW · GW

I had explored enough career paths at this point to recognize that most of my opportunities to positively affect the world were limited by the organizations or institutions that were already established or by my own ability to affect them from within, and I didn’t like my prospects. Government agencies, nonprofits with limited scope, politicians, think tanks -- I couldn’t find any employers that matched my level of ambition while also being self-reflective and self-critical and thus willing and able to adjust and pivot as they proactively learned more about the shape of the problems in the world (there are a lot of constraints out there).

An important paragraph. 

I haven't personally looked around, but it's not very surprising to me that someone would say this. 

Comment by Unreal on Unreal's Shortform · 2022-01-03T13:21:27.962Z · LW · GW

There are a few abnormally effective people: Elon Musk is the celebrity poster child of effectiveness -- love him or hate him, it’s really impressive what he’s been able to accomplish.

Yeah, I just disagree.

I think you should only count as effective if you're helping with the problems of the world. I'm pretty sure he's only making them worse. There are lots of institutions and organizations that are highly effective... at making things worse. This is not that impressive to me.

Comment by Unreal on Unreal's Shortform · 2022-01-03T13:19:43.152Z · LW · GW

Prospective funders reported that the bottleneck was a lack of promising projects. As far as we could tell, the lack of good projects is a result of a lack of effective, benevolent, and coordinated people.

It doesn’t seem like there’s a lack of benevolence or altruism -- there are plenty of people who want to solve these problems, but either they can’t figure out what to do, or what they try never really works, or they settle for tackling smaller issues that they think they can actually resolve.

This is extremely well stated. 

Comment by Unreal on Unreal's Shortform · 2022-01-03T13:18:33.360Z · LW · GW

Elon Musk challenging the UN to show how they could end world hunger with $6B.

Given what I'm learning about Modern Slavery, whenever more resources are injected into corrupt systems, the people at the bottom are just exploited for even more. You can give these people money, but if they're inside a deeply corrupt system, everything they have will be taken almost immediately.

The root cause is human greed and hatred. Without solving these, interventions will always be band-aid solutions.

Comment by Unreal on Unreal's Shortform · 2022-01-03T13:11:11.691Z · LW · GW

Short and immediate responses as I read some of this post about Leverage:

https://cathleensdiscoveries.com/LivingLifeWell/in-defense-of-attempting-hard-things

When I first encountered the group, it was clear that something different was going on. They were pretty much crap at any of the conservation behaviors that in my circles meant that you cared about improving the state of the world: they didn’t recycle, they’d turn on the kitchen sink and then walk away to go get something, they’d hold the fridge door wide open while they tried to carefully answer a question about the limited types of goals they’d seen in people’s motivational setups -- I don’t even know if they were registered to vote! But that was almost symbolic of their determination to not flinch away from the size and complexity of the problems that humanity doesn’t seem to be on track to solving: they weren’t pretending that taking public transport and reusing shopping bags would handle problems of such magnitude, they weren’t resigned to never solving them or in denial that they existed -- they were spending all their attention genuinely trying to figure out whether there might be any counterintuitive ways that they could end up successfully addressing these very real and very hard problems.

I appreciate this reframing of the behavior as a sign of determination.

I also think we might end up turning such behaviors into a kind of social signal that reads to our ingroup as focused, determined, and doing it for real. That would be just as bad as using recycling as a social signal (since we've lost the actual good thing in favor of signaling). But it would actually be slightly worse, imo, because you're hitting the 'wastefulness' button as a favorable signal, so it becomes better to waste 'noticeably' or excessively.

What we do with our minds matters at every level. I think conspicuous wastefulness is not a good thing for a mind to train itself to do.