Posts

A Path out of Insufficient Views 2024-09-24T20:00:27.332Z
Dishonorable Gossip and Going Crazy 2023-10-14T04:00:35.591Z
Frame Bridging v0.8 - an inquiry and a technique 2023-06-20T19:46:39.502Z
Post-COVID Integration Rituals 2021-04-12T16:54:53.557Z
3 Cultural Infrastructure Ideas from MAPLE 2019-11-26T18:56:48.921Z
Unreal's Shortform 2019-08-03T21:11:22.475Z
Dependability 2019-03-26T22:49:37.402Z
Rest Days vs Recovery Days 2019-03-19T22:37:09.194Z
Active Curiosity vs Open Curiosity 2019-03-15T16:54:45.389Z
Policy-Based vs Willpower-Based Intentions 2019-02-28T05:17:55.302Z
Moderating LessWrong: A Different Take 2018-05-26T05:51:40.928Z
Circling 2018-02-16T23:26:54.955Z
Slack for your belief system 2017-10-26T08:19:27.502Z
Being Correct as Attire 2017-10-24T10:04:10.703Z
Typical Minding Guilt/Shame 2017-10-24T09:39:35.498Z

Comments

Comment by Unreal on How I started believing religion might actually matter for rationality and moral philosophy · 2024-10-01T18:53:21.662Z · LW · GW

Catholicism never would have collected the intelligence necessary to invent a nuke. Their worldview was not compatible with science. It was an inferior organizing principle. ("inferior" meaning less capable of coordinating a collective intelligence needed to build nukes.)

You believe intelligence is such a high good, a high virtue, that it would be hard for you to see how intelligence is deeply and intricately causal in the destruction of life on this planet, and that therefore the less intelligent, less destructive religions actually have more ethical ground to stand on, even though they were still fairly corrupt. 

But it's a straightforward comparison. 

Medieval "dark" ages = almost no technological progress, very little risk of blowing up the planet in any way; relatively, not inspiring, but still - kudos for keeping us from hurtling toward extinction, and at this point, we're fine with rewarding this even though it's such a "low bar"

Today = massive, exponential technological progress, nuclear war could already take us all out, and we have a number of other x-risks to worry about. And we're so identified with science and tech that we aren't willing to stop, even as we admit OUT LOUD that it could cause extinction-level catastrophe. This is worse than the Crusades by a long shot. We're not talking about sending children to war. We're talking about the end of children. Just no more children. This is worse than suicide cults that claim we go to heaven as long as we commit suicide. We don't even think what we're doing will necessarily result in heaven, and we do it anyway. We have no evidence we can upload consciousnesses at all. Or end aging and death. Or build a friendly AI. At least the Catholics were convinced a very good thing would happen by sending kids to war. We're not even convinced, and we are willing to risk the lives of all children. Do you see how this is worse than the Catholics? 

Comment by Unreal on A Path out of Insufficient Views · 2024-10-01T18:01:13.958Z · LW · GW

are you putting forward that something about worldviews sometimes relies on faster than light signaling?

OK this is getting close. I am saying worldviews CANNOT EVER be fast enough, and that's why the goal is to drop all worldviews to get "fast enough". Though the very idea of "fast enough" is itself 'wrong' because it's conceptual / limited / false. This is my worst-best attempt to point to a thing, but I am trying to be as literal as possible, not poetic. 

No response can be immediate in a physical universe

Yeah, we're including 'physical universe' as a 'worldview'. If you hold onto a physical universe, you're already stuck in a worldview, and that's not going to be fast enough.

The point is to get out of this mental, patterned, limited ideation. It's "blockheaded", as you put it. All ideas. All ways of looking. All frameworks and methodologies and sense-making. All of it goes. Including consciousness, perception. 

When all of it goes, then you don't need 'response times' or 'sense data' or 'brain activity' or 'neurons firing' or 'speed of light' or whatever. All of that can still 'operate' as normal, without a problem. 

We're getting to the end of where thinking or talking about it is going to help. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-30T23:29:46.717Z · LW · GW

I don't know if I fully get you, but you also nailed it on the head.

In such situation, I think the one weird trick would be to invent a belief system that actively denies being one. To teach people a dogma that would (among other things) insist that there is no dogma, you just see the reality as it is (unlike all the other people, who merely see their dogmas). To invent rituals that consist (among other things) of telling yourself repeatedly that you have no rituals (unlike all the other people). To have leaders that deny being leaders (and yet they are surrounded by followers who obey them, but hey that's just how reality is).

So, basically... science.

Science is the best cult because it convincingly denies being one at all, disguising itself as truth itself. 

I think it's worth admiring science and appreciating it for all the good things it has provided.

And I think it has its limitations, and people should start waking up soon to the fact that if the world is destroyed, humans all destroyed, etc. then science played an instrumental and causal role in that, and part of that is the insanity and evil within that particular cult / worldview. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-30T23:24:21.594Z · LW · GW

Or why can't you have a worldview that computes the best answer to any given "what should I do" question, to arbitrary but not infinite precision?

I am not talking about any 'good enough' answer. Whatever you deem 'good enough' to some arbitrary precision.

I am talking about the correct answer every time. This is not a matter of sufficient compute. Because if the answer comes even a fraction of a second AFTER, it is already too late. The answer has to be immediate. An answer that is immediate took zero time; no compute is involved.

Is it something that your reader should be able to infer from this post, or from their own experience of life (assuming they're paying attention?)

Not unless they are Awakened. But my intended audience is that which does not currently walk a spiritual path.

Is this something that you think you know mainly because of your personal experience with previous worldviews failing you? Some other way?

A mix of my own practice, experience, training, insight, and teachings I've received from someone who knows. 

Merely having worldviews failing you is not sufficient to understand what I am saying. You also have to have found a relative solution to the problem. But if you are sick of worldviews failing you or of failing to live according to truth, then I am claiming there's a solution to that. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-30T23:07:58.240Z · LW · GW

I am saying things in a direct and more or less literal manner. Or at least I'm trying to. 

I did use a metaphor. I am not using "poetry"? When I say "Discard Ignorance" I mean that as literally as possible. I think it's somewhat incorrect to call what I'm saying phenomenology. That makes it sound purely subjective.

Am I talking down to you? I did not read it that way. Sorry it comes across that way. I am attempting to be very direct and blunt because I think that's more respectful, and it's how I talk. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-30T13:14:37.715Z · LW · GW

First paragraph: 3/10. The claim is that something was already more natural to begin with, but you need deliberate practice to unlock the thing that was already more natural. It's not that it 'comes more naturally' after you practice something. What 'felt' natural before was actually very unnatural and hindered, but we don't realize this until after practicing. 

2nd, 3rd, 4th paragraph: 2/10. This mostly doesn't seem relevant to what I'm trying to offer.

...

It's interesting trying to watch various people try to repeat what I'm saying or respond to what I'm saying and just totally missing the target each time. 

It suggests an active blind spot or a refusal to try to look straight at the main claim I'm making. I have been saying it over and over again, so I don't think it's on my end. Although the thing I am trying to point at is notoriously hard to point at, so there's that. 

...

Anyway the cruxy part is here, and so to pass my ITT you'd have to include this:

"It's not a meta-process. It's not metacognition. It's not intelligence. It's also not intuition or instinct. This wisdom doesn't get better with more intelligence or more intuition. " 

"I'm more guided by a wisdom that is not based in System 1 or System 2 or any "process" whatsoever. "

"The "one weird trick" to getting the right answers is to discard all stuck, fixed points. Discard all priors and posteriors. Discard all aliefs and beliefs. Discard worldview after worldview. Discard perspective. Discard unity. Discard separation. Discard conceptuality. Discard map, discard territory. Discard past, present, and future. Discard a sense of you. Discard a sense of world. Discard dichotomy and trichotomy. Discard vague senses of wishy-washy flip floppiness. Discard something vs nothing. Discard one vs all. Discard symbols, discard signs, discard waves, discard particles. 

All of these things are Ignorance. Discard Ignorance."

Comment by Unreal on A Path out of Insufficient Views · 2024-09-30T13:01:32.060Z · LW · GW

Just respond genuinely. You already did.

Comment by Unreal on A Path out of Insufficient Views · 2024-09-30T13:00:20.461Z · LW · GW

I don't know how else to phrase it, but I would like to not contradict interdependent origination. While still pointing toward what happens when all views are dropped and insight becomes possible. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-26T09:59:54.334Z · LW · GW

I appreciate this attempt... but no it is not it. 

What I'm talking about is not the skill to combine S1 and S2 fluidly as needed. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-26T01:07:54.777Z · LW · GW

I will respond to this more fully at a later point. But a quick correction I wish to make:

What I'm referring to is not about System 1 or System 2 so much. It's not that I rely more on System 1 to do things. System 1 and System 2 are both unreliable systems, each with major pitfalls. 

I'm more guided by a wisdom that is not based in System 1 or System 2 or any "process" whatsoever. 

I keep trying to point at this, and people don't really get it until they directly see it. That's fine. But I wish people would at least mentally try to understand what I'm saying, and so far I'm often being misinterpreted. Too much mental grasping at straws. 

The wisdom I refer to is able to skillfully use either System 1 or System 2 as appropriate. It's not a meta-process. It's not metacognition. It's not intelligence. It's also not intuition or instinct. This wisdom doesn't get better with more intelligence or more intuition. 

It's fine to not understand what I'm referring to. But can anyone repeat back what I'm saying without adding or subtracting anything? 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-25T18:08:23.640Z · LW · GW

Yes we agree. 👍🌻

I think I mention this in the essay too. 

If we are just changing at the drop of a hat, not for truth, but for convenience or any old reason, like most people are, ... 

or even under very strenuous dire circumstances, like we're about to die or in excruciating pain or something...

then that is a compromised mind. You're working with a compromised, undisciplined mind that will change its answers as soon as externals change.

Views change. Even our "robust" principles can go out the window under extreme circumstances. (So what's going on with those people who stay principled under extreme circumstances? This is worth looking into.) 

Of course views are usually what we have, so we should use them to the best extent we are able. Make good views. Make better views. Use truth-tracking views. Use ethical views. Great. 

AND there is this spiritual path out of views altogether, and this is even more reliable than solely relying on views or principles or commandments. 

I will try out a metaphor. Perhaps you've read The Inner Game of Tennis.

In tennis, at first, you need to practice the right move deliberately over and over again, and it feels awkward. It goes against your default movements. This is "using more ethical views" over default habits. Using principles. 

But somehow, as you fall into the moves, you realize: This is actually more natural than what I was doing before. The body naturally wants to move this way. I was crooked. I was bent. I was tense and tight. I was weak or clumsy. Now that the body is healthier, more aligned, stronger... these movements are obviously more natural, correct, and right. And that was true all along. I was lacking the right training and conditioning. I was lacking good posture. 

It wasn't just that I acquired different habits and got used to them. The new patterns are less effortful to maintain, better for me, and they're somehow clearly more correct. I had to unlearn, and now I somehow "know" less. I'm holding "less" patterning in favor of what doesn't need anything "held onto." 

This is also true for the mind. 

We first have to learn the good habits, and they go against our default patterning and conditioning. We use rules, norms, principles. We train ourselves to do the right thing more often and avoid the wrong thing. This is important.

Through training the mind, we realize the mind naturally wishes to do good, be caring, be courageous, be steadfast, be reliable. The body is not naturally inclined to sit around eating potato chips. We can find this out just by actually feeling what it does to the body. And so neither is the mind naturally inclined to think hateful thoughts, lie to itself and others, or be fed mentally addictive substances (e.g. certain kinds of information). 

To be clear:

"More natural" does not mean more in line with our biology or evo-psych. It does not mean lazier or more complacent. It does not mean less energetic, and in some way it doesn't even mean less "effort". But it does mean less holding on, less tension, less agitation, less drowsiness, less stuckness, less hinderance.

"More natural" is even more natural than biology. And that's the thing that's probably going to trip up materialists. Because there's a big assumption that biology is more or less what's at the bottom of this human-stack. 

Well it isn't. 

There isn't a "the bottom."

It's like a banana tree. 

When you peel everything away, what is actually left? 

Well it turns out if you were able to PEEL SOMETHING AWAY, it wasn't the Truth of You. So discard it. And you keep going.

And that's the path. 

Comment by Unreal on How I started believing religion might actually matter for rationality and moral philosophy · 2024-09-25T14:59:50.578Z · LW · GW

I would also argue against the claim religious institutions are "devoid of moral truths". I think this is mostly coming from secularist propaganda. In fact these institutions are still some of the most charitable, generous institutions that provide humanitarian aid all over the world. Their centuries-old systems are relatively effective at rooting out evil-doing in their ranks. 

Compared to modern corporations, they're acting out of a much clearer sense of morality. Compared to modern secular governments, such as that of the US, they're doing less violence and harm to the planet. They did not invent nuclear weapons. They are not striving to build AGI. Furthermore, I doubt they would.

When spiritual teachers were asked about creating AI versions of themselves, they were not interested, and one company had to change their whole business model to creating sales bots instead. (Real story. I won't reveal which company.) 

I'm sad about all the corruption in religious institutions, still. It's there. Hatred against gay people and controlling women's bodies. The crusades. The jihads. Using coercive shame to keep people down. OK, well, I can tell a story about why corruption seeped into the Church, and it doesn't sound crazy to me. (Black Death happened, is what.) 

But our modern world has become nihilistic, amoral, and vastly more okay with killing large numbers of living beings, ecosystems, habitats, the atmosphere, etc. Pat ourselves on the back for civil rights, yes. Celebrate this. But who's really devoid of moral truths here? When we are the ones casually destroying the planet and even openly willing to take 10%+ chances at total extinction to build an AGI? The Christians and the Buddhists and even the jihadists aren't behind this. 

Comment by Unreal on How I started believing religion might actually matter for rationality and moral philosophy · 2024-09-25T14:56:58.950Z · LW · GW

My second point is that if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either, our commonly accepted humanitarian moral values are all wrong or this mutation process happened almost instantly:

This is easy to research. 

I will name a few ways the Buddha was ahead of his time in terms of 'humanitarian moral values' (which I do not personally buy into, and I don't claim the Buddha did either, but if it helps shed light on some things): 

  • He cared about environmentalism and not polluting shared natural resources, such as forests and rivers. I don't have specific examples in mind about how he advocated for this, but I believe the evidence is out there. 
  • Soon after getting enlightened, his vow included a flourishing women's monastic sangha. For the time, this was completely unheard of. People did not believe women could get enlightened or become arhats or do spiritual practice. With the Buddha's blessing and support, his mother and former wife started a for-women, women-led monastic sangha. It was important that men did not lead this group, and he wisely made that clear to people. The nuns in this sangha had their lives threatened continuously, as what they were doing was so against the times. 
    • Someone who digs into this might find places where things were not 'equal' for women and men and bring those up as a reason to doubt. But from my own investigation into this, I think a lot of reasonable compromises had to be made. A delicate balancing between fitting into the current social structures while ensuring the ability of women to do spiritual practice in community.
    • I do not personally buy into 'equality' in the way progressive Westerners do, and I think our current takes on women/men are "off" and I don't advocate comparing all of our social norms and memes with the Buddha's implementation of a complex, context-dependent system. I do not think we have "got it right"; we are still in the process of working this out. 
  • The Buddha's followers were extremely ethical people, and there are notes of people being surprised and flabbergasted about this from his time, including various kings and such. Ethical here means non-violent, non-lying, non-stealing, well-behaved, calm, heedful, caring, sober, etc. 
  • Also extremely, extremely taboo for his time, the Buddha ordained people from the slave caste. Ven. Upali is the main example. He became one of the Buddha's main disciples. The Buddha firmly stood on the ground that people are not to be judged by their birth: race, class, gender, etc. There are some inspiring stories around this. 
  • I think it can be reasonably argued that Buddhists continue to be fairly ethical, relatively speaking. The Buddha did a good job setting things up. 
    • Unfortunately, Jesus died soon after he started teaching. The Buddha had decades to set things up for his followers. But I would also claim Jesus just wouldn't have done as good a job as the Buddha, even with more time. Not throwing shade at Jesus though. Setting things up well is just extremely difficult and requires unimaginable spiritual power and wisdom. 

There are also amazing stories about Christians. 

Comment by Unreal on How I started believing religion might actually matter for rationality and moral philosophy · 2024-09-25T14:16:03.158Z · LW · GW

I mean, if moral realism was correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had an universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths. 

This might be my trapped priors talking, but I am a non-cognitivist. I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless, and they are better parsed as prescriptive sentences such as "don't kill" or "boo on killing". 

In my view, moral codes are intrinsically subjective. There is no factual disagreement between Harry and Professor Quirrell which they could hope to overcome through empiricism, they simply have different utility functions.

I don't claim to be a moral realist or any other -ist that we currently have words for. I do follow the Buddha's teachings on morals and ethics. So I will share from that perspective, which I have reason to believe to be true and beneficial to take on, for anyone interested in becoming more ethical, wise, and kind. 

"Don't eat pork" is something I'd call an ethical rule, set for a specific time and place, which is a valid manifestation of morality.

"Avoiding killing" and "Avoid stealing" (etc) are held, in Buddhism, as "ethical precepts." They aren't rules, but they're like... 

a) Each precept is a game in and of itself with many levels

b) It is generally considered good to use this life and future lives to deepen one's practice of each of the precepts (to take on the huge mission of perfecting our choices to be more in alignment with the real thing these statements are pointing at). It's also friendly to help others do the same.

c) It's not about being a stickler to the letter of the law. The deeper you investigate each precept, the more you have to let go of your ideas of what it means to "be doing it right." It's not about getting fixated on rules, heuristics, or norms. There's something more real and true being pointed to that cannot be predicted, pre-determined, etc. 

Moral codes are not intrinsically subjective. But I would also not make claims about them being objective. We are caught in a sinkhole dichotomy between subjectivity and objectivity. Western thinking needs to find a way out of this. Too many philosophical discussions get stuck on these concepts. They're useful to a degree, but we need to be able to discard them when they become useless.

"Killing is wrong" is a true statement. It's not subjectively true; it's not objectively true. It's true in a sense that doesn't neatly fit into either of those categories. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-25T12:48:33.816Z · LW · GW

I use the mind too. I appreciate the mind a lot. I wouldn't choose to be less smart, for instance. 

But we are still over-using the mind to solve our problems. 

Intelligence doesn't solve problems. It creates problems. 

Wisdom resolves problems. 

This is what's really hard to convey. But I am trying, as you say.

Comment by Unreal on A Path out of Insufficient Views · 2024-09-25T12:46:16.124Z · LW · GW

Yes non-attachment points in the same direction. 

Another way of putting it is "negate everything." 

Another way of putting it is "say yes to everything." 

Both of these work toward non-attachment. 

Comment by Unreal on A Path out of Insufficient Views · 2024-09-24T21:30:38.076Z · LW · GW

Hm, if by "discovering" you mean 
Dropping all fixed priors 
Making direct contact with reality (which is without any ontology) 
And then deep insight emerges
And then after-the-fact you construct an ontology that is most beneficial based on your discovery

Then I'm on board with that

And yet I still claim that ontology is insufficient, imperfect, and not actually gonna work in the end. 

Comment by Unreal on you should probably eat oatmeal sometimes · 2024-08-26T00:52:03.533Z · LW · GW

we serve oatmeal every breakfast where i live 
love oatmeal

Comment by Unreal on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-19T19:35:44.689Z · LW · GW

Hm, you know I do buy that also. 

The task is much harder now, due to changing material circumstances as you say. The modern culture has in some sense vaccinated itself against certain forms of wisdom and insight. 

We acknowledge this problem and are still making an effort to address it, using modern technology. I cannot claim we're 'anywhere close' to resolving this? We're just firmly GOING to try, and we believe we in particular have a comparative advantage, due to a very solid community of spiritual practitioners. We have AT LEAST managed to get a group of modern millennials + Gen-Zers (with all the foibles of this group, with their mental hang-ups and all -- I am one of them)... and successfully put them through a training system that 'unschools' their basic assumptions and provides them the tools to personally investigate and answer questions like 'what is good' or 'how do i live' or 'what is going on here'. 

There's more to say, but I appreciate your engagement. This is helpful to hear. 

Comment by Unreal on Suffering Is Not Pain · 2024-06-19T19:24:19.414Z · LW · GW

no, anyone can visit! we have guests all the time. feel free to DM me if you want to ask more. or you can just go on the website and schedule a visit. 

Alex Flint is still here too, altho he lives on neighboring land now. 

'directly addressing suffering' is a good description of what we're up to? 

Comment by Unreal on Suffering Is Not Pain · 2024-06-18T19:17:16.222Z · LW · GW

if you have any interest in visiting MAPLE, lmk? (monasticacademy.org) 

Comment by Unreal on Suffering Is Not Pain · 2024-06-18T19:16:10.629Z · LW · GW

wow thanks for trying to make this distinction here on LessWrong. admirable. 

i don't seem to have the patience to do this kind of thing here, but i'm glad someone is trying. 

Comment by Unreal on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-18T14:41:22.537Z · LW · GW

We have a significant comparative advantage over pretty much all of Western philosophy. I know this is a 'bold claim'. If you're further curious you can come visit the Monastic Academy in Vermont, since it seems best 'shown' rather than 'told'. But we also plan on releasing online content in the near future to communicate our worldview. 

We do see that all the previous efforts have perhaps never quite consistently and reliably succeeded, in both hemispheres. (Because, hell, we're here now.) But it is not fair to say they have never succeeded to any degree. There have been a number of significant successes in both hemispheres. We believe we're in a specific moment in history where there's more leverage than usual, and so there's opportunity. We understand that chances are slim and dim. 

We have been losing the thread to 'what is good' over the millennia. We don't need to reinvent the wheel on this; the answers have been around. The question now is whether the answers can be taught to technology, or whether technology can somehow be yoked to the good / ethical, in a way that scales sufficiently. 

Comment by Unreal on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-14T17:01:01.957Z · LW · GW

Thank you for pointing this out, as it is very important. 

The morality / ethics of the human beings matters a lot. But it seems to matter more than just a lot. If we get even a little thing wrong here, ...

But we're getting more than just a little wrong here, imo. Afaict most modern humans are terribly confused about morality / ethics. As you say, "what is even good".

I've spoken with serious mathematicians who believe they might have a promising direction to the AI alignment problem. But they're also confused about what's good. That is not their realm of expertise. And math is not constrained by ethics; you can express a lot of things in math, wholesome and unwholesome. And the same is so with the social dynamics, as you point out. 

This is why MAPLE exists, to help answer the question of what is good, and help people describe that in math. 

But in order to answer the question for REAL, we can't merely develop better social models. Because again, your point applies to THAT as well. Developing better social models does not guarantee 'better for humanity/the planet'. That is itself a technology, and it can be used either way. 

We start by answering "what is good" directly and "how to act in accord with what is good" directly. We find the true 'constraints' on our behaviors, physical and mental. The entanglement between reality/truth and ethics/goodness is a real thing, but 'intelligence' on its own has never realized it. 

Comment by Unreal on CFAR Takeaways: Andrew Critch · 2024-02-27T20:08:37.510Z · LW · GW

Rationality seems to be missing an entire curriculum on "Eros" or True Desire.

I got this curriculum from other trainings, though. There are places where it's hugely emphasized and well-taught. 

I think maybe Rationality should be more open to sending people to different places for different trainings and stop trying to do everything on its own terms. 

It has been way better for me to learn how to enter/exit different frames and worldviews than to try to make everything fit into one worldview / frame. I think some Rationalists believe everything is supposed to fit into one frame, but Frames != The Truth. 

The world is extremely complex, and if we want to be good at meeting the world, we should be able to pick up and drop frames as needed, at will. 

Anyway, there are three main curricula: 

  1. Eros (Embodied Desire) 
  2. Intelligence (Rationality)
  3. Wisdom (Awakening) 

Maybe you guys should work on 2, but I don't think you are advantaged at 1 or 3. But you could give intros to 1 and 3. CFAR opened me up by introducing me to Focusing and Circling, but I took non-rationalist trainings for both of those. As well as many other things that ended up being important. 

Comment by Unreal on If Clarity Seems Like Death to Them · 2023-12-31T00:17:00.948Z · LW · GW

I was bouncing around LessWrong and ran into this. I started reading it as though it were a normal post, but then I slowly realized ... 

I think according to typical LessWrong norms, it would be appropriate to try to engage you on the object level claims or talk about the meta-presentation as though you and I were trying to collaborate on figuring things out and how to communicate things.

But according to my personal norms and integrity, if I detect that something is actually quite off (like alarm bells going off) then it would be kind of sick to ignore that, and we should actually treat this like a triage situation. Or at least a call to some kind of intervention. And it would be sick to treat this like everything is normal, and that you are sane, and I am sane, and we're just chatting about stuff and oh isn't the weather nice today. 

LessWrong is the wrong place for this to happen. This kind of "prioritization" sanity does not flourish here. 

Not-sane people get stuck on LessWrong in order to stay not-sane because LW actually reinforces a kind of mental unwellness and does not provide good escape routes. 

If you're going to write stuff on LW, it might be better to write a journal about the various personal, lifestyle interventions you are making to get out of the personal, unwell hole you are in. A kind of way to track your progress, get accountability, and celebrate wins. 

Comment by Unreal on Here's the exit. · 2023-12-30T23:03:26.238Z · LW · GW

Musings: 

COVID was one of the MMA-style arenas for different egregores to see which might come out 'on top' in an epistemically unfriendly environment. 

I have a lot of opinions on this that are more controversial than I'm willing to go into right now. But I wonder what else will work as one of these "testing arenas." 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T20:45:29.088Z · LW · GW

I don't interpret that statement in the same way. 

You interpreted it as 'lied to the board about something material'. But to me, it also might mean 'wasn't forthcoming enough for us to trust him' or 'speaks in misleading ways (but not necessarily on purpose)' or it might even just be somewhat coded language for 'difficult to work with + we're tired of trying to work with him'. 

I don't know why you latch onto the interpretation that he definitely lied about something specific. 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T16:57:23.135Z · LW · GW

I was asked to clarify my position about why I voted 'disagree' with "I assign >50% to this claim: The board should be straightforward with its employees about why they fired the CEO." 

I'm putting a maybe-unjustified high amount of trust in all the people involved, and from that, my prior is very high on "for some reason, it would be really bad, inappropriate, or wrong to discuss this in a public way." And given that OpenAI has ~800 employees, telling them would basically count as a 'public' announcement. (I would update significantly on the claim if it was only a select group of trusted employees, rather than all of them.)

To me, people seem too biased in the direction of "this info should be public"—maybe with the assumption that "well I am personally trustworthy, and I want to know, and in fact, I should know in order to be able to assess the situation for myself." Or maybe with the assumption that the 'public' is good for keeping people accountable and ethical. Meaning that informing the public would be net helpful. 

I am maybe biased in the direction of: The general public overestimates its own trustworthiness and ability to evaluate complex situations, especially without most of the relevant context. 

My overall experience is that the involvement of the public makes situations worse, as a general rule. 

And I think the public also overestimates their own helpfulness, post-hoc. So when things are handled in a public way, the public assesses their role in a positive light, but they rarely have ANY way to judge the counterfactual. And in fact, I basically NEVER see them even ACKNOWLEDGE the counterfactual. Which makes sense because that counterfactual is almost beyond-imagining. The public doesn't have ANY of the relevant information that would make it possible to evaluate the counterfactual. 

So in the end, they just default to believing that it had to play out in the way it did, and that the public's involvement was either inevitable or good. And I do not understand where this assessment comes from, other than availability bias?

The involvement of the public, in my view, incentivizes more dishonesty, hiding, and various forms of deception. Because the public is usually NOT in a position to judge complex situations and lacks much of the relevant context (and also isn't particularly clear about ethics, often, IMO), people who ARE extremely thoughtful, ethically minded, high-integrity, etc. are often put in very awkward binds when it comes to trying to interface with the public. And so I believe it's better for the public not to be involved if they don't have to be.

I am a strong proponent of keeping things close to the chest and keeping things within more trusted, high-context, in-person circles. And to avoid online involvement as much as possible for highly complex, high-touch situations. Does this mean OpenAI should keep it purely internal? No they should have outside advisors etc. Does this mean no employees should know what's going on? No, some of them should—the ones who are high-level, responsible, and trustworthy, and they can then share what needs to be shared with the people under them.

Maybe some people believe that all ~800 employees deserve to know why their CEO was fired. Like, as a courtesy or general good policy or something. I think it depends on the actual reason. I can envision certain reasons that don't need to be shared, and I can envision reasons that ought to be shared. 

I can envision situations where sharing the reasons could potentially damage AI Safety efforts in the future. Or disable similar groups from being able to make really difficult but ethically sound choices—such as shutting down an entire company. I do not want to disable groups from being able to make extremely unpopular choices that ARE, in fact, the right thing to do. 

"Well if it's the right thing to do, we, the public, would understand and not retaliate against those decision-makers or generally cause havoc" is a terrible assumption, in my view. 

I am interested in brainstorming, developing, and setting up really strong and effective accountability structures for orgs like OpenAI, and I do not believe most of those effective structures will include 'keep the public informed' as a policy. More often the opposite.

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T03:06:22.108Z · LW · GW

Media & Twitter reactions to OpenAI developments were largely unhelpful, specious, or net-negative for overall discourse around AI and AI Safety. We should reflect on how we can do better in the future and possibly even consider how to restructure media/Twitter/etc to lessen the issues going forward.

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T03:01:29.727Z · LW · GW

The OpenAI Charter, if fully & faithfully followed and effectively stood behind, including possibly shutting the whole project down if it came down to it, would prevent OpenAI from being a major contributor to AI x-risk. In other words, as long as people actually followed this particular Charter to the letter, it is sufficient for curtailing AI risk, at least from this one org. 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T02:41:56.933Z · LW · GW

The partnership between Microsoft and OpenAI is a net negative for AI safety. And: What can we do about that? 

Comment by Unreal on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T02:40:29.878Z · LW · GW

We should consider other accountability structures than the one OpenAI tried (i.e. the non-profit / BoD). Also: What should they be?

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-19T17:39:48.933Z · LW · GW

I would never have put it as either of these, but the second one is closer. 

For me personally, I try to always have an internal sense of my inner motivation before/during doing things. I don't expect most people do, but I've developed this as a practice, and I am guessing most people can, with some effort or practice. 

I can pretty much generally tell whether my motivation has these qualities: wanting to avoid, wanting to get away with something, craving a sensation, intention to deceive or hide, etc. And when it comes to speech actions, this includes things like "I'm just saying something to say something" or "I just said something off/false/inauthentic" or "I didn't quite mean what I just said or am saying". 

Although, the motivations to really look out for are like "I want someone else to hurt" or "I want to hurt myself" or "I hate" or "I'm doing this out of fear" or "I covet" or "I feel entitled to this / they don't deserve this" or a whole host of things that tend to hide from our conscious minds. Or in IFS terms, we can get 'blended' with these without realizing we're blended, and then act out of them. 

Sometimes, I could be in the middle of asking a question and notice that the initial motivation for asking it wasn't noble or clean, and then by the end of asking the question, I change my inner resolve or motive to be something more noble and clean. This is NOT some kind of verbal sentence like going from "I wanted to just gossip" to "Now I want to do what I can to help." It does not work like that. It's more like changing a martial arts stance. And then I am more properly balanced and landed on my feet, ready to engage more appropriately in the conversation. 

What does it mean to take personal responsibility? 

I mean, for one example, if I later find out something I did caused harm, I would try to 'take responsibility' for that thing in some way. That can include a whole host of possible actions, including just resolving not to do that in the future. Or apologizing. Or fixing a broken thing. 

And for another thing, I try to realize that my actions have consequences and that it's my responsibility to improve my actions. Including getting more clear on the true motives behind my actions. And learning how to do more wholesome actions and fewer unwholesome actions, over time. 

I almost never use a calculating frame to try to think about this. I think that's inadvisable and can drive people onto a dark or deluded path 😅

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-18T23:08:50.908Z · LW · GW

I'm fine with drilling deeper but I currently don't know where your confusion is. 

I assume we exist in different frames, but it's hard for me to locate your assumptions. 

I don't like meandering in a disagreement without very specific examples to work with. So maybe this is as far as it is reasonable to go for now. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-16T04:50:41.772Z · LW · GW

Hm, neither of the motives I named includes any specific concern for the person. Or any specific concern at all. Although I do think having a specific concern is a good bonus? Somehow you interpreted what I said as though there need to be specific concerns. 

RE: The bullet point on compassion... maybe just strike that bullet point.  It doesn't really affect the rest of the points. 

It's good if people ultimately use their models to help themselves and others, but I think it's bad to make specific questions or models justify their usefulness before they can be asked. 

I think I get what you're getting at. And I feel in agreement with this sentiment. I don't want well-intentioned people to hamstring themselves. 

I certainly am not claiming ppl should make a model justify its usefulness in a specific way. 

I'm more saying ppl should be responsible for their info-gathering and treat that with a certain weight. Like a moral responsibility comes with information. So they shouldn't be cavalier about it... but especially they should not delude themselves into believing they have good intentions for info when they do not. 

And so to casually ask about Alice's sanity, without taking responsibility for the impact of speech actions and without acknowledging the potential damage to relationships (Alice's or others'), is irresponsible. Even if Alice never hears about this exchange, it can nonetheless cause a bunch of damage, and a person should speak about these things with eyes open to that. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T03:48:08.421Z · LW · GW

Oh, okay, I found that a confusing way to communicate that? But thanks for clarifying. I will update my comment so that it doesn't make you sound like you did something very dismissive. 

I feel embarrassed by this misinterpretation, and the implied state of mind I was in. But I believe it is an honest reflection about something in my state of mind, around this subject. Sigh. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T03:32:54.140Z · LW · GW

But I think it's pretty important that people be able to do these kind of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of.  There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals.  

There is a chance we don't have a disagreement, and there is a chance we do. 

In brief, to see if there's a crux anywhere in here:

  • Don't need ppl to boot up 'care as a friend' module. 
  • Do believe compassion should be the motivation behind these conversations, even if not friends, where compassion = treats people as real and relationships as real. 
  • So it matters if the convo is like (A) "I care about the world, and doing good in the world, and knowing about Renshin's sanity is about that, at the base. I will use this information for good, not for evil." Ideally the info is relevant to something they're responsible for, so that it's somewhat plausible the info would be useful and beneficial. 
  • Versus (B) "I'm just idly curious about it, but I don't need to know and if it required real effort to know, I wouldn't bother. It doesn't help me or anyone to know it. I just want to eat it like I crave a potato chip. I want satisfaction, stimulation, or to feel 'I'm being productive' even if it's not truly so, and I am entitled to feel that just b/c I want to. I might use the info in a harmful way later, but I don't care. I am not really responsible for info I take in or how I use info." 
  • And I personally think the whole endeavor of modeling the world should be for the (A) motive and not the (B) motive, and that taking in any-and-all information isn't, like, neutral or net-positive by default. People should endeavor to use their intelligence, their models, and their knowledge for good, not for evil or selfish gain or to feed an addiction to feeling a certain way. 
  • I used a lot of 'should' but that doesn't mean I think people should be punished for going against a 'should'. It's more like healthy cultures, imo, reinforce such norms, and unhealthy cultures fail to see or acknowledge the difference between the two sets of actions. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T03:13:51.394Z · LW · GW

I had written a longer comment, illustrating how Oliver was basically committing the thing that I was complaining about and why this is frustrating. 

The shorter version:

His first paragraph is a strawman. I never said 'take me at my word' or anything close. All my previous statements, and anything known about my stances, would point to this being something I would never say, so this seems weirdly disingenuous. 

His second paragraph is weirdly flimsy, implying that ppl are mostly using the literal words out of people's mouths to determine whether they're lying (either to others or to themselves). I would be surprised if Oliver would actually find Alice and Bob both saying "trust me i'm fine" would be 'totally flat' data, given he probably has to discern deception on a regular basis.

Also I'm not exactly the 'trust me i'm fine' type, and anyone who knows me would know that about me, if they bothered trying to remember. I have both the skill of introspection and the character trait of frankness. I would reveal plenty about my motives, aliefs, the crazier parts of me, etc. So paragraph 2 sounds like a flimsy excuse to be avoidant? 

But the IMPORTANT thing is... I don't want to argue. I wasn't interested in that. I was hoping for something closer to perspective-taking, reconciliation, or reaching more clarity about our relational status. But I get that I was sounding argumentative. I was being openly frustrated and directing that in your general direction. Apologies for creating that tension. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T02:59:10.887Z · LW · GW

The 'endless list' comment wasn't about you; it was about a more 'general you'. Sorry that wasn't clear. I edited stuff out and then that became unclear. 

I mostly wanted to point at something frustrating for me, in the hopes that you or others would, like, get something about my experience here. To show how trapped this process is, on my end.

I don't need you to fix it for me. I don't need you to change. 

I don't need you to take me for my word. You are welcome to write me off, it's your choice. 

I just wanted to show how I am and why. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T02:05:37.990Z · LW · GW

FTR, the reason I am engaging with LW at all, like right now... 

I'm not that interested in preserving or saving MAPLE's shoddy reputation with you guys. 

But I remain deeply devoted to the rationalists, in my heart. And I'm impacted by what you guys do. A bunch of my close friends are among you. And... you're engaging in this world situation, which impacts all of us. And I care about this group of people in general. I really feel a kinship here I haven't felt anywhere else. I can relax around this group in a way I can't elsewhere. 

I concern myself with your norms, your ethical conduct, etc. I wish well for you, and wish you to do right by yourselves, each other, and the world. The way you conduct yourselves has big implications. Big implications for impacts to me, my friends, the world, the future of the world. 

You've chosen a certain level of global-scale responsibility, and so I'm going to treat you like you're AT THAT LEVEL. The highest possible level, with a very high set of expectations. I hold myself AT LEAST to that high of a standard, to be honest, so it's not hypocritical. 

And you can write me off, totally. No problem. 

But in my culture, friends concern themselves with their friends' conduct. And I see you as friends. More or less. 

If you write me off (and you know me personally), please do me the honor of letting me know. Ideally to my face. If you don't feel you are gonna do that / don't owe me that, then it would help me to know that also. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-15T01:32:45.384Z · LW · GW

so okay i'm actually annoyed by a thing... lemme see if i can articulate it. 

  1. I clearly have orders of magnitude more of the relevant evidence to ascertain a claim about MAPLE's chances of producing 'crazy' ppl as you've defined it—and much more even than most MAPLE people (both current and former). 
  2. Plus I have much of the relevant evidence about my own ability to discern the truth (which includes all the feedback I've received, the way people generally treat me, who takes me seriously, how often people seem to want to back away from me or tune me out when I start talking, etc etc). 
  3. A bunch of speculators, with relatively very little evidence about either, come out with very strong takes on both of the above, and don't seem to want to take into account EITHER of the above facts, but instead find it super easy to dismiss any of the evidence that comes from people with the relevant data. Because of said 'possibility they are crazy'. 

And so there is almost no way out of this stupid box; this does not incline me to try to share any evidence I have, and in general, reasonable people advise me against it. And I'm of the same opinion. It's a trap to try.

It is both easy and an attractor for ppl to take anything I say and twist it into more evidence for THEIR biased or speculative ideas, and to take things I say as somehow further evidence that I've just been brainwashed. And then they take me less seriously. Which then further disinclines me to share any of my evidence. And so forth. 

This is not a sane, productive, and working epistemic process? As far as I can tell? 

Literally I was like "I have strong evidence" and Ben's inclination was to say "strong evidence is easy to come by / is everywhere" and link to a relevant LW article, somehow dismissing everything I said previously and might say in the future with one swoop. It effectively shut me down. 

And I'm like.... 

what is this "epistemic process" ya'll are engaged in

[Edit: I misinterpreted Ben's meaning. He was saying the opposite of what I thought he meant. Sorry, Ben. Another case of 'better wrong than vague' for me. 😅]

To me, it looks like [ya'll] are using a potentially endless list of back-pocket heuristics and 'points' to justify what is convenient for you to continue believing. And it really seems like it has a strong emotional / feeling component that is not being owned. 

[edit: you -> ya'll to make it clearer this isn't about Oliver] 

I sense a kind of self-protection or self-preservation thing. Like there's zero chance of getting access to the true Alief in there. That's why this is pointless for me.

Also, a lot of online talk about MAPLE is sooo far from realistic that it would, in fact, make me sound crazy to try to refute it. A totally nonsensical view is actually weirdly hard to counter, esp if the people aren't being very intellectually honest AND the people don't care enough to make an effort or stick through it all the way to the end. 

Comment by Unreal on Dishonorable Gossip and Going Crazy · 2023-10-14T15:56:26.573Z · LW · GW

Anonymized paraphrase of a question someone asked about me (reported to me later, by the person who was being asked the question): 

I have a prior about people who go off to monasteries sometimes going nuts, is Renshin nuts?

The person being asked responded "nah" and the question-asker was like "cool" 

I think this sort of exchange might be somewhat commonplace or normal in this sphere. 

I personally didn't feel angry, offended, or sad to hear about this exchange, but I don't feel the person asking the question was asking out of concern or care for me, as a person. But rather to get a quick update for their world model or something. And my "taste" about this is a "bad" taste. I don't currently have time to elaborate but may later. 

Comment by Unreal on Announcing Dialogues · 2023-10-09T13:23:19.014Z · LW · GW

Ideas I'm interested in playing with:

  • experiment with using this feature for one-on-one coaching / debugging; I'd be happy to help anyone with their current bugs... (I suspect video call is the superior medium but shrug, maybe there will be benefits to this way)
  • talk about our practice together (if you have a 'practice' and know what that means) 

Topics I'd be interested in exploring:

  • Why meditation? Should you meditate? (I already meditate, a lot. I don't think everyone should "meditate". But everyone would benefit from something like "a practice" that they do regularly. I'm using 'practice' in an extremely broad sense that can include rationality practices.)
  • Is there anything I don't do that you think I should do? 
  • How to develop collective awakening through technology
  • How to promote, maintain, develop, etc. ethical thoughts, speech, and action through technology

Comment by Unreal on Closing Notes on Nonlinear Investigation · 2023-09-18T18:17:59.207Z · LW · GW

I think the thing I'm attempting to point out is:

If I hold myself to satisfying A&C's criterion here, I am basically:

a) strangleholding myself on how to share information about Nonlinear in public
b) possibly overcommitting myself to a certain level of work that may not be worth it or desirable
c) implicitly biasing the process towards coming out with a strong case against Nonlinear (with lower-quality evidence, or evidence to the contrary, being biased against) 

I would update if it turns out A&C was actually fine with Ben coming to the (open, public) conclusion that A&C's claims were inaccurate, unfounded, or overblown, but it didn't sound like that was okay with them based on the article above, and they weren't open to that sort of conclusion. It sounded like they needed the outcome to be a pretty airtight case against Nonlinear. 

Anyway that's ... probably all I will say on this point. 

I am grateful for you, Ben, and the effort you put into this, as it shows your care, and I do think the community will benefit from the work. I am concerned about your well-being and health and time expenditure, but it seems like you have a sense for how to handle things going forward. 

I am into setting firm boundaries and believe it's a good skill to cultivate. I get that it is not always a popular option and may cause people to not like me. :P 

Comment by Unreal on Closing Notes on Nonlinear Investigation · 2023-09-16T19:55:58.604Z · LW · GW

it seemed to me Alice and Chloe would be satisfied to share a post containing accusations that were received as credible.

 

This is a horrible constraint to put on an epistemic process. You cannot, ever, guarantee the reaction to these claims, right? Isn't this a little like writing the bottom line first? 

If it were me in this position, I would have been like: 

Sorry Alice & Chloe, but the goal of an investigation like this is not to guarantee a positive reaction for your POV, from the public. The goal is to reveal what is actually true about the situation. And if you aren't willing to share your story with the public in that case, then that is your choice, and I respect that. But know that this may have negative consequences as well, for instance, on future people who Nonlinear works with. But if it turns out that your side of the story is false or exaggerated or complicated by other factors (such as the quality of your character), then it would be best for everyone if I could make that clear as well. It would not serve the truth to go into this process by having already 'chosen a winner' or 'trying to make sure people care enough' or something like this. 

Comment by Unreal on Sharing Information About Nonlinear · 2023-09-16T16:23:15.304Z · LW · GW

Neither here nor there: 

I am sympathetic to "getting cancelled." I often feel like people are cancelled in some false way (or a way that leaves people with a false model), and it's not very fair. Mobs don't make good judges. Even well-meaning, rationalist ones. I feel this way about basically everyone who's been 'cancelled' by this community. Truth and compassion were never fully upheld as the highest virtue, in the end. Justice was never, imo, served, but often used as an excuse for victims to evade taking personal responsibility for something and for rescuers to have something to do. But I still see the value in going through a 'cancelling' process, for everyone involved, and so I'm not saying to avoid it either. It just sucks, and I get it.

That said, the people who are 'cancelled' tend to be stubborn hard-heads about it, and their own obstinacy tends to lead further to an even more extreme downfall. It's like some suicidal part of them kicks in, and drives the knife in deeper without anyone's particular help. 

I agree it's good to never just give in to mob justice, but for your own souls to not take damage, try not to clench. It's not worth protecting it, whatever it happens to be. 

Save your souls. Not your reputation. 

Comment by Unreal on Sharing Information About Nonlinear · 2023-09-16T16:09:50.246Z · LW · GW

After reading more of the article, I have a better sense of this context that you mention. It would be interesting to see Nonlinear's response to the accusations because they seem pretty shameful, as is. 

I would actively advise against anyone working with Kat / Emerson, not without serious demonstration of reformation and, like, values-level shifts. 

If Alice is willing to stretch the truth about her situation (for any reason) or outright lie in order to enact harsher punishment on others, even as a victim of abuse, I would be mistrustful of her story. And so far I am somewhat mistrustful of Alice and very mistrustful of Kat / Emerson. 

Also, even if TekhneMakre's take is what in fact happened, it doesn't give Alice a total pass in that particular situation, to me. I get that it's hard to be clear-headed and brave when faced with potentially hostile or adversarial people, but I think it's still worth trying to be. I don't expect anyone to be brave, but I also don't treat anyone as totally helpless, even if the cards are stacked against them. 

Comment by Unreal on Sharing Information About Nonlinear · 2023-09-08T00:22:44.567Z · LW · GW

These texts have weird vibes from both sides. Something is off all around.  

That said, what I'm seeing: A person failed to uphold their own boundaries or make clear their own needs. Instead of taking responsibility for that, they blame the other person for some sort of abuse. 

This is called playing the victim. I don't buy it. 

I think it would generally be helpful if people were informed by the Drama Triangle when judging cases like these.