My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms

post by Kaj_Sotala · 2018-03-08T07:37:54.532Z · LW · GW · 138 comments

Contents

  Meditation as cognitive defusion practice
  Understanding suffering
  So what’s all this “look up” and “get out of the car” stuff?
  The three marks
  On why enlightenment may not be very visible in one’s behavior

Epistemic status: pretty confident. Based on several years of meditation experience combined with various pieces of Buddhist theory as popularized in various sources, including but not limited to books like The Mind Illuminated, Mastering the Core Teachings of the Buddha, and The Seeing That Frees; also discussions with other people who have practiced meditation, and scatterings of cognitive psychology papers that relate to the topic. The part that I’m the least confident of is the long-term nature of enlightenment; I’m speculating on what comes next based on what I’ve experienced, but have not actually had a full enlightenment. I also suspect that different kinds of traditions and practices may produce different kinds of enlightenment states.

While I liked Valentine’s recent post on kensho and its follow-ups a lot, one thing that I was annoyed by was the comments that the whole thing can’t be explained from a reductionist, third-person perspective. I agree that such an explanation can’t produce the necessary mental changes that the explanation is talking about. But it seemed wrong to me to claim that all of this would be somehow intrinsically mysterious and impossible to explain on such a level that would give people at least an intellectual understanding of what Looking and enlightenment and all that are. Especially not after I spoke to Val and realized that hey, I actually do know how to Look, and that thing he’s calling kensho, that’s happened to me too.

(Note however that kensho is a Zen term and I'm unfamiliar with Zen; I don't want to use a term which might imply that I was going with whatever theoretical assumptions Zen might have, so I will just talk about “my experience” when it comes up.)

So here is my attempt to give an explanation. I don’t know if I’ve succeeded, but here goes anyway.

----

One of my key concepts is going to be cognitive fusion.

Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too).

In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic.

Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.

Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.

On the other hand, even when doing math, it can sometimes be useful to question the axioms you’re using. In programming, taking the guarantees of your abstractions as literal axioms can also lead to trouble. And while it is useful to perceive something as objectively life-threatening and out to get you, that perception is going to get you in a lot of trouble if it’s actually false. Such as if you get into a fight with your romantic partner and assume that they actively want to hurt you, when they’re just feeling hurt over something that you said.

Cognitive fusion trades flexibility for focus. You will be strongly driven and capable of focusing on just the thing that’s in your mind, at the cost of being less likely to notice when that thing is actually wrong.

Some simple defusion techniques suggested by ACT include things like noticing when you’re thinking something bad about yourself, and prefacing it with “I’m having the thought that”. So if you find yourself thinking “I am a terrible person”, you can change that into “I’m having the thought that I am a terrible person”. Or you can repeat the word “terrible” a hundred times, until it stops having any meaning. Or you can see if you can manipulate the way that the thought sounds in your head, such as turning it into a comical whine that sounds like it’s from a cartoon, until you can no longer take it seriously. (Eliezer’s cognitive trope therapy should also be considered as a cognitive defusion technique.) In one way or another, all of these highlight the fact that the thought or emotion is just a mental construct, making it easier to question its truthfulness.

However, managing to defuse from a thought that is actively bothering you is a relatively superficial level of defusion. We must go deeper.

Meditation as cognitive defusion practice

While there are many different forms of meditation, many of them could be reasonably characterized as practicing the skill of intentional cognitive defusion.

One of the most basic forms of meditation is to just concentrate on your breath - or on any other focus that you happen to have chosen. Soon, a distraction will come up in your mind - something that says that there’s a more important thing to do, or that you are bored, or that this isn’t leading anywhere.

If you start engaging with the content of that distraction, you’re already failing to keep your focus. That is, if a thought comes to you saying that there’s a more important thing to do, and you start arguing with yourself and trying to make a logical case for why meditation is actually the most important thing, then you’ve already been distracted from whatever it was that you were supposed to be focusing on. On some level, you have bought into the internal logic of the distraction, and into the belief that the argument must be beaten on its own terms.

What you must do instead is to disregard the content of the distraction. Instead of becoming fused with its contents, defuse and redirect your attention back towards your focus. Whenever a new distraction arises, do this again.

As your skill improves and your attention becomes more reliably anchored on the focus, you can start learning additional skills. If you are doing something like the meditation program outlined in e.g. The Mind Illuminated, one of the next steps is to develop an awareness of distractions that are just on the edge of your consciousness, which are not yet distracting you but are going to steal your attention any moment now. By cultivating a sensitivity to those subtle movements of your mind, you are increasing your ability to notice lower-level details of what’s going on in your consciousness, in a way which helps with cognitive defusion by making you more aware of the ways in which your experience is constructed.

As an example of such increased sensitivity, some time back I was doing concentration meditation, using an app which plays the sound of something hitting a woodblock, 50 times per minute. As I was concentrating on listening to the sound, I noticed that what had originally been just one thing in my experience - a discrete sound event - was actually composed of many smaller parts. The beginning and end of the sound were different, so there were actually two sound sensations; and there was a subtle visualization of something hitting something else; and a sense of motion accompanying that visualization. I had not previously even been fully aware that my mind was automatically creating a mental image of what it thought that the sound represented.

Continuing to observe those different components, I became more aware of the fact that my visualization of the sound changed over time and between meditation sessions, in a rather arbitrary way. Sometimes my mind conjured up a vision of a hammer hitting a rock in a dwarven mine; sometimes it was two wooden sticks hitting each other; sometimes it was drops of water falling on the screen of my phone.

By itself, this would mostly just be a curiosity. However, developing the kind of mental precision that actually lets you separate your experience into these kinds of small subcomponents seems like a prerequisite for slicing your various mental outputs in a way which lets you see what they’re made of.

Last summer, I noticed myself having the thought that I couldn’t be happy, which made me feel bad. And then I noticed that associated with that thought, was a mental image of what a happy person was like - that image was of a young, cheerful, outgoing and extraverted girl.

In other words, my prototypical concept of a happy person included not just happiness, but extraversion and high energy as well. And so my mind was comparing my self-concepts with this concept of happiness, noticing that I wasn’t that kind of a person, and so concluding that I couldn’t be happy. Realizing that my concept of a “happy person” was uselessly narrow allowed me to fix the problem.

But if we break down what happened with the dysfunctional “happiness concept” into slightly smaller steps, something like this seems to have happened:

1) me feeling unhappy -> 2) mental image of a happy person -> 3) thought that I can’t be happy

Notice that this has a similarity with the way my mind automatically produced a visualization for the woodblock sound:

1) sensation of the woodblock sound -> 2) mental image of two woodblocks hitting each other -> 3) thought of “oh, it’s two woodblocks hitting each other”

In both cases, some stimulus seemed to have produced a subtle mental image as a preliminary interpretation of what the stimulus meant, which then translated into a higher-level abstract concept. In both cases, something was off about the middle step. In the case of the happiness example, I had a too narrow view of what happy people are like. With the sound, the problem was that my mind was making up various interpretations of what was making the sound, despite having too little data to actually determine what it was.

Having developed the ability to notice those earlier steps in my mental processes allowed me to notice a potential problem, as opposed to only being aware of the final output of the process.
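If it helps to see the shape of this three-step pipeline spelled out, here is a minimal toy sketch in code. Everything in it - the prototype, the trait names, the functions - is hypothetical and made up purely for illustration; it only shows how a flawed middle step (the interpretation) can silently determine the final thought, and is not a model of actual cognition.

    # Toy sketch only: the stimulus -> mental image -> thought pipeline,
    # with a deliberately too-narrow "happy person" prototype baked into
    # the middle step. None of this is a claim about real cognition.

    HAPPY_PROTOTYPE = {"cheerful", "outgoing", "extraverted", "high-energy"}

    def interpret(self_concept):
        """Step 2: the normally invisible middle step, which compares the
        stimulus against a stored prototype."""
        return {"fits_happy_prototype": HAPPY_PROTOTYPE <= self_concept}

    def conclude(mental_image):
        """Step 3: the high-level thought, which is usually all we notice."""
        if mental_image["fits_happy_prototype"]:
            return "I could be happy."
        return "I can't be happy."

    # Step 1: the triggering stimulus - a self-concept that lacks the
    # irrelevant traits the prototype happens to demand.
    self_concept = {"introverted", "calm", "thoughtful"}

    mental_image = interpret(self_concept)   # the hidden middle step
    print(conclude(mental_image))            # -> "I can't be happy."
    print(HAPPY_PROTOTYPE)                   # "Looking" at the middle step directly

In this framing, noticing and fixing the problem corresponds to inspecting HAPPY_PROTOTYPE itself, rather than only ever reading the output of conclude().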

I believe that this kind of thing is what Valentine means when he talks about Looking: being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.

This seems like a core rationality skill, since seeing slightly earlier stages of your cognitive process helps question its validity, which is to say it makes it easier for you to engage in cognitive defusion when desired. (If the process seems valid, you can still choose to fuse with it if that provides a benefit.) And being able to apply selective cognitive defusion means being able to not believe everything that you think, which is an essential requirement for things like actually changing your mind.

Understanding suffering

Understanding suffering is a special case of Looking, but a sufficiently important one that it deserves to be briefly discussed in some detail.

Usually, most of us are - on some implicit level - operating off a belief that we need to experience pleasant feelings and need to avoid experiencing unpleasant feelings. In a sense, thinking about getting into an unpleasant or painful situation may feel almost like death: if we think that the experience would be unpleasant enough, then no matter how brief it might be, we might do almost anything to avoid ending up there.

There’s a sense in which this is absurd. After all, a moment of discomfort is just that - a moment of discomfort. By itself, it won’t do us any lasting damage, and trying to avoid it can produce worse results even on its own terms.

For instance, consider the person who keeps putting off making a doctor’s appointment because they suspect that there’s something wrong with them. If there really is something seriously wrong, then the best thing would be to get a diagnosis as fast as possible. And even if it is something harmless, it would still be better to find out about that earlier rather than later, so as to stop feeling nervous about it. Not going to the doctor, and continuing to feel nervous about it, is about the worst possible outcome - even if avoiding discomfort was all you cared about.

On a conscious level, we realize that this kind of behavior is absurd. Then we go on doing it.

You might say that it’s because there’s a part of us that remains cognitively fused with the alief that all painful experiences need to be avoided, and that there’s something vaguely death-like about them.

Typically, if we are only talking about relatively mild discomfort, then that alief doesn’t manifest itself very strongly. We are okay with the thought of facing mild discomfort. But just as it’s easy to remain calm and defused from feelings of anger as long as there isn’t anything strongly upsetting going on, on some level we will tend to experience cognitive fusion with the “pain is death” alief more and more strongly the worse we expect the pain to be.

The general way by which incorrect aliefs are changed is by giving the part of your brain holding them experiences of what the world is really like. If you have a dog phobia, you might do desensitization therapy, gradually exposing yourself to dogs in controlled circumstances. Eventually, seeing that you have encountered dogs many times and that it’s safe, your brain updates and ceases to have the phobia.

Similarly, if you Look at the process of yourself flinching away from thoughts of painful experiences, you will come to directly experience the fact that it’s the flinching away from them that actually produces suffering, and that the thoughts would be harmless by themselves.

The dog doesn’t hurt you: it’s your own fear that hurts you. Similarly, pain isn’t bad by itself, but turns into suffering when we come to believe that we need to avoid it. Seeing this, the parts of your mind that have been doing the flinching away will gradually start updating towards not habitually flinching away.

When I say that it is the automatic flinching away that actually produces suffering, I don’t mean that just in the sense of “putting off painful experiences causes us to experience more pain in the long run”. I mean that the processes involved with the flinching away are literally what turns pain into suffering: if you can get the flinching away to stop, pain (whether physical or emotional) will still be present as an attention signal that flags important things into your awareness. But neither the experience of pain, nor the thought of experiencing pain in the future, will be experienced as aversive anymore. The alief / belief of “pain is death” will not be active.

Now, Looking at your process-of-flinching-away in order to stop flinching away is a long and slow process. We can again compare it with getting desensitized to a phobia: even after you have learned to be okay with a mild phobia trigger (say, a toy dog in the same room with you), you will continue to be freaked out by worse versions of the trigger (such as a real dog). It’s very possible to have setbacks if a dog attacks you or if your life just generally gets more stressful, and sometimes you might show up at a session and get freaked out by things you thought you were already desensitized to. Learning to Look at suffering in order to reduce it is similar.

So what’s all this “look up” and “get out of the car” stuff?

Here’s an analogy.

Suppose that one day, you happen to run into a complete stranger. You don’t think very much about needing to impress them, and as a result, you come off as relaxed and charming.

The next day, you’re going on a date with someone you’re really strongly attracted to. You feel that it’s really really important for you to make a good impression, and because you keep obsessing about this thought, you can’t relax, act normal, and actually make a good impression.

Suppose that you remember all that stuff about cognitive fusion. You might (correctly) think that if you managed to defuse from the thought of this being an important encounter, then all of this would be less stressful and you might actually make a good impression.

But this brings up a particular difficulty: it can be relatively easy to defuse from a thought that you on some level believe is, or at least may be, false. But it’s a lot harder to defuse from a thought which you believe on a deep level to actually be true, but which it’s just counterproductive to think about.

After all, if you really are strongly interested in this person, but might not have an opportunity to meet with them again if you make a bad impression... then it is important for you to make a good impression on them now. Defusing from the thought of this being important would mean that you believed less in this being important, meaning that you might do something that actually left a bad impression on them!

You can’t defuse from the content of a belief, if your motivation for wanting to defuse from it is the belief itself. In trying to reject the belief that making a good impression is important, and trying to do this with the motive of making a good impression, you just reinforce the belief that this is important. If you want to actually defuse from the belief, your motive for doing so has to come from somewhere else than the belief itself.

The general form of this thing is what makes big green bats complain that you’re still not getting out of the car. Or what makes people who are aware of their cell phones complain that you’re still not looking up. You are fused with some belief or conceptual system while trying to use that very same belief or conceptual system to defuse yourself from it, which keeps you trapped in it. Instead, you could just stop using it, and then you’d be free.

Of course, this is easier said than done. Even if you know that this is what you’re doing, knowing it isn’t enough to stop doing it. Essentially, you have to somehow distract yourself from the belief you’re caught up with… but if your belief is that this thing is really important, then before you could distract yourself from it, you’d need to distract yourself from it, so as to stop worrying about the potential consequences of having distracted yourself from it.

Yeah.

All of this particularly applies for trying to overcome suffering. Because remember, suffering is caused by a belief that pain is intrinsically bad. That belief is what causes you to try to flinch away from pain in a way which, by itself, creates the suffering.

So if you are experiencing some really powerful emotion that’s causing you a lot of suffering, making you want to defuse from it so that you could stop feeling those bad things?

Well, then you are trying to be okay with feeling bad things, so that you could stop feeling bad things. Again, your motive for wanting to defuse from a belief is digging you deeper into the belief.

On the surface, this would seem to suggest that you can only use Looking to stop suffering in cases of relatively mild pain, where you don’t really even care all that much about whether you’re in pain or not. Looking would only help you feel better in the cases when you’d need it the least anyway.

And to be honest, a lot of the time it does feel that way.

Fortunately, there is a solution.

The three marks

I previously mentioned that there’s something absurd about the belief that pain would need to be avoided: after all, if something really painful happens, then that won’t kill us: usually it only means that, well, something really painful has happened. We might be left traumatized, but that trauma is by itself also just more pain.

It’s as if a deep part of our minds is deluded about just how world-ending the pain is in the first place.

Buddhist theory states that that delusion arises from deep parts of our minds being wrong about some fundamental aspects of existence, traditionally called the three marks: impermanence, unsatisfactoriness, and no-self. If we can make ourselves curious about the true nature of existence, and Look deeply enough into just how our mind works, we can eventually witness things about how our mind works which contradict those delusions.

Do that often and deep enough, and the delusions shatter.

This allows us to actually overcome suffering, because in order to explore the nature of the self, we do not need to always be motivated by a desire to make the suffering stop. Rather, we can be motivated by things like curiosity or a desire to help other people, and explore the workings of our mind during times when we are not in terrible pain.

There will be a time when this happens on a sufficiently deep level that a person becomes convinced of full enlightenment being possible. Typically, the first time will be enough to let them get a taste of what it’s like to live without delusions; but their insights are not yet deep enough to cause a permanent change, and the delusions will soon regenerate themselves.

Still, the delusions will not regenerate entirely: something will have shifted permanently, in a way that makes it easier to make further progress on dissolving them.

While it is impossible to use words to convey the experience of getting insight into the three marks of existence, it is possible to offer a third-person perspective on what exactly it is that our minds are mistaken about. Of the three marks, no-self may be the easiest to explain in these terms.

In the book The Mind Illuminated, the Buddhist model of psychology is described as one where our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness. There's one particular subagent, the 'narrating mind', which takes these percepts and binds them together by generating a story of there existing one single agent, an I, to which everything happens. The fundamental delusion is when this fictional construct of an I is mistaken for an actually-existing entity, which needs to be protected by acquiring percepts with a positive emotional tone and avoiding percepts with a negative one.
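As a rough illustration of the structure being described (and nothing more), here is a toy sketch with entirely hypothetical subagent names. The only point it makes is that the "I" shows up at the narration step, not in any of the individual percepts.

    # Toy sketch only: subagents emit percepts into consciousness, and a
    # narrating mind binds them into a story about a single "I". The
    # subagents and percepts are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Percept:
        source: str    # which subagent produced it
        content: str
        tone: str      # "positive", "negative", or "neutral"

    def body_monitor():
        return Percept("body_monitor", "tightness in the chest", "negative")

    def planner():
        return Percept("planner", "the deadline is tomorrow", "negative")

    def memory():
        return Percept("memory", "the last deadline went fine", "positive")

    def narrating_mind(percepts):
        """Bind independent percepts into a first-person narrative."""
        return " ".join(f"I notice {p.content}." for p in percepts)

    consciousness = [body_monitor(), planner(), memory()]
    print(narrating_mind(consciousness))
    # The "I" exists only in the narrator's output: no subagent above
    # contains it, yet the result reads as one agent's experience.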

When a person becomes capable of observing in sufficient detail the mental process by which this sense of an I is constructed, the delusion of its independent existence is broken. Afterwards, while the mind will continue to use the concept "I" as an organizing principle, it becomes correctly experienced as a theoretical fiction rather than something that could be harmed or helped by the experience of “bad” or “good” emotions. As a result, desire and aversion towards having specific states of mind (and thus suffering) cease. We cease to flinch away from pain, seeing that we do not need to avoid it in order to protect the “I”.

On why enlightenment may not be very visible in one’s behavior

In the comments of the kensho post, cousin_it mentioned having read several reports of people claiming enlightenment… yet not seeming to really demonstrate it by having better emotional skills. A paper also reported on various people having achieved some kinds of advanced meditative states… but still not being all that different when viewed from the outside:

There seemed to be a clear distinction between a PNSE participant’s personality and their underlying sense of having an individualized sense of self. When the latter is absent, the former seems to be able to continue to function relatively unabated. There are exceptions. For example, the change in well-being in participants who were depressed prior to the onset of PNSE was obviously spotted by those around them. Generally, however, the external changes were not significant enough to be detected, even by those closest to the participant.

Based on how I experienced things when I had the experience that made enlightenment seem within reach, something like a lack of noticeable change is in fact exactly what I would expect from many people who become enlightened.

Remember, enlightenment means that you no longer experience emotional pain as aversive. In other words, you continue to have “negative” emotions like fear, anger, jealousy, and so on - you just don’t mind having them.

This does end up changing some of your emotional landscape. My experience was that since feeling crappy felt like an okay thing to happen, the thought of having negative experiences in the future no longer stressed me out. This brought with it a sense of calm, since I knew that I was in some sense "invulnerable" to anything that might happen. But the state of calmness was more of a result of everything being okay - a consequence of there no longer being anything that would be a genuine threat - rather than a permanent emotional state.

That emotion of calm could still be momentarily replaced by other emotional states as normal, it was just that one particular source of negative feelings (the fear of future negative feelings) was eliminated. I would still feel sadness about the things I normally feel sad about, anger about the things I normally feel angry about, and so on. And because those emotions no longer felt aversive, I didn’t have a reason to invest in not feeling those things - unless I had some other reason than the intrinsic aversiveness of an emotion to do so.

My model here is that enlightenment doesn’t automatically make you a good person, or particularly emotionally balanced, or anything like that. If you were a jealous wreck before, but felt like it was totally justified and right for you to behave jealously… then seeing through the illusion of the self isn’t going to clear those cognitive structures from your head. It can help you defuse from them enough to see that your justifications are essentially arbitrary - but at the same time, you may also have defused from any cognitive structures that say that there’s something bad about having essentially arbitrary justifications.

To put it differently: one way of describing my experience was that it felt like an extreme moment of cognitive defusion, where I defused from my entire motivational system, and could just watch its operation from the outside.

But the thing is, if you truly step outside your entire motivational system, then the part that just stepped out is left with no motivational system of its own, while the existing one keeps operating as normal.

Suppose that you are thinking something like, “aha! stepping outside my whole motivational system means that I’m finally free to do thing X, which stupid internal conflicts have been blocking me from doing so far!”

But if you are thinking that, then you are still working inside a motivational system where it’s important to achieve X. (Still not stepping out of the car.) If you have truly defused from your motivational system, then you have no particular desire to change the things in your mind that influence whether you are going to achieve X or not.

Even if you manage to step outside the system, the system is still going to keep doing various things - like taking your body to the store to get food - that it has learned to do: being defused from a motivation doesn’t mean that the motivation would necessarily disappear or stop influencing your behavior. It just means that you can examine its validity as it goes on.

And if you see yourself going to the store to get some food, well, why not go along with that? After all, to stop acting as you always have would require some special motivation to do so. All of your motivations exist within the system. If you previously had a motivation to change something about your own behavior, but also had underlying psychological reasons why you hadn’t changed your behavior yet, then enlightenment may leave that balance of competing motivations basically unaltered. You may still have mental processes struggling against each other and you may experience internal conflict as normal: the only difference is that you won’t suffer from that internal conflict.

Does this contradict the people who say that meditation will make you actively happy?

No: it only means that Looking at the nature of suffering might not make you actively happy (in the sense of experiencing lots of positive emotions). Remember that there are many things that you can Look at: meditation is essentially focusing your attention on something, and what you focus on makes a major difference.

I think in terms of meditative practices that work within an existing system (of pleasure and pain), versus ones that try to move you outside the system entirely. Some traditions focus on working inside the system, and may involve things like conditioning your mind for constant pleasure. Some systems combine the two, involving both practices which increase the amount of pleasure you’ll experience and practices which help you be okay even with experiencing less pleasure. The Mind Illuminated takes this approach, for example.

And if enlightenment leaves your existing personality mostly intact, does it mean that Looking and meditation are useless for improving your rationality after all?

No. Again, it only means that Looking at the things which cause suffering will not change your behavior as much as you might expect. Again, there are many different things about the functioning of your mind that you can Look at. And getting to the point where you're enlightened requires training up a lot of mental precision, which you can then use to Look at various things.

Even if you do manage to defuse from everything that causes you suffering, your existing personality and motivational system will still be in charge of what it is that you Look at in the future. If all you cared about was ceasing to suffer, well, you’re done! You might not have the motivation to do any more Looking on top of that, since it already got you what you wanted. You’ll just go on living as normal, with your existing personality.

But if you cared about things like saving the world, then you will still continue to work on saving the world, and you will be Looking at things which will help you save the world - including ones that increase your rationality.

It’s just that if the world ends up ending, it won’t feel like the end of the world.

Of course, you will still feel intense grief and disappointment and everything that you’d expect to feel about the world ending.

Intense grief and disappointment just won’t be the end of the world.

[Edited to add: for my more detailed, later explanation of this topic, see the series of posts starting from A non-mystical explanation of insight meditation and the three characteristics of existence [LW · GW].]

138 comments


comment by dxu · 2018-03-08T10:18:05.933Z · LW(p) · GW(p)
While I liked Valentine’s recent post on kensho and its follow-ups a lot, one thing that I was annoyed by was the comments that the whole thing can’t be explained from a reductionist, third-person perspective. I agree that such an explanation can’t produce the necessary mental changes that the explanation is talking about. But it seemed wrong to me to claim that all of this would be somehow intrinsically mysterious and impossible to explain on such a level that would give people at least an intellectual understanding of what Looking and enlightenment and all that are.

Speaking as someone who's more or less avoided participating in the kensho discussion (and subsequent related discussions) until now, I think the quoted passage pretty much nails the biggest reservation I had with respect to the topic: the language used in those threads tended to switch back and forth between factual and metaphorical with very little indication as to which mode was being used at any particular moment, to the point where I really wanted to just say, "Okay, I sort of see what you're gesturing at and I'd love to discuss this with you in good faith, but before we get started on that, can we quickly step out of mythic mode/metaphor land/narrative thinking for a moment, just to make sure that we are all still on the same page as far as basic ontology goes, and agree that, for instance, physics and mathematics and logic are still true?"

But when other people in those threads (such as, for example, Said Achmiz) asked essentially the same question, it seemed to me (as in System-1!seemed) that Val and others would simply respond with "It doesn't matter what basic ontology you're using unless that ontology actually helps you Look." Which, okay, fine, but I don't really want to start trying to Look until I can confirm the absence of some fairly huge epistemic issues that typically plague this region of thought-space.

All of which is to say, I'm glad this post was made. ;-)

(although there is a part of me that can't help but wonder why this post or something like it wasn't the opener for this topic, as opposed to something that was only typed up after a couple of huge demon threads spawned)

Replies from: PeterBorah, Qiaochu_Yuan
comment by PeterBorah · 2018-03-08T17:40:44.341Z · LW(p) · GW(p)

I am really happy that this post was written, and mildly annoyed by the same things you're annoyed by.

To explain rather than excuse, there's a good reason that meditation teachers historically avoid giving clear answers like this. That's because their goal is not to help you intellectually understand meditation, but rather to help you do meditation.

It's very easy to mentally slip from "I intellectually understand what sort of thing this is" to "I understand the thing itself", and so meditation teachers hit this problem with a hammer by just refusing to explain it, so you're forced to try it instead. This problem is what the "get out of the car" section is talking about.

I have some worry that this post will make it easier for people to make errors like:

"I'm angry, because X is a jerk. Aha, I should try the thing Kaj was talking about, and notice that feeling angry is not helping me with my goal of utterly destroying X."

(This is exaggerated, but mistakes of this shape are really, really easy to make.)

I think it's definitely worth the cost, but it is a cost.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-08T20:32:34.550Z · LW(p) · GW(p)

In particular, I would also add this warning: it's (mildly) dangerous to try to convince yourself of this no-self stuff too deeply on a purely intellectual level.

There was one point where I had read intellectual descriptions of the no-self thing, but hadn't had the experience of it. But I figured that maybe if I really thought it through and used a lot of compelling arguments, I could convince myself of it - after all, the intellectual argument seemed reasonable, but I clearly wasn't believing it on an emotional level, so maybe if I tried really hard to make the intellectual argument sink in?

This does not work. (At least, it didn't work for me, and I doubt it works for the average person.) The "no-self" thing was still getting interpreted in terms of my existing ontology, rather than the ontology updating. What I ended up with was some kind of a notion, temporarily and imperfectly believed on an emotional level, that every second of existence involved me dying and a new entity being created, and that every consciousness-moment would be my last.

That was not a healthy state of mind to be in; fortunately, my normal thinking patterns pretty quickly overrode it and I went back to normal. That is also not what the "kensho" experience that I described felt like. That experience felt calming and liberating, with none of the kind of discomfort that you'd get if you tried to force the assumption of no self existing into an ontology which always presupposed the existence of a self.

Replies from: Valentine
comment by Valentine · 2018-03-14T01:29:48.919Z · LW(p) · GW(p)
The "no-self" thing was still getting interpreted in terms of my existing ontology, rather than the ontology updating.

This.

I'll finish reading the other comments and then, time permitting, I'll add my own.

I'll just note for now that there's a kind of "being clear" that I think is dangerous for rationality, in a way analogous to what you describe here about no-self. The sketch is something like: if an epistemology is built on top of an ontology, then that epistemology is going to have a hard time with a wide swath of ontological updates. Getting around this seems to require Looking at one's ontologies and somehow integrating Looking into one's epistemology. Being required to explain that in terms of a very specific ontology seems to give an illusion of understanding that often becomes sticky.

comment by Qiaochu_Yuan · 2018-03-08T20:41:58.701Z · LW(p) · GW(p)
but before we get started on that, can we quickly step out of mythic mode/metaphor land/narrative thinking for a moment, just to make sure that we are all still on the same page as far as basic ontology goes, and agree that, for instance, physics and mathematics and logic are still true?"

Now it seems to me like there was some straight up miscommunication in that thread. My recollection is that everywhere I saw this question explicitly asked, it was explicitly answered "yes" (e.g. JenniferRM asked it at some point). I don't remember Said asking this question.

comment by DanielFilan · 2019-12-12T22:28:44.641Z · LW(p) · GW(p)

As far as I can tell, this post successfully communicates a cluster of claims relating to "Looking, insight meditation, and enlightenment". It's written in a quite readable style that uses a minimum of metaphorical language or Buddhist jargon. That being said, likely due to its focus as exposition and not persuasion, it contains and relies on several claims that are not supported in the text, such as:

  • Many forms of meditation successfully train cognitive defusion.
  • Meditation trains the ability to have true insights into the mental causes of mental processes.
  • "Usually, most of us are - on some implicit level - operating off a belief that we need to experience pleasant feelings and need to avoid experiencing unpleasant feelings."
  • Flinching away from thoughts of painful experiences is what causes suffering, not the thoughts of painful experiences themselves, nor the actual painful experiences.
  • Impermanence, unsatisfactoriness, and no-self are fundamental aspects of existence that "deep parts of our minds" are wrong about.

I think that all of these are worth doubting without further evidence, and I think that some of them are in fact wrong.

If this post were coupled with others that substantiated the models that it explains, I think that that would be worthy of inclusion in a 'Best of LW 2018' collection. However, my tentative guess is that Buddhist psychology is not an important enough set of claims that a clear explanation of it deserves to be signal-boosted in such a collection. That being said, I could see myself being wrong about that.

Replies from: Raemon, Chris_Leong
comment by Raemon · 2019-12-12T22:47:41.634Z · LW(p) · GW(p)

 I think that all of these are worth doubting without further evidence, and I think that some of them are in fact wrong.

I'd be interested in you going into the details of which claims seem wrong and why.

Replies from: DanielFilan
comment by DanielFilan · 2019-12-12T23:06:54.572Z · LW(p) · GW(p)

Well, I'm significantly more confident that at least one is wrong than about any particular one being wrong. That being said:

  • It seems wrong to claim that meditation tells people the causes of mental processes. You can often learn causal models from observations, but it's tricky, and my guess is that people don't do it automatically.
  • I don't think that most people implicitly act like they need to avoid mental experiences.
  • I don't know if 'suffering' is the right word for what painful experiences cause, but it sure seems like they are bad and worth avoiding.
  • My guess is that unsatisfactoriness is not a fundamental aspect of existence.

That being said, there's enough wiggle room in these claims that the intended meanings would be things that I'd agree with, and I also think that there's a significant shot that I'm wrong about all of the above.

comment by Chris_Leong · 2024-07-27T15:54:31.059Z · LW(p) · GW(p)

Reducing the extent to which people with various world views talk past each other seems core to rationality.

comment by Qiaochu_Yuan · 2018-03-08T08:47:09.135Z · LW(p) · GW(p)

I like this. The terminology I was exposed to for what you're calling cognitive fusion is being "subject to" something (I think it comes from Kegan but I learned it from Pete Michaud), and defusion is taking the thing "as object." (Actually these might not quite line up; if someone who's familiar with both terms can explain any possible differences to me I'd appreciate it.) And the practice I've been using for getting experience with this is circling.

Example: I spent most of the last year being subject to aliefs along the lines of "if X happens then that means I'm bad, which means that nobody will ever love me," which constantly surfaced in most of the circles I was in. The point of working with this alief in a circle was 1) to get exposed to situations that might trigger the alief, 2) to notice when other people started being confused about what I was saying because it no longer made sense to them, and 3) possibly to get actual experiences that contradict the alief (people still loving me even though X had happened). I was gradually able to take this alief as object but it took awhile; it had a very, very strong grip on me. Then it disappeared entirely after I did some other stuff (Tantra workshops, talking to a friend who tweaked my self-concept, cutting carbs out of my diet, drinking pedialyte, CFAR workshop), but I think the circling was foundational for the other stuff having the effect that it did.

This alief was preventing me from doing a lot of things out of fear of being judged (as bad), and I feel much more free to do whatever I want now that it's gone, which so far has included initiating the process of leaving graduate school to work for MIRI, organizing an event at the CFAR office to explore teaching people how to construct gears models, writing about vulnerable topics like emotional pain and trauma on Facebook, visiting gardens around Berkeley, hanging out with birds, not playing video games or watching anime or reading manga, weight lifting, looking people in the eyes more, asking for and offering more hugs, singing and dancing while walking down the street, writing this list even though it feels like bragging...

In other comments when I've talked about thorny emotional bugs, this is the sort of thing I was talking about. My experience is that most people come to CFAR workshops with at least one bug like this (edit: which they don't know they have! Blindspots!), which is seriously holding them back; I don't think it's uncommon at all.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-08T11:56:24.873Z · LW(p) · GW(p)

Reading this comment made me feel really happy for you.

From what you've been writing here and on Facebook, I feel like I can relate to a lot of the stuff you've been going through and fixing. I'm glad that you're getting through it.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-08T20:53:57.936Z · LW(p) · GW(p)

Thanks. I've really appreciated your writing about what you've gone through as well, especially the core transformation stuff and the self-concept stuff above.

comment by Valentine · 2018-03-14T22:42:01.108Z · LW(p) · GW(p)

I like this. I largely agree.

I'd like to pinpoint a few differences I notice. I hope the collective here takes this as me coming from a spirit of "Here's the delta I see" rather than "I disagree and here's why." By and large I really like the clarity Kaj has brought to this.

First, a meta thing:

While I liked Valentine’s recent post on kensho and its follow-ups a lot, one thing that I was annoyed by was the comments that the whole thing can’t be explained from a reductionist, third-person perspective.

I didn't mean to convey that it can't be explained this way. I now think I was combining a few different things in a way that accidentally made it hard to understand:

  • One key thing I now see is that Looking doesn't require self-reference — but most of the interesting applications of Looking that I'm aware of do require navigating self-reference. An example of this is the "get out of the car" problem. (I'll have more to say about Kaj's interpretation of that in a bit.)
  • The main thrust of what I'm poking at is a collection of results of Looking at ontology (whereas here Kaj focuses mostly on Looking at suffering). If we drag in an ontology to say "Okay, here's what Looking is, and now we've nailed it down", and then you use that definition of Looking instead of the phenomenological skill, then it's going to be enormously hard to Look at the ontology used to define Looking. And in this particular case, people seem to be prone to not noticing when they've made this error. (Again with the car/phone analogies.) I'm particularly concerned here because the culture around LW-style rationality seems to emphasize a very specific and almost mathematically precise ontology in a way that is often super useful but that I don't think is a necessary consequence of the epistemic orientation. That made me really hesitant to put a bunch of effort into spelling out what Looking might be within that favored ontology, since the whole point is to notice restrictions on epistemic strength imposed by ontological rigidity. I was (and am) concerned about early attempts to explain this stuff locking out a collective ability to understand.
  • With that said, my personal impression had been that it's actually quite easy to see what Looking is and how one might translate it into reductionist third-person perspectives. But my personal experience had been that whenever I tried to share that translation, I'd bounce off of weird walls of misunderstanding. After a while I noticed that the nature of the bounce had a structure to it, and that that structure has self-reference. (Once again, analogies along the lines of "get out of the car" and "look up from your phone" come to mind.) After watching this over several months and running some informal tests, and comparing it to things CFAR has been doing (successfully, strugglingly, or failing to do) over the six years I've been there, it became obvious to me that there are some mental structures people run by default that actively block the process of Looking. And for many people, those structures have a pretty strong hold on what they say and consciously think. I've learned to expect that explaining Looking to those structures simply will never work. (There are other structures in human mind design, though. And I claim there's a pretty reliable back door to such self-referential architectures. But explaining that explicitly seems to run into the same communication problem… which is why I'm writing this meta-ontology sequence.)

So… I'm happy to go with the main thrust of what I receive Kaj as expressing here. I just also want to add an asterisk saying something like "Beware, we've now entered a realm where the illusion of safety has become stronger, and I fear this will make what's coming that much more painful by comparison and thus harder to understand."

I believe that this kind of thing is what Valentine means when he talks about Looking: being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.

Yep.

…with a caveat that I'm pretty sure Kaj gets and I think even said but that I don't know if the casual reader will reliably catch:

On the inside, before you Look, the thing you're about to Look at doesn't look on the inside like "high-level cognitive content". It looks like how things are. This ends up with me saying things that sound kind of crazy or nonsensical, but to me are obvious once I Look at them. (E.g., there are no objects. We create objects in order to think. Because language is suffused with object-ness, though, I don't know of any coherent way of talking about this.)

Understanding suffering is a special case of Looking, but a sufficiently important one that it deserves to be briefly discussed in some detail.

I want to highlight this. I quite agree, suffering is a really important special case, and I'm delighted with what Kaj did with it. And also, it's a special case. Nearly all discussion of enlightenment-flavored stuff I've encountered has been about the alleviation of suffering or the cultivation of happiness, and I think these are great and important things to emphasize… and I think there's something else in this domain that's more central to the rationality project here. (Although I do think that the path of alleviating suffering via e.g. Looking at the nature of the self does result in a bunch of the right kind of epistemic updates. I just suspect it's insufficient.)

So what’s all this “look up” and “get out of the car” stuff? […] You can’t defuse from the content of a belief, if your motivation for wanting to defuse from it is the belief itself.

This comes across to me as a great explanation of a special case. Kaj might mean the general thing, but I'm not sure.

I'm going to claim there are two kinds of problems that this "get out of the car" thing is pointing at:

  • Structural self-reference in defusion. In other words, if a belief you're fused with is being used somehow in the effort to defuse from it, then the defusion is likely to fail. Kaj gives one type of example of this. Another one is if the belief provides the framework by which you're orienting to the possibility of Looking at the belief in the first place. I have a post planned about two or three out that will go into more detail about this, but to (maybe dangerously?) gesture at the thing: Starting from reductionist materialism to frame the question of what our brains are doing when we Look at reductionist materialism yields a strange loop that often causes the process to glitch (and can reinforce an impression of real-ness that's strange the way believing in objectively existing objects is strange).
  • Orthogonality. If I try to argue with a paperclip maximizer about how maximizing paperclips isn't all there is to life, it will care to listen only to the extent that listening will help it maximize paperclips. I claim that by default, human mind design generates something analogous to a bunch of paperclip maximizers. If I'm stuck talking to one of someone's paperclip maximizers, then even if I see that there are other parts of their mind that would like to engage with what I'm saying, I'm stuck talking to a chunk of their mind that will never understand what I'm saying. (I'll have more to say about this in my next post (or the one after it if I need to split them again).)

The second case is pretty straightforward to bypass by Looking. The first is much trickier, but I think might be doable if you track a kind of phenomenological feedback loop that the self-reference generates and use that as a warning sign. (Unfortunately, I think there are structural reasons why the warning sign can't say anything much more specific than "Do something different." In short, the part of the mind that's trying to work out what to do is almost always using the belief in question, so no amount of instruction is going to help it get a meaningfully useful answer.)

All of this particularly applies for trying to overcome suffering. Because remember, suffering is caused by a belief that pain is intrinsically bad. That belief is what causes you to try to flinch away from pain in a way which, by itself, creates the suffering.

I like this. I hadn't quite thought of it this explicitly.

It also suggests some hope in approaching the domain from the angle of a desire for good epistemics instead, which is roughly where I've been coming from. I haven't yet noticed any self-referential glitch, instead finding things like the Litany of Tarski.

…but knowing the rhythm of this domain, I suspect this is just a description of my current ignorance.

But if you cared about things like saving the world, then you will still continue to work on saving the world, and you will be Looking at things which will help you save the world - including ones that increase your rationality.

This.

I've come to learn that the communities that talk most about enlightenment-related things are very particular about the word "enlightenment" and seemed to bristle at how I used it. So I'll add an adjustment to language (but not to what I have been meaning to convey) and clarify that I don't mean to imply that I am fully enlightened. I still suffer, I still usually operate under the delusion that I have a self (though I've seen through that one twice so far), and I haven't Looked carefully at impermanence or unsatisfactoriness as yet.

And with that said, I strongly resonate with this sentiment.

I'm writing what I'm writing, and I continue to teach at CFAR, and do all the things I'm doing, because I care to do some world-saving things.

And I see some things, via Looking, that I think are very important to share.

(It just takes a while to build a scaffold that might work for sharing it. And much appreciation for people like Kaj who build better scaffolds than I've managed so far!)

Replies from: Wei_Dai, David_Chapman, Qiaochu_Yuan, vanessa-kosoy
comment by Wei Dai (Wei_Dai) · 2018-03-15T06:54:48.256Z · LW(p) · GW(p)

On the inside, before you Look, the thing you’re about to Look at doesn’t look on the inside like “high-level cognitive content”. It looks like how things are. This ends up with me saying things that sound kind of crazy or nonsensical, but to me are obvious once I Look at them. (E.g., there are no objects. We create objects in order to think. Because language is suffused with object-ness, though, I don’t know of any coherent way of talking about this.)

This sounds very familiar. To quote from How An Algorithm Feels From Inside:

Because we don't instinctively see our intuitions as "intuitions", we just see them as the world. When you look at a green cup, you don't think of yourself as seeing a picture reconstructed in your visual cortex—although that is what you are seeing—you just see a green cup. You think, "Why, look, this cup is green," not, "The picture in my visual cortex of this cup is green."

And in the same way, when people argue over whether the falling tree makes a sound, or whether Pluto is a planet, they don't see themselves as arguing over whether a categorization should be active in their neural networks. It seems like either the tree makes a sound, or not.

In the same post, Eliezer also wrote:

It takes a deliberate effort to visualize your brain from the outside—and then you still don't see your actual brain; you imagine what you think is there, hopefully based on science, but regardless, you don't have any direct access to neural network structures from introspection.

It sounds like Looking is a skill that lets someone have more introspective access to their own neural network structures. If this is a correct understanding, it seems perfectly compatible with LW's current approach to ontology, or at least the approach laid out in Eliezer's Sequences (with one caveat being that I think we should be careful/skeptical about whether someone purporting to be Looking is really introspecting parts of their neural network structures, or merely doing some form of epistemic wireheading). Do you agree?

Replies from: Valentine
comment by Valentine · 2018-03-15T17:36:46.166Z · LW(p) · GW(p)
It sounds like Looking is a skill that lets someone have more introspective access to their own neural network structures. If this is a correct understanding, it seems perfectly compatible with LW's current approach to ontology, or at least the approach laid out in Eliezer's Sequences (with one caveat being that I think we should be careful/skeptical about whether someone purporting to be Looking is really introspecting parts of their neural network structures, or merely doing some form of epistemic wireheading). Do you agree?

Hmm. I need to answer this in two pieces simultaneously:

  • The short and slightly deceptive answer is "Yes I agree." A more careful answer: From within LW's current approach to ontology, the restriction of Looking to that ontology works perfectly well, although there are some things (like what Eric S. Raymond refers to in Dancing With the Gods) that will at best make sense while remaining largely inaccessible.
  • Your very first sentence here presupposes the standard LW ontology: "It sounds like Looking is a skill that lets someone have more introspective access to their own neural network structures." The structure of your question then goes on to ask about Looking's compatibility with that ontology from within that ontology. The answer has to be "yes", because the question makes sense within the ontology. This generates a "Get out of the car" problem. This isn't a huge problem right here and now, but it will be a problem down the road when I start more explicitly pointing at some results of Looking at ontologies.
Replies from: Wei_Dai, gjm
comment by Wei Dai (Wei_Dai) · 2018-03-15T19:03:00.404Z · LW(p) · GW(p)

Hmm... So going back to the paragraph I was responding to:

On the inside, before you Look, the thing you’re about to Look at doesn’t look on the inside like “high-level cognitive content”. It looks like how things are. This ends up with me saying things that sound kind of crazy or nonsensical, but to me are obvious once I Look at them. (E.g., there are no objects. We create objects in order to think. Because language is suffused with object-ness, though, I don’t know of any coherent way of talking about this.)

Are you saying that LW's approach to ontology has a different problem from this (which causes it to not be able to create an ontology that captures everything that's important about Looking)? (In other words, this paragraph wasn't meant to apply to LW; LW has a different problem.) Or is it something more like, LW's approach appreciates "we create objects in order to think" on an intellectual level but not on a practical level?

Replies from: Valentine
comment by Valentine · 2018-03-15T22:00:24.033Z · LW(p) · GW(p)
Or is it something more like, LW's approach appreciates "we create objects in order to think" on an intellectual level but not on a practical level?

That one.

Though to be clear, I'm not trying to talk specifically about the "there are no objects" thing exactly. I was using that as an example of something seen via Looking that I imagine sounds kind of crazy or nonsensical.

But I do mean that LW culture occurs to me as being subject to its ontology, and to the extent that there's discussion of this, that discussion is pretty reliably done within that ontology. This gives the illusion of it being justified (when that's actually just a consistency check) and makes the ontology's blindspots incredibly difficult to point out.

comment by gjm · 2018-03-15T19:21:48.349Z · LW(p) · GW(p)

I don't think that sentence exactly presupposes the standard LW ontology. Rather, Wei_Dai is saying: "It currently looks to me as if this Looking stuff is compatible with standard LW ontology, and here's what it looks like; if that's wrong, please explain how".

I have largely lost hope, though, that any of the Enlightened[1] will seriously attempt to explain how, rather than just continuing to tell us Unenlightened[2] folks that our ontology, or paperclip-maximizer-like brain subagents, or whatever, block us from understanding. Of course they may well be right; perhaps explaining what we're wrong about really is futile because we just need to Get Out Of The Car but nothing they tell us will help us know what a "car" is or in what way we're "in" one or how to "get out", until enlightenment strikes and we see -- sorry, See -- for ourselves. That doesn't stop it being frustrating, though. Still, I continue to harbour some hope that Valentine's future articles may be, um, enlightening.

[1] I don't mean to imply (1) that the people in question have achieved True Enlightenment, whatever that may be, or (2) that they think they have, or (3) that their alleged enlightenment is real. Though in any given case, any subset of those things might be the case.

[2] I don't mean to imply (1) that the people in question are genuinely lacking some valuable insight or ability, or (2) that any particular other person thinks they are, or (3) that any particular other person thinks them inferior if they are. Though in any given case, any subset of those things might be the case.

(One specific possibility relevant to those footnotes is worth being explicit about: it could be that the Enlightened have genuine insights that they have gained through their Enlightenment -- but that some of the Unenlightened have some of the same insights too, but it's difficult to recognize that one insight is the same as the other. E.g., one thing Enlightened people sometimes report is a discovery that in some sense the "self" is unreal; some philosophers, neuroscientists, etc., have reached somewhat similar conclusions by very different routes; perhaps these are all correct discoveries of a single underlying truth, expressed in different terms.)

Replies from: Kaj_Sotala, Valentine, Valentine
comment by Kaj_Sotala · 2018-03-15T19:40:48.843Z · LW(p) · GW(p)

I'm confused. You seem to be expressing frustration at not getting a clear explanation of how Looking is incompatible with standard LW ontology, but Val just said that it is compatible with it?

Of course they may well be right; perhaps explaining what we're wrong about really is futile because we just need to Get Out Of The Car but nothing they tell us will help us know what a "car" is or in what way we're "in" one or how to "get out",

... my post was an attempt to explain some of exactly that? A "car" is any belief or conceptual structure that you're fused with; how you get out of it depends on the exact nature of the belief. For things like simple emotional reactions, just managing to distract yourself may be enough; for deeper conceptual structures, developing skill at being able to break down your cognitive processes into more basic building blocks is typically required.

Replies from: gjm
comment by gjm · 2018-03-15T20:01:54.908Z · LW(p) · GW(p)

I don't think Valentine did quite say that (his notion of) Looking is compatible with standard LW ontology. He speaks of "the restriction of Looking to that ontology" and indicates that from within the standard LW ontology other things will "remain largely inaccessible". He says that what Wei_Dai is saying "presupposes the standard LW ontology" and that this produces a "Get out of the car" problem. (While, yes, conceding that within that ontology "yes, it's compatible" is the best available answer.)

I agree that your post is an attempt to explain those things. (And my slightly snarky comments about what "the Enlightened" are and aren't willing to do were -- I should have been explicit about this, sorry -- not meant to apply to you: your clarity and explicitness on this stuff is extremely welcome.) But my impression is that, while Valentine has expressed approval of your post and said that he feels understood and so forth, he thinks there are important aspects of Looking/enlightenment/kensho/... that it doesn't (and maybe can't) cover.

Obvious disclaimer: I am not Valentine, and I may very well be misunderstanding him.

Replies from: Valentine, Kaj_Sotala
comment by Valentine · 2018-03-15T23:05:06.378Z · LW(p) · GW(p)
But my impression is that, while Valentine has expressed approval of your post and said that he feels understood and so forth, he thinks there are important aspects of Looking/enlightenment/kensho/... that it doesn't (and maybe can't) cover.

Doesn't: yes, for sure.

Can't: mmm, maybe? I expect that by the end of the sequence I'm writing, we'll return to Kaj's interpretation of Looking and basically just use it as a given — but it'll mean something slightly different. Right now, I expect that if we just assume Kaj's interpretation, we're going to encounter a logjam when we apply Looking to the favored LW ontology, and the social web will have a kind of allergic reaction to the logjam that prevents collective understanding of where it came from. Once we collectively understand the structure of that whole process, we can smash face-first into the logjam, notice the confusion that results, and then make some meaningful progress on bringing our epistemic methods up to tackling serious meta-ontological challenges. At that point I think it'll be just fine to say "Yep, we can think of Looking as compatible with the standard LW ontology." Just not before.

Replies from: gjm
comment by gjm · 2018-03-16T02:41:04.579Z · LW(p) · GW(p)

Interesting. Let's see what the sequence holds...

comment by Kaj_Sotala · 2018-03-15T20:03:46.215Z · LW(p) · GW(p)

Got it. Apology accepted and appreciated. :)

comment by Valentine · 2018-03-15T22:59:16.894Z · LW(p) · GW(p)
I have largely lost hope, though, that any of the Enlightened[1] will seriously attempt to explain how, rather than just continuing to tell us Unenlightened[2] folks that our ontology, or paperclip-maximizer-like brain subagents, or whatever, block us from understanding.

I really am trying. When I talk about paperclip-maximizer-like subagents or ontological self-reference, it's not my intent to say "You can't understand because of XYZ." I'm trying to say something more like, "I'd like you to notice the structure of XYZ and how it interferes with understanding, so that you notice and understand XYZ's influence while we talk about the thing."

Right now there's too large of an inferential gap for me to answer the "how" question directly, and I can see specific ways in which my trying will just generate confusion, because of XYZs. But I really am trying to get there. It's just going to take me a little while.

One specific possibility relevant to those footnotes is worth being explicit about: it could be that the Enlightened have genuine insights that they have gained through their Enlightenment -- but that some of the Unenlightened have some of the same insights too, but it's difficult to recognize that one insight is the same as the other.

Strong agreement.

Replies from: Valentine
comment by Valentine · 2018-03-15T23:01:43.958Z · LW(p) · GW(p)

Meta: Okay, I'm super confused what just happened. The webpage refreshed before I submitted my reply and from what I could tell just erased it. Then I wrote this one, submitted it, and the one I had thought was erased appeared as though I'd posted it.

(And also, I can't erase either one…?)

comment by Valentine · 2018-03-15T22:25:49.848Z · LW(p) · GW(p)
I have largely lost hope, though, that any of the Enlightened[1] will seriously attempt to explain how, rather than just continuing to tell us Unenlightened[2] folks that our ontology, or paperclip-maximizer-like brain subagents, or whatever, block us from understanding.

I really am sincerely trying. In this case there's a pretty epic inferential gap, and I'm working on bridging that gap… and it requires first talking about paperclip-maximizing-like mechanisms and illusions created by self-reference within ontologies that one is subject to. Then I can point at the Gödelian loophole, and we can watch our minds do somersaults, and we'll recognize the somersaults and can step back and talk coherently about what the existence of the ontological wormhole might mean for epistemology.

Or at least that's the plan.

And… I recognize it's frustrating in the middle. And if I were more clever and/or more knowledgeable, I might have seen a way to make it less frustrating. I'd rather not create that experience for y'all.

FWIW, I don't think the Unenlightened[2] can't understand where I'm going. I just need some conceptual structures, like the social web thing, to make where I'm going even possible to say — at least given my current skill with expressing this stuff.

Still, I continue to harbour some hope that Valentine's future articles may be, um, enlightening.

Ha! :-)

I hope so too.

comment by David_Chapman · 2018-03-16T00:38:16.448Z · LW(p) · GW(p)

Quote from Richard Feynman explaining why there are no objects here.

I've begun a STEM-compatible attempt to explain a "no objectively-given objects" ontology in "Boundaries, objects, and connections." That's supposed to be the introduction to a book chapter that is extensively drafted but not yet polished enough to publish.

Really glad you are working on this also!

comment by Qiaochu_Yuan · 2018-03-15T00:59:49.919Z · LW(p) · GW(p)
I'm particularly concerned here because the culture around LW-style rationality seems to emphasize a very specific and almost mathematically precise ontology in a way that is often super useful but that I don't think is a necessary consequence of the epistemic orientation. That made me really hesitant to put a bunch of effort into spelling out what Looking might be within that favored ontology, since the whole point is to notice restrictions on epistemic strength imposed by ontological rigidity.

Yeah, this thing. In another comment I used the phrase "LW epistemic game" to describe the pattern in the local social web around deciding who's epistemically trustworthy and what flavors of arguments are epistemically acceptable. I'm not sure how to get people to Look at the game without looking like I'm defecting in the game.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-18T10:26:14.633Z · LW(p) · GW(p)

Alright, but it is actually true that some flavors of arguments are acceptable (i.e. serve as evidence for truth) whereas other flavors of arguments are not acceptable (i.e. don't serve as evidence for truth). A lot of rationalist wisdom revolves precisely around distinguishing one from the other. Your comment sounds like someone who is comparing science and religion and saying that both are just "patterns in a social web" that deem different sorts of arguments acceptable. However, they are not really symmetric. One of them is more correct than the other. So, saying that Looking cannot be explained by the sort of arguments that rationalists tend to accept does not strike me as a point in favor of Looking as a useful concept? I might be misunderstanding your intent.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-18T17:45:45.493Z · LW(p) · GW(p)
acceptable (i.e. serve as evidence for truth)

I hope even in the LW epistemic frame we can appreciate how dangerous it is to conflate "acceptable," which is fundamentally a social notion, and "serve[s] as evidence for truth." The point of being able to Look at the LW epistemic game, from within the point of view of the LW epistemic game, is precisely to see the ways in which playing it well is Goodharting on truth-seeking.

Your comment sounds like someone who is comparing science and religion and saying that both are just "patterns in a social web" that deem different sorts of arguments acceptable.

Yes, that's a sort of thing I might say.

However, they are not really symmetric. One of them is more correct than the other.

I get the feeling that you think you've said a simple thing, but actually the thing you've said is very complicated and deserves to be unpacked in much greater detail. In short: more correct about what? And why does being correct about those things matter?

The scientific frame is not just a collection of methods for finding truths, it's also a collection of implicit values about what sorts of truths are worth finding out. Science clearly beats religion at finding out truths about things like how to predict the weather or make computers. But it's much less clear that it beats religion at finding out truths about things like how to live a good life or make good communities of humans that actually work.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-18T20:17:07.547Z · LW(p) · GW(p)
I hope even in the LW epistemic frame we can appreciate how dangerous it is to conflate "acceptable," which is fundamentally a social notion, and "serve[s] as evidence for truth."

I am not conflating them; I am using the word "acceptable" to mean "the sort of argument that would move a rational agent" = "serves as evidence for truth". Of course our community standards are likely to fall far short of the hypothetical standards of ideal rational agents. However, it seems not useful to point that out without saying precisely which error is being made. When you start from something highly optimized (as I do believe our community standards to be, relatively speaking) and make a random change, you are far more likely to do harm than good.

I get the feeling that you think you've said a simple thing, but actually the thing you've said is very complicated and deserves to be unpacked in much greater detail.

I don't know why you feel that I think that I said a simple thing (we're breaking Yudkowsky's Law of Ultrafinite Recursion here). I do think I said something that is relatively uncontroversial in this community. But alright, let's start unpacking.

Science clearly beats religion at finding out truths about things like how to predict the weather or make computers. But it's much less clear that it beats religion at finding out truths about things like how to live a good life or make good communities of humans that actually work.

The opposite is also not clear. Religion is responsible for many, many atrocities, both on a grand scale (witch hunts, crusades, jihads, pogroms etc.) and on a moderate scale (e.g. persecution of sexual minorities, upholding patriarchal social orders and justifying authoritarian regimes). These atrocities were ameliorated in the modern age to a large extent thanks to the disillusionment with religion brought about by the advent of science. Moreover, religion doesn't really consciously set out to find truths about making good communities. It seems to be more a side effect of religion, since a group of people with common beliefs and traditions is naturally more cohesive than a group of people without such. I think that if we do set out to find these truths, we would be well advised to use the methods of science (e.g. empiricism and mathematical models) rather than the methods of religion (i.e. dogma).

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-19T05:23:46.196Z · LW(p) · GW(p)
"the sort of argument that would move a rational agent" = "serves as evidence for truth".

I think these are not at all the same, and that using the word "acceptable" makes you more likely to make this particular bucket error. In short: serves as evidence for truth to whom?

In practice what assuming these are the same looks like, in the LW epistemic game, is 1) pretending that we're all rational agents and 2) therefore we should only individually accept arguments that make sense to all of us. But in fact the sorts of arguments that would move me are not the sorts of arguments that would move you (and whether or not we're rational agents we still need to decide which arguments move us), which is why I can feel very confident that a decision I'm making is a good idea even though I may not be able to phrase my internal arguments for doing so in a way that would be satisfying to you. Optimizing for satisfying all of LW is straight up Goodharting on social approval.

However, it seems not useful to point it out without saying precisely which error is being made.

There's at least one error I can point to easily because someone else already did the hard work of pinning it down: I think the error that is being committed in the Hero Licensing dialogue is a default outcome of the LW epistemic game.

When you start from something highly optimized (as I do believe our community standards to be, relatively speaking) and make a random change, you are far more likely to do harm than good.

The changes I want to make are not random, and I don't believe that the LW epistemic game is highly optimized for the right thing.

The opposite is also not clear.

So, let's back up. The reason we started talking about this science vs. religion thing is because you objected to my description of the LW epistemic game. I think we got a little lost and meandered too far from this objection. The point I understood you to try to be making was that the LW epistemic game is not just a game, it's also supposed to help us be truth-seeking. And the point I was trying to make in response is that to the extent that it's not perfectly truth-seeking (which it is certainly not), this fact is worth pointing out. Are we on the same page about all that?

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-21T20:39:13.918Z · LW(p) · GW(p)

I agree that you can make arguments that appeal to people who have a particular intuition and don't appeal to people who don't have this intuition. Although it is also possible to point out explicitly that you are relying on this intuition and that convincing the rest would require digging deeper, so to speak. I'm not sure whether the essence of your claim is that people on LW take ill to that kind of argument?

I admit that I haven't read the entire "Hero Licensing" essay, but my impression was that it is hammering home the same thesis that already appears in "Inadequate Equilibria", namely that "epistemic modesty" as often practiced is a product of status games rather than a principle of rationality. But I don't really understand why you think it's "a default outcome of the LW epistemic game". Can you expand?

Yes, the "LW epistemic games" is not perfectly truth-seeking. Nothing that humans do is perfectly truth-seeking. Since (I think) virtually nobody thinks that it is perfectly truth seeking, it's mostly worth pointing out only inasmuch as you also explain how it is not truth-seeking and in what direction it would have to change in order to become more so.

comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-16T22:25:41.178Z · LW(p) · GW(p)
Starting from reductionist materialism to frame the question of what our brains are doing when we Look at reductionist materialism yields a strange loop that often causes the process to glitch

I don't understand why this should happen. Of course we can use reductionist materialism to reason about processes that happen in our brain when we are doing this very reasoning. It does cause confusion, which is the reason for debates surrounding "free will" et cetera, but this confusion is solvable. I think that I no longer have this confusion, so why should Looking be an exception?

If I try to argue with a paperclip maximizer about how maximizing paperclips isn't all there is to life, it will care to listen only to the extent that listening will help it maximize paperclips. I claim that by default, human mind design generates something analogous to a bunch of paperclip maximizers. If I'm stuck talking to one of someone's paperclip maximizers, then even if I see that there are other parts of their mind that would like to engage with what I'm saying, I'm stuck talking to a chunk of their mind that will never understand what I'm saying.

The reason the paperclip maximizer won't listen is that it doesn't care, not because it doesn't understand what you're saying. So, this allegory would only make sense if some parts of our mind don't care about the benefits of Looking while other parts do care. It still shouldn't be an impediment to understanding what Looking is.

Replies from: Valentine, Kaj_Sotala
comment by Valentine · 2018-03-17T02:35:45.757Z · LW(p) · GW(p)
Of course we can use reductionist materialism to reason about processes that happen in our brain when we are doing this very reasoning.

I'm not disagreeing with that. I'm saying that:

  • It's pretty normal to miss the confusion in this case.
  • Looking isn't reasoning.

The reason the paperclip maximizer won't listen is that it doesn't care, not because it doesn't understand what you're saying. So, this allegory would only make sense if some parts of our mind don't care about the benefits of Looking while other parts do care. It still shouldn't be an impediment to understanding what Looking is.

…unless it suspects that understanding what Looking is might make it less effective at maximizing paperclips.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-17T20:16:52.590Z · LW(p) · GW(p)

How can understanding something make you less effective at doing something? Are you modeling the mind as an adversarial system, where one subagent wants to prevent another from gaining some knowledge? Or, is Looking some kind of infohazard that can damage a mind just via the knowledge itself? In either case it makes Looking sound like something very dangerous.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-17T21:51:36.913Z · LW(p) · GW(p)
How can understanding something make you less effective at doing something? Are you modeling the mind as an adversarial system, where one subagent wants to prevent another from gaining some knowledge?

This is roughly the kind of thing that can happen. For example, suppose that it's an important feature of your identity / self-concept that you're a good, kind person, such that seeing strong evidence that you're not such a person would be psychologically devastating to you - you wouldn't be able to trust yourself to interact with other people, or something, and so you'd hole up in your room and just be depressed, or at least some part of you is afraid that something like this is possible. Then that part of you will be highly motivated to ignore evidence that you're not a good, kind person, and avoid situations or thoughts that might lead you to see such evidence.

My experience is that many or even most people have a thing like this (and don't know it). At CFAR we use the term "load-bearing bug" to refer to a bug you have that actively resists being solved, because some part of you is worried that solving it might be devastating in this way. For me the study of rationality doesn't really begin until you encounter your first such bug.

So yes, you're correct that Looking can be dangerous, in loosely the same way that telling your parents the truth about something might be dangerous if they're not ready to handle it and might respond by e.g. kicking you out of the house. But that's mostly a fact about your parents, and mostly not a fact about the nature of truth-telling.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-18T10:15:13.707Z · LW(p) · GW(p)

It's certainly true that there are truths about which people are lying to themselves. However, I'm confused about this being an explanation for why Looking is so difficult to explain. My impression from the "phone" allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don't want to acknowledge it because it would disrupt some self-deception.

Replies from: Valentine
comment by Valentine · 2018-03-18T17:18:10.245Z · LW(p) · GW(p)
My impression from the "phone" allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don't want to acknowledge it because it would disrupt some self-deception.

People don't need to already know it in order for this dynamic to play out. All that's required is that the person have some kind of idea of what type of impact it'll have on their mental architecture — and that "some kind of idea" needn't be accurate.

This gets badly exacerbated if the concept is hard to understand. See e.g. "consciousness collapses quantum uncertainty" type beliefs. This does a reasonably good job of immunizing a mind against more materialist orientations to quantum phenomena.

But to illustrate in a little more detail how this might make Looking more difficult to understand, here's a slightly fictionalized exchange I've had with many, many people:

  • Them: "Give me an example of Looking."
  • Me: "Okay. If you Look at your hand, you can separate the interpretation of 'hand' and 'blood flow' and all that, and just directly experience the this-ness of what's there…"
  • Them: "That sounds like woo."
  • Me: "I'm not sure what you mean by 'woo' here. I'm inviting you to pay attention to something that's already present in your experience."
  • Them: "Nope, I don't believe you. You're trying to sell me snake oil."

After a few months of exploring this, I gathered that the problem was that Looking didn't have a conceptual place to land in their framework that didn't set off "mystical woo" alarm bells. Suddenly I'm talking to their epistemic immunization maximizer, which has some sense that whatever "Looking" is might affect its epistemic methods and therefore is Bad™. Everything from that point forward in the conversation just plays out that subsystem's need to justify its predetermined rejection of attempts to understand what I'm saying.

Certainly not everyone does this particular one. I'm just offering one specific example of a type.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-18T19:54:22.521Z · LW(p) · GW(p)

Alright, I think I now understand much better what you mean, thank you. It is true that there are things that set off epistemic immune responses despite being "innocuous" (e.g. X-risk vs. "doomsday prophecies" and rationalist community vs. "cult"). However, it is also the case that these immune responses are there for a reason. If you want to promote an idea which sets off such responses then, IMHO, you need to make it clear as early as possible how your idea is different from the "pathogens" against which the response is intended. Specifically in the case of Looking, what rings my alarm bells is not so much the "this-ness" etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).

Replies from: Valentine
comment by Valentine · 2018-03-19T03:00:34.252Z · LW(p) · GW(p)
Alright, I think I now understand much better what you mean, thank you.

Great. :-)

[…]these immune responses are there for a reason.

Of course. As with all other systems.

Specifically in the case of Looking, what rings my alarm bells is not so much the "this-ness" etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).

The following has been said many times already, but I'll go ahead and reiterate it here once more: I was not trying to claim that Looking is beyond rational explanation.

comment by Kaj_Sotala · 2018-03-16T22:43:42.317Z · LW(p) · GW(p)
It still shouldn't be an impediment to understand what Looking is.

Except if the parts that listen to the explanation don't care about actually understanding it.

I don't know whether this is the thing that Val means, but I've certainly had times of finding things that were kind of like Looking, and getting all excited about them... but my mind was mostly interested in understanding them on an intellectual level and then showing off that "understanding", rather than actually doing the things that they were all about.

And I think that, on some level, there was a small voice at the back of my mind pointing out that I was missing the point... all while the paperclipper in charge ignored it and ran things the way it wanted to run them.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-17T20:03:18.624Z · LW(p) · GW(p)

Alright, but it seems completely reasonable to want to understand something on an intellectual level before actually doing the something. If you don't understand it on an intellectual level then how can you know whether it's worth doing?

Replies from: Kaj_Sotala, Qiaochu_Yuan, Hazard
comment by Kaj_Sotala · 2018-03-17T20:31:42.373Z · LW(p) · GW(p)

Sure, there's a sense in which you may want to get some intellectual understanding of what something is before you start doing it. But I wasn't just developing an intellectual understanding of the things in order to figure out whether they were worth doing: I was already convinced that they were worth doing. Rather, I was focusing on the intellectual understanding of the thing as a substitute for actually doing the thing.

Suppose I wanted to become a musician, so I spent all my time reading biographies of musicians, studying research on the psychological benefits of learning music, and following discussions on forums for musicians. But not spending any time actually practicing the act of playing music, nor doing things like learning to read musical notation.

Yes, there may be some benefit to be had with the stuff that I'm doing. Maybe it will be useful for helping me determine whether or not I really want to become a musician. But if I decide that I do want to become one, and then think that by spending all my time doing these things I'm making major progress towards being a musician, then I'm just deluding myself.

Edited to add: And also, I might note... if the part of me that wants me to stay deluded realizes that by learning this "Looking" skill I might figure out the delusion, then it has an incentive to apply the same trick to Looking, and keep me busy reading about Looking and thinking about it intellectually, so that I never actually do it, and keep happily thinking that I'm both learning to Look and learning to become a musician. I'm not sure whether this is a thing that happens - it feels a bit too strategic and forward-looking for our internal parts to anticipate this kind of thing in advance - but a part figuring this out after you've started to Look for a bit, and then starting to sabotage the practice once it sees where Looking is leading: that kind of thing I'm pretty sure happens. See: all the reports of people finding self-help techniques super useful and exciting... and then mysteriously just not using them anymore.

comment by Qiaochu_Yuan · 2018-03-17T21:55:05.478Z · LW(p) · GW(p)
If you don't understand it on an intellectual level then how can you know whether it's worth doing?

You can have intuitions and trust them. This is most of how I learned math: I had strong intuitions about what I wanted to learn at any given point, I didn't have an intellectual understanding of why I wanted to learn those things as opposed to other things, and I followed my intuitions and they led me to great places (which is why I kept trusting them).

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-18T10:09:36.522Z · LW(p) · GW(p)

Intuitions are great and they are a core part of human intelligence, or even the core part of human intelligence. However, the essence of the rationalist project is that our intuitions are biased, so often we need to re-examine them, question them, try to understand whether they are well-calibrated in this particular case etc. Also, intuitions are good for individuals, but, since intuitions are (almost by definition) very hard to communicate, they are not very useful for social coordination.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-18T17:53:57.463Z · LW(p) · GW(p)
However, the essence of the rationalist project is that our intuitions are biased

Why do you trust your explicit intellectual reasoning any more than your intuitions?

Also, intuitions are good for individuals, but, since intuitions are (almost by definition) very hard to communicate, they are not very useful for social coordination.

I don't really understand what point you're trying to argue for with this. Is the conclusion "...and therefore we shouldn't talk about them?" or "...and therefore we shouldn't use them?" or what?

I agree that if I go around making a lot of decisions based on my intuitions it will be harder to explain those decisions to other people. There are situations in which I want to optimize very hard for making decisions that are explicable in this way (e.g. if I'm a business manager), but there are situations where I don't, and if I behave as if I'm always in my-decisions-need-to-be-explicable mode then I am missing opportunities to grasp a lot of power.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-18T20:23:15.661Z · LW(p) · GW(p)
Why do you trust your explicit intellectual reasoning any more than your intuitions?

Firstly, there are cases where you can definitely trust your explicit reasoning more than your intuitions. For example, if I prove a mathematical theorem, then I trust it more than just having an intuition that the theorem is true. Similarly, if I use physics to compute something about a physical phenomenon, I trust it more than just having an intuition about the physical phenomenon.

For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision. Moreover, if you manage to analyze your intuition and understand its source, you know much better to what extent you should trust it for the question at hand. Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you have trained your intuition to offset these biases automatically (assuming this is possible at all).

I don't really understand what point you're trying to argue for with this.

The point I'm trying to argue is, if someone wants to promote Looking to the community as a useful concept and practice, then they should prepare to argue in favor of it using explicit intellectual reasoning.

Replies from: Qiaochu_Yuan, query
comment by Qiaochu_Yuan · 2018-03-19T05:50:32.810Z · LW(p) · GW(p)
For example, if I prove a mathematical theorem, then I trust it more than just having an intuition that the theorem is true. Similarly, if I use physics to compute something about a physical phenomenon, I trust it more than just having an intuition about the physical phenomenon.

I think the situation is much more complicated than this, at least for experts. Cf. Terence Tao's description of the pre-rigorous, rigorous, and post-rigorous stages of mathematical development. Mathematical papers often have incorrect proofs of correct statements (and the proofs are often fixable), because mathematicians' intuitions about mathematics are so well-developed that they lead them to correct conjectures even when attempts to write down proofs go awry (in a long proof there are many opportunities to make mistakes). My experience has definitely been that the longer a proof / computation gets, the more I trust my intuitions if they happen to disagree. (But of course I trained my intuitions on many previous proofs / computations.)

For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision

Why do you believe this? Have you actually tried? As query says, in many situations (say, social skills), adding explicit reasoning to your intuitions can make you worse off, at least at first.

This is also not the position you started off with (not that I'm asking you for consistency, just noting that we started somewhere different than this and that's how we got here). You asked:

If you don't understand it on an intellectual level then how can you know whether it's worth doing?

This seems to imply a fairly different cognitive algorithm than "combine your intuition and your explicit reasoning" (which, to be clear, is a thing I actually do, but probably a different way than you), namely "let your explicit reasoning veto anything it doesn't think is worth doing." Why do you think this is a good idea? In my experience this is an opportunity for various parts of you to find clever explicit arguments for not doing things that they're trying to avoid for unrelated reasons (e.g. fear of social judgment).

Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).

Again, why do you believe that this works?

The point I'm trying to argue is, if someone wants to promote Looking to the community as a useful concept and practice, then they should prepare to argue in favor of it using explicit intellectual reasoning.

Okay, but Kaj and Val have both been saying (and I agree) that doing this runs the risk of making it harder to actually communicate Looking itself. For now I am basically content to have people either decide that Looking is worth trying to understand and trying to understand it, or decide that it isn't. But I get the sense that this would be unsatisfying to you in some way.

Replies from: Wei_Dai, vanessa-kosoy
comment by Wei Dai (Wei_Dai) · 2018-03-19T18:45:04.607Z · LW(p) · GW(p)

With explicit intellectual reasoning, there's a chance for error correction. If someone's initial reasoning is wrong, others can point it out or they can eventually realize it on their own with further reasoning, and it seems possible to make progress towards the truth over time this way. (See science, math, and philosophy.) I'm worried that if Looking is wrong on some question and makes me unjustifiably certain about it, as well as discount explicit reasoning about that question, I won't be able to back out of that epistemic state.

I'm also worried that LW as a whole will get into such a state and not be able to back out of it, which makes me want to also discourage other people from trying Looking without first having an explicit understanding of its epistemic nature. I want to have answers to questions like:

  1. How does Looking work (especially on questions that are not confined to the internals of one's own mind)?
  2. How confident should we be about the answers that Looking gives? (Do people tend to be overconfident about Looking and if so how should we correct for that both as individuals and as a community?)
  3. If Looking gives systematically wrong answers to certain questions (i.e., most people get the same wrong answer via Looking), how will that eventually get corrected?

Okay, but Kaj and Val have both been saying (and I agree) that doing this runs the risk of making it harder to actually communicate Looking itself.

Here's my prior:

  • P(hearing explicit reasoning about X makes it harder to learn to do X | X is a useful epistemic tool) is low
  • P(X claims that hearing explicit reasoning about X makes it harder to learn to do X | X is a memetic virus trying to evade my epistemic immune system) is high

So such a claim makes me update towards thinking that X is a memetic virus. This kind of reasoning unfortunately makes me less likely to be able to learn X in the (hopefully rare) situation where X was actually a useful epistemic tool, but I think that's just a price I have to pay to maintain my epistemic hygiene?
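(To make the shape of this update concrete, here is a minimal sketch with invented placeholder numbers — they are not numbers I'm claiming, only an illustration that the likelihood ratio between the two hypotheses does the work:)

```python
# Illustrative only: the probabilities are made-up placeholders, assuming
# "memetic virus" and "useful epistemic tool" are the only two hypotheses.

def posterior_virus(prior_virus, p_claim_given_virus, p_claim_given_tool):
    """Posterior P(virus | X claims that explicit reasoning about X impedes learning X)."""
    prior_tool = 1 - prior_virus
    joint_virus = p_claim_given_virus * prior_virus
    joint_tool = p_claim_given_tool * prior_tool
    return joint_virus / (joint_virus + joint_tool)

# Even with a charitable 20% prior on "virus", a claim that is likely under
# "virus" (0.8) and unlikely under "useful tool" (0.1) pushes the posterior
# to about 0.67.
print(posterior_virus(prior_virus=0.2, p_claim_given_virus=0.8, p_claim_given_tool=0.1))
```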

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-19T19:34:13.987Z · LW(p) · GW(p)
I'm worried that if Looking is wrong on some question and makes me unjustifiably certain about it, as well as discount explicit reasoning about that question, I won't be able to back out of that epistemic state.

There are lots of ways to find out that you're wrong about something. Instead of doing explicit reasoning you can make predictions and run experiments. Looking doesn't mean not being on board with beliefs having to pay rent.

Example: when I Look at people, I get a sense of what's going on with them to cause them to behave in certain ways, and I can test that sense by using it to make predictions and running experiments to check them (e.g. asking them a certain kind of question to see their response), in addition to doing explicit reasoning and seeing if the explicit reasoning comes to similar conclusions. Looking is not meant to displace explicit reasoning, but it is a different tool than explicit reasoning, and sometimes I want to use one or the other or both.

Subexample: I met a guy recently at a circling workshop, and after he had said about 10 words I was highly confident, based on how I was reading his tone of voice and body language (which manifested as a feeling of distrust in my guts), and also based partly on his actual words, that he was doing a thing I would describe as "fake circling" (which I also used to do). My explicit reasoning agreed, especially once I learned more about his life circumstances (loosely, he was lonely in a way I expected to cause him to want to do "fake circling" as a way to feel connected to people).

I circled with him and told him I distrusted him twice, the second time when he was doing the fake circling thing, and the circle digested that for a bit. I didn't tell him my hypothesis. Then he was in another circle, one I wasn't in, where the circle independently revealed — and he agreed — that he was doing the thing I strongly suspected he was doing (but in a bit less detail than I had, I think). And he partially learned to stop, and my guts felt less distrust when he did. Then I told him my hypothesis in more detail and he agreed.

(If I were making predictions they would have been things like "I predict he's not going to make any progress on the thing he came here to fix until he changes such that my guts stop distrusting him." It's tricky to score this prediction though.)

How does Looking work (especially on questions that are not confined to the internals of one's own mind)?

What's unsatisfying about Kaj's original post above as an answer to this question?

How confident should we be about the answers that Looking gives? (Do people tend to be overconfident about Looking and if so how should we correct for that both as individuals and as a community?)

The framing of this question feels off to me. Looking is a source of data, not answers. What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.

If Looking gives systematically wrong answers to certain questions (i.e., most people get the same wrong answer via Looking), how will that eventually get corrected?

We make predictions and run experiments.

Something also feels off to me about the framing of this question. Looking is not a monolithic thing. People's minds are different, and some people will be able to use it well and some people won't. There are supplementary skills it's useful to have in addition to just being able to Look (for example, precisely the sort of epistemic skills that LWers already have). The question feels a bit like asking about whether reading books gives systematically wrong answers to certain questions. Well, it depends on what books you're reading and how good you are at interpreting the contents of what you read!

So such a claim makes me update towards thinking that X is a memetic virus. This kind of reasoning unfortunately makes me less likely to be able to learn X in the (hopefully rare) situation where X was actually a useful epistemic tool, but I think that's just a price I have to pay to maintain my epistemic hygiene?

This seems fine and sensible to me.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-03-19T20:13:33.041Z · LW(p) · GW(p)
What’s unsatisfying about Kaj’s original post above as an answer to this question?

I think it's a step in the right direction, but I'm not sure if his explanation is correct, or that different people are even talking about the same thing when they say "Looking".

The framing of this question feels off to me. Looking is a source of data, not answers. What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.

Take this example of Looking:

The point of being able to Look at the LW epistemic game, from within the point of view of the LW epistemic game, is precisely to see the ways in which playing it well is Goodharting on truth-seeking.

I had interpreted this to mean that you were getting the answer of "playing it well is Goodharting on truth-seeking" directly out of Looking. If that's not the case, can you explain what the data was, and how that led you to the conclusion of "playing it well is Goodharting on truth-seeking"? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn't be too hard to find, via our normal observations, intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)

We make predictions and run experiments.

But I'm not seeing people say "Here's some data I gathered via Looking, which leads to hypothesis X and predictions Y; let's test it by doing these experiments." Instead they just say "I think X because of Looking." like in the sentence I quoted above, or Val's "one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta".

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-19T20:54:03.813Z · LW(p) · GW(p)
I had interpreted this to mean that you were getting the answer of "playing it well is Goodharting on truth-seeking" directly out of Looking. If that's not the case, can you explain what the data was, and how that led you to the conclusion of "playing it well is Goodharting on truth-seeking"? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn't be too hard to find, via our normal intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)

I don't have a cached answer to this; Looking is preverbal, so I have to do a separate cognitive task of introspection to give a verbal answer to this question. (I'm also somewhat more confident than I was that I'm doing the thing that Kaj and Val call Looking but certainly not 100% confident. Maybe 90%.)

Okay, so here's an analogy: when I was in 8th grade I read Atlas Shrugged, and it successfully invaded my mind and turned me into an objectivist for several months. I went around saying things like "gift-giving is immoral" (I also gave people gifts, and refused to notice this discrepancy) and feeling very smug. At some point it... wore off? And then I looked back on my behavior, and now that there wasn't this "I said an objectivist thing which meant it was the best thing" thing getting in the way, I thought to myself, what the fuck have I been doing? Then I decided I was too incompetent to do philosophy and resolved to not try doing it again until I got more life experience or something.

The moment of objectivism wearing off is a bit like what it feels like to Look at the LW epistemic game. I'm seeing the same things I always saw, in some sense, but there's a distorting thing that was getting in the way that's gone now (according to me), which changes the frame I'm using to process and verbally label what I'm seeing. Those verbal labels, which I assign in a separate cognitive step that takes place after the Looking, are something like "oh, look, we're a bunch of monkeys slinging words around while being terrified that some of the words will cause us to have false beliefs or something, whatever that even means, and meanwhile the set of monkeys most worried about this is essentially disjoint from the set of monkeys posting updates about what they're actually doing in the world with their beliefs."

Getting slightly closer to the data itself, I've been seeing examples of people making arguments that feel to me like motivated reasoning (this is not the Looking step, the Looking facilitates feeling this way but it is not the same thing as feeling this way) in a way that feels similar to when people give fake justifications for their behavior in circles, and when I introspect on the flavor of the motivated reasoning I get "optimizing for accepting arguments that are outside-view defensible instead of optimizing for truth-seeking." This is again not Looking, but it's a thought I've been having since reading Inadequate Equilibria and the Hero Licensing dialogue in particular.

Then I check all this against my explicit reasoning, which agrees that Goodharting is easy and the default outcome in situations like this. The obvious problem, according to my explicit reasoning, is that there's no easy way to gain status on LW by being really right about things - as there would be if, for example, a prediction market were explicitly a big and important part of LW culture - and instead the way you gain status is by getting other LWers to agree with you, or maybe writing impressively in a certain way, which is very different.

The point is less that I couldn't have arrived at this conclusion without Looking, and more that without Looking, it may never have occurred to me to even try (because maybe some part of me is worried that if I Look at the LW epistemic game it will become harder for me to play it, so I might lose status on LW, or something like that).

But I'm not seeing people say "Here's some data I gathered via Looking, which leads to hypothesis X and predictions Y; let's test it by doing these experiments." Instead they just say "I think X because of Looking." like in the sentence I quoted above, or Val's "one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta".

Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone's going to make that choice all the time.

There's a thing that can happen after you Look that you might call a "flash of insight"; you suddenly realize something in a way that feels similar to the way proofs-by-picture can cause you to suddenly see the truth of a mathematical fact. Of course this is an opaque process and you'd be justified in not trusting it in yourself and others, but in the same way that you'd be justified in not trusting your intuitions or the intuitions of others generally. That's not specific to Looking.

"Everyone has bodhicitta," to the extent that I understand what that means, does seem to me to be a hypothesis with testable predictions, although those predictions are somewhat subtle. Val does describe a few things after your quote that can be interpreted as such predictions. It's also something else that's less of a belief and more of a particular way of orienting towards people, again as far as I understand it.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-03-19T22:06:36.007Z · LW(p) · GW(p)

I'm still not sure what exactly was the data that you got from Looking. You said previously "What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all." In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right? If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?

Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone’s going to make that choice all the time.

What's the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn't want to use that frame, what should I do instead?

Val does describe a few things after your quote that can be interpreted as such predictions.

I'm not seeing anything that looks like testable predictions. Can you spell them out?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-19T22:40:42.007Z · LW(p) · GW(p)
In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right?

You can do less direct things, like having other nonverbal parts of your mind process the data, introspecting / Focusing to get some words out of those parts, and then doing explicit reasoning on the words.

If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?

I already tried to do that; the data gets processed into felt senses and I tried to give my Focusing labels for the felt senses. I probably didn't do the best job but I don't feel up to putting in the level of effort that feels like it would be necessary to do substantially better.

Here's another analogy: if you're face-blind, you're getting the same raw sensory input from your eyes that everyone else is (up to variations between your eyes, whatever), but the part of most people's minds explicitly dedicated to processing and recognizing faces is not active or at least is weak, so you can see a face and process it as "this face with this kind of eyes and this nose and this hair" where someone else would see the same face and process it as "Bob's face."

Looking is sort of like becoming less face-blind. (Only sort of, this is really not a great analogy.) And it's unclear how one would go about communicating what's different about your mind when this happens, other than "now it's immediately clear to me that that's Bob's face, whereas before I would have had to use explicit reasoning to figure that out."

What's the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn't want to use that frame, what should I do instead?

Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.)

Edit: I misunderstood Wei Dai's question; see below.

I don't have a good verbal description of the alternative frame (nor do I have only one alternative frame), but the way you correct epistemic errors in it is to smash into the territory repeatedly.

(There's an additional thing of just not worrying about epistemic errors as such very much. Tennis players don't spend a lot of time asking themselves "but what if all of my beliefs about tennis are wrong tho?" because they just play a bunch of tennis and notice what works and what doesn't instead, without ever explicitly thinking about their epistemics at all. This isn't to say it might not benefit them to think about epistemics every once in a while, but it's not the mode they primarily operate in.)

I'm not seeing anything that looks like testable predictions. Can you spell them out?

What about this does not look like a testable prediction to you:

This is why people know to build beautiful monuments to honor lost loved ones, and to be respectful while in them, across vast cultural and religious belief differences.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-03-20T01:04:04.098Z · LW(p) · GW(p)

Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.)

In practice, doesn't that just translate to "shut up and don't question it"?

(There’s an additional thing of just not worrying about epistemic errors as such very much. Tennis players don’t spend a lot of time asking themselves “but what if all of my beliefs about tennis are wrong tho?” because they just play a bunch of tennis and notice what works and what doesn’t instead, without ever explicitly thinking about their epistemics at all. This isn’t to say it might not benefit them to think about epistemics every once in a while, but it’s not the mode they primarily operate in.)

I guess it depends on what field you're working in, so perhaps part of the disagreement here is caused by us coming from different backgrounds. I think in fields with short, strong feedback cycles like tennis and math, where epistemic errors aren't very costly, you can afford to not worry about epistemic errors much and just depend on smashing into the territory for error correction. In other fields like computer security and philosophy, where feedback cycles are weak or long, worrying about epistemic errors is one of the only things keeping you sane.

In principle we could have different sets of norms for different subject areas on LW, and "shut up and don't question it" (or perhaps more charitably, "shut up and just try it") could be acceptable for certain areas but not others. If that ends up happening I definitely want social epistemology itself to be an area where we worry a lot about epistemic errors.

What about this does not look like a testable prediction to you:

I was asking about how epistemic errors caused by Looking can be corrected. I think in that context "prediction" has to literally mean prediction, of a future observation, and not something that's already known like people building monuments to honor lost loved ones.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-20T01:40:17.015Z · LW(p) · GW(p)
In practice, doesn't that just translate to "shut up and don't question it"?

This seems really uncharitable, by far the least charitable you've been in this conversation so far (where I've generally been 100% happy with your behavior on the meta level). I have not asked you to shut up and I have not asked you not to question anything. You asked a question about what things look like in an alternative frame and I gave an honest answer from that frame; I don't like being punished for answering the question you asked in the way you requested I answer it.

Edit: The above was based on a misunderstanding of Wei Dai's question about what he should do instead; see below.

Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon.

In other fields like computer security and philosophy, where feedback cycles are weak or long, worrying about epistemic errors is one of the only things keeping you sane.

Feedback cycles in circling are very short, although pretty noisy unless the facilitator is quite skilled. Feedback cycles in ordinary social interaction can also be very short, although even noisier.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-03-20T02:53:53.572Z · LW(p) · GW(p)

I have not asked you to shut up and I have not asked you not to question anything.

To clarify, I wasn't saying that you were doing either of those things. My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say "shut up and don't question it", namely that it would make it very hard to question certain conclusions and correct potential errors. (Again, I don't think you're doing this now, just proposing it as something that should be acceptable.)

Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon.

Some examples please? I honestly can't think of anything I know that can only be transmitted in person.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-20T16:53:34.036Z · LW(p) · GW(p)
My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say "shut up and don't question it", namely that it would make it very hard to question certain conclusions and correct potential errors.

I don't know that I was proposing an epistemic norm. What I did was tell you what interaction with the territory you would need to have in order to be able to understand a thing, in the same way that if we lived in a village where nothing was colored red and you asked me "what would I have to do to understand the ineffable nature of redness?" I might say "go over to the next village and ask to see their red thing."

Some examples please? I honestly can't think of anything I know that can only be transmitted in person.

Playing basketball? Carpentry? Singing? Martial arts? There are plenty of physical skills you could try teaching online: you probably wouldn't get very far trying to teach them via text, and somewhat farther via video, but in-person instruction, especially because it allows for substantial interaction and short feedback cycles, is really hard to replace.

I am consistently surprised at how different my intuitions on this topic are from the people I've been disagreeing with here. My prior is pretty strongly that most interesting skills can only be taught to a high level of competence in person, and that appearances to the contrary have been skewed by the availability heuristic because of school, etc. This seems to me like a totally unobjectionable point and yet it keeps coming up, possibly as a crux even.

There seems to be a related thing about people consistently expecting inferential / experiential distances to be short, when again my prior is that there's no reason to expect either of these things to be true most of the time. And a third related thing where people keep expecting skill at X to translate into skill at explaining X.

To be very, very clear about this: I am in fact not asking you to update strongly in favor of any of the claims I or others have made about Looking or related topics, because I in fact think not enough evidence has been produced for such strong updates, and that the strongest such evidence can really only be transmitted in person (or rather, that I currently lack the skill to produce satisfying evidence in any way other than in person). I view what I've been doing as proposing hypotheses that people can consider, experiment with, or reject in whatever way they want, and also defending the ability of other people to consider, experiment with, etc. these hypotheses without being labeled epistemically suspect.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-03-20T17:54:41.244Z · LW(p) · GW(p)

I don’t know that I was proposing an epistemic norm.

In that case there was a misunderstanding somewhere. Here's my understanding/summary of the course of our conversation: I said that explicit reasoning is useful for error correction. You said we can apply explicit reasoning to the data generated by Looking, and also check predictions for error correction. I said people who talk about Looking don't tend to talk in terms of data, hypothesis and prediction. You said they may not want to use that frame. I asked what I should ask about instead (meaning how else can I try to encourage error correction, since that was the reason for wanting to ask about data and prediction in the first place). You said "Meet him in person and ask him to show you the way in which everyone has bodhicitta." I interpreted that as a proposed alternative (or addition) to the norm of asking for data and predictions when someone proposes a new idea.

I guess the misunderstanding happened when I asked you "what should I do instead?" and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn't want to use the frame of data, hypothesis and prediction. I think "Meet him in person and ask him to show you the way in which everyone has bodhicitta." would not serve my purpose because 1) in most cases nobody would be willing to do that so most new ideas would go unchallenged and 2) it wouldn't accomplish the goal of error correction if Looking causes most people to make the same errors.

Hopefully that clears up the misunderstanding, in which case do you want to try answering my question again?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-20T18:22:43.860Z · LW(p) · GW(p)
I guess the misunderstanding happened when I asked you "what should I do instead?" and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn't want to use the frame of data, hypothesis and prediction.

Oh. Yes, that's exactly what happened. Thanks for writing down that summary.

I don't really have a good answer to this question (if I did, it would be "try to encourage Val to use the frame of data, hypothesis and prediction, just don't expect him to do it all the time") so I'll just say some thoughts. In my version of the frame Val is using there's something a bit screwy about thinking of "everyone has bodhicitta" as a belief / hypothesis that makes testable predictions. That's not quite the data type of that assertion; it's a data type imported over from the LW epistemic frame and it's not entirely natural here.

Here's a related example that might be easier to think about: consider the assertion "everyone wants to be loved." Interpreted too literally, it's easy to find counterexamples: some people will claim to be terrified of the idea of being loved (for example, because in their lives the people who love them, like their parents, have consistently hurt them), and other people will claim to not care one way or the other, and on some level they may even be right. But there's a sense in which these are defensive adaptations built on top of an underlying desire to be loved, which is plausibly a human universal for sensible evo-psych reasons (if your tribe loves you they won't kick you out, they'll take care of you even if you stop contributing temporarily because of sickness or injury, etc). And there's an additional sense in which thinking in terms of this evo-psych model, while helpful as a sanity check, misses the point, because it doesn't really capture the internal experience of being a human who wants to be loved, and seeing that internal experience from the outside as another human.

So one way to orient is that "everyone wants to be loved" is partially a hypothesis that makes testable predictions, suitably interpreted, but it's also a particular choice of orienting towards other humans: choosing to pay attention to the level at which people want to be loved, as opposed to the level at which people will make all sorts of claims about their desire to be loved.

A related way of orienting towards it is that it's a Focusing label for a felt sense, which is much closer to the data type of "everyone has bodhicitta" as I understand it. Said another way, it's poetry. That doesn't mean it doesn't have epistemic content - a Val who realizes that everyone has bodhicitta anticipates somewhat different behavior from his fellow humans than a Val who doesn't - but it does mean the epistemic content may be difficult to verbally summarize.

comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-21T21:09:59.358Z · LW(p) · GW(p)
I think the situation is much more complicated than this, at least for experts.

I agree that the situation is more complicated; I disagree that it is "much more complicated". Yes, mathematicians rely on intuition to fill in the gaps in proofs and to seek out the errors in proofs. And yet, it is uncontroversial that having a proof should make you much more confident in a mathematical statement than just having an intuition. In reality, there is a spectrum that goes roughly "intuition that T is correct" -> "informal argument for T" -> "idea for how to prove T" -> "sketch of a proof of T" -> "unvetted proof of T" -> "vetted, peer-reviewed proof of T" -> "machine verifiable formal proof of T".
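As a concrete illustration of the far end of that spectrum, here is a deliberately trivial example of a machine verifiable formal proof (a minimal sketch in Lean 4 syntax; the particular theorem is an arbitrary choice, not anything load-bearing):

```lean
-- The far end of the spectrum: a statement whose proof is checked
-- mechanically by the proof assistant rather than by a human reader.
-- `Nat.add_comm` is the standard commutativity lemma for natural numbers.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```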

Why do you believe this? Have you actually tried?

Have I actually tried what? As to why I believe this, I think I already gave an "explicit reasoning" argument; and, yes, my intuition and life experience confirm it, although this is not something that I can transmit to you directly.

This seems to imply a fairly different cognitive algorithm than "combine your intuition and your explicit reasoning" (which, to be clear, is a thing I actually do, but probably a different way than you), namely "let your explicit reasoning veto anything it doesn't think is worth doing."

This is the wrong way to look at it. Intuition and explicit reasoning are not two separate judges that give two separate verdicts. Combining intuition and explicit reasoning doesn't mean averaging the results. The way it works is, when your intuition and reasoning disagree, you should try to understand why. You should pit them against each other and let them fight it out, and in the end you have something that resembles a system of formal arguments with intuition answering some sub-queries, and your reasoning and intuition both endorse the result. This is what I mean by "understanding on an intellectual level".

Okay, but Kaj and Val have both been saying (and I agree) that doing this runs the risk of making it harder to actually communicate Looking itself.

I don't insist that you only use explicit reasoning. Feel free to use metaphors, koans, poetry and whatnot. But you should also explain it with explicit reasoning.

For now I am basically content to have people either decide that Looking is worth trying to understand and trying to understand it, or decide that it isn't. But I get the sense that this would be unsatisfying to you in some way.

Well, if you are saying "I don't want to convince everyone or even the most" that's your prerogative of course. I just feel that the point of this forum is trying to have discussions whose insights will percolate across the entire community. Also I am personally interested in understanding what Looking is about and I feel that the explanations given so far leave me somewhat confused (although this last attempt by Kaj was significant progress).

comment by query · 2018-03-19T00:55:19.219Z · LW(p) · GW(p)
For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.

I don't think this is true; intuition + explicit reasoning may have more of a certain kind of inside view trust (if you model intuition as not having gears that can be trusted), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes explicitly reasoning about the thing makes you clearly worse at it, and you can account for this over time.

Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).

I also don't think this is as clear cut as you're making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition -> biased, explicit reasoning -> unbiased.

Explicit reflection is indeed a powerful tool, but I think there's a tendency to confuse legibility with ability; someone can have the capacity to do something (like use an intuition to correct a bias) in a way that is illegible to others or even to themselves. It is hard to transmit such abilities, and without good external proof of their existence or transmissibility we are right to be skeptical and withhold social credit in any given case, lest we be misled or cheated.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-21T21:24:40.377Z · LW(p) · GW(p)
I don't think this is true; intuition + explicit reasoning may have more of a certain kind of inside view trust (if you model intuition as not having gears that can be trusted), but intuition alone can definitely develop more outside-view/reputational trust.

I don't see it this way. I think that both intuition and explicit reasoning are relevant to both inside view and outside view. It's just that the input of the inside view is the inner structure of the question, and the input of the outside view is the reference category inside which the question resides. People definitely use the outside view in debates by communicating it verbally, which is hard to do with pure intuition. I think that ideally you should combine intuition with explicit reasoning and also combine inside view with outside view.

I also don't think this is as clear cut as you're making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition -> biased, explicit reasoning -> unbiased.

You can certainly have biases about these things, but these things can be regarded as coming from your intuition. You can think of it as P vs. NP. Solving problems is hard but verifying solutions is easy. To solve a problem you have to use intuition, but to verify the solution you rely more on explicit reasoning. And since verifying is so much easier, there is much less room for bias.
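To make the solving-versus-verifying asymmetry concrete, here is a minimal sketch in Python (a toy subset-sum problem, chosen purely as an illustration; the function names are mine): checking a proposed solution is a cheap, mechanical step, while finding one in general means searching an exponentially large space.

```python
from collections import Counter
from itertools import combinations

def verify(numbers, target, candidate):
    """Cheap, mechanical check: the candidate only uses available numbers
    and sums to the target (the 'verifying with explicit reasoning' half)."""
    return not (Counter(candidate) - Counter(numbers)) and sum(candidate) == target

def solve(numbers, target):
    """Expensive search: try every subset until one sums to the target
    (the half where, for a human, intuition has to do the heavy lifting)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
found = solve(nums, 9)                 # exponential in the worst case
print(found, verify(nums, 9, found))   # (4, 5) True -- the check is linear
```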

comment by Hazard · 2018-03-18T12:39:33.396Z · LW(p) · GW(p)
If you don't understand it on an intellectual level then how can you know whether it's worth doing?

(Addressing this question in general, and then in terms of this discussion)

I think that "worth doing" is the sneaky part of that question. Any decision making process (by intuition or intellectual thought) is usefully thought of as a trade-off between time it takes to decide, and the delta in payout.

If it takes 5 min to pick the "most worth" meal at a restaurant, and it's only marginally better, maybe you should just order the first thing that comes to mind and spend more time talking with your friends.

Also, if it's super easy to get empirical data about how "worth doing" each action is (you're at a buffet and can sample a small bit of everything), maybe it's better to just do the experiment.

I would only make "have an intellectual understanding of the thing" a prerequisite if I thought it was super costly to try, or I thought it was potentially dangerous, or I only got one shot.

Which I think is a bigger crux for a lot of people on this topic. I don't see it as super costly to put bits of effort here and there into trying to get an experiential sense of Looking.

Though who knows, maybe I don't value my time enough. I certainly at least want to be a person who consistently makes time to test reasonable sounding ideas/actions/experiences like Looking.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-18T19:41:56.196Z · LW(p) · GW(p)

I completely agree that, when something is sufficiently cheap then it is often better to simply try it than spend a lot of effort on trying to analyze it. However, my impression was that Looking is far from cheap, that it is something that requires years of practicing meditation to achieve. I might be wrong? Moreover, it also seems potentially dangerous, at least for me. I know for a fact that my sanity is not impervious and I'm wary of trying anything that might harm it.

Replies from: PeterBorah, Hazard
comment by PeterBorah · 2018-03-19T00:10:56.435Z · LW(p) · GW(p)

It definitely doesn't take years of practicing meditation. Though I'm hesitant to speculate on how long it would take on average, because how prepared for the idea people are varies a lot. The hardest step is the first one: realizing that people are talking about things you don't yet understand.

comment by Hazard · 2018-03-19T21:47:24.713Z · LW(p) · GW(p)

If the question was, "Should I commit to spending years investigating Looking and related ideas?" I'd agree that most people could rightly conclude, "No, I shouldn't".

So a better question becomes, "Is it useful to take a step in that direction?" Again, to me the answer seems to be yes. But besides "is it worth the time", there's still the prospect of danger you mention.

I used to think that there was no way to be in danger as long as you took things one step at a time, but slippery slopes and murder-Gandhis make a lot of sense.

Here's 5 min of thought on why it doesn't feel dangerous to me:

  • I feel I've gotten much better at noticing confusion, and it seems like Looking would have to systematically undermine my ability to notice confusion before it could hurt me (and that registers on a gut level as very unlikely)
  • I've previously been in a place of bad epistemics and bad functionality via going overboard with "naive rationality". Having come out the other end, I guess I feel a bit inoculated to "getting hijacked by an idea".
  • I think this talk by Dan Barker kicked off a strong sense of, "Oh, you can have a crazy 'supernatural' experience, and it doesn't have to mean anything."

What do you notice when you think on your feeling of it being dangerous?

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-21T21:32:08.468Z · LW(p) · GW(p)

I feel that Looking might be dangerous for roughly the same reason psychedelic drugs are dangerous. I know that my sanity is not very robust, and I also know that my mind is quite well functioning at the moment (so that "if it ain't broke" applies). Experimenting with highly unusual states of consciousness seems like something that pushes your mind away from its normal operating conditions and can destabilize the system in difficult-to-predict ways.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-22T07:51:49.977Z · LW(p) · GW(p)

Learning to Look can definitely be hazardous: some teachers advise people with any kind of mental health issues to be very cautious about trying meditation at all. In particular, learning the required kind of sensitivity for noticing subtle movements of mind means that you also become aware of any unpleasant stuff that you might so far have been suppressing in order to cope.

Ideally, that dark stuff bubbling up to the surface will be rough but beneficial, as the person meditating will process it and get over it, but that's under the assumption that they are relatively mentally healthy (on some relevant axis); for people who aren't, it may be too much to handle at once.

And, as you say, deeper Looking inherently involves moving the mind outside its standard parameters. If the techniques aren't used right, there is a very definite risk of breaking things.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-22T16:40:05.297Z · LW(p) · GW(p)

Basically agree with this.

comment by Wei Dai (Wei_Dai) · 2018-03-08T10:02:29.262Z · LW(p) · GW(p)

When a person becomes capable of observing in sufficient detail the mental process by which this sense of an I is constructed, the delusion of its independent existence is broken. Afterwards, while the mind will continue to use the concept “I” as an organizing principle, it becomes correctly experienced as a theoretical fiction rather than something that could be harmed or helped by the experience of “bad” or “good” emotions. As a result, desire and aversion towards having specific states of mind (and thus suffering) cease. We cease to flinch away from pain, seeing that we do not need to avoid them in order to protect the “I”.

As a "third party", this explanation makes little sense to me. Suppose it's true that "our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness" and I realize this through meditation, it seems that "I" remain something real (namely a group of subagents) and can still be potentially harmed or helped. Why would a group be less capable of being harmed than a monolithic agent? I'm not seeing the logic here.

Also, I'm surprised that you give no mention to transient hypofrontality, which you've blogged about before. (See also this TEDx talk.) To me that makes a lot more sense as a plausible explanation of how meditation can cause a sense of "no self". In short the idea is that our sense of self is something computed by the prefrontal cortex, and with the extreme control over attention that's achievable using the practice of meditation one can downregulate that part of the brain thereby causing the sense of self to disappear. To quote your own summary of the paper:

It proposes that what we experience as consciousness is built up in a hierarchical process, with various parts of the brain doing further processing on the flow of information and contributing their own part to the “feel” of consciousness. It’s possible to subtract various parts of the process, thereby leading to an altered state of consciousness, without consciousness itself disappearing.

What are your current thoughts on that theory?

Replies from: Kaj_Sotala, Kaj_Sotala
comment by Kaj_Sotala · 2018-03-08T15:37:01.630Z · LW(p) · GW(p)

Adding to my other comment...

I'm skeptical about the value of most neurophysiological explanations in general: I think that in many cases, they just create an illusion of understanding by throwing in neurological terms that give an appearance of detail without actually contributing conceptual gears. If I say "learning to navigate a city causes structural changes in the hippocampus", that doesn't really tell most people anything that they could use, but does give them a feeling that they now understand this better.

Similarly, I could have quoted from the Dietrich paper

the prefrontal cortex enables the top layers of consciousness by contributing the highest-order cognitive functions to the conscious experience ... evidence suggests that initial and much ensuing information processing on perception, attention, or memory occurs in other brain areas before further integration in the frontal lobes ... meditation results in transient hypofrontality with the notable exception of the attentional network in the prefrontal cortex

and added something like "and thus, Looking is about learning to selectively downregulate the activity of the prefrontal cortex - which carries out the final-order cognitive functions of conscious experience - in order to get greater conscious access to inputs from other brain areas whose data has not yet undergone those final transformations"...

...but (assuming that it was true) would that really have been more informative than the cognitive-level description of the same that I actually had in the post, i.e. the bits about breaking down the experience of a sound/unhappy thought to more low-level building blocks, and noticing that something was off about the next-to-final-stage-cognitive-content?

I think it might have sounded more impressive, but conveyed fewer gears about what's actually happening and why it's useful.

I did find the Dietrich paper interesting earlier, back when I didn't have the cognitive-level model yet. It didn't tell me very much, but it did at least give me some kind of an understanding of what the heck it is that meditation does. But now that I do have the cognitive-level model, focusing on the neurophysiological model doesn't seem so valuable anymore, since neurophysiological explanations only give a very crude level of detail compared to cognitive-level ones. (Though I do grant that it does feel nice to have some neuroscience-grounded theories about what meditation does, which are compatible with the cognitive-level ones; if the neuroscience said that the cognitive-level explanation was totally implausible, then that would be a problem.)

comment by Kaj_Sotala · 2018-03-08T11:50:04.657Z · LW(p) · GW(p)

Suppose it's true that "our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness" and I realize this through meditation, it seems that "I" remain something real (namely a group of subagents) and can still be potentially harmed or helped. Why would a group be less capable of being harmed than a monolithic agent? I'm not seeing the logic here.

Right, so I'm using a very specific sense of "harmed" here.

The claim is not that the subagents couldn't be harmed or helped in the sense of e.g. brain damage damaging their function, things in the external world happening more or less according to their values, etc. They obviously can, and there's nothing delusional about that.

The "harm" that I'm referring to is the alief that there's something intrinsically bad about experiencing emotions with negative valence. For instance, I might experience stress about going to the dentist, and this is not because I would expect the dentist to do the "objective" kind of damage which you're talking about - obviously I expect the dentist to benefit me, or otherwise I wouldn't be going to in the first place. But I'm still stressed, because I expect the experience itself to be uncomfortable, and there's something about that upcoming discomfort that my mind alieves to be inherently dangerous and a thing to avoid.

When you make the no-self update... well, this is getting to the territory where it's very hard to convey in words why exactly getting an insight to the nature of the self would change that alief. Much of our conceptual system is subtly built on those incorrect assumptions, and the whole update comes from forcing the mind to confront the contradiction between its existing conceptual assumptions and the thing that it is actually experiencing when it Looks at its own operation very closely. But I'll try nevertheless:

What triggered my "kensho" experience (scare quotes because I may again be somewhat misusing a technical term) was something like... I was trying to let go of all doing and simply observe my mind, but then "trying to simply observe my mind" was by itself a form of doing; whenever a thought like this crossed my mind, I sort of just mentally shrugged and went "well okay, if my mind wants to do something, then I will let it do so, because that by itself is also a form of letting my mind do whatever it wants and let go of doing". And then I would feel like my mind start doing something again, and I would try not to interfere with that... and all of this gave rise to the question of "so what exactly is the 'I' that is doing things here, and what does it mean for me to 'let go of doing'? If all of these are just different subprocesses getting access to consciousness, then what does this 'sense of doing' mean?"

At which point, suddenly, my mind flashed back to the conceptual model given in The Mind Illuminated, of the feeling of a sense of self being just another sense percept that was added to the experience as a final processing step... and suddenly on some level I knew and felt that the "I" that was doing things was not any particular privileged subprocess that was doing things, but that it was just whichever subprocess the narrating mind chose to highlight as the current protagonist of the story it was telling. Kind of like the author of a novel changing the viewpoint character between chapters. The sense of "I" was just a tag for "which subprocess has the narrating mind chosen to highlight at this given moment".

Now why did that lead to a (temporary) feeling that pain and discomfort were nothing to be afraid of?

It was something like... when I struggle against pain, there's an element of identifying (fusing together) with the subprocess that is fighting against the pain, rather than with the subprocess that's producing the pain. If I'm thinking of going to the dentist and stressing out about that, then I'm fusing together with the subprocess that finds the thought of pain to be something that it wants to avoid, and there is a thought that the "I" which will experience the pain is the same subprocess which is currently active and struggling against the thought of experiencing the pain. But if there is in fact no privileged "I" and the sense of an "I" is just a tag for a subprocess from whose perspective a narrative is currently being constructed, then that makes it plausible that the subprocesses which will be active during the visit to the dentist won't be the same ones which are currently struggling against the thought of going to the dentist. And the whole concept of the "same" subprocess is kind of arbitrary anyway and if my mind just switches to an ontology which doesn't even include that kind of an equivalence relation for subprocesses or different mind-states, then it's impossible to think that the "me" which will experience the dentist visit will be the same me as the one which is currently active, because there isn't even any ontologically basic me, just different configurations of subprocesses, some of which will sometimes happen to be labeled with an arbitrary XML tag.

(But there is of course still the entirety of the system of subagents which is privileged in the sense of being the one of which future predictions can most reliably be made, and whose body is the one whose actions are the most directly influenced by whatever this system of subagents ends up deciding to do. So in that sense it's useful to keep one version of the concept of "me" around: it's just a useful theoretical fiction, rather than an ontological primitive.)
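(If it helps to have that model in a more concrete form, here's a toy sketch of the "viewpoint tag" idea; the subprocess names and the data structure are invented purely for illustration, not a claim about how the brain actually implements any of this.)

```python
from dataclasses import dataclass

@dataclass
class Subprocess:
    name: str
    output: str  # what this subprocess is currently contributing

# A few concurrently active subprocesses; none of them is intrinsically "the self".
active = [
    Subprocess("dentist-dread", "flinch away from the imagined drill"),
    Subprocess("planner", "book the appointment anyway"),
    Subprocess("body-monitor", "notice tension in the jaw"),
]

def narrate(subprocesses, protagonist_index):
    """The narrating mind simply picks one subprocess to label as 'I'
    when it tells the story of what is happening right now."""
    lines = []
    for i, sp in enumerate(subprocesses):
        tag = "I" if i == protagonist_index else sp.name
        lines.append(f"{tag}: {sp.output}")
    return "\n".join(lines)

# The same moment, narrated twice with a different subprocess wearing the "I" tag:
print(narrate(active, 0))
print("---")
print(narrate(active, 1))
```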

Did that make any sense?

Also, I'm surprised that you give no mention to transient hypofrontality, which you've blogged about before. (See also this TEDx talk.)

I didn't reference that paper for two reasons:

1) While the paper's hypothesis still seems plausible to me, my amount of neuroscience knowledge is limited, so I don't have high confidence in my ability to evaluate how true its claims actually are. For this post, I wanted to restrict myself only to claims which I feel reasonably certain about. Aside from the thing about the long-term nature of enlightenment, which I explicitly flagged as something I'm uncertain about, everything else in this post is something of which I can say "I'm confident that this is a thing that happens". For that paper, my epistemic position is more like "well that sure does sound plausible".

2) You say that transient hypofrontality sounds like a more plausible explanation of what happens, but I don't see them as two different explanations, but rather as looking at the same thing at different levels of explanation. If I'm giving an explanation of the development of expertise, I might say something like "you become an expert by developing increasingly detailed mental representations of a domain" or I could say "developing a skill causes a (possibly temporary) increase in the number of neurons in the regions of the brain dedicated to that skill". Neither is wrong; one is just using a cognitive framework and the other a neurophysiological one.

Similarly, it's been a while since I last read it, but I believe that the Dietrich paper is explaining on a neurophysiological level basically the same process that this post gave a cognitive explanation of. It might have been interesting to cover that angle as well (assuming that I was convinced about the correctness of that angle), but this article was quite long already, and I didn't think that the neurophysiological side would have added enough value to justify its inclusion.

Replies from: Wei_Dai, Qiaochu_Yuan, ESRogs
comment by Wei Dai (Wei_Dai) · 2018-03-08T18:10:17.178Z · LW(p) · GW(p)

Did that make any sense?

Yes, that does make a lot more sense.

Similarly, it’s been a while since I last read it, but I believe that the Dietrich paper is explaining on a neurophysiological level basically the same process that this post gave a cognitive explanation of.

I think Dietrich's explanation is essentially non-cognitive, i.e., the denial of self is caused by something like a hardware glitch or switch that is triggered by meditation, rather than a sequence of cognitive steps. (Similar things can happen during endurance running, hypnosis, and drug-induced states, which are more obviously non-cognitive.) Here's the relevant part of the paper:

meditation entails sustained concentration and heightened awareness by focusing attention on a mantra, breathing rhythm, or a number of other internal or external events [...] Humans appear to have a great deal of control over what they attend to (Atkinson & Shiffrin, 1968), and in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive capacities of the prefrontal cortex, accounting for the α-activity. Phenomenologically, meditators report a state that is consistent with decreased frontal function such as a sense of timelessness, denial of self, little if any self-reflection and analysis, little emotional content, little abstract thinking, no planning, and a sensation of unity.

I can think of several different possibilities here. 1) Dietrich's proposed explanation is just wrong. 2) There are different kinds of "no self" experiences, and/or different ways of triggering them, some cognitive, some non-cognitive. 3) The non-cognitive explanation is actually right, and your brain is making up a cognitive explanation for what's happening, similar to how the left hemisphere of split-brain patients make up explanations for actions caused by the right hemisphere.

Please let me know if you have any additional thoughts on this.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-08T18:36:04.254Z · LW(p) · GW(p)

Ah, right, now I think I understand what you were saying.

I think the thing here is that, like a lot of old research on the topic, Dietrich does not do a very precise job of specifying exactly what kind of meditation he's talking about: possibly because he doesn't (or at least didn't at the time of writing this) realize that meditation actually covers a wide variety of different practices.

In particular, the thing that he's describing sounds like high-level samatha jhanas: probably something like the seventh or eighth jhana (note: links within that wiki seem to be broken; people curious about the earlier jhanas may want to use the book's pdf instead).

These are indeed mental states that a meditator may end up in, if they manage to concentrate really really intensely on just one thing, to the exclusion of anything else. And from those descriptions, it really does sound like you reach them by successively turning off brain functions until you get a really weird mental state.

However, a lot of traditions - including the author of the linked wiki/book - emphasize that getting into samatha jhana states is not enlightenment. Some of them can be really pleasant, so getting into the early ones is useful for motivating you to practice your concentration skills: but in order to move towards enlightenment, you need to do a different kind of meditation, i.e. actively observing the normal operation of your own mental processes, which you cannot do if you are shutting all of them down. (Though lower samatha jhanas still keep some of them intact, so getting into a nice pleasurable samatha jhana can be useful for helping you concentrate on studying them.) That article for the eighth jhana expressly warns meditators not to get too caught up with the samatha jhanas, saying that people who do so are "junkies":

Just to drive this point home, an important feature of concentration practices is that they are not liberating in and of themselves. Even the highest of these states ends. The afterglow from them does not last that long, and regular reality might even seem like a bit of an assault when it is gone. However, jhana-junkies still abound, and many have no idea that this is what they have become. I have a good friend who has been lost in the formless realms for over 20 years, attaining them again and again in his practice, rationalizing that he is doing dzogchen practice (a type of insight practice) when he is just sitting in the 4th-6th jhanas, rationalizing that the last two formless realms are emptiness, and rationalizing that he is enlightened. It is a true dharma tragedy.
Unfortunately, as another good friend of mine rightly pointed out, it is very hard to reach such people after a while. They get tangled in golden chains so beautiful that they have no idea they are even in prison, nor do they tend to take kindly to suggestions that this may be so, particularly if their identity has become bound up in their false notion that they are a realized being. Chronic jhana-junkies are fairly easy to identify, even though they often imagine that they are not. I have no problem with people becoming jhana-junkies, as we are all presumably able to take responsibilities for our choices in life. However, when people don’t realize that this is what they have become and pretend that what they are doing has something to do with insight practices, that’s annoying and sad.

Basically, Ingram's saying the same thing that you were suggesting: that there's no particular insight to be had from these states, as they're just tripping on weird experiences that you get from turning normal brain functions off, but that people who get too attached to them may start rationalizing all kinds of excess significance to them.

(I think that I've personally been to a mild version of the first samatha jhana a few times, but not anywhere higher than that.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-03-12T07:56:57.465Z · LW(p) · GW(p)

I think the thing here is that, like a lot of old research on the topic, Dietrich does not do a very precise job of specifying exactly what kind of meditation he’s talking about: possibly because he doesn’t (or at least didn’t at the time of writing this) realize that meditation actually covers a wide variety of different practices.

Good point, that would explain a lot. What do you think of the second paper that I link to here that tries to create a framework for classifying the various contemplative practices? If it seems like a useful framework, where does "Looking" fall into it?

Basically, Ingram’s saying the same thing that you were suggesting: that there’s no particular insight to be had from these states, as they’re just tripping on weird experiences that you get from turning normal brain functions off, but that people who get too attached to them may start rationalizing all kinds of excess significance to them.

Interestingly, it seems that there are deep disagreements between and even within Buddhist traditions about which mental states count as "enlightenment" or "awakening", and which ones are merely states of deep concentration. See the first paper linked to in the same post.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-12T11:57:52.674Z · LW(p) · GW(p)

Those papers are a great find!

What do you think of the second paper that I link to here that tries to create a framework for classifying the various contemplative practices? If it seems like a useful framework, where does "Looking" fall into it?

I really like that framework. This description of the deconstructive family definitely sounds like it's talking about Looking:

Another approach would be to directly examine your experience, for example by dissecting the feeling of anxiety into its component parts and noticing how the thoughts, feelings, and physical sensations that comprise the emotion are constantly changing. In the context of Buddhist meditation, this process of inquiry is often applied to beliefs about the self, though it can similarly be applied to the nature and dynamics of perception, to the unfolding of thoughts and emotions, or to the nature of awareness.

Also, later in the same section, the paper makes a claim similar to what I was saying in my article: that establishing basic proficiency in meta-awareness / the attentional family is a prerequisite for acquiring the basic skills for overcoming cognitive fusion, after which one can start developing skill in deconstructive practices / Looking:

When your sense of self is fused with the presence of anger (i.e., the feeling “I am angry”), the arising of anger is not seen clearly, but instead forms the lens through which you view experience. Attentional family practices train the capacity to recognize the occurrence of anger and other states of mind, enabling one to notice the presence of angry thoughts, physiological changes, and shifts in affective tone. This process of sustained recognition allows for the investigation of the experience of anger, an approach taken with deconstructive meditations. With this added element, one is not merely sustaining awareness of the experience of anger, but also investigating its various components, inquiring into its relationship with one's sense of self, and/or uncovering the implicit beliefs that inform the arising of anger and then questioning the validity of these beliefs in light of present-moment experience (see Box 4). This investigation of conscious experience is said to elicit an experience of insight, a flash of intuitive understanding that can be stabilized when linked with meta-awareness. Thus, meta-awareness sets the stage for self-inquiry and allows for the stabilization of the insight it generates while nevertheless being a distinct process.

I didn't really discuss the constructive family in the post, but I did briefly gesture towards it when I mentioned that "I think in terms of meditative practices that work within an existing system (of pleasure and pain), versus ones that try to move you outside the system entirely"; in terms of the paper, meditative practices that worked "inside the system" would probably be classified mostly as constructive ones.

Interestingly, it seems that there are deep disagreements between and even within Buddhist traditions about which mental states count as "enlightenment" or "awakening", and which ones are merely states of deep concentration.

I haven't read the first paper yet, but that's definitely been my suspicion as well. There are probably a number of different states that different traditions refer to with that label.

comment by Qiaochu_Yuan · 2018-03-08T17:17:31.632Z · LW(p) · GW(p)

Wow, thanks for this incredibly detailed comment. It clarified something for me, especially this:

It was something like... when I struggle against pain, there's an element of identifying (fusing together) with the subprocess that is fighting against the pain, rather than with the subprocess that's producing the pain.

It feels to me like I've acquired some of the skill of not fighting against pain, but I don't think I did it by doing anything to my sense of self. It's more like I just repeatedly noticed that experiencing pain kept not killing me.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-08T17:25:17.038Z · LW(p) · GW(p)

Yeah, there are a lot of things that you can do - or which can happen to you - which will help with not fighting against pain. Just undergoing a lot of painful stuff and noticing that it doesn't actually kill you is definitely one as well. (There are lots of anecdotes about people who've gone through a lot of terrible stuff and then been totally unfazed by more mundane things, being all like, "is that the best you've got, reality? I've been shot at in a combat zone, I'm not going to get freaked out by a dentist". OTOH, some do get traumatized and even more freaked out by small stuff.)

comment by ESRogs · 2018-03-08T18:02:17.239Z · LW(p) · GW(p)

Meta note: the fact that pasting text into the comment box results in it being bold is a bug.

Let's not start using bold as a convention for indicating that text is a quote. The actual quote syntax (with greater than sign) or italics look much better. Don't they?

comment by romeostevensit · 2018-03-10T00:03:33.737Z · LW(p) · GW(p)

The most significant frustration with trying to speak effectively about this topic is that a significant fraction of what we are engaging with when we do is the mind's attempts to immunize itself from needing to get out of the car by recontextualizing the instructions as something referring to things within the car. This is especially egregious with extremely intelligent folks who can get very creative with it.

"But what about X? I found X very useful!"

Yes, perceived usefulness is one of the best ways it can convince you to watch TED tal-I mean understand contemplative practices.

comment by Kaj_Sotala · 2019-12-12T11:38:31.022Z · LW(p) · GW(p)

I still broadly agree with everything that I said in this post. I do feel that it is a little imprecise, in that I now have much more detailed and gears-y models for many of its claims. However, elaborating on those would require an entirely new post (one which I am currently working on) with a sequence's worth [? · GW] of prerequisites. So if I were to edit this post, I would probably mostly leave it as it is, but include a pointer to the new post once it's finished.

In terms of this post being included in a book, it is worth noting that the post situates itself in the context of Valentine's Kensho post, which has not been nominated for the review and thus wouldn't be included in the book. So if this post were to be included, I should probably edit this so as to not require reading Kensho.

comment by Said Achmiz (SaidAchmiz) · 2018-03-09T18:18:33.921Z · LW(p) · GW(p)

“Looking”, as you explain it here, seems to be a way to perceive / examine / understand / gain insight into / etc. your own thought processes. Fair enough.

However:

koans are a classic example of puzzles that are vastly easier to solve by Looking than by normal thinking

A dear friend of mine was with me when my kensho struck, and we were able to Look at each other.

Second, one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta.

A while back I was interacting with a friend of a friend (distant from this community). His demeanor was very forceful as he pushed on wanting feedback about how to make himself more productive. I felt funny about the situation and a little disoriented, so I Looked at him.

These usage examples (all from one person—so it’s not simply a matter of different people using the term differently) do not seem to square with what you describe “Looking” to be.

Clarify, please?

Replies from: PeterBorah
comment by PeterBorah · 2018-03-09T19:46:21.227Z · LW(p) · GW(p)

I'm still not 100% sure I understand Val's definition of Looking, so I'm not quite willing to commit to the claim that it's the same as Kaj's definition. But I do think it's not that hard to square Kaj's definition with those quotes, so I'll try to do that.

Kaj's definition is:

being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.

Everything you experience, no matter the object, is experienced via your own cognitive processes. When you're doing math, or talking to a friend, or examining the world, that is an experience you are having, which is being filtered by your cognitive processes, and therefore to which the structure of your mind is relevant.

As Kaj describes, the part of your thought processes you normally have conscious access to are a tiny fragment of what is actually happening. When you practice the skill of making more of it conscious and making finer and finer discriminations in mental experience, you find that there is a lot of information that your conscious mind would normally skip over. This includes plenty of information about "the world".

So consider the last quote as an example:

A while back I was interacting with a friend of a friend (distant from this community). His demeanor was very forceful as he pushed on wanting feedback about how to make himself more productive. I felt funny about the situation and a little disoriented, so I Looked at him. My sense of him as an experiencing being deepened, and I started noticing sensations in my own body/emotion system that were tagged as "resonant" (which is something I've picked up mostly from Circling). I also could clearly see the social dynamics he was playing at. When my mind put the pieces together, I got an impression of a person whose social strategies had his inner emotional world hurting a lot but also suppressed below his own conscious awareness. This gave me some things to test out that panned out pretty on-the-nose.

A fictionalized expansion of that, based on my experiences, might be:

"I was running my usual algorithms for helping someone, but I felt funny about the situation and a little disoriented. In the past I would have just kept trying, or maybe just jumped over to a coping mechanism like trying to get out of the situation. However, I had enough mental sharpness to notice the feeling as it arose, so instead I decided to study my experience of the situation. Specifically, I tried to pay attention to how my mind was constructing the concept of "him". (Though since my moment-to-moment experience doesn't distinguish between "him" and "my concept of him", and since I have no unmediated access to the "him" that is presumably a complex quantum wavefunction, that mental motion might better be described as just "paying attention to my experience of him", or even "paying attention to him".) When I did that, I was able to see past the slightly dehumanizing category I was subconsciously putting him in, and was able to pick up on the parts of my mind that were interacting with him on a more human, agent-to-agent level. I was able to notice somatic markers in my body that were part of a process of modeling and empathizing with him, from which I derived both more emotional investment in him and also more information about the social dynamics of the situation, as processed by my system 1, which my conscious mind had been mostly ignoring. I was able to use all of this information to put together an intuitively appealing story about why he was acting this way, and what was going on beneath the surface. This hypothesis immediately suggested some experiments to try, which panned out as the hypothesis predicted."

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-09T20:41:43.215Z · LW(p) · GW(p)

Yeah, this is basically how I squared it with Val's version, too. A few other examples:

  • The koan thing: I haven't really done koans, so this might be wrong, but I'd guess that the intent is something like: as you are thinking about the koan, Look at the way that your mind represents the koan and how it struggles with trying to solve the paradox; see if those representations give you any hints about the nature of the answer. (One may note that the experience which triggered my "kensho" was by itself an attempt to answer a paradox, and maybe you could formulate it as a koan, something like "what do you do when you let go of doing", or whatever.) Certainly I've felt like meditation experience has given a slightly better intuition of what exactly it is that koans might be hinting at, though again, I haven't really tried working with them.
  • Val also talked about Looking as a way to see the intelligent social web, which also sounds like it's something directed at the outside, not the inside. But after reading his post, I've spent some time paying attention to things like... what kinds of narratives and roles do people's words and positions feel like they are taking, and does my own mind feel like it's trying to push others into narrative-shaped boxes. The answer was yes. In particular, I started getting the feeling that some of the conflicts I've been having with an ex were because we're more intimate than friends but more distant than lovers, in a specific way that leaves my brain confused about how exactly I should behave around them; and as a result, one part of my mind has been trying to solve the issue by pushing them away and another part has been trying to solve it by getting closer to them. That's something that I observed in myself by Looking, but that kind of behavior, generalized to the rest of the population, would easily get you the kind of social-web stuff Val was talking about. There's also been some playing around with intentionally adopting roles as in the Mythic Mode, and seeing how they influence my behavior and thought patterns, etc.
Replies from: Valentine
comment by Valentine · 2018-03-14T01:46:23.977Z · LW(p) · GW(p)

Yep. I feel understood.

comment by nBrown · 2018-03-08T08:46:02.328Z · LW(p) · GW(p)

Looking became far less mysterious. Thanks.

comment by Ben Pace (Benito) · 2018-03-18T23:42:52.582Z · LW(p) · GW(p)

I see the main contribution of this post as being a personal, phenomenological account of one of the fundamental skills of rationality - this post contains incredibly clear examples and explanations of a very subtle phenomenon. It also helped me (and I believe many others) understand a discussion I'd been confused about. For these reasons, I've curated it.

The main reason I wouldn't want to curate this post is its length, and the fact that I found the second half less clear than the first. But the post is surprisingly readable all-round, so this wasn't a big factor in my decision.

comment by Kaj_Sotala · 2018-03-09T09:59:37.140Z · LW(p) · GW(p)

Cross-posting my comment from another thread here:

--

One way of [explaining what the Buddhist conception of no-self means], which I think should be mostly accurate, is that the thing that is being booed is a belief in the homunculus fallacy.

Dennett, Kurzban, and others have pointed out that there are facts about the way in which the mind and consciousness function which feel deeply counter-intuitive, and that even neuroscientists and psychologists who in principle know that the brain is just a distributed system of separate modules, still often seem to operate under an intuition that there is a single "central" self (as seen from some of the theories that they propose).

I'm not sure whether that's the source of the intuition, but it also seems related that humans seem to have a core system for reasoning about agency which takes as an axiom the assumption that agents exhibit independent, goal-directed motion (as opposed to objects, which only act when acted upon). Which makes sense if you're just reasoning about e.g. social dynamics, but gets you into trouble if you try to understand the functioning of the brain and feel intuitively convinced that there has to be a "central agent" (homunculus) there somewhere, and it can't just be interacting objects all the way down. It's been a while since I read it, but IIRC Kurzban's book had a bunch of examples about how neuroscientists who should know this stuff were still making hypotheses that had the homunculus intuition lurking somewhere.

So when Buddhists say that "there is no self", they are saying that the intuitive belief in the homunculus is wrong; and when they talk about realizing that this is a delusion, they talk about actually coming to internalize this on a deep level.

Replies from: Viliam
comment by Viliam · 2018-03-10T23:19:33.914Z · LW(p) · GW(p)

I wonder whether long-term thinking is a smaller/partial example of this "no self" concept.

I mean, overcoming short-term desires (in order to better satisfy long-term desires) requires realizing that these short-term desires are not you. That e.g. "having your desire to eat chocolate frustrated" doesn't mean that you are being harmed, because "the desire to eat chocolate" is not you; it is possible to frustrate the desire, while benefiting (the other aspects of) you.

It's just that the typical method of breaking this identification is to replace it by identification with something else. Because it is easier to redirect the desire to identify, than to abandon it. So instead of identifying with your short-term goals, you are encouraged to identify with your longer-term goals. The "real you" is no longer the desire to eat chocolate, but e.g. the desire to be healthy, fit, and attractive. Maybe better, but not fundamentally different.

(To use an analogy, it is like abandoning a religion, by converting to a different religion. You no longer believe in god X, because now you believe in god Y instead. Now contrast it with atheism, which means you no longer believe in god X, but there is no replacement for him; you stop having the concept of god. Things that you previously attributed to gods, remain; but they are now seen as natural.)

So it seems to me that this "no self" means realizing that you are neither your short-term goals, nor your long-term goals, nor your body, but also not your social group, or religion, or nation, or whatever replacements people use. There is a brain. It works in certain ways. It lives in a body. It generates the process of consciousness. It creates models of itself. It does all of this as a mechanism consisting of parts. One of the things it does is that it generates a desire to appear internally consistent; it actually sometimes actively works to reduce some great inconsistencies (because regulating the body, which includes the brain itself, is what brains do). But there is no "true you" that this brain is trying to obey or approximate, other than its own content.

But realizing this doesn't effectively turn you into some p-zombie aware of being a mere zombie, just like being able to reduce your chocolate consumption doesn't erase your propensity to like chocolate. The brain doesn't stop working just because its contents represent the understanding of how it works.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-11T17:50:15.271Z · LW(p) · GW(p)

This definitely sounds at least related to me. (obligatory link: Kegan stages presenting moral development as a process of learning to de-identify with more and more things in the manner that you describe)

I really liked the part where you pointed out that identifying with our long-term desires over our short-term desires is equally an act of identification, and no more arbitrary than identifying with the short-term ones. Here's something very similar that I wrote on the CFAR alumni mailing list three years ago:

(note that I wasn't really able to make the kind of thing I describe here, into a long-term habit; my cognitive defusion skills weren't developed enough, so I kept getting sucked into fusing / identifying with different desires again, and didn't have enough things to remind me to keep this habit up. for that matter, my skills aren't developed enough to particularly consistently maintain this now, either. need to meditate more!)

--

Briefly: most of us have probably had the experience where we know that we "should" do something, but feel too tired or otherwise unmotivated to do it. For instance, just before typing this message, I was thinking that I could go to bed soon and should therefore brush my teeth first. At the same time, I was feeling too tired to do so, and was trying to come up with excuses to avoid doing it.

The way I ended up brushing my teeth regardless was by noticing that I was essentially having two conflicting desires in my head: one to brush my teeth and one to go right to bed. And because I was identifying with the desire to go to bed, I wanted to resist the desire to brush my teeth. To some extent the desire to brush my teeth felt like "not-me", an annoying burden I wanted to avoid even though I knew it was the right thing to do.

So what I did was to step outside the two desires, and stop identifying with either one. Instead of "being" the desire to go to bed, I was an external observer, watching two parts of me mutually figure out whose suggested course of action would be more useful for the organism's overall well-being.

And then they came to an agreement that brushing my teeth would be better than not doing so, so I brushed my teeth.

How I got here: this kind of thing started happening some time after I'd come home from the UK workshop, and started trying to take seriously the idea of System 1 and System 2 being allies co-operating for the benefit of everyone. I mentioned in an earlier e-mail that at one point I felt a reluctance to do something, noticed that the reluctance was a valuable signal of there being a possible danger to my well-being, and then summoned a feeling of gratitude towards my System 1 for having provided that warning. I continued to work along similar lines, taking seriously every desire and emotional impulse that I felt and treating it as an informational message from System 1. (Of course, that's not to say that I would have consistently remembered to do that, and I often still forget. But whenever I do remember, I try to do so.)

At a certain point, I decided to take this further. Rather than just taking System 1's input, I would trust System 1 entirely, and let it carry out most of the decisions. So for instance, if I would be out at a party and needed to wake up early the next morning, I'd just let System 1 decide when to go home and make the decision based on my feelings. No System 2 trying to force System 1 into going home at some particular time.

Naturally this raised a certain worry. If I was really just going with my feelings with very limited overriding control from System 2, would I ever go home on time? Or would I just end up partying all night?

My worries were assuaged when I considered that, since I was feeling worried about this possibility, and if emotions are the language in which System 1 communicates, then that worry was also something that was coming from System 1. So it wasn't a question of System 2 being the reasonable one and System 1 being the unreasonable one: rather both motives were already contained within System 1, and if I just gave it a chance, I could trust it to take both motives fairly into account.

That was the point where I started to realize that I wasn't modeling the whole thing right in the first place.

So far I had been treating this as System 2 having its own desires of what to do, ones which were very different from System 1's (e.g. S1: party, S2: make sure we're rested the next morning). But in actuality "System 2's desires" weren't really very different from "System 1's desires" - the only thing that happened to make some desires "System 2's desires" in my mind was that I was identifying with those particular desires. Rather than there being two systems with their own desires, there were many kinds of different desires and emotions, each with different motives.

Reflecting upon this, I also realized that there wasn't a very consistent pattern to which desires I happened to identify with. Sometimes I would identify with a "temptation", and struggle to find an excuse to succumb to it. At other times I would identify with some "virtuous" impulse, and struggle to resist the temptation. Which desire I identified with seemed rather arbitrary... suggesting that I could just choose not to identify with either, or maybe identify with both at the same time.

And it turned out that I could.

I could just treat both as pointing to some thing that would be of value to me (e.g. in the tooth-brushing example, getting rest vs. maintaining hygiene), consider both of them to have important and valuable messages, and then neutrally let the two of them work out which one had the highest priority.

Doing this has led to me having much less internal conflict - at least on the occasions when I remember to do it, that is.

Replies from: tcheasdfjkl
comment by tcheasdfjkl · 2018-03-17T23:09:55.616Z · LW(p) · GW(p)

This reminds me of a thing I formulated a little over a year ago and adopted as a "thought-resolution" (goal of changing some thought patterns) in 2017. I will also paste a thing I wrote back then:

---

"Instead of thinking about tradeoffs between what I WANT to do and what I SHOULD do, try to think about choices as tradeoffs between things I want and other things I want.

Examples:

- “I should go to sleep but I want to read this blog post and ALL the comments” --> “I want to read this blog post and ALL the comments right now. I also want to wake up on time tomorrow and have some energy.”

- “I should get up but I want to stay in bed” --> “I want to stay in bed. I also want to both get a good amount of work done today and finish work at a reasonable time.”

- “I want to eat this brownie but I shouldn’t.” --> “I want to eat this brownie. I also want [various good health outcomes].”

Why? Several reasons:

- Making the things that underlie the “should” more explicit might help me actually consider those things in my decision and ultimately make better choices. “I want outcome X” is more motivating than a general sense of unwanted obligation.

- The “should” framing makes me feel guilty when I do things I “shouldn’t” do - even though I don’t believe making these choices is actually *immoral*, which means guilt isn’t justified (and obviously isn’t pleasant). I’m not really making moral choices in these situations, I’m just making tradeoffs between various things I want, which means I shouldn’t feel guilty even if the tradeoff I make isn’t the best one.

- In general “should” is just shorthand for “this is a better choice for me than the opposite”. But this isn’t actually always true. If there is something unusually interesting happening at 2 a.m. one night, it might be worth it to stay up late and incur the negative consequences. I already do that, with the words “I should go to bed but I really don’t want to miss this” - but that makes it sound like a bad choice even when it’s not!"

---

which I guess can be reframed as "I should identify equally with my short-term desires and my longer-term ones; my future self is not a different person from me". De-identifying entirely was not a goal (and I'm not sure it is a goal now, either, though in some ways I do want to move in that direction).

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-17T23:40:48.473Z · LW(p) · GW(p)
In general “should” is just shorthand for “this is a better choice for me than the opposite”.

I think it's more complicated than this. In my experience many shoulds come from social pressure, so "I should do X" is often implicitly something like "if I don't do X then the tribe will disapprove of me," e.g. I should exercise, I should eat well, I should study, and so forth.

comment by PeterBorah · 2018-03-08T17:50:43.945Z · LW(p) · GW(p)

This is excellent, thank you for writing it.

I'm not as advanced as you, but I've gotten many of the earlier benefits you describe and think you've described them well. That said, I have some confusion about how stuff like this paragraph works:

And because those emotions no longer felt aversive, I didn’t have a reason to invest in not feeling those things - unless I had some other reason than the intrinsic aversiveness of an emotion to do so.

What does it mean to have another reason beyond the intrinsic aversiveness of an emotion? Who's the "I" who might have such a reason, and what form does such a reason take?

This is a specific question that comes out of a more general confusion, which is: why do descriptions of enlightenment and other advanced states so often seem to claim that enlightenment is almost epiphenomenal? If it were really the case that it didn't change anything, how would we know people had experienced it?

Replies from: Kaj_Sotala, gworley
comment by Kaj_Sotala · 2018-03-08T18:51:22.160Z · LW(p) · GW(p)

These are good questions... unfortunately, in my current state of mind, I don't feel confident in my ability to answer them accurately.

Several of the paragraphs describing my experience were written based on notes that I made while in that kind of state, as well as memories of the explanations that I thought up while in that state. But even while in that state, I recognized that there's probably a bit of a verbal overshadowing effect going on, with the verbal description mostly but not quite matching my actual experience of the state, with that not-quite-it version nevertheless becoming the main thing that I'd recall from the state when no longer in it.

So, while I remember enough of that state to say that my descriptions here are probably roughly right, the level of detail that your question is trying to tease out is too precise for me to produce a reliable answer, in my current mostly-normal-again state.

I'll see if I can give you a better answer the next time I end up in a state like that. :-)

comment by Gordon Seidoh Worley (gworley) · 2018-03-08T21:42:40.518Z · LW(p) · GW(p)

For what it's worth I've addressed this issue a bit here. A relevant quote from it:

If we include experiences of experiences of experiences in our ontology rather than compressing them into experiences of experiences, thus making a systems-level demand for ontological complexity, we can understand contentment as an experience of happiness towards all experiences of experiences and use it to direct tranquilist axiological reasoning. In this way contentment wraps happy and sad experiences in an experience of happiness, making it of a different type and avoiding apparent contradiction by adding happiness to rather than changing the original experience from pleasure or suffering.
Having deconstructed contentment and understood tranquilism precisely, it appears my original concerns about feedback being suffering were confused because if suffering, a negative valence experience, is something we can be content with, then in this context suffering must be an experience of experience and thus feedback cannot necessarily be suffering because feedback can exist as a direct experience not just a meta-experience.
comment by Vaniver · 2019-11-22T19:54:19.451Z · LW(p) · GW(p)

I think this post basically succeeds at its goal (given the discussion in the comments), and feels like an important precursor to discussion of some of the directions the LW community has been moving in for the last several years. I think the connection to cognitive fusion was novel to me when I first read it, but immediately clicked into place.

comment by SilentCal · 2018-03-21T18:20:03.553Z · LW(p) · GW(p)

Here's something puzzling me: in terms of abstract description, enlightenment sounds a lot like dissociation. Yet I'm under the impression that those who experience the former tend to find it Very Good, while those who experience the latter tend to find it Very Bad.

Replies from: Qiaochu_Yuan, Kaj_Sotala
comment by Qiaochu_Yuan · 2018-03-21T20:02:47.831Z · LW(p) · GW(p)

Yeah, it's certainly very easy to confuse defusion and dissociation. Dissociation is something like trying not to let something in at all (a sensory experience or an emotion or a memory), whereas defusion is something like letting it in fully but then - not sure how to describe this part - feeling whatever you feel about it, and feeling calm one meta level up about whatever that is?

comment by Kaj_Sotala · 2018-03-21T18:58:46.463Z · LW(p) · GW(p)

That puzzles me too!

I've experienced both this stuff and mild dissociative states on occasion, and yeah, they're indeed different in the way that you describe. And I don't have a model which would explain the difference.

My best guess is that the pattern of defusion is different, and that in dissociation you're somehow defused from your normal sense of self while still remaining fused with the conceptual structures that say that having a sense of self is important.

Or something. :)

comment by paulfchristiano · 2018-03-20T06:18:04.569Z · LW(p) · GW(p)

I'd like not to suffer for two reasons:

1. I'm compelled to avoid suffering, which is maladaptive; if I didn't care about suffering, I'd get more of the other things I want.

2. Suffering is bad; I'm interested in suffering less whether or not it changes my behavior.

I'm not really clear on what you mean when you say "suffering isn't aversive." (ETA: I meant "pain isn't aversive" or "pain doesn't cause suffering.") Intuitively, I'd expect it to mean that you fix both #1 and #2. Someone for whom suffering isn't aversive could, for example, regularly experience extreme pain just to prove the efficacy of their techniques or earn a small stipend.

My understanding of brains suggests that this is probably not possible to maintain. So it would be very interesting to learn that it is possible. As far as I can tell no one has ever done the kind of stunt that would convincingly show they have this ability, which makes me a bit suspicious but could have other explanations.

Fixing #2 would be consistent with my understanding of brains. But in that case "pain is no longer aversive" would be inapt, since in fact pain is still causing avoidance. Moreover, in this case it seems hard to distinguish "stop suffering" from "become deluded about whether you are suffering," and I'm not sure how I'd tell the difference.

Fixing #1 without fixing #2 would seem quite bad.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-20T10:17:31.086Z · LW(p) · GW(p)

I should note that I didn't say that suffering wouldn't be aversive, I said that pain isn't aversive. My model is basically that suffering is aversion (to e.g. pain), so it wouldn't make sense to say that suffering isn't aversive. So I would reword your #1 as "I'm compelled to avoid pain, which is maladaptive".

That said, based on what I've been able to observe so far, there are at least three things that happen:

1. Something like pain asymbolia, where pain that is currently being experienced ceases to be aversive in both a behavioral and experiential sense: it neither feels subjectively unpleasant, nor do I do anything to disengage myself from the situation.

2. An effect where, if #1 happens often enough, my anticipations about the unpleasantness of painful experiences update: the anticipation of a painful event ceases to be aversive (in both the experiential and behavioral senses).

3. An effect where anticipations which have not updated become less aversive, in such a way that I no longer experience the anticipations themselves as being unpleasant. This seems to be a special case of #1, since the pain from anticipated pain is by itself an instance of pain. However, this doesn't necessarily affect my behavior with regard to moving towards the painful thing: the anticipation of something unpleasant still acts as a behavioral disincentive.

The last one probably needs some elaboration. As an example, there's one somewhat unpleasant thing that I should be doing, which I've now been procrastinating on for several weeks. What stuff like Looking has achieved is that I'm capable of holding the sense of "I should be doing this but the thought feels unpleasant" in my mind and staring right at it, as opposed to feeling compelled to drown it out of my mind completely, by e.g. engaging in entertainment that would dull my experience of it.

This seems indirectly useful, since if I'd be flinching away from the very thought of needing to do this thing, then I couldn't even try to troubleshoot the issue. Now, I'm capable of at least holding the thought in my mind, which still hasn't caused me to actually do the thing, but at least it has enabled me to try to figure out where the problem is, as well as helping me use the thought for structured procrastination (as opposed to my day being entirely wasted because I try to just play a game or something to get the whole thing out of my mind).

Someone for whom suffering isn't aversive could, for example, regularly experience extreme pain just to prove the efficacy of their techniques or earn a small stipend.

There are a few studies documenting something like this, but the ones that I'm aware of seem to be few in number and have all the standard caveats related to psychological research. Still, e.g. this review seems plausible to me, because it reports studies which found that what it calls "open monitoring"-style meditation seemed to reduce pain sensitivity (even when its practitioners were not actively meditating) while "focused attention"-style meditation did not have the same kind of effect; of these two, "open monitoring" roughly means "Looking-style practices", so the results are what I'd expect.

Replies from: paulfchristiano
comment by paulfchristiano · 2018-03-20T16:42:30.109Z · LW(p) · GW(p)

Those results look like they are in the placebo effect range, not the "qualitative change in the way that pain is processed" range.

Is your understanding that it should be possible to completely or almost completely eliminate the pain-suffering connection? If so, do you believe that any humans have actually achieved that?

(ETA: If the answers are "no" I don't think that's particularly damning. Mostly the relevant thing is that by default I'm not going to adjust me models of cognition based on this kind of report, worst case is that I miss out by failing to incorporate some evidence.)

Replies from: 9eB1, Kaj_Sotala
comment by 9eB1 · 2018-03-26T07:04:36.709Z · LW(p) · GW(p)

The case of the Vietnamese monk who famously set himself on fire may meet your criteria. The Vietnamese government claimed that he had drugged himself, but it's hard to imagine a drug that would allow you to get out of a car under your own power and walk to a seated position, and then light a match to set yourself on fire but still have no reaction as your flesh burns off.

comment by Kaj_Sotala · 2018-03-20T18:00:43.442Z · LW(p) · GW(p)
Is your understanding that it should be possible to completely or almost completely eliminate the pain-suffering connection? If so, do you believe that any humans have actually achieved that?

I don't have enough information to answer with any higher confidence than "maybe" to the first question. (For the second, I don't know what it would mean in practice for it to be a thing that's possible, but which no human has achieved - if nobody has managed to achieve it yet, including any of the monks who have the option of spending basically all their waking hours meditating, then it seems to be impossible for all practical intents and purposes.)

comment by Aella · 2018-03-21T16:57:20.830Z · LW(p) · GW(p)

I'm not sure if you've tried psychedelics. Psychedelics have very different effects on people, but I was very lucky; on me they produced exactly the effect you described - reducing my mental processes to far more granular levels. I did psychedelics enough that now this type of 'unfusing' process feels somewhere between 'default' and 'always present but sleeping' to me. I feel rendered mute when trying to talk about this, because this topic triggers a strong inability in myself to remain fused with the thoughts I am trying to handle. It also makes me cry, which makes discussions awkward.

I've spent a lot of time in rationalist communities trying (and failing) to talk about this topic (cause of the crying). Reading stuff like this makes me feel a lot of emotions and gives me a desire to be around you and Valentine and the others who are saying similar things.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-21T18:46:25.987Z · LW(p) · GW(p)

Thank you for sharing. :) I feel touched to hear that my article affected you in such a major way.

Would be happy to hang out with you in case we're ever in the same country and continent!

comment by ZeitPolizei · 2018-03-11T17:43:42.974Z · LW(p) · GW(p)

[Note: mostly just me trying to order my thoughts, kind of hoping someone can see and tell me where my confusion comes from]

So the key insight regarding suffering seems to be that pain is not equal to suffering. Instead there is a mental motion (flinching away from pain) that produces (or is equal to?) suffering. And whereas most people see pain as intrinsically bad, Looking allows you to differentiate between the pain and the flinching away, realizing that pain in and of itself is not bad. It also allows you to get rid of the flinching away, thus eliminating the suffering, but without eliminating the pain. But is the flinching away intrinsically bad? Or is it also possible to defuse from the flinching in a way that makes it less unpleasant?

And then, is there also an equivalent for good experiences? Pain is to suffering as pleasure is to…? Is there a mental motion of turning towards, or welcoming an experience, which is ultimately responsible for seeing pleasurable experiences as good? And if the flinching away is in some way intrinsically bad, is this opposite motion intrinsically good?

Now, once you get that pain is not equal to suffering, and you've thus managed to eliminate suffering for you personally, what reasons remain to try to change something about what you expect to experience in the future? If you still care about other people suffering, then of course there is plenty to do, to reduce other people's suffering by reducing the pain they experience. But it wouldn't really be about the pain, just the reaction to the pain.

Then, suppose we somehow managed to ensure that all people (or conscious entities) no longer experience the flinching-from-pain kind of suffering. Would there still be reasons to make the world "better", or would we be content with things just unfolding however, because as long as we don't suffer over it, nothing is intrinsically bad? Is the kind of suffering that comes from the flinching away from pain maybe the only thing that is bad in a morally relevant way? Once suffering is out of the picture, what kinds of wants, preferences, reasons or values remain, that actually make a difference to how the world is supposed to look? Intuitively, a world in which suffering is eliminated via getting rid of an aversion to pain feels very much like a world where everyone is wireheaded and would contain very little value, if any. I have a sense of it being a bad thing if people are feeling okay (or great, in the case of actual wireheading) while the world is actually really shitty. Now is this sense of it being a bad thing due to values that I hold which go beyond pleasure and suffering, and are they stable under reflection? Or is the correct conclusion that yes, once nobody suffers anymore, it doesn't matter if the rest of the world looks really bad? Is the reason it feels so bad simply because I still have the alief that pain is intrinsically bad, and Looking would allow me to see that pain really is in a way irrelevant?

if you truly step outside your entire motivational system, then that leaves the part that just stepped out with no motivational system,

And if you see yourself going to the store to get some food, well, why not go along with that? After all, to stop acting as you always have, would require some special motivation to do so.

Even if you do manage to defuse from everything that causes you suffering, your existing personality and motivational system will still be in charge of what it is that you Look at in the future.

These quotes, as well as what I remember others saying about enlightenment, make it sound like there is still ultimately a "self" or "I" that is the one that "steps outside your motivational system", "sees yourself going to the store", "manages to defuse", or "sees through the illusion of the self". But if I understand correctly, what actually happens is that there is a conscious process that makes one of these motions, but it doesn't have any privileged position and is no more the "true self" than e.g. the urge to go to the store. So ultimately all these different parts, insights, and thoughts are just part of the same single person. I would have initially expected this to mean that there would be feedback between the different parts (e.g. realizing pain isn't so bad should also eliminate the motivations for avoiding pain). But upon reflection, it seems like those kinds of insights are only possible because there is no feedback between the different parts? I feel like I may be mixing together some things here that are actually separate.

Replies from: Qiaochu_Yuan, Hazard, eisher-saroya
comment by Qiaochu_Yuan · 2018-03-12T19:37:46.940Z · LW(p) · GW(p)
But is the flinching away intrinsically bad? Or is it also possible to defuse from the flinching in a way that makes it less unpleasant?

I'm confused about what you mean by "intrinsically bad" here, and especially given the relationship of the second question to the first question, suspect that your concept of "intrinsically bad" conflates at least two things. Your second question is much easier to answer: yes, you can defuse from flinching, and yes, that makes it less unpleasant.

Is there a mental motion of turning towards, or welcoming an experience, which is ultimately responsible for seeing pleasurable experiences as good? And if the flinching away is in some way intrinsically bad, is this opposite motion intrinsically good?

Yes, there is a mental motion of welcoming an experience, and you can do it to any experience, not just pleasurable ones; you can even find joy in welcoming any experience, not just pleasurable ones. I am still confused about what you mean by "intrinsically good."

Now, once you get that pain is not equal to suffering, and you've thus managed to eliminate suffering for you personally, what reasons remain to try to change something about what you expect to experience in the future?

Because you want to. (I'm not sure how to explain what I mean by this. For me the internal experience of "I want this" is quite different from the experience of "I am chasing after this in order to escape from pain / suffering," but the distinction may not be experientially clear for many / most people.)

Then, suppose we somehow managed to ensure that all people (or conscious entities) no longer experience the flinching-from-pain kind of suffering. Would there still be reasons to make the world "better", or would we be content with things just unfolding however, because as long as we don't suffer over it, nothing is intrinsically bad?

Yes, lots. I used to flinch away from pain constantly; I do it less now, which means I'm more free to do things that I want to do, like make music and hug people and generally flourish and encourage human flourishing. Also, I increasingly suspect you have some confusion wrapped up in your concept of "intrinsically good / bad."

Is the kind of suffering that comes from the flinching away from pain maybe the only thing that is bad in a morally relevant way?

Nope.

Once suffering is out of the picture, what kinds of wants, preferences, reasons or values remain, that actually make a difference to how the world is supposed to look?

Uh... all of... the other ones?

Or is the correct conclusion that yes, once nobody suffers anymore, it doesn't matter if the rest of the world looks really bad?

I'm confused about what you mean by this and what it would mean to answer this question, mostly because I don't know what you mean by "matter."

I feel like I may be mixing together some things here that are actually separate.

Yes, I think so too. Can you try paying a lot of attention to what comes up when you think about the concept of "intrinsically good" or "intrinsically bad" (edit: also "suffering" and "mattering") and just write down literally everything that pops into your head, including words or sentences that sound outrageous or too dramatic or whatever?

comment by Hazard · 2018-03-18T12:53:40.103Z · LW(p) · GW(p)

In this post, Nate Soares outlines a thing he calls "Moving Towards the Goal", which feels incredibly relevant to this conversation.

This leads us to my second trick for avoiding akrasia: I am not Trying Really Hard. People who are Trying Really Hard give themselves rewards for progress or punishments for failure. They incentivize the behavior that they want to have. They keep on deciding to continue doing what they're doing, and they engage in valiant battle against akrasia. I don't do any of that.
Instead, I simply Move Towards the Goal.

I'd highly recommend Nate's Replacing Guilt sequence. In a very concrete, "traditional LW" way, he lays out how you can still do cool stuff, yet not think in terms of shoulds, guilt, or intrinsically good or bad.

comment by Eisher Saroya (eisher-saroya) · 2018-03-19T03:02:35.970Z · LW(p) · GW(p)
And then, is there also an equivalent for good experiences? Pain is to suffering as pleasure is to…? Is there a mental motion of turning towards, or welcoming an experience, which is ultimately responsible for seeing pleasurable experiences as good? And if the flinching away is in some way intrinsically bad, is this opposite motion intrinsically good?

I wonder about this too. If there is pleasure and the mental experience of welcoming a pleasure, then what happens if you stop 'welcoming a pleasure'?

Wouldn't you no longer be motivated to pursue pleasure? How would you ever feel happy? Would pleasure feel 'bland' or unsatisfying? I also wonder if it's possible to mistakenly decouple pleasure and 'welcoming a pleasure' without ever meditating?

Replies from: Raemon
comment by Raemon · 2018-03-19T05:21:56.339Z · LW(p) · GW(p)

Seems like "Are Wireheads Happy?" is relevant to this.

comment by Gordon Seidoh Worley (gworley) · 2018-03-08T21:59:22.746Z · LW(p) · GW(p)

Although there may be different ideas about enlightenment within different lineages, what Kaj describes is pretty consistent with the way we think about enlightenment within Soto Zen. That is, enlightenment is just the state of always being awake to what's going on (Looking as Val put it), or as I would probably put it, to be able to hold as subject only intentionality and hold everything else as object. It doesn't give you special abilities or anything like that; it just means you notice what's happening.

That said, noticing what you're up to can have pretty profound effects on you over time because much of how we operate depends on hiding from ourselves, which is why Hanson sometimes talks about homo hypocritus, LW talks about heuristics and biases, and psychotherapy may talk about the shadow. I've certainly seen myself change a lot as a result of past subject-object shifts (a more technical term I would use for kensho, which also highlights that there is more than one level of awakening) as I've detailed to some extent here.

comment by Ben Pace (Benito) · 2018-03-18T23:17:07.196Z · LW(p) · GW(p)

In the section On why enlightenment may not be very visible in one’s behavior which of the two things do you mean to argue?

  • Learning to Look at suffering is not likely to make a visible change in one's behaviour
  • Learning to Look is not likely to make a visible change in one's behaviour

Because I would buy the first claim but not the second. I expect that if someone in general is able to not flinch away from painful experiences, they're able to have better long-term relationships. I notice how people I talk to often flinch away from (a) silence, (b) awkwardness. They will make any assumptions required ("Oh, of course that was my fault, I apologise") to avoid having to deal with the fact that we have different norms and will have to explicitly navigate them. While this reduces immediate discomfort, it doesn't strengthen the long-term relationship as much.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-19T10:51:04.614Z · LW(p) · GW(p)

It's a little complicated, but my current model is something like "learning to Look at suffering is going to make a visible change in your behavior, but the gains from some of the later-stage steps aren't necessarily as large as you'd expect from what a naive suggestion of 'overcoming suffering' might imply".

But to use your example of long-term relationships, I've definitely noticed improvements in my ability to e.g. just be okay with things that cause tension in my relationships with other people, in a way that lets me accept those tensions rather than react with an unhealthy need to "fix" the other person. (Because obviously if something about my relationship with someone else doesn't work the way I'd like, it's the other person that needs fixing... or at least, so some parts of my mind seem to think. But they've been a lot less vocal about this recently.)

comment by ryan_b · 2018-03-09T18:21:38.615Z · LW(p) · GW(p)

Prodigious improvement over other explanations I have seen! I have no inherent objection to identifying things as ineffable, but there are usually boundaries around the ineffability which can be identified. Some of them are very well understood.

For example, childbirth. There is no way to compress the experience of childbirth in such a way that someone who has not gone through it themselves can be said to understand the experience: but we have doctors and midwives who specialize in managing it; books and classes about how to approach it and deal with it safely; tools and techniques for making it safer and more comfortable. Yet the experience itself remains ineffable.

Another example, combat. There is also no way to compress that experience. But we have large institutions that train people how to recognize and prepare for such an experience, by the million. We have specialized areas of medical knowledge that arose specifically to deal with the aftermath of these experiences. Yet the experience itself remains ineffable.

I wonder about ineffably-ineffable experiences; it seems like once we have a good institutional buildup for producing the prerequisite experiences then maybe we could build ineffable institutions to approach the next level. It seems like the population of enlightened-veteran-mothers will be very small though.

comment by romeostevensit · 2018-03-09T17:53:18.623Z · LW(p) · GW(p)

Newbies to meditation talking about enlightenment sounds about as dumb as science reporters talking about quantum mechanics. The whole discussion will improve if we taboo the word. Thinking that minor attainments are enlightenment has a several thousand year history at this point. The general advice given for attainments is give it 6 months before you go broadcasting it and preferably talk to a teacher who is farther along than you. For the rationality community I strongly encourage chatting with Michael Taft or Kenneth Folk, both of whom are available for online video conferencing. They are very good at separating out epistemic claims from perceptual/ontological claims and have decades of experience.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-03-09T20:52:31.278Z · LW(p) · GW(p)

This sounds like a criticism of me speculating about the nature of enlightenment. I acknowledge that my hypothesis is based on very early-stage data and might be wrong / is the weakest part of my article (and I flagged it as such). But I felt like some speculation was necessary, in order to address the evidence brought up in the earlier threads which suggested that this whole enlightenment thing is just wireheading with no real benefit. It would have felt logically rude to simply write an article about the benefits of insight without making at least some attempt to square my current understanding of its usefulness with the evidence that had previously been offered for it being just useless wireheading.

If you think that my speculation is just blatantly wrong, as you seem to be implying, then I would appreciate a summary of a position that's more correct while also engaging with those criticisms.

Replies from: romeostevensit
comment by romeostevensit · 2018-03-09T21:23:33.432Z · LW(p) · GW(p)

Sorry that that sounded overly critical. I mostly wanted to alert the people reading these recent posts. I think this post is useful. The question of which aspects of contemplative practices/schools are just wireheading (at least some of them likely are) also isn't helped by the 'enlightenment' trope IMO.

PMd you more about the last part.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-03-09T22:57:07.070Z · LW(p) · GW(p)

If someone wants to find out more about whether/which contemplative practices/schools are more than just wireheading, what's the best (i.e., lowest cost/risk) way of doing that? Are you aware of any good evidence or arguments about this, that haven't already been brought up here recently?

Replies from: romeostevensit
comment by romeostevensit · 2018-03-09T23:42:27.688Z · LW(p) · GW(p)

there are no summaries that I have encountered that I am truly happy with, and my guess based on past experience is that if I did, I would disagree with that assessment a year from now. Getting genre savvy in this way is apparently part of the reason teachers are mum on many aspects. My own motivation is based on an attempt to suss out upstream levers in a scope sensitive way, ie what are the modal properties of truth seeking processes. Attacking that one with an intent to dissolve misunderstandings eventually gets you out of the car. Or at least gets you a hand out the window.

Also, thanks for the useful thought: we have lots of thoughts about what counts as epistemic evidence. What counts as ontological evidence? Teleological evidence? Traditional answers are pretty low complexity: coherence, compressibility, reference class forecasting. Underspecified.

edit: I do recommend Michael Taft and Kenneth Folk's writings as well. As well as their teacher, Shinzen Young. Though he is more old school being from the previous generation and thus having fewer or incorrectly used shibboleths.

I'll also mention that the traditional answer is that people have to find teachers they resonate with. The mindscape is large, and the next most useful step is non-obvious from different places within it. (See: law of equal and opposite advice.) Meta-level advice is more like "you can't do gradient descent on a flat surface" (Harrison Klaperman), i.e. if you can't perceive movement in the z-axis. A good teacher should be giving you fairly non-mysterious answers. A great example is the question "Who authorized you to teach?" If you get a non-answer, or you look up the person they mentioned and they seem batshit, well, no problem grabbing useful ideas from them, but definitely don't take them super seriously. Another great example is that effective practices should show results within a few weeks. If a teacher tells you to do something, it doesn't work, and you go back and they tell you it might take years, run away fast.

Replies from: ESRogs
comment by ESRogs · 2018-03-10T21:00:08.918Z · LW(p) · GW(p)
What counts as ontological evidence? Teleological evidence?

What do you mean by these terms?

Would ontological evidence be evidence about what is (in contrast to epistemic evidence being about which statements are true)? It's not clear to me that you'd want to evaluate answers to questions about what is differently from other kinds of claims.

Replies from: romeostevensit
comment by romeostevensit · 2018-03-11T15:02:47.598Z · LW(p) · GW(p)

Ontological: heuristics that result in your dividing up the world into categories in a certain way. Descriptive, prescriptive. What are your tacit heuristics, what is the result, do you endorse this result?

Teleological: same but for intentionality, goal directed behavior.

comment by chodpa · 2024-09-11T17:18:22.601Z · LW(p) · GW(p)

including but not limited to books like The Mind Illuminated, Mastering the Core Teachings of the Buddha, and The Seeing That Frees

What a trio of books! All three of those sit prominently on my bookshelf, and have had significant impacts on me at different times. I absolutely treasure The Seeing that Frees. I am so happy that Rob lived long enough to give birth to that book, and his many talks online - though of course I would have loved to have seen his later teachings on Soulmaking in book form.

comment by Stuart_Armstrong · 2019-11-28T10:17:25.901Z · LW(p) · GW(p)

Excellent and well worked on, suggesting many different interesting ideas and research avenues.

comment by Ruby · 2019-11-24T16:04:01.499Z · LW(p) · GW(p)

Seconding Vaniver.

comment by Chris_Leong · 2024-07-27T15:37:32.524Z · LW(p) · GW(p)

You can’t defuse from the content of a belief, if your motivation for wanting to defuse from it is the belief itself.

 

I don't think this is true. You can use the desire to defuse from the belief to get yourself to a point where you are trying to defuse from it; then you just let the desire to defuse go (even if just temporarily) so that you can actually defuse.

comment by romeostevensit · 2021-01-09T07:19:51.270Z · LW(p) · GW(p)

Pain as information about option value is a nice compact frame. Thanks.

comment by ship_shlap (Bluestorm_321) · 2024-04-22T15:17:34.050Z · LW(p) · GW(p)

I won't correct everything I find wrong, but I felt that the "Understanding Suffering" section was completely off. I will just mention one of the major points:


Remember, enlightenment means that you no longer experience emotional pain as aversive. In other words, you continue to have “negative” emotions like fear, anger, jealousy, and so on - you just don’t mind having them.

 

This is utterly wrong. Enlightenment in Buddhism means emotional pain cannot arise, period. In Buddhism, there are five "hindrances" or negative mental states: desire, aversion, compulsion/agitation, slothfulness and remorse. This list is said to encapsulate all possible negative feelings. In an enlightened person, these hindrances cannot arise. The "fetter", the bond which causes a person to experience these, is uprooted.

Secondly, in Buddhism, it's believed that negative mental states are always a bad and painful experience, so it's impossible to not mind having them. If you think about it, you can't be sad and not mind it. You can't be angry but not mind it. There are a few Buddhist circles which believe you can be detached from anger or desire, but this doesn't make sense because in Buddhist theory, such mental states arise from attachment in the first place.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2024-04-22T20:34:53.846Z · LW(p) · GW(p)

Yeah, some Buddhist traditions do make those claims. The teachers and practitioners who I'm the most familiar with and trust the most tend to reject those models, sometimes quite strongly (e.g. Daniel Ingram here). Also, near the end of his life, Culadasa came to think that even though it might at one point have seemed like he had predominantly positive emotions in the way that some schools suggested, in reality he had just been repressing his negative emotions, with harmful consequences.

Culadasa: As a result of my practice, I had reached a point where emotions would arise but they really had no power over me, but I could choose to allow those emotions to express themselves if they served a purpose. Well, it’s sort of a downweighting of emotions – negative emotions were strongly downweighted, and positive emotions were not downweighted at all. So this was the place I was coming from as a meditation teacher. I just never really experienced anger; when something would cause some anger to arise, I’d notice it and let go of it, and, you know, it wasn’t there. Negative emotions in general were just not part of my life anymore. So it was a process of getting in touch with a  lot of these emotions that, you know, I hadn’t been making space for because I saw them as unhealthy, unhelpful, so on and so forth.

Michael: So, in essence, you had bypassed them.

Culadasa: Yes, it’s a bypassing. I think it’s a very common bypassing, too, when somebody reaches this particular stage on the path. I mean, this is a bit of a digression, but I think it maybe helps to put the whole thing into perspective, the rest of our conversation into perspective…

Michael: Please digress.

Culadasa: Okay. So this is a stage at which the sense of being a separate self completely disappears. I mean, prior to that, at stream entry, you know, there’s no more attachment to the ego, the ego becomes transparent, but you still have this feeling that I’m a separate self; it still produces craving; you have to work through that in the next path, and so on and so forth. But this is a stage where that very primitive, that very primal sense of being a separate self falls away. Now, what I know about this from a neuroscience point of view is that there’s a part of the brainstem which was the earliest concentration of neurons that was brain-like in the evolution of brains, and there are nuclei there that were responsible for maintaining homeostasis of the body, and they still do that today. One of their major purposes is to regulate homeostasis in the body, blood pressure, heart rate, oxygenation of the blood, you name it, just every aspect of internal bodily maintenance. With the subsequent development of the emotional brain, the structures that are referred to as the limbic system, evolution provided a way to guide animals’ behaviors on the basis of emotions and so these same nuclei then created ascending fibers into this limbic system, from the brainstem into these new neural structures that constituted the emotional brain.

Michael: So this very old structure that regulated the body linked up with the new emotional structures.

Culadasa: Right. It linked up with it, and the result was a sense of self. Okay? You can see the enormous value of this to an animal, to an organism. A sense of self. My goodness. So now these emotions can operate in a way that serves to improve the survival, reproduction, everything else of this self, right? Great evolutionary advance. So now we have organisms with a sense of self. Then the further evolution of cerebral cortex, all of these other higher structures, then that same sense of self became integrated into that as well. So there we have the typical human being with this very strong, very primal sense that “I am me. I am a separate self.” We can create all kinds of mental constructs around this, but even cats and dogs and deer and mice and lizards and things like that have this sense of self. We elaborate an ego on top of it. So there’s these two aspects to self in a human being. One is the ego self, the mental construct that’s been built around this more primal sense of self. So this is a stage at which that primal sense of self disappears and what usually seems to happen is, at the same time, there is a temporary disappearance of all emotions. I think that we’ll probably eventually find out that the neural mechanism by which we bring about this shift, that these two things are linked, because the sense of self is – its passageway to the higher brain centers, which constitute the field of conscious awareness that we live in and all of the unconscious drives that we’re responding to, the limbic system, the emotional brain, is the link.

Michael: Yes.

Culadasa: So something happens that interrupts that link. The emotions come back online, but they come back online in a different way from that point. So instead of being overcome by fear, anger, lust, joy, whatever, these things arise and they’re something that you can either let go of or not. [laughs] That’s the place where I was.

Michael: They seem very ephemeral…

Culadasa: Yes, right. They’re very ephemeral, and very easy to deal with, and there is a tendency for other people to see you as less emotional – and in truth you are, because you’ve downregulated a lot of the more negative emotions. But you’re by no means nonemotional; you’re still human, you still have the full gamut of human emotions available to you. But you do get out of the habit of giving much leeway to certain kinds of emotions. And the work that I was doing with Doug pushed me in the direction of, “Let’s go ahead and experience some of those emotions. Let’s see what it feels like to experience the dukkha of wanting things to be different than the way they are.” So that’s what we did. And I started getting in touch with these emotions and their relationship to my current life situation, where I wasn’t fulfilling my greatest aspirations because I was doing a lot of things – stuff that had to be done, but that I had no interest in – and that’s what occupied my time.

I'm guessing that something similar is what's actually happening in a lot of the schools claiming complete elimination of all negative feelings. Insight practices can be used in ways that end up bypassing or suppressing a lot of one's emotions, but the negative feelings still have effects on the person; they just go unnoticed.

> If you think about it, you can't be sad and not mind it. You can't be angry but not mind it.

This disagrees with my experience, and with the experience of several other people I know.

Replies from: Bluestorm_321
comment by ship_shlap (Bluestorm_321) · 2024-04-23T07:06:45.281Z · LW(p) · GW(p)

Based on the link, it seems you follow the Theravada tradition. The ideas you give go against the Theravada ideas. You need to go study the Pali Canon. This information is all wrong, I'm afraid. I won't talk more on the matter.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2024-04-23T09:41:37.922Z · LW(p) · GW(p)

> Based on the link, it seems you follow the Theravada tradition.

For what it's worth, I don't really follow any one tradition, though Culadasa does indeed have a Theravada background.

comment by bfinn · 2019-12-11T18:15:03.667Z · LW(p) · GW(p)

On a small point, maybe it would be helpful to use a more natural term than 'defusion', e.g. 'detachment' (if that expresses it clearly), or perhaps something like 'objectivity'.

So as to avoid the confusion of introducing a new technical term when something can be expressed just as well with a familiar one.

comment by sampe · 2018-07-28T20:24:34.512Z · LW(p) · GW(p)

Kaj, where can I read more about the three marks of existence? Preferably something as detailed as possible while still being readable in no more than a full day.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-07-29T08:33:00.711Z · LW(p) · GW(p)

Good question; I haven't really encountered anything that would provide a very good and comprehensive explanation in third-person terms. The sources that I've seen are more concerned with giving you pointers to things in your own experience that you can investigate and then come to experience directly, as that's the thing that actually causes your mind to update, whereas simply getting a conceptual description doesn't.

comment by Dan Vekhter (dan-vekhter) · 2018-05-07T09:47:31.357Z · LW(p) · GW(p)

I think that a succinct statement of enlightenment would be: one flavor.

You notice the oneness, the sameness, of all subjective experiences, and cease flinching from certain ones and grasping at others.

Any thoughts?

comment by rk · 2018-03-29T11:46:52.267Z · LW(p) · GW(p)

> Based on how I experienced things when I had the experience that made enlightenment seem within reach, something like a lack of noticeable change is in fact exactly what I would expect from many people who become enlightened.

If this is the case, our experience becomes slightly surprising from an anthropics-ish point of view.

That is, if there are multiple ways to experience the world that are instrumentally the same (like suffering from pain or not), whichever one we happen to have is a random draw. It seems we could have evolved to have any of them with equal probability. The more options there are, the more it would be nice to have a hypothesis that puts extra weight on ending up with the experience we have.

Of course, one could bite the bullet and just say "well, we had to end up with one of the possible experiences! We're just kind of unlucky". One could also argue that, though the behaviours are the same, this experience is somehow more costly (maybe awareness of the constituents of your experience is just energetically costly), and recover naturality that way.