Meditation, insight, and rationality. (Part 1 of 3)
post by DavidM · 2011-04-28T20:26:29.071Z · LW · GW · Legacy · 123 comments
Contents: Why meditate? · What's wrong with your mind. · How meditation works. · Benefits. · Risks. · Addendum.
For millennia, the practice of meditation has been deeply intertwined with many of the world's major and minor religious and spiritual traditions, as a technique aimed at everything from developing magical powers to communing with gods and demons. By contrast, during the last few decades in the West, enthusiasts have promoted meditation (along with a variety of its secularized offshoots) as a good way to cultivate relaxation, creativity, and psychological self-improvement in the context of our hurried and stressful lives. Because of this variegated cultural history, it's no surprise that many people see it either as an exercise that leads to irrationality and madness, or as a harmless but questionably effective pop-science fad---sometimes both at once!
Set against this backdrop, small and somewhat private groups with an interest in meditation have long gathered together in secret to discuss and learn. Not satisfied with the popular dogmas, they got down to figuring out, as best they could, whether meditation really leads to anything that could be called "enlightenment": by experimenting on themselves, comparing notes with others, seeing where it led them, and seeing whether it would repeatably lead others to the same point. Because their subject is taboo, they have labored in the shadows for a very long time; but the modern mass-adoption of the internet has allowed what they know to reach a widening audience. And while they fought for years to discover these things, you now have the opportunity to hear about them merely for the cost of your internet connection---for some of you that may be a blessing, but guard your minds so that it isn't also a curse.
Before I begin, there are three caveats:
1) The perspective I'm going to present is one most closely associated with Buddhism, and you may be inclined to ask "Is this a good description of what Buddhists believe?" or "Is this what Buddhism is really about?" or even shout "This doesn't sound like Buddhism to me!" The relation between this material and Buddhism is an interesting topic (and I'll discuss that in Part 2), but for now, I make no claims whatsoever. This material draws enormous inspiration from particular strains of Buddhism, and one may argue that it is a highly plausible interpretation of what Buddhism is 'really about,' but in the end it stands or falls by itself.
2) What is declassified on the internet is still taboo in many communities. If you walk into your local dharma group / meditation center / Buddhist sangha or what-have-you and start asking about enlightenment or talking concretely about your own meditation experiences and what you think they mean, you may not get the response you'd expect. Having warned you, my conscience will remain clear...even so, don't be a jerk, and please recognize that not everyone who appears to be interested in meditation wants to hear about these things.
3) What follows is my best attempt at writing this information up in a way that I think suits the LW community. No one besides me is to blame for any shortcomings it has.
Why meditate?
You may take up or have taken up meditation for all kinds of reasons. It may help you to relax, it may help you to think clearly, and it may even help you to fit in with your New-Agey friends and the alternative lifestyle crowd. But one of the best reasons to start meditating is so that you can stop being deluded.
Delusions come in many kinds, and the right medicine for one may be ineffectual for another. Some things called delusions are merely misinformation, and melt away in the light of evidence. Other types of delusions stem from mental illness and can be vanquished by therapy or medication. The common practices of rationalists are well-suited to eliminating delusions that spring from cognitive biases. The sane human mind is generally quite good at representing and then talking about these cases: you can call yourself out on these types of delusions, or failing that, someone else will call you out. If you disagree with their assessment, you at least can expect to understand what's at stake in the disagreement.
But there is another way to be deluded, in which you can't easily understand what it means to be deluded in that way. For the purpose of crafting a simple metaphor, think of beliefs, thoughts, various cognitive representations, etc. as tangible objects in the factory that is your mind, and think of the various cognitive transformations that your mind is capable of as industrial processes that take these objects as inputs and produce others as outputs. So, via one process, these objects can be looked at, via another their properties can be manipulated, or further objects can be generated as a function of the properties of the inputs; ultimately, all objects are either put to further use in-house as inputs to other processes, or marketed to consumers (= behaviors in the external world) at some point. Most processes are simple, but others are sophisticated (second-order) and can assess the ways that different inputs and outputs make the factory's gears grind differently, and adjust operations to compensate. If the outputs are built to spec, all's well; malformed outputs are either rejected on the market or gum up the works when used in-house, depending on what they are and what they're supposed to do.
There are lots of simple ways that factories can run badly: the processes are obsolete, there aren't enough doo-dads available when the machinery requires doo-dads to run, or someone puts sprockets in the chute clearly marked "COGS ONLY". But there are also systematic ways that production can be inefficient.
Suppose that some processes take objects and project their image, via a lens, onto a photosensitive site that controls the specifications of whatever that process outputs. If the lens is sufficiently good, there's no problem. If the lens has severe aberrations...well, it depends. Some processes may not be sensitive to the distortions that the lens imposes, so there is no practical effect. Other processes will output different objects than they otherwise would have due to the lens' distortion. Those malformed objects may be destined for the market, where consumers may or may not be sensitive to the malformation, or they may be inputs to other processes which are not sensitive to the malformation. But for those processes that ARE sensitive to it...if THEIR malformed outputs feed into processes that are also sensitive to it...and THEIR outputs do as well...there's a potential for some serious industrial mishaps.
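(For readers who think better in code, here is a minimal toy sketch of the metaphor in Python. Everything in it, the numbers, the one-parameter "aberration," and the particular processes, is invented purely for illustration; it is not a model of real cognition, just the metaphor made runnable.)

```python
# Toy version of the factory metaphor. A "process" sees its input only
# through a lens; sensitive processes pass the distortion on, insensitive
# ones wash it out.

def lens(x, aberration=0.2):
    """Report an object's 'size' through a distorting lens."""
    return x * (1 + aberration)

def sensitive_process(x):
    """Output depends finely on what the lens reports."""
    return lens(x) ** 2

def insensitive_process(x):
    """Output only cares whether the object looks 'big' or 'small'."""
    return "big" if lens(x) > 0.5 else "small"

raw = 1.0
print(sensitive_process(raw))                     # 1.44: off spec (a perfect lens would give 1.0)
print(insensitive_process(raw))                   # 'big': same answer a perfect lens would give
print(sensitive_process(sensitive_process(raw)))  # ~2.99: sensitive outputs feeding sensitive processes compound the error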
How would you, the factory owner, assess whether such a problem exists? Perhaps there's a camera that feeds into a CCTV display in the main office, and you could have it point at the objects being generated, inspect them, and make an assessment. If you see that the objects are not built to spec, you can inspect the machinery, and, finding the junky lenses, replace them. Sounds good...unless...the camera was built in-house with a lens that also produces a distorted image. That's a more complicated problem.
If the camera's image looks distorted on the screen, you can always stop the production lines and take a look with your own two eyes, bypassing the camera and its problems.
Unfortunately, there is no homunculus perched on a chair somewhere in your brain, waiting to spring into action. In our metaphor, the camera's image is input for a second-order process, perhaps a rudimentary AI meant to regulate overall production, cobbled together by some sloppy but effective evolutionary process outside the factory. How likely is it for the AI to consider the possibility of a distorted camera lens? Suppose it's so unsophisticated that it does not even understand that the camera's output is a representation of anything, but assumes the output is direct access to the thing-in-itself? (Imagine that it does not even know that there is a camera, and is built in such a way that it receives the camera's output with the tag "WHAT's GOING ON AT COORDINATES X,Y,Z" and nothing else.) If it has no primitive concept of data representing something, and the process by which it receives data from the camera is completely opaque to it, then it may be quite oblivious to the fact that there even is a problem. Even if it responds to natural language, you can type "MACHINES X AND Y ARE MALFUNCTIONING AND THE CAMERA BY WHICH YOU OBSERVE THEM IS MISLEADING YOU!!!!!1" on the terminal all day with no guarantee of making headway.
To fix the problem, the AI needs to be adaptive enough to conceptualize it in the first place; and depending on the idiosyncrasies of the evolutionary process that built it, and on the degree to which that process selects for AIs that happen to be good at factory control rather than something else, the ways by which it might come to recognize that there is a problem, and form the relevant concepts to deal with it, could well be limited.
Welcome to the human condition.
Building new concepts.
Here's a stylized story about how the AI might manage to figure out that some of the machines it watches over, along with the cameras by which it watches, have lenses that produce distorted images, and that these images are leading to production problems.
Suppose there are multiple cameras it receives data from, and the AI, for whatever reason (perhaps an unintended consequence of something else it's doing), directs two of them towards the same machine. Lo and behold, two sources of information tagged "WHAT'S GOING ON AT COORDINATES X,Y,Z" are not identical! How strange. Perhaps from this and some adaptive inference it figures out what representation is, and that these data merely represent what's going on at coordinates X,Y,Z. If the camera lenses are only moderately distorting, the AI may point one camera at the other, match the image it sees to an image of a camera in its database, and by doing so manage to peek into the black box that produces the data by which it monitors the factory. And perhaps now it has an inkling that, since production has been slower and more problematic than expected, something is wrong, even though none of the data it has access to allows it to pinpoint any particular problem: because, as it now knows, the data could be inaccurate.
From here, there are various ways that the AI could discover that something is wrong with the camera lenses. If the distortions aren't uniform over the image the lens produces, it could rotate one camera, un-rotate the output, and see that this is not equivalent to the previous output from an un-rotated camera. Or, knowing the layout and dimensions of the factory, it could aim both cameras at the same location, transform the data from one camera so that in theory it would match the data from the other (given the known positions of the cameras as well as the machines on the factory floor being looked at), and yet find that they did not match. Now it can infer that at least one representation is inaccurate.
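(In toy code again, with invented distortion factors: this is just the cross-check above made concrete, not a claim about how any real inference works.)

```python
# Two cameras, each with its own unknown distortion, pointed at the same spot.
# Neither report can be checked against the truth directly, but they can be
# checked against each other.

def camera_a(true_value):
    return true_value * 1.15   # unknown-to-the-AI aberration

def camera_b(true_value):
    return true_value * 0.95   # a different unknown aberration

true_state = 10.0              # what is actually at coordinates X, Y, Z
report_a = camera_a(true_state)
report_b = camera_b(true_state)

# Given the known geometry, the two reports should coincide once put in a
# common frame (here they already are). They don't, so at least one lens lies.
if abs(report_a - report_b) > 1e-6:
    print("reports disagree:", report_a, "vs", report_b)
```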
Since the factory makes lenses in-house, to test the hypothesis that one of the cameras' lenses is faulty, the AI could replace one camera's lens with a different one (of unknown quality) and, depending on how clever it is and how much it knows about optics, try to work out what the lens aberrations at issue are. If the malfunctioning machines are the ones that produce lenses, there may be multiple rounds of producing lenses with different kinds or degrees of aberration, inserting them in the cameras, inspecting the machines, modifying the machines, and building new lenses based on the modifications (a toy sketch of such a loop follows the list below)...successively getting closer to the point where it has enough data from the various distorted images it has collected to produce a lens of sufficiently high quality to:
- Accurately observe the previous defective camera lenses,
- Reflect on how those lenses led to faulty information about the machines and their outputs,
- Accurately observe the malformed outputs of the machines,
- Accurately observe the defective lenses inside of the machines,
- Discover the details by which the defective lenses are leading to malformed outputs,
- Deduce a lens design that will not lead to malformed outputs, and
- Build and install such lenses.
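(Here is the promised toy sketch of that loop. The one-parameter "aberration," the form of the corrective lens, and the grid search are all invented for illustration; the point is only the shape of the procedure: propose a lens, check whether independent views of the same thing now agree, keep the best candidate, and refine.)

```python
# Toy refinement loop. The camera adds a hidden nonlinear distortion; the AI
# never sees true sizes, but it knows one machine part is exactly twice the
# size of another, so a good corrective lens should report it as twice as big.
# It scores candidate corrections by that consistency check and refines.

HIDDEN_ABERRATION = 0.05                  # unknown to the AI

def camera(true_size):
    """Raw report: nearly right for small objects, increasingly wrong for big ones."""
    return true_size + HIDDEN_ABERRATION * true_size ** 2

def corrected(report, c):
    """A candidate corrective lens applied to a raw report."""
    return report - c * report ** 2

def consistency_error(c, small=4.0, large=8.0):
    """With the right correction, the large part should read as 2x the small one."""
    return abs(corrected(camera(large), c) - 2 * corrected(camera(small), c))

best = 0.0
for step in (0.01, 0.001, 0.0001):        # successively finer rounds of lens-building
    candidates = [best + k * step for k in range(-9, 10)]
    best = min(candidates, key=consistency_error)

# The final lens is not a perfect inverse of the hidden distortion, but each
# round makes the factory's reports more mutually consistent:
print(round(consistency_error(0.0), 3), "->", round(consistency_error(best), 3))
```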
What's wrong with your mind.
Though it is an oversimplification, the preceding metaphor is a very good one for describing how human minds work by default, with respect to what meditation is good for. Some cognitive processes within the mind have defects that yield distorted outputs. And the second-order processes that evaluate how the first-order processes work are themselves defective, yielding more distorted outputs. All these distorted outputs are fed back into various cognitive processes, where they muck up effective operation in all kinds of ways.
If I tell you about the defects in your own mind, it is unlikely that you will understand (really understand) what I mean. The first-order processes may be messed up and I could describe that to you, but when you attempt to introspect on their status, the image of those processes that you see is itself distorted. Further, you may not have even developed a concept by which you could understand what the distortion is, or even what it would be like for the outputs of your first- and second-order cognitive processes to be distorted or not distorted in the way I mean. So we would be talking past each other.
This post was inspired by Skatche's. He writes, parenthetically,
"[...in] the Buddhist system...the unenlightened mind can't truly understand reality anyway, so you'd best just shut up and meditate."
This is a common impression people have. It's also more or less true. Not because enlightenment is some wild, indescribable altered state of consciousness which people mistake for a glimpse into 'reality,' but because the unenlightened mind probably can't even begin to conceptualize the problem, and definitely doesn't have the tools with which to understand it with sufficient clarity. 8-year-olds typically can't grok higher mathematics, not because mathematicians are tripping balls, but because 8-year-old minds don't have the tools to grasp what mathematicians talk about, and don't have the concepts to understand what it is that they're not grasping. C'est la vie.
I'm sure you want to hear what the big deal is anyway, so here you go. Your first-order cognitive processes that take experiences as objects are malfunctioning, and output bizarre things such as 'attachment,' 'craving,' and 'hatred.' The second-order cognitive process that monitors them is malfunctioning, and can't see what is so bizarre about those outputs ("aren't these built to spec?"). The same process, when monitoring experiences, outputs bizarre things such as 'Self' (and many variations on it). When it takes itself as an object to see its own inner workings, no output is produced, and a function call to the rational inference module asking what process outputted 'Self' yields a variety of confabulated answers (typically 'the senses' or 'rational deduction,' and claimed to be built to spec). When high-level cognitive processes take 'Self' as an object, the outputs are more bizarre objects: for example, 'existential angst,' 'Cartesian dualism,' and so on. From then on, the lives of these malformed objects are variegated: 'existential angst' as an input for the poetry generation process yields a product roundly rejected by consumers, 'attachment' and 'existential angst' as inputs for the life-goal planning process yields questionable long-term plans, and 'Cartesian dualism' as an input into the philosophy of mind process causes a blue screen of death.
All this happens without you batting an eye, and yet if you reflect in a very general and nonspecific way on whether all these malformed objects are helping or hurting your functioning, and helping or hurting your everyday behavior, you may be able to see that, at least in some ways, they're gumming up the works. But can you see what's wrong with them? Aren't they built to spec? Don't you need them in order to lead a normal life?
You may be quick to say that you have a perfectly good meaning in mind when you say 'Self.' Either 'Self' is a matter of definition and can be defined innocuously, or better yet, describes the behavior of biological systems in a useful and accurate way---carving reality at the joints. So it is not a delusion, and anyone who says otherwise is...well...deluded.
Well, bull. You have at least two concepts, Self and Self*. What you are describing, what carves reality at the joints, is Self*, an output of the rational thought process. Because your lens distorts, Self and Self* look indistinguishable to you. When you make a function call to ask what process outputs [Self or Self*, since they look the same to you], the answer you invariably get is 'rational thought.' "See," you think, "no delusion!" as you happily feed Self into the processes that generate attachment, craving, hatred, existential angst, etc. etc. from it, even when Self* is not an input that would produce those outputs.
The rationally-derived concept Self* that you use doesn't and couldn't play the role in your mental machinery that it seems to. When you were young, before you were mature enough to form the concept Self*, you had attachment, craving, and so on. Today, you still do. How likely is it that Self* is responsible for those things right now? When you feel, deep down in your bones, that you want something--sex, drugs, money, status, friendship, happiness, anything--what is the 'you' who appears to want it? Self*? Knowing what you know about human minds, human development and comparative neurophysiology, is the 'you' who appears to want it the kind of thing that is likely to be the output of a rational process?
Think about it. See if you can begin to form a new concept that better captures what could be going on in your mind.
This metaphor is just illustrative. If it doesn't make sense to you on some level, I know of no argument that will be able to change that. If, for example, you have the intuitive feeling that you are a homunculus experiencing the output of your brain and yet rationally know that that's not true, the tension between the two may be a starting point for you. Or if you've had experiences where your sense of self was altered in radical ways, you may be able to see that there's more to the way you normally conceive of the world in relation to you than first meets the eye.
But it isn't irrational for this not to make sense. If it doesn't make sense, you simply haven't built the right concepts and tools yet to allow it to make sense. Being ill-equipped is not a matter of irrationality. It's a simple problem that you can solve if you're motivated to.
Whichever case best describes you, I claim you can build the concepts and tools you need to understand this through meditation. If you're interested, you can do the experiment yourself, and see what happens.
How meditation works.
Meditation, at least meditation for the goal I've described, can be thought of as a series of attentional and perceptual exercises. Experience has shown that directing your attention and perception in particular ways will help you to begin to see the ways in which your cognitive machinery is distorted. As in the metaphor, you eventually need to build new lenses in order to get a handle on what's going on, but luckily, you don't need to know their specs or retrofit your cognitive machinery; if you do the exercises, neuroplasticity will handle those parts for you.
EDITED FOR CLARITY: There is a range of "attentional and perceptual exercises" (= meditation styles), but it is important to note that not all of them are especially effective, and more importantly, a couple tend to work really well compared to the rest. Common kinds of meditation instructions, such as "relax, follow your breath, and cultivate equanimity towards whatever thoughts arise without getting involved with them", unfortunately do not have an especially good track record among typical meditators, at least with respect to attaining the kind of insight under discussion. Such instructions do seem to work very well for helping people to be relaxed and less overemotional, though. More details in Part 2.
Experience shows that doing the exercises will cause your mind to generate various new lenses with different aberrations (there are various ways to categorize how many different types), and as your mental processes adapt to the outputs that these aberrations engender, you gain more and more data with which you can piece together the ways in which these distorted outputs have misled you. When you have enough data, your mind is able to produce a lens that is strictly less distorting than everything that came before. Retrofitting everything with this new type of lens makes your life better, and it makes the exercises easier. As you continue the exercises and cycle through new lenses, eventually your mind is able to repeat the feat, and make a lens that is strictly less distorting than in the previous case. On and on.
The first time you generate and use a lens that is strictly less distorting, you are partially enlightened.
When you have generated and installed a lens that does not distort in ways that lead to attachment, you are fully enlightened.
These results do not depend on any effort to maintain, and they are not altered states of consciousness. The goal of this type of meditation is not to produce any particular mental state, but to fix your cognitive machinery. When it's fixed, it's fixed. Experience has shown that no maintenance is required.
Contrary to popular mythology, this process need not take a lifetime, or half of a lifetime, and it definitely doesn't require that you live on a mountaintop in the Himalayas. Bearing in mind that individual variation exists, contemporary methods can yield deep and powerful cognitive upgrades along these lines within a few years. Many people are able to reach what is considered to be the first partial stage of enlightenment within months, in the context of a dedicated and regular practice during their daily life, and this is not considered especially atypical.
Benefits.
The reasons you might pursue this kind of mental upgrade are individual---just as in every other case. I don't have THE REASON that this is important for you to do. But here is a selection of reasons that you as an individual might find compelling.
- Be happier; function better.
When you begin to cut off the automatic generation of attachment, craving, hatred, etc., those things get used less as inputs to other mental processes: your life will likely become a more fun, more carefree, and more worthwhile experience. As you begin to cut off the generation of the concept Self by second-order processes, it gets used less as an input to higher-level cognitive processes: you will think more clearly about existential issues of all kinds.
- Know what your goals would be if you were more insightful.
It's easy to think about what you want, and build a plan for your life around what you think you want. But your ability to know what you want is curtailed by the fact that you have delusions about what 'you' means. If you begin to get rid of the delusions by beginning to cut off the flow of Self into various processes, you will be in a better position to decide on how to live your life. Imagine you could get a pre-Friendly AI glimpse into CEV; might that change your current goals? What would a glimpse of your own, private extrapolated volition be worth to you? What would you do to get such a glimpse?
- Be more rational.
As you do the attentional and perceptual exercises involved in meditation, you develop a less and less distorted view of your own mental processes. This eventually allows you to fix any processes that are systematically malfunctioning due to the want of non-distorting components. But as a side effect, it also lets you see an enormous selection of what's going on in your mind: lots of things that you might not have previously noticed, or that you would have previously called 'subconscious,' may become effortlessly clear to you.
Suppose you are biased against non-degreed people, and one day, a high school dropout tells you something that you currently disbelieve. If the thought "he doesn't know anything, he has no education!" arose in your mind, you might not normally even notice it, or you might delusively attribute the thought to 'you' and then be invested in acting according to it and defending it (since it's 'yours,' or since 'you' thought it). As your mental processes snap into focus, it's much easier to see the thought, and regard it as 'a thought' rather than 'my thought' or 'my belief' or 'mental content I generated'. When your mind can't sweep it under the carpet and yet you have no special attachment to it, it is easy to face it explicitly and decide how to work with it. If you already have the motivation, accounting for and dealing with your own cognitive biases is likely to become much simpler and easier than before.
- Understand the origin of delusive intuitions.
One example. Many people have the intuition that they have free will, i.e. that they are homunculi controlling their bodies and minds in a way that is outside the normal laws of physics. Even those of us who know better may still have that feeling. Meditation can ultimately eliminate that feeling. Undercutting the intuition and seeing where that leaves the rational case for free will, from a first-person perspective, may be very informative for understanding other cases in which your intuitions are misleading you by corrupting your rational thought.
- Understand the limits of your own conceptual apparatus.
The space of potential minds is huge; the space of human-like minds is a tiny subset of it. You may believe that your human mind cannot really conceive of what other potential minds would be like if they were sufficiently different, but do you know that in your bones? The result of meditation is a mind that is well within the space of human-like minds...but you will not be able to imagine what having that kind of mind is like until you have it. That puts potential alien minds and AIs, or rather, your ability to imagine them with any sort of accuracy, into perspective.
Risks.
It is extremely important to realize that the process of replacing the lenses of your mental processes can lead to intense mental turmoil, potentially severe enough that it impacts your ability to function effectively for weeks, months, or even years. This does not happen to everyone, and it need not be severe when it does happen, but you should consider the degree to which you're committed to this before you start. I would recommend not starting at all unless you are willing to see it through and not give up just because it seems to have made things temporarily suck: experience has shown that giving up while things suck is a great way to make things suck for a long time. (And experience has shown that commitment helps to avoid this problem.)
It is also important to realize that this is an experiment in self-transformation. Past a certain point, there is no going back, and no undo button. As a matter of informed consent, you need to know that the style of meditation that leads to the goal I've described can and will change the functioning of your brain, permanently. Lots of people have found these changes worthwhile. That doesn't mean there's no need to think about what you're about to do before you do it.
More information forthcoming in Part 2. (Perhaps next week.)
Addendum.
I have made all kinds of claims in this post, some of which may be seen as wild, reckless, unfounded, unargued-for, and so on. Certainly I'm not doing very much hedging and qualification. The really remarkable thing that communities interested in this kind of human development have discovered is that people who work at meditation long enough will reliably and regularly say the same kinds of things, in the same order; and people who have stumbled onto the exercises that lead to this kind of development outside of these communities will also, reliably and regularly, say the same kinds of things (although some translation between cultural frameworks may have to go on first). Further, I have not known anyone to suffer from a deficit in rationality, or in the ability to observe and assess themselves, as a result of practicing meditation in the way that leads to this kind of development. So my working hypothesis is:
- Certain styles of meditation lead to bona fide insight, and there is a consensus on what that insight is among people who meditate in those styles; anyone with the same cultural background (e.g. contemporary Westerners) who takes up meditation is likely to experience that insight and describe it in a way that is broadly similar to everyone else's description, whether or not they are primed to do so by the discourse of the communities of which they are members.
I hope that exposing readers of Less Wrong to this information will help me to confirm or deny this hypothesis. More importantly, I'm also sharing this information because I hope that learning about it will ultimately help people to benefit personally from it, as I have.
Also, please note that my metaphor of a factory is just a metaphor, intended to be intuitive and helpful, not intended to be anything like a precise and thorough description of how minds work or how meditation changes how minds work.
Finally, this was written as a blog post, not a final draft of a formal article. Criticisms related to tone and style are especially welcomed. And apologies in advance for the length of the piece, as well as for any formatting issues it has (I have little experience with effective formatting for blogs).
Comments sorted by top scores.
comment by CarlShulman · 2011-04-29T22:42:44.101Z · LW(p) · GW(p)
Finally, this was written as a blog post, not a final draft of a formal article. Criticisms related to tone and style are especially welcomed.
As an introduction, this did not 'hook' me. As you say, you make many large abstract claims without supporting evidence in a very long intro post. But these claims are pretty similar (although sprinkled with other LW content) to those I would expect as the result of an affective death spiral, selection for enjoyment of meditation, and so forth.
What might attract me is a post front-loaded with the concrete evidence that makes you believe what you believe, with some assessment of the major biases bearing on it. By burying the evidence until later, you prevent the reader from assessing the likely value of continuing.
comment by Zetetic · 2011-04-28T21:05:07.822Z · LW(p) · GW(p)
I don't know, this post feels a bit... 'woo' to me. Are there any peer-reviewed studies of the effects of meditation that you think are particularly enlightening? (sorry, I couldn't help myself there)
Seriously, though, I've meditated before but it was difficult to determine whether any perceived benefits were really placebo or not.
↑ comment by DavidM · 2011-04-28T22:04:44.751Z · LW(p) · GW(p)
Unfortunately, there are no such studies that I know of. There is a large disconnect between the models that practitioners use and the ways in which scientists have been interested in examining the subject. This disconnect is something I intend to discuss in Part 3. Theravada Buddhism, for example, has an official (i.e. sanctioned by doctrine) model of the various stages of meditation, which makes extremely detailed predictions about the changes a meditator is likely to undergo, in what order, etc., but for various reasons, scientists don't seem to be interested in or aware of that model.
About your experiences with meditation, one thing I ought to have made clear (and I'm not sure whether I did) is that contemporary practitioners have found that some methods work better than others. Common methods that people throw around (e.g. "relax, follow your breath, and try to remain in equanimity regardless of what thoughts come up") turn out not to be very effective. What method did you use?
I'll briefly describe methods that have been found to be especially effective in Part 2. "Finding and sharing effective methods" is one of the major things that contemporary meditators interested in enlightenment have done well at.
↑ comment by nhamann · 2011-04-29T00:09:22.965Z · LW(p) · GW(p)
A brief poke around in Google Scholar produced these papers, which look useful:
Alterations in Brain and Immune Function Produced by Mindfulness Meditation. Psychosomatic Medicine 65:564–570 (2003)
Mindfulness training modifies subsystems of attention. Cognitive, Affective, & Behavioral Neuroscience, Volume 7, Number 2, 109–119
Long-term meditation is associated with increased gray matter density in the brain stem. NeuroReport 2009, 20:170–17
Attention regulation and monitoring in meditation. Trends Cogn Sci. 2008 April; 12(4): 163–169.
↑ comment by Zetetic · 2011-04-29T16:40:32.687Z · LW(p) · GW(p)
Much appreciated! I was hoping that I might be able to get some meta-analysis out of one of the meditation advocates, but unfortunately it has not been offered up.
I do not even know what enlightenment is (or if it is even an actual phenomenon, beyond placebo) in terms of physiology/brain chemistry. It sounds like a threshold dose of LSD, judging by the subjective definitions. Because of this, I am not interested in enlightenment, but I am interested in any known enhancing effects of meditation techniques.
↑ comment by Jonathan_Graehl · 2011-04-29T01:13:30.530Z · LW(p) · GW(p)
Also, meditation reduces pain sensitivity, even for future pain.
↑ comment by DavidM · 2011-04-29T00:27:30.527Z · LW(p) · GW(p)
Thanks for the references. I should have made clear that I meant, not that there are no peer-reviewed studies about meditation, but that there are none that I know of that concern enlightenment, the typical stages of meditative experience leading up to it, cognitive / neurophysiological sequelae, etc. (which are what I would find interesting in this context).
If you know otherwise, I'd love to hear about it.
↑ comment by nhamann · 2011-04-29T01:09:15.558Z · LW(p) · GW(p)
Ahh, good point. My comment is somewhat irrelevant then with regards to this, as it seems that what you're interested in is beyond the scope of science at present.
↑ comment by Fly · 2011-04-29T04:13:39.328Z · LW(p) · GW(p)
My gold standard for understanding reality is science, i.e., the process of collecting data, building models, making predictions, and testing those predictions again and again and again. In the spirit of "making beliefs pay rent," if Buddhist meditation leads to less distorted views of reality then I would expect that "enlightened" Buddhists would make especially successful scientists. As a religious group the Jews have been far more productive than the Buddhists. Apparently Buddhist physicists have no special advantage at building models that "carve reality at the joints". The Buddhist monk may experience the illusion of knowing reality but actually understand less than a physicist. Or perhaps Buddhist meditation trains the mind to "not care" or "not trust perceptions" to a degree that interferes with science? In what fields have Buddhist monks excelled?
I am following with interest recent studies on brain changes due to mindfulness meditation, specifically improvements in executive function that accompany the enlargement of white matter tracts connecting the prefrontal cortex to the amygdala. So far I interpret the results as brain circuits being strengthened by attentional focus training so that the prefrontal cortex can inhibit signals arising in the amygdala, insula, thalamus, and hypothalamus. For those lacking such control this may be beneficial, i.e., those with low impulse control, for example children. There may be a motivational downside for those who already habitually inhibit such drives, e.g., those who easily become lost in abstract thought.
↑ comment by MichaelVassar · 2011-05-02T11:41:59.313Z · LW(p) · GW(p)
"In what fields have Buddhist monks excelled?"
Martial arts? Some other arts. Propagating a religion. Overcoming what seem to many people to be overwhelming motivations, such as the motivation to eat or to avoid extreme amounts of pain, convincing people that they are wise, maybe some memory and rapid cognition feats.
If you count Stoics as Buddhists, as I would, governing Rome & providing that part of the content of Christianity for lack of which the ancient world seems most alien.
↑ comment by wedrifid · 2011-04-29T04:56:33.014Z · LW(p) · GW(p)
So far I interpret the results as brain circuits being strengthened by attentional focus training so that the prefrontal cortex can inhibit signals arising in the amygdala, insula, thalamus, and hypothalamus.
Might I suggest that as well as the inhibition you actually benefit from the cortex having more access to the information and processing that the aforementioned regions provide? Because generalised inhibition in itself isn't all that difficult, mindfulness aside. It is nuanced, well considered inhibition that takes work. It is also what lasts in the long term - because simply inhibiting the signals from those centres doesn't help eliminate the cause.
↑ comment by DavidM · 2011-04-29T12:20:45.938Z · LW(p) · GW(p)
Do you have statistics or studies concerning the claim that Buddhist physicists are not advantaged in science? How would you even begin to rationally approach the issue? It seems complicated---you'd have to adjust for education levels, the possibility that meditators are inclined to pursue subjects other than physics, the fact that meditation takes up time that could otherwise be devoted to studying physics, different cultural backgrounds of meditators vs. controls...
Intuitively, I think your claim is likely to be true, but I can't really see how you can rigorously support it. Data on the % of Buddhist physicists, if it even exists, would only be scratching the surface of what you would need to support your claim. (Not that I want to debate the claim. But if you feel it's important, I want a non-handwavey argument.)
A better model for enlightenment, meditation and rationality, I'd say, is that these things give you tools that allow you to be more rational if you're so inclined. As with everything in life, it's your own goals and inclinations that determine what you do with them.
An analogy is drinking coffee. Paraphrasing Paul Erdos, a mathematician is one who turns coffee into theorems. Do coffee drinkers have a special advantage in mathematics? Probably not. So perhaps Erdos was wrong; perhaps having to empty one's bladder more often actually interferes with being a good mathematician? Again, probably not. Most likely, drinking coffee leads to mathematical productivity for people who are interested in increasing mathematical productivity.
↑ comment by novalis · 2011-04-29T17:47:55.128Z · LW(p) · GW(p)
Coffee drinkers may well have an advantage in mathematics
↑ comment by DavidM · 2011-04-29T03:02:18.345Z · LW(p) · GW(p)
Unfortunately, as far as I know, it's an issue that hasn't been studied...but because of the detailed knowledge that has come out of communities interested in enlightenment, I see no principled reason why it couldn't be studied.
Actually, I think it's low-hanging fruit.
↑ comment by Bongo · 2011-04-29T08:19:12.821Z · LW(p) · GW(p)
Indeed. Compare to this earlier post about meditation.
comment by Ivan_Tishchenko · 2011-04-29T06:36:59.105Z · LW(p) · GW(p)
Upvoted the post. But, as I can see, for some reason it is not getting many upvotes (I can only see 5 now). Please do not stop writing Part 2 because of that -- I really, really want to know those few methods of effective meditation you are talking about.
At least let those of us who are interested know about them somehow, if you decide not to continue the sequence.
Thanks in advance.
Replies from: DavidM↑ comment by DavidM · 2011-04-29T21:06:02.028Z · LW(p) · GW(p)
No problem!
Thanks for letting me know that you're interested.
↑ comment by abramdemski · 2011-05-02T20:19:54.331Z · LW(p) · GW(p)
Advice on upvotes: Perhaps the article should have been briefer, or broken up into two or three. Perhaps it should have included some of the large body of modern scientific evidence concerning the effects of meditation.
comment by MinibearRex · 2011-04-28T20:57:14.926Z · LW(p) · GW(p)
I had a recent conversation with a local Buddhist organization that was running a meditation workshop in my area. They were talking about enlightenment as realizing that your emotions don't need to be caused by things in the world. I asked if that meant that if I was enlightened, I would feel the same thing regardless of whether or not an evil tyrant was torturing his people. The response I got was "Buddhism isn't a save-the-world religion."
Is this actually what you're talking about when you talk about detachment from emotion, or is this just a group with crazy ideas/bad communication?
↑ comment by pjeby · 2011-04-28T21:16:24.949Z · LW(p) · GW(p)
I asked if that meant that if I was enlightened, I would feel the same thing regardless of whether or not an evil tyrant was torturing his people.
I don't know about the particular branch of Buddhism you've encountered, but Zen certainly detaches the concept of "right action" from emotional attachment.
In Zen, it might be considered a just and proper action to go and overthrow the tyrant, but spending a lot of time being upset about the existence of tyranny would not be.
↑ comment by MinibearRex · 2011-04-29T02:32:40.534Z · LW(p) · GW(p)
I think it was a Tibetan strand...I seem to recall someone saying something about "diamond way".
I suppose I can somewhat support that idea. Obsessing about something isn't a great way to do anything. My own complaint was that basically I did think that I should be more upset, at least on some level, if I learned that crimes against humanity were being committed, than I would be if that wasn't going on, and they seemed to be telling me that I was wrong about that.
↑ comment by pjeby · 2011-04-29T03:31:16.356Z · LW(p) · GW(p)
I did think that I should be more upset, at least on some level, if I learned that crimes against humanity were being committed, than I would be if that wasn't going on
Why?
↑ comment by MinibearRex · 2011-04-29T04:13:19.741Z · LW(p) · GW(p)
Feelings of upsetness and contentedness arise out of my utility function. I have a negative term in my utility function for crimes against humanity. That is why I prefer outcomes where people are not being tortured by dictators. Removing that term seems to me to be the equivalent of Gandhi taking the murder pill; it's something I do not want to do. That is one of the preferences that I prefer not to get rid of.
↑ comment by wedrifid · 2011-04-29T05:05:28.048Z · LW(p) · GW(p)
Feelings of upsetness and contentedness arise out of my utility function.
They need not. And if you do, in fact, have a preference to be upset in a given situation, that preference is independent of your preference for that situation not existing. It is martyrdom, not altruism: of potential signalling benefit to yourself, but of no benefit to what you ostensibly assign value to.
Removing that term seems to me to be the equivalent of Gandhi taking the murder pill
Not remotely. It is equivalent to Gandhi not self flagellating every time someone else murders someone. In fact, it is not even equivalent to that. Gandhi did engage in hunger strikes when certain undesirable things happened but those at least had instrumental value. Because Gandhi was a brilliant politician who had learned how to harness martyrdom effectively. It would be equivalent to Gandhi secretly self flagellating and making sure nobody ever found out.
Assigning negative utility to crimes against humanity is an entirely different thing to assigning positive utility to your personal misery. The latter is primarily what we do to signal to ourselves and others that we are altruistic while excusing ourselves from actually doing something about it.
↑ comment by MinibearRex · 2011-04-29T20:17:13.427Z · LW(p) · GW(p)
I did not communicate what I meant to say very well. I'll try again.
I view my utility function as a mathematical representation of all of my own preferences. My working definition of "preferences" is: conditions that, if satisfied by the universe, will cause me to feel good about the universe's state, and if unsatisfied by the universe, will cause me to feel bad about the universe's state. When I talk about "feeling good" and "feeling bad" in this context, I'm trying to refer to whatever motivation it is that causes us to try to maximize what we call "utility". I don't know a good way in English to differentiate between the emotion that is displayed, for instance when a person is self-flagellating, and the emotion that causes someone to try to take down a corrupt ruler.
If I learn that some dictator ruling over some other country is torturing and killing that country's people, my internal stream of consciousness may register the statement, "That is not acceptable. What should I do to try to improve the situation of the people in that country?" That is a negative "feeling" produced by the set of preferences that I label "morality". I do not particularly want the parts of my brain that make me moral to vanish. I do not want to self modify in such a way that I will genuinely have no preference between a world where the leader of country X is committing war crimes, and a world where country X is at peace and the population is generally well off.
Should I mope around and feel terrible because the citizens of country X are being tortured? Of course not. That's unhelpful. I do not, in fact, have a positive term in my utility function for my own misery, as my earlier post, now that I've reread it, seems to imply. Rather, I have a positive term in my utility function for whatever it is that doesn't make a person a sociopath, and that was what I was trying to talk about.
↑ comment by pjeby · 2011-04-30T00:43:14.307Z · LW(p) · GW(p)
My working definition of "preferences" is: conditions that, if satisfied by the universe, will cause me to feel good about the universe's state, and if unsatisfied by the universe, will cause me to feel bad about the universe's state.
That's not really a preference. A preference is, "I like strawberry ice cream better than vanilla". I experience more utility from strawberry than vanilla, but this doesn't make me feel bad if there's only vanilla.
It is a serious misunderstanding of the human cognitive architecture to assume that an unfulfilled preference should cause you to feel bad.
my internal stream of consciousness may register the statement, "That is not acceptable. What should I do to try to improve the situation of the people in that country?" That is a negative "feeling"
No, that's a statement. How you choose to feel about that statement is a separate event.
Rather, I have a positive term in my utility function for whatever it is that doesn't make a person a sociopath
Was Gandhi a sociopath? Dr. Martin Luther King Jr.? Their speeches seem to be a good example of Buddhist thought in action: motivation via compassion, rather than outrage.
That is, "people are suffering, I want to help them", not "that's unacceptable and bad."
From my observation of some philanthropists I know, they do not appear to be feeling bad about people suffering in Africa; they instead feel good about being able to do something.
People who feel bad have protests and "raise awareness"... people who feel good, OTOH, seem to actually go to Africa and do something.
So, when somebody protests that they'll become immoral if they don't feel bad about something, my general impression is that the signaling part of their brain is currently running the show -- i.e., what other people will think is currently more important to them than whatever the ostensible goal of their emoting is. Think Hansonian, "X is not about X".
Heck, let's just make it specific: "Feeling Bad About Moral Issues Is Not About Actually Doing Anything About Them".
↑ comment by MinibearRex · 2011-04-30T01:28:11.537Z · LW(p) · GW(p)
That's not really a preference. A preference is, "I like strawberry ice cream better than vanilla". I experience more utility from strawberry than vanilla, but this doesn't make me feel bad if there's only vanilla.
If I am forced to eat vanilla ice cream, it will not ruin my week. It will not even make me upset; it's not like the term in my utility function for vanilla ice cream is negative. I prefer vanilla ice cream to nothing at all. I will, however, be generally happier if I eat chocolate ice cream than vanilla. I like it more. If I genuinely was as happy while/after eating vanilla ice cream as I am while/after eating chocolate, saying I had a "preference" in either direction would be meaningless.
my internal stream of consciousness may register the statement, "That is not acceptable. What should I do to try to improve the situation of the people in that country?" That is a negative "feeling"
No, that's a statement. How you choose to feel about that statement is a separate event.
That statement never would have shown up in my stream of consciousness without some emotion causing it to appear. A person who genuinely does not have positive or negative emotions about a topic is extremely unlikely to use the term "not acceptable". That statement was motivated by a feeling. In my case, if I hear about crimes against humanity, the immediate response is actually a small flash of anger towards the ruler, and sympathetic pain with regards to the population. Those feelings spark the thought, "what can I do?", and I think that's how it should be.
That is, "people are suffering, I want to help them", not "that's unacceptable and bad."
That statement, "people are suffering, I want to help" is sparked by sympathetic pain. If they didn't feel any pain on behalf of others, that motivation to help wouldn't have ever popped into their heads.
From my observation of some philanthropists I know, they do not appear to be feeling bad about people suffering in Africa; they instead feel good about being able to do something.
I hope I never get to the point where I think the statement, "Oh boy, people are hurting! What an exciting opportunity for me to do something that I find to be pleasurable!"
I don't believe that's how philanthropists typically think. I'm involved in biological research, trying to find cures and treatments for diseases. I enjoy that. The research is interesting, the people are fun to interact with, and I'm helping to increase the lifespans, happiness, and health of a lot of people. My motivations are twofold. There is the hedonic aspect; I like my job. There is also a moral motivation. I feel sympathetic pain when I see someone with Alzheimer's, or cancer, or any other disease, and I feel sympathetic joy when those people get better. Now, if Omega were to drop down from the sky and offer me a cure for Alzheimer's for $50, if my primary motivation was that I enjoyed the work of helping people, I wouldn't pay. If I pay for it, I don't get to do the work. However, what I enjoy more is actually people getting help. I fork over the cash immediately.
So, when somebody protests that they'll become immoral if they don't feel bad about something, my general impression is that the signaling part of their brain is currently running the show
Most people that say that probably are just trying to appear moral. I don't condone moping around and enjoying the fact that you're so emotional about the plight of the poor indigenous people, etc. But that little twinge of sympathetic pain you get when you hear about some tinpot dictator torturing his people is not something I'm particularly eager to remove from human brains. You're right, feeling upset by an immoral situation isn't doing anything, but that upsetness is what motivates us to do something.
↑ comment by pjeby · 2011-04-30T16:03:36.056Z · LW(p) · GW(p)
that little twinge of sympathetic pain
Zen doesn't have any objection to momentary twinges, so long as they don't interfere with anything practical. (See, e.g., the story of the two monks whose punchline is "Are you still carrying her?")
I think that Asimov's "never let your sense of morals prevent you from doing what is right" is a very Zen saying. ;-)
I hope I never get to the point where I think the statement, "Oh boy, people are hurting! What an exciting opportunity for me to do something that I find to be pleasurable!"
Come on now. Do you want to tell me that writing that sentence didn't just give you an enjoyable feeling of righteous indignation?
See, there are many kinds of "feeling good" besides "pleasure". Love and compassion feel good... and so too, unfortunately, does righteous indignation.
However, not all kinds of feeling good motivate the same kinds of actions - each comes with its own bias as to what type of actions are selected. Indignation and love, obviously, motivate different sorts of actions, despite both feeling good!
Notice, btw, that I'm speaking here of what actions are motivated by having the feeling, which is a different thing than the motivation to obtain the feeling. The pleasure of having ice cream is not the same as having a desire to get some.
This is because humans are not utility maximizers; we're more like time-averaged satisficers. Desire arises when our time-averaged measurement of some physical or virtual property (like "blood sugar level" or "amount of interesting stuff to do") drops below some reference point, and we then take action to restore that property to a perceived-safe or optimal level.
This is why trying to discuss human behavior in terms of "utility" is a complete waste of time if you want to understand what's actually going to happen when you self-modify.
At a fundamental level, we are not utility maximizers, even though we can certainly entertain the belief that we desire to be utility maximizers, or participate in competitive or co-operative frameworks that collectively aim towards maximizing something (e.g. corporations).
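(A quick toy sketch of what I mean by a time-averaged satisficer, with made-up numbers and a simple running average standing in for whatever the brain actually computes; it illustrates the shape of the control loop, not any neuroscience.)

```python
# Toy "time-averaged satisficer": a running average of some tracked property
# is compared against a reference point; desire is just the signal that the
# average has dropped below it.

def time_averaged_satisficer(observations, setpoint=0.5, smoothing=0.3):
    """Yield (running_average, desire_active) for a stream of observations."""
    average = observations[0]
    for value in observations:
        average = (1 - smoothing) * average + smoothing * value
        yield round(average, 2), average < setpoint

# Some property (in arbitrary units) drifts down, desire switches on, then
# later observations (the result of taking action) bring it back up:
trace = [0.8, 0.7, 0.5, 0.3, 0.2, 0.6, 0.8, 0.9]
for avg, desire in time_averaged_satisficer(trace):
    print(avg, "-> act to restore" if desire else "-> satisfied")
```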
There is the hedonic aspect; I like my job. There is also a moral motivation.
Yeah... this is where we need to sort out the signaling.
See, here's what I said:
From my observation of some philanthropists I know, they do not appear to be feeling bad about people suffering in Africa; they instead feel good about being able to do something.
You then basically said this is terrible, and that you have "moral" motivation instead. But then, you go on to say:
However, what I enjoy more is actually people getting help.
Uh, so how is that not feeling good "about being able to do something"?
You're right, feeling upset by an immoral situation isn't doing anything, but that upsetness is what motivates us to do something.
Really? Let's test that, shall we.
Which of the following more closely matches your experience every day:
"Man, the world is full of diseased people. I feel awful. Better get to work right away..."
"Man, I'm glad I can make a difference. Let's get to work!"
It's really easy to test my hypothesis: different emotions bias people towards different types of action. (Actually, I'd guess that if you put it that broadly, there's probably already plenty of research to support it.)
More specifically, though, my thesis is that most emotions we describe as "negative" do not support any sort of sustained activity over time, in the absence of a visceral, immediate threat. They especially don't support creative or imaginative thinking, or indeed any sort of clear, non-rote thinking at all. (Also fairly-well documented.)
IOW, negative emotions bias towards short-term, rote and reactive behaviors, which make them far less useful for actually changing anything. They're designed for emergency responses, not ambitious campaigns of action.
Sure, negative emotions can motivate us to "do something"... the problem is, it's the sort of motivation that leads to the syllogism:
- We must do something!
- This is something.
- Therefore, we must do this.
In other words, a short-term emergency response.
None of the philanthropists I know seem to view their activities as an emergency, and they take thoughtful and considered long-term actions. When they speak, they don't seem to me to be experiencing any negative emotion. They say, "These people don't have water. We can help them. Let's do this!"
Now, there is one category of negative emotion that appears to produce motivation, and that'd be the category that contains moral signaling emotions, such as righteous indignation, disgust, disapproval, etc.
People under the influence of these emotions often appear to be "doing something", but that "something" is usually something like protesting, "raising awareness", or engaging in other "X is not about Y" activities.
That's because these emotions bias us towards activities of protest and punishment. And, as I mentioned in the "offense" thread recently, such protest and punishment actions usually don't accomplish anything, while making us feel like we're accomplishing something... thus leading to a perverse form of covert procrastination.
(Like spending a lot of time being mad about having to take out the trash, instead of just taking the trash out already so you don't have to think about it any more!)
For this reason, my most recent major change in mind-hacking heuristics is to look for a pattern of moral disapproval of something...
And then have the subject get rid of it.
Because when we're under the influence of those "moral" feelings, our brains seem blocked from thinking about how to solve the actual problem, vs. just protesting it in some way, and maybe trying to get other people to do something about it!
Which means that Asimov really was right after all.
Because, as it turns out, our feelings of moral disapproval -- our "sense of morals", if you will -- really do prevent us from doing what is right.
And that's why you're wrong about upset being motivating. The feeling of disapproval ("this is unacceptable") is distinct from the feeling of sympathy or compassion evoked by someone else's suffering, and each feeling will motivate you to do different things, over different time periods.
P.S. One secondary effect of moral feelings that I've noticed, is that they motivate us to speak out in favor of keeping the morals that generate them. Which makes sense if you think about tribal politics: anybody who suggests that maybe we shouldn't get so upset about people pissing in the river is probably pissing in the river him/herself, and so should be publicly disapproved of -- i.e., punish the advocate of non-punishing.
This happens with me too, with every stupid "moral" injunction I remove: my first emotional response is to protest that if I, say, stop disapproving of people who aren't sufficiently perfectionistic, then somehow society will collapse and the world will be in chaos.
While that might've been the case in our ancestral environment, the truth in today's world is that the only thing affected by me stopping my disapproval is that I'll be nicer to people who weren't going to change because of my disapproval anyway. ;-)
So... I suggest you consider whether your reaction to what the Buddhists said, and what I'm saying, is simply a protest from that part of your brain that motivates the maintenance of moral rules, whether or not there are any real consequences for changing the rules.
In the modern world, where few of us hold any real punishment powers over most of the people we encounter, moral disapproval is by and large a maladaptive response.
Replies from: NancyLebovitz, MinibearRex↑ comment by NancyLebovitz · 2011-05-01T15:40:18.275Z · LW(p) · GW(p)
I'm replying to a small part of a post which generally seems reasonable.
"This is unacceptable!" strikes me as a useful motivator for well-defined, achievable territorial defense. If I have a splinter in my foot, seeing it as unacceptable strikes me as part of the motivation for removing the splinter. I need to have enough calm mixed in to make sure I have good lighting, an appropriate tool, and the patience to make sure I get all of the splinter out, but I'm also served by having enough impatience that I'm not willing to put up with continuing to have any fraction of the splinter still in my foot.
The emotional state when I insisted that someone get his car out of a flea market space I'd rented doesn't seem all that different. His car in that location wasn't acceptable. I wasn't planning on a crusade to get all inappropriately parked cars moved at that flea market, or at all flea markets.
Of course, it gets more complicated when a large social issue really is involved. Afaik, abolitionism really did work in Britain because a great many people thought slavery was unacceptable. The process took about a century.
And I've spent a lot of time on unproductive outrage, so I'm not saying you're completely wrong, but either the question is more complex than you say, or there's more than one flavor of "That's unacceptable!".
Replies from: pjeby↑ comment by pjeby · 2011-05-01T16:01:05.350Z · LW(p) · GW(p)
"This is unacceptable!" strikes me as a useful motivator for well-defined, achievable territorial defense.
Sure. That's certainly what it's arguably "for", from an evolutionary point of view.
If I have a splinter in my foot, seeing it as unacceptable strikes me as part of the motivation for removing the splinter.
Do you see it as morally unacceptable? I expect that you are describing a different emotion here.
either the question is more complex than you say, or there's more than one flavor of "That's unacceptable!"
Both, actually. In the first place, the catch is that for moral outrage to be useful, you have to have enough people who share the same outrage, or at least have powerful people who share it.
And, on the second front, there are certainly many different emotions behind which people might say "that's unacceptable", as well as situations with no emotional content at all. (e.g. "the terms you're offering me to buy my house are unacceptable, because they won't fulfill my goals", vs. "that offer is unacceptable -- how dare you insult me with such a low price".)
In the present context, I took the original poster's "unacceptable" to be about a feeling of moral judgment based on the other things they said around that statement.
Replies from: None, NancyLebovitz↑ comment by [deleted] · 2011-05-01T16:36:36.356Z · LW(p) · GW(p)
I'm actually going to agree with Nancy here.
The "That's unacceptable! This is an outrage! You can't treat me that way! I deserve better" reaction seems to be really important to protect yourself from being taken advantage of. I have very little temper most of the time -- it's very rare for me to stand up for myself, complain about being mistreated, get in a fight, etc. It's more natural to me to blame myself if other people are treating me in a way I don't like. I need to preserve at least some ability to get outraged, otherwise I'll put up with any old kind of treatment.
Oddly enough, caffeine makes me more outraged/arrogant/grandiose/easily offended. I've had some very weird experiences with high doses of caffeine -- it's like being possessed by the Red Queen. The crippling fury of believing I'm superior to everyone around me and the mortals won't recognize my dominion! Off with their heads! I don't consume those quantities of caffeine anymore, since I figured out the bizarre effect it has on me. Of course, if I ever need to psych myself up for a confrontation, I can use caffeine as a handy chemical aid.
Replies from: pjeby, wedrifid↑ comment by pjeby · 2011-05-01T20:16:20.008Z · LW(p) · GW(p)
The "That's unacceptable! This is an outrage! You can't treat me that way! I deserve better" reaction seems to be really important to protect yourself from being taken advantage of.
Why? Is there some useful behavior that you would not engage in if you were not experiencing the emotion?
I don't have to actually be outraged to yell at someone... assuming that yelling is the most useful response in the first place. (And it often isn't.)
it's very rare for me to stand up for myself, complain about being mistreated, get in a fight, etc. It's more natural to me to blame myself if other people are treating me in a way I don't like. I need to preserve at least some ability to get outraged, otherwise I'll put up with any old kind of treatment.
If you have to get outraged to stand up for yourself, this is an indication of a boundary problem: it's a substitute for healthy assertion. (I know, because I'm still discovering all the settings where I was taught to use it as a substitute!)
So, if someone has a "self-defense" objection to dropping a judgment, I first help them work on removing the judgments that keep them from being able to assert boundaries in the first place.
(IOW, the reason people usually have problems asserting their boundaries is because they were imprinted with other moral judgments about the conditions under which they're allowed to assert boundaries!)
Replies from: None, khafra↑ comment by [deleted] · 2011-05-01T22:29:34.029Z · LW(p) · GW(p)
That's probably a good insight.
It's typically been my goal to put up with as much unpleasantness as I can without complaining, for as long as I can. The trouble is that some of the things I've taught myself to tolerate are not good for me. (Everything from untreated illness, to accepting rules that it would be smarter to bend, to letting people put me down in rather hurtful ways.) Becoming more "stoic" (in the sense of more inclined to endure bad circumstances rather than change them) seems dangerous to me -- I'm already too "stoic" for my own good!
Replies from: pjeby↑ comment by pjeby · 2011-05-02T15:35:39.535Z · LW(p) · GW(p)
The trouble is that some of the things I've taught myself to tolerate are not good for me. (Everything from untreated illness, to accepting rules that it would be smarter to bend, to letting people put me down in rather hurtful ways.)
While I haven't been doing work with the developmental stuff for very long (maybe the last 6 months), that sounds pretty much like a Levin stage 1 dysfunction... almost exactly like one of the ones I fixed recently, which has made me much less stoic in that sense.
Becoming more "stoic" (in the sense of more inclined to endure bad circumstances rather than change them) seems dangerous to me -- I'm already too "stoic" for my own good!
I'd suggest Weiss & Weiss's "Recovery from Co-dependency" and Levin's "Cycles of Power" as being very helpful with this.
The essential thesis of both works is that there are patterns of child development during which we learn how to get certain categories of need met, wherein the choices we make lay the groundwork for personality traits (like assertion, self-care, thoughtfulness, etc.).
Generally speaking, a choice that might be adaptive in one phase (say, learning not to cry out as a baby, but instead to wait for someone to show up unprompted) can then result in making later choices that are basically workarounds.
So you end up with messy code in your brain, with lots of patches and workarounds... and the books are like a software developer's "patterns and antipatterns" catalog, listing typical bugs, workarounds, and how things ought to be set up in the first place.
I've made some rather substantial changes to personality characteristics like these (e.g. becoming less stoic, being more comfortable with novelty, more flexible about changes of plans) in the last 2.5 months, and I've barely done anything past development stage 2 yet.
[A side note: I'm not actually using the methods described in the books to implement the changes; the books discuss roleplay in psychotherapy, and miscellaneous self-care activities. I'm instead using other, more-direct mindhacking techniques, while referring to the books as a map of what to change and what to change it to. For example, I've just realized this morning that a chronic problem I've had with planning is probably related to a missed developmental goal in stage 3, so I'm going to go dig through their case studies and such for stage 3 to figure out what I need to change so that I naturally behave differently in that area. But I won't be making that change by roleplaying being a two-year old (with a therapist pretending to be a parent), as Weiss and Weiss suggest people with stage 3 issues do!]
↑ comment by khafra · 2011-05-02T16:50:53.448Z · LW(p) · GW(p)
How'd you learn to yell at people/firmly defend boundaries/etc. in the absence of a feeling of outrage? I couldn't find anything but car dyno tuning when googling for "Levin Stage 1." The only person I know to have taught himself that skill spent months working as the "mugger/bad guy" in a women's self-defense course.
Replies from: pjeby↑ comment by pjeby · 2011-05-02T17:37:32.203Z · LW(p) · GW(p)
How'd you learn to yell at people/firmly defend boundaries/etc. in the absence of a feeling of outrage?
It's not like I tried to learn that as a skill, specifically. What I learned were subcomponents of that skill, which included such things as noticing that I needed something, the need wasn't being met, it being okay to have needs and to be upset they're not being met, etc. etc.
(Being more assertive came about as an unplanned, but natural side-effect of having the building blocks available; I simply noticed that I'm automatically behaving in a more assertive way, rather than trying to behave in a more assertive way.)
I couldn't find anything but car dyno tuning when googling for "Levin Stage 1."
I'm referring to Pamela Levin's developmental cycles model, described in "Cycles of Power" and heavily used in "Recovery from co-dependency." Stage 1 (called "Being" by Levin, and "Bonding" by Weiss & Weiss), is the stage where infants (zero to six months) learn how to respond to their internal physical state, and more or less set their basic emotional tone for responding to themselves and the world.
There are some online resources about the stages at behaviourwall.com -- they appear to be selling some sort of courseware for teachers in the UK to address student behavioral problems through remedial skills work.
The complete model (as described in the books I suggest) includes both developmental tasks or goals (skills to be learned) and "affirmations" (signals sent from parent to child to establish the child's outlook or attitude), as well as typical patterns of dysfunction and distortion occurring in the skills and attitudes.
Btw, don't be fooled by the pretty charts on behaviourwall.com... there's not enough information there to actually do anything. In general I have noticed that if a given "affirmation" or task is one you haven't successfully acquired, you will not really know what it means from a brief description; examples of healthy and dysfunctional people's thoughts and behavior relevant to that task or affirmation are essential to being able to even grasp what a real problem is, let alone how to address it.
Without this information, the task and affirmation descriptions sound like nonsense or trivialities, especially since they're phrased for comprehension by children!
For example, the stage 3 affirmation, "you can think about your feelings" sounds stupidly obvious, but the actual skill meant by this phrase is not so simple or obvious! It really means something more like "you can think about your goals while in the grip of a strong emotion, while considering what you really want, without first worrying whether what you think or say is going to embarrass your parents or get you in trouble, and without needing to suppress what you actually want because it's not allowed... ", and, well, a much longer description than that. ;-)
(And of course, it's not just the idea that you can do that, but the actual experience of being able to do it that matters. The "can" of actually riding a bicycle, not the "can" of "of course it's possible to ride bicycles".)
↑ comment by NancyLebovitz · 2011-05-01T23:29:33.196Z · LW(p) · GW(p)
I don't think I'd have been as definite to the guy with his car in my flea market space if I didn't think he was in the wrong. If it had been a minor loss to me and seemed like an honest mistake on his part, I might have let him have the space.
On the other hand, I think I felt more stubborn and determined than outraged, so it might not be the sort of thing you're talking about. And I got what I wanted, and didn't feel a huge need to talk about it afterwards, as I recall. (The felt need to keep talking about relatively minor offenses might be a topic worth pursuing.)
"I won't let you get away with this!" and "This is unacceptable!" might be really different emotions.
Would you take a crack at the matter of political action? Suppose that the government decides that reviving frozen people is impossible, cryonics is based on fraud, and therefore freezing people is illegal. How could political action be taken without encouraging a sense of outrage?
Replies from: pjeby↑ comment by pjeby · 2011-05-02T15:58:29.372Z · LW(p) · GW(p)
"I won't let you get away with this!" and "This is unacceptable!" might be really different emotions.
Well, they're certainly different statements, and I can imagine people with either emotion saying either, so again it's not about the words.
And I got what I wanted, and didn't feel a huge need to talk about it afterwards, as I recall. (The felt need to keep talking about relatively minor offenses might be a topic worth pursuing.)
Yes, as I discuss in some of my courses, when you find yourself going over a situation over and over again, it's an indication that you think something "shouldn't have happened", when in fact it DID happen... which is definitely a symptom of the category of emotion I'm talking about, as well as a failure of rationality. (i.e., arguing about what "should have" happened is not productive, vs. actually thinking about how you'd like things to happen in the future... After all, we can't change the past.)
Would you take a crack at the matter of political action? Suppose that the government decides that reviving frozen people is impossible, cryonics is based on fraud, and therefore freezing people is illegal. How could political action be taken without encouraging a sense of outrage?
I'm not saying you can't use outrage as a tactic. I'm just saying that having outrage be an automatic response to almost anything is a terribly bad idea. In programming terms, we'd call it a "code smell"... that is, something you should be suspicious of.
Some people might say, "ok, I'll just be suspicious when I'm feeling outraged, and be extra careful", except it just doesn't work that way.
Because, when you're already outraged, you feel certain that things shouldn't have happened that way, and that you're in the right, and that Someone Should Do Something. Self-suspicion simply isn't going to happen when you're already filled with a spirit of total self-righteousness.
What's more, outrage is self-maintaining: under its influence, you are primed to defend whatever principle created the outrage, and the very idea that you should give up being outraged is, well, outrageous!
IOW, outrage is a form of not-very-pleasant wireheading that makes people not want to take out the wire, because they believe that terrible things will happen or society will collapse or some unspecified outrage will occur. If you think you want to keep the wire in, it's almost certainly the wire talking.
So, IMO, one should not have the wire plugged in, when deciding whether it's a good idea to have it plugged in! There may be valid game-theoretical reasons for you to want to precommit to be, say, outraged about parking spots. However, you are not in a position to make that decision rationally, if you currently do not have the choice to not be outraged.
That is, if you automatically become outraged by situation X, then you are not in a good position to reflect rationally on whether it is a good idea to be outraged by situation X, because by the very nature of automatic outrage, you already vehemently believe it's a good idea.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-05-02T16:08:45.706Z · LW(p) · GW(p)
If you're using outrage as a tactic, is that equivalent to trying to train other people to be automatically outraged about something you want changed?
Replies from: pjeby↑ comment by pjeby · 2011-05-02T16:16:16.901Z · LW(p) · GW(p)
If you're using outrage as a tactic, is that equivalent to trying to train other people to be automatically outraged about something you want changed?
Yes, so whatever you're going after had better really be worth it.... or at least more important than whatever those people would have been outraged at instead!
(The media makes a living by provoking outrage, so it's not like most people aren't already being outraged by something.)
↑ comment by MinibearRex · 2011-04-30T18:22:55.823Z · LW(p) · GW(p)
I think we're going in circles here. I'm agreeing with almost everything you're saying; I think we're just using different terms for the same thing and the same term for different things.
Trying to make my point as simple and brief as possible: if I see someone hurting, I get a small twinge of sympathetic pain. That pain sparks the conscious response, "What needs to be done in order to help this person?", and I start to think about how the problem arose, how to fix it, etc. If I see someone who was in pain and now has been helped by me, I feel a small spike in sympathetic joy.
I have no particular problems with this mental algorithm. But, the group of Buddhists I was talking to said that this algorithm was bad, and that I should get rid of it. My understanding from what you've said, and what other people have told me, is that this is not the position of most Buddhists. Since that qualm about Buddhist meditation practices has now been satisfied, I'm looking forward to reading the rest of the posts in this sequence.
↑ comment by Alex_Altair · 2011-04-30T00:00:12.537Z · LW(p) · GW(p)
I think the terminology I use for what you're talking about is simply "desire". Desire is definitely separate from, but related to, the emotions that motivate. I think failure to separate these concepts is responsible for some stereotypes of rationality (see Dr. Manhattan). So, while controlling emotions is helpful, changing my desires is effectively changing my utility function. This can get a little complicated in some areas such as procrastination, but in general I want to keep them.
This is one of the things that turns me off the most from Buddhism. I'm interested in meditation and deep introspection, but the Four Noble Truths at the heart of Buddhism start out as:
1) All life is suffering. 2) The cause of suffering is craving. 3) Therefore we should stop craving.
If "craving" means desire, then this is horrible. But if it means something else, then I'm interested.
Replies from: MinibearRex↑ comment by MinibearRex · 2011-04-30T00:59:12.341Z · LW(p) · GW(p)
I agree with you there. The problem is, "desire" is not very much different from "preference", and I think that those thingies are inextricably bound up in emotions. If you purge emotions, I think your desires would go away too, which would probably make you indistinguishable from a computer in standby mode.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-04-30T01:09:17.128Z · LW(p) · GW(p)
If you purge emotions, I think your desires would go away too, which would probably make you indistinguishable from a computer in standby mode.
This sounds like a theory that could use testing.
Replies from: gwern, MinibearRex↑ comment by gwern · 2011-04-30T03:45:27.126Z · LW(p) · GW(p)
I think it has, in effect, with aboulia: http://en.wikipedia.org/wiki/Aboulia
"Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them..."
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-04-30T04:22:07.578Z · LW(p) · GW(p)
There's a couple bits of that description that I find interesting, in this context:
The clinical features most commonly associated with aboulia are:[5]
...
- Reduced emotional responsiveness and spontaneity
As opposed to 'lack of emotional responsiveness' or 'lack of emotions' - in other words, I suspect that the people in question are experiencing emotions, but don't feel any drive to communicate that fact.
Most experts agreed that aboulia is clinically distinct from depression, akinetic mutism, and alexithymia.
This is less clear, but again it reads to me as saying that aboulia is not related to a lack of emotions or emotional awareness. I also note that anhedonia isn't mentioned at all in relation to it.
↑ comment by MinibearRex · 2011-04-30T01:31:28.060Z · LW(p) · GW(p)
Minds, as we know them, are engines of optimization. They try to twist reality into a shape that we want. Imagine trying to program a computer without having a goal for the program. I think you're going to run into some challenges.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-04-30T01:54:01.322Z · LW(p) · GW(p)
We're not in disagreement about that. But your assumption that emotions are necessary for goals to be formed is still an untested one.
There's a relevant factoid that's come up here on LW a few times before: Apparently, people with significant brain damage to their emotional centers are unable to make choices between functionally near-identical things, such as different kinds of breakfast cereal. But, interestingly, they get stuck when trying to make those choices - implying that they do attempt to e.g. acquire cereal in the first place; they're not just lying in a bed somewhere staring at the ceiling, and they don't immediately give up the quest to acquire food as unimportant when they encounter a problem.
It would be interesting to know the events that lead up to the presented situation; it would be interesting to know whether people with that kind of brain damage initiate grocery-shopping trips, for example. But even if they don't - even if the grocery trip is the result of being presented with a fairly specific list, and they do otherwise basically sit around - it seems to at least partially disprove your 'standby mode' theory, which would seem to predict that they'd just sit around even when presented with a grocery list and a request to get some shopping done.
Replies from: h-H↑ comment by h-H · 2011-05-01T04:09:50.544Z · LW(p) · GW(p)
But isn't being presented with a to-do list, or alternatively feeling hungry and then finding food, different from "forming goals"?
To be more precise, maybe the "survival instinct" that leads them to seek food is not located in their emotional centers, so some goals might survive regardless. But yes, the assumption is untested AFAIK.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-05-01T05:24:33.030Z · LW(p) · GW(p)
But isn't being presented with a to-do list, or alternatively feeling hungry and then finding food, different from "forming goals"?
I don't think so, but that sounds like a question of semantics to me. If you want to use a definition of 'form goals' that doesn't include 'acquire food when hungry', it's up to you to draw a coherent dividing line for it, and then we can figure out if it's relevant here.
↑ comment by AdeleneDawner · 2011-04-29T22:32:56.131Z · LW(p) · GW(p)
How does your system handle jealousy, rage, and desire-for-revenge?
↑ comment by AdeleneDawner · 2011-04-29T04:25:11.161Z · LW(p) · GW(p)
The first sentence of that response seems to be unrelated to the rest of it. What does the rest of it have to do with feeling upset?
Replies from: MinibearRex↑ comment by MinibearRex · 2011-04-29T20:18:38.825Z · LW(p) · GW(p)
I didn't communicate what I meant by "feeling upset" clearly. I tried to clarify that here.
↑ comment by AdeleneDawner · 2011-04-29T02:38:54.955Z · LW(p) · GW(p)
What do you mean by 'should' in this context? And how did you come to that conclusion?
(I know that these questions sound rhetorical, and it may be useful in a sense to take them that way, but they're not intended to be. Rather, I'm actually curious; my mind naturally works in a rather Buddhist kind of way, and never generates that kind of conclusion except in a purely instrumental sense, and I've always wondered how normal people do it.)
↑ comment by DavidM · 2011-04-28T22:53:43.799Z · LW(p) · GW(p)
First of all, I don't believe I said anything about detachment from emotion.
Many Buddhist organizations see and practice meditation as a form of psychotherapy / relaxation, which is different from what I'm talking about. What they said to you seems in line with that style of practice---one that aims at not being stressed, not reacting in unhelpful ways to emotional upsets, not worrying over what one can't control, etc.
Many people seem to find that style of practice extremely helpful for themselves. For a person whose sole goal is to gain insight into the workings of their mind, I would probably not recommend it.
I wouldn't say that the group you're mentioning has "crazy ideas" or "bad communication". I'm sure they mean exactly what they say, and what they say doesn't seem especially unreasonable. Many people would benefit from being less reactive. I think it's simply a case where their goals are to become less reactive, and they practice accordingly, whereas a person who does not have that as a goal of meditation (and instead has the goal of e.g. insight into the defects of their own cognitive processes) would not meditate in a way that aims solely at cultivating that attitude. Different strokes for different folks.
Replies from: Lila, inanytime, MinibearRex↑ comment by Lila · 2011-04-30T10:02:06.757Z · LW(p) · GW(p)
First of all, I don't believe I said anything about detachment from emotion.
You used the word "attachment" a lot, as an example of something bizarre and, it seemed, negative.
What do you mean by attachment? (And why is it that this word is so often used for so many different things?)
I am looking forward to part 2 and 3, and I hope that you are planning to give full instructions on how to do the meditation.
Replies from: donjoe↑ comment by donjoe · 2011-05-04T13:50:26.910Z · LW(p) · GW(p)
Agreed, that's one of the main things this article leaves me hoping to see fully explained in future installments or comments: the term "attachment". Until I understand what you mean by it, I can't have a snowflake's hope in hell of determining whether it's something that afflicts me or that I might want to get rid of (by your method or by any other).
↑ comment by inanytime · 2011-04-29T00:18:33.296Z · LW(p) · GW(p)
These two goals may not be as dissimilar as they seem to you. Sometimes it's better to stop and think than to get into action right away. In fact, that may be one of the major problems of our time. People want to act, they want to contribute and do something. That explains the popularity of charities generated by large corporations that aim to perpetuate the machinery causing the problems in the first place. To overthrow an evil tyrant without stopping to think, sans attachment, may not get rid of the system that creates the tyrant in the first place. It may be better to stop and think and then generate a solution. I do not care about how Buddhism deals with it, but to reach a state of non-reaction may be very similar to the aim of non-attachment that you propose in your article.
↑ comment by MinibearRex · 2011-04-29T02:29:03.481Z · LW(p) · GW(p)
Thanks. That pretty much answers my question.
I can see how that type of practice might help some people. Personally, I'm not all that interested, probably because I am fairly interested in saving the world.
comment by PlaidX · 2011-04-29T06:00:07.136Z · LW(p) · GW(p)
Wow, I often curse the world for not dropping the information I need into my lap, but here it seems to be on a silver platter. When I got around to reading this post, I had literally 23 tabs open, all of them about research into meditation.
I've been meditating for about six months and in the last week or so, getting disenchanted with the mainstream (within theravada buddhism) model of the path, and looking into alternate sources of information.
It's excellent to see that there are people already succeeding in the independent investigation I was wearily beginning to attempt!
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2011-04-29T09:38:15.668Z · LW(p) · GW(p)
http://www.interactivebuddha.com/Mastering%20Adobe%20Version.pdf has seemed like a good guide (esp. part III). Though I can't vouch for its accuracy yet, since I haven't even properly reached the access concentration stage in my own meditation practice.
comment by wilkox · 2011-04-29T03:45:14.072Z · LW(p) · GW(p)
I'm confused by the idea that the kinds of meditation you are talking about have until now been practised by "small and somewhat private groups" in secret. Why would this kind of meditation be taboo? What did these groups have to fear that drove them to secrecy, and why has that changed?
Replies from: DavidM↑ comment by DavidM · 2011-04-29T04:09:45.969Z · LW(p) · GW(p)
At least in the context of Buddhism-inspired practices, the reasons are threefold...
1) Monks in many (all?) Buddhist traditions are prohibited from discussing their own attainments with non-monks by the rules of their organization.
2) Most (nearly all?) contemporary dharma centers / etc., for various sociocultural reasons, have strong taboos concerning discussion of attainments.
3) If you tell a person in normal society that you are interested in reaching enlightenment, hope to do so soon, or perhaps already have, you are most likely to be written off as mentally ill, a member of a cult, a drug user, or something along those lines.
So, suppose you are a contemporary Westerner interested in learning about and openly talking about meditation and enlightenment. There is almost no context in which this would be socially acceptable, apart from the context of a small group of people who share the same interest and make it a point to keep their interests hidden from the public at large.
The only change that I know of has been that people are willing to talk about all kinds of things on the internet that may be taboo in other contexts, and are better able to find like-minded peers who share their interests. (If you want an example, try talking about the benefits of cryonics in person vs. on LW and see how your reception differs.)
The case might be different with practices associated with non-Buddhist traditions; I wouldn't know.
Replies from: None↑ comment by [deleted] · 2011-04-29T21:17:40.307Z · LW(p) · GW(p)
You use somewhat poetic language when talking about this secrecy and it might be prone to misinterpretation. I'm quite sure there are secretive groups pursuing expertise in meditation if the meaning of secrecy is such that you could replace "expertise in meditation" with, say, "playing tabletop role-playing games" without changing the sentence's truth-value. So they meet privately, do their thing and don't talk about it with people who wouldn't get it (most people, that is) because nobody likes to be called weird.
However, the second paragraph of your post made me imagine secret societies, with robes, masks, irregular meetings in remote locations during the darkest hours of the night and other assorted features. For a moment there, I expected to read a genuinely crazy conspiracy theory. I see that this is not the case but I am still slightly bewildered.
comment by [deleted] · 2011-04-29T01:04:15.068Z · LW(p) · GW(p)
Nice article. Anything that makes more people aware of serious meditation gets my upvote. It's such a pity that amazing works like the Visuddhimagga are pretty much unknown outside some small Theravadan niche.
My main criticism - and not just for you - would be that too much time is spent on talking about meditation and very little time devoted to actually doing something. I love all the extensive maps and detailed techniques, so I'm totally guilty of this myself, but just sitting down and doing a basic technique like Mahasi-style noting instead would be better for pretty much anyone.
Also, I'd like to say more about the risks, but don't exactly know how. Insight-focused meditation is a big commitment. Once you start, you are pretty much stuck until you work through a lot of suck, and trying to drop out later will only prolong the suckiness. But then, I know I wouldn't have listened to or understood such a warning when I started out, so I'm not sure it'll do any good. Nor do I think people shouldn't meditate. Just, you know, people shouldn't look at the Tibetans and think it's all smiling and happiness all the time and then panic when things get tough and it turns out their mind is actually a mess.
I'm looking forward to your take on good methods. I'll save any comments on that for your next post.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-04-29T01:22:12.302Z · LW(p) · GW(p)
Also, I'd like to say more about the risks, but don't exactly know how.
I think one of the most useful things to say about risks is 'if you seem to be badly or dangerously stuck, talk to someone sane about it'. I've looked into traditional meditation with an eye to the risky bits, and it looks to me like someone stuck in one of those places will generally have a really hard time getting themselves out of it without outside help. Also, giving specific warnings seems likely to make things more dangerous, not less, since expecting something to be dangerous or go badly can be a self-fulfilling prophecy in this kind of situation.
Replies from: None↑ comment by [deleted] · 2011-04-29T18:28:07.886Z · LW(p) · GW(p)
Define "sane". The main problem is that unless the other person is themselves an experienced meditator (or maybe very good at rationality), they are pretty much useless and might easily make it worse.
By far, the most typical reactions to someone stuck in the Dark Night or going through a peak experience are alienation, attempting to engage their psychological waste[1] or treating it as a mental illness[2]. None of these helps; only more practice and calming down does.
Overall, I don't necessarily disagree with you, but actual help requires either personal experience or sanity that would reasonably pass LW standards, and those aren't all that common (or easy to identify when you're stuck). I had enough negative experiences in that regard that I have a personal "shut up and practice" policy.
[1] Meaning, trying to engage the content when you should be engaging the thought pattern. It's like the difference between psychoanalysis and CBT.
[2] I know several people who had an early peak experience, thought they were Jesus for a few days and who got institutionalized. They typically didn't mind (Jesus doesn't care about wards) and it never lasted long, but that's the kind of response you get from most professionals when weird shit happens.
Replies from: glunkthunker, AdeleneDawner↑ comment by glunkthunker · 2011-04-29T23:51:02.767Z · LW(p) · GW(p)
As someone who stopped early on because of a frightening experience I'd be interested in more discussion about risks. I'm also curious about the term 'Dark Night.'
Also, I was told that it's best to learn how to meditate in a group with a trained facilitator as this can greatly reduce the risk of bad reactions. This was true in my case. I only encountered problems when I went out on my own.
Replies from: None↑ comment by [deleted] · 2011-05-01T01:05:16.581Z · LW(p) · GW(p)
"Dark Night (of the Soul)" is a common term[1] used by people for the not-so-pleasant period between a peak experience and (re-)establishment of equanimity. To give a bit of an analogy, it's like after you realize that an important core belief is bullshit, but before you have a comfortable world-view again. Symptomatically, it looks a lot like (sometimes manic) depression.
Technically it's probably not a risk because it's inevitable. Everyone passes through it (multiple times), although the extent of the suckiness varies a lot.
The important part here is to not abandon the practice and to distance oneself from whatever bad feelings might come up. People easily get frightened or feel disgusted with their life and start making stupid decisions. (Been there, done that.) The best thing is just to relax and postpone any drastic change until the suckiness has passed.
I can't say if groups are any help as I'm a complete autodidact. A supportive group and a calm, reassuring teacher might be beneficial, sure. But then, all advice boils down to "this is normal, don't worry, keep on going" and it depends if you prefer someone else to tell you this or to do so yourself. ;)
Edit: However, finding a useful teacher or group for insight meditation is pretty hard. As DavidM mentioned, most people unfortunately don't discuss or care about their attainments and most meditation practiced today is either focused on concentration or well-being. I'm not aware of any risks or side-effects associated with those (except for some people getting addicted to the feeling of bliss and becoming meditation junkies, but how negative this is depends on your attitude towards wire-heading).
[1] Specifically, I took it from Daniel Ingram as a name for the dukkha nanas, but many people seem to come up with the same or similar labels on a regular basis. Daniel's book is highly recommended.
↑ comment by AdeleneDawner · 2011-04-29T23:06:48.427Z · LW(p) · GW(p)
Erk. Yeah, I should have noted that I was using a non-standard definition there, and we seem to be on the same page regarding what one should actually look for in a 'sane' person. (My rule of thumb is that their response to such things should boil down to a compassionate spin on 'so what?'.)
comment by ksvanhorn · 2011-04-30T22:49:02.123Z · LW(p) · GW(p)
Common kinds of meditation instructions, such as "relax, follow your breath, and cultivate equanimity towards whatever thoughts arise without getting involved with them", are unfortunately not the kinds of instructions that have an especially good track record among typical meditators. At least with respect to attaining the kind of insight under discussion. Such instructions do seem to work very well for helping people to be relaxed and less overemotional, though.
I'm interested in both stress reduction and enhanced clarity of thought/perception. Do you consider the kind of meditation you are advocating to be useful for both purposes, or just the latter?
Replies from: DavidM↑ comment by DavidM · 2011-05-01T03:10:30.085Z · LW(p) · GW(p)
I would say that the methods leading to enlightenment will help with stress, but only indirectly. Once you've begun to do away with the delusions that cause you suffering, life starts to go a lot more smoothly. But the practice that most directly aims at that is not a relaxing one and is not one that I would ever recommend to someone to control their stress levels for immediate relief.
Parenthetically, some people find that, with partial enlightenment, the practices that directly improve mood and stress levels become ridiculously easy (for those who care to indulge). Most beginners typically find the same practices hard to execute successfully.
comment by knb · 2011-04-29T19:59:09.419Z · LW(p) · GW(p)
I have made all kinds of claims in this post, some of which may be seen as wild, reckless, unfounded, unargued-for, and so on. Certainly I'm not doing very much hedging and qualification. The really remarkable thing that communities interested in this kind of human development have discovered is that people who work at meditation long enough will reliably and regularly say the same kinds of things, in the same order; and people who have stumbled onto the exercises that lead to this kind of development outside of these communities will also, reliably and regularly, say the same kinds of things (although some translation between cultural frameworks may have to go on first).
Scientologists report experiencing new levels of peace and self-control, too. Of course, illiterate Catholic kids from remote villages have reported ecstatic visions similar to those achieved by ancient mystics. People have a tendency to report the expected, approved, results from the sacred rituals of their religious clique. I wouldn't expect Buddhism to be an exception.
Replies from: DavidM↑ comment by DavidM · 2011-04-29T20:32:24.885Z · LW(p) · GW(p)
You quoted me talking about people outside these communities (i.e., people for whom neither Buddhism nor meditation has any special cultural significance) but appear to have ignored it in your response.
That there is a belief-independent similarity in people's experiences was my point!
I don't really know how to respond to your comment other than to re-emphasize what I wrote and you quoted.
Replies from: knb↑ comment by knb · 2011-04-29T21:01:38.934Z · LW(p) · GW(p)
Sorry if I wasn't being clear. What I am saying is that the people who "stumble onto the exercises" are being treated as culture-naive, which is a big assumption, particularly since they were talking with other Buddhists like yourself, and their experiences were being translated into your "cultural framework". That was the reason I mentioned "illiterate Catholic kids from remote villages who have reported ecstatic visions similar to those achieved by ancient mystics." There was an assumption that they were naive about what experiences were culturally expected and approved, when in fact they weren't.
Replies from: DavidM, khafra↑ comment by DavidM · 2011-04-29T21:44:23.711Z · LW(p) · GW(p)
I never claimed to be a Buddhist and I explicitly disclaimed the value of trying to see my post as an instance of Buddhism (near the top), so I don't know why you've called me one.
I prefer to be thought of as someone sharing what they know, rather than an adherent of some system.
"stumble onto" could mean a number of things. Effective meditation exercises need not be esoteric, and people can and do find them just by using their minds in certain uncommon ways. Sometimes that's just what happens, and then they search out other people or groups who they imagine might be able to tell them something about the experiences they've had. Cases like those are the really suggestive ones.
comment by Jonathan_Graehl · 2011-04-28T21:54:28.149Z · LW(p) · GW(p)
Similar descriptions of cognitive artifacts abound, e.g. the "near death experience".
If you're not careful in how you identify the set of people who've had a series of meditation-guided changes on the way to bona-fide enlightenment, "people who have had such experiences describe them in similar ways (modulo 'cultural differences')" is just a tautology.
I don't rule out the possibility that there are exercises that will equip me sooner with a clear lens than what I already do (thinking about and really admitting, to myself and friends, how I feel and why). Upvoted because I want to examine your specific proposals.
Replies from: DavidM↑ comment by DavidM · 2011-04-28T23:10:35.521Z · LW(p) · GW(p)
I'm not sure what cognitive artifacts you have in mind. "Enlightenment" is not any particular mental state. It has no particular qualities. In many ways it's quite mundane. Just everyday life, minus some unhelpful kinds of cognition. No special relation to near death experiences that I can see, if that's what you meant.
On the other hand, if what you meant is that near death experiences tend to play out in a certain pattern, which is like what I'm claiming about the regularity of meditation experiences, that's an interesting comparison. I'd say it indicates that both phenomena have a strong biological basis (independent of culture) that is worth investigating. The biggest difference that I can see is that near-death experiences play out over a short period of time, whereas the progression of meditation experiences can unfold over days or weeks or more, with lots of everyday non-meditation experiences interspersed between them. (Described in Part 2)
Agree that "cultural differences" needs to be defined carefully so as not to make the hypothesis untestable.
Being completely honest about your thoughts is great advice for anyone. Meditation is really a different sort of exercise. Honesty happens anew in each moment in which there is something to be honest about. Meditation is something you do for awhile, and eventually, something that you can forget about.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2011-04-28T23:30:39.731Z · LW(p) · GW(p)
near death experiences tend to play out in a certain pattern, which is like what I'm claiming about the regularity of meditation experiences
Yes, that's all I meant. NDE accounts are claimed to be similar to each other by those advocating that they reflect some supernatural reality. I didn't realize the ambiguity in what I wrote; I meant artifacts as in distortions inherent in the mechanism. I tend to explain any 'universality of descriptions of subjective experience' at least partially with 'there may be something about the way our brains work that causes that'. Since you haven't claimed meditation is about anything other than thinking/feeling in ordinary ways, I'm not making any other point.
Honesty without limit is ridiculous, of course (unless it really is the predominant terminal value). I was thinking, specifically, of noticing when something would be painful to admit, and then experimenting provisionally with admitting it. Usually it's a relief.
Replies from: DavidM↑ comment by DavidM · 2011-04-28T23:35:07.432Z · LW(p) · GW(p)
Honesty without limit is unhelpful, but in many contexts, the value of honesty at the margin tends to be high, which is why I'd say it's great advice.
Are there times where there is something that would be painful to admit, but you don't realize until later that it was weighing on you? I wonder whether you would find doing an active search for such things beneficial (in the right social contexts).
Replies from: Jonathan_Graehl, Jonathan_Graehl↑ comment by Jonathan_Graehl · 2011-04-29T01:20:50.017Z · LW(p) · GW(p)
Are there times where there is something that would be painful to admit, but you don't realize until later that it was weighing on you?
Yes, of course. Sometimes I'm too focused to notice in the moment.
Focus (actually trying to perform well at a given task) has its advantages. Maybe it's possible to train (or cue with some external trick) brief moments of global or introspective thinking, but quickly returning to the flow if adjustment isn't needed. Probably there's both a trade-off and a happy medium.
↑ comment by Jonathan_Graehl · 2011-04-29T00:59:10.608Z · LW(p) · GW(p)
I haven't tried it much in real time; mostly post-mortem. I guess I could experiment with low-stakes cases (nearly anything with strangers in the city).
comment by bogdanb · 2011-05-15T13:48:00.392Z · LW(p) · GW(p)
Hi David! This whole meditation thing sounds interesting, though I’m having trouble figuring out exactly why. (That’s probably a question about my brain rather than about meditation. I’m mentioning it as a hint to why I’m asking what I’ll ask below, not as a subject for discussion.)
I generally dismiss all new-agey stuff immediately as unworthy of attention. AFAIK my reasoning is something like 1) this is associated with obviously wrong theories; 2) thus, people involved have wrong thinking; 3) thus, their statements have negligible evidential value; 4) even if some of the stuff happens to be true (i.e., work), given (3), the probability of that is nearly the same as that of just about any random thing I could pick being true; so 5) pretty much any other method of picking something to devote attention to would be at least as good.
Your two posts did a very good job of separating the particular brand of meditation you discuss from the silly theories around them, so I’m paying attention at least to what you’re saying. But that only means that your statements don’t have obviously negligible evidential value, not that they are significant evidence, which means I have to actually think about it.
The factory & lenses metaphor seems like a good argument for why meditation should work in the sense of allowing a flawed process to discover and improve itself despite its own flaws. But: the key part of that metaphor is that, using a flawed lens, one can discover the flaws of the lens by confronting contradictory evidence caused by the flaw; and people are notoriously prone to not reaching the correct conclusion when presented with contradictory evidence. Cases like Aristotelian physics and Descartes' conclusion on God make me wary of conclusions reached based just on what happens inside one's own mind. This doesn't mean that I immediately discount such conclusions—we did eventually think about the scientific method—but still, I do get uncomfortable with just in-brain processes.
Which is my problem with your posts. You discuss meditation in what appears to be an empirical way, but as far as I can determine (almost) all empirical observations involved happen inside the mind of the one experimenting. They aren’t quite verifiable by others. (You explicitly say that you can’t communicate lots of it; I’m not arguing that you’re hiding the testability of the claims or something.) You say that different people tend to describe similar internal observations and effects after following various similar processes. But that’s different from different people describing similar observations after witnessing the same process.
In your analogy, it is interesting if two different factories (initially constructed by the same method) each discover their own flaws, communicate with each other, and notice that they have found the same kind of defect in their own lenses; but that is less conclusive than exchanging their lenses and confirming each other's observations, and even less conclusive than obtaining lenses that are not generated by their kind of factory. Barring the latter methods of testing, the factories are still vulnerable to some defect that causes (many or all of) them to systematically reach wrong conclusions. (As a real-life example, consider the N-rays experiment. Note that the definitive conclusion was reached not after Wood reproduced the experiment on his own (unsuccessfully), but when he interfered with the experiment run by Blondlot—who, incidentally, wasn't quite convinced even then.)
OK, sorry for the long introduction. I wanted to give a background for what I’m asking below, and I’m really bad at summarizing.
What I’m looking for is if you can think of consequences of meditation that are observable by others than the one meditating.
This definition is quite wider than I’d like it to be, but it’s hard to express my precise meaning (mainly because I’m not quite sure I understand the claims), which is partly the reason for the big text above; I’m hoping you can figure out the kind of observations I’m talking about.
As a trivial example, it’s obvious that someone who meditates is likely to be observed to meditate, which is not what I’m talking about.
You claim that a benefit is happiness, which is also observable by others (with some fuzziness), but isn’t that big a deal; one can probably be happier with a steady diet of drugs, and it’s not quite obvious that the costs are bigger than an hour or so daily for a long period. (I’m a bit fuzzy on the cost here: you seem to claim that after achieving enlightenment the changes become permanent, but it’s not clear if that means “as long as you keep meditating an hour a day” or not; people into meditation seem to do it more-or-less until they stop being interested in meditation, but it’s not obvious if they stop because it didn’t work or because they’re done—and, as I said, I pay attention to statements from people like you but not to most practitioners of such methods.)
The rest of the benefits all seem to be only introspectively observable. I’d like those benefits, but I wouldn’t trust my own introspection to judge if I gained them or not. Being divorced from all the usual mumbo-jumbo, your procedure seems like something I’d like to try, but only if I can ask someone else to judge the results. (I’m thinking of asking a friend to observe some things about me, not knowing exactly what I’m doing, and judging the effectiveness by his observations. I’m aware this still wouldn’t be a double-blind test, but it’s still much better than judging myself.)
Note that I’m not asking you to come up with an experimental protocol or anything even close to that. I’m quite willing to do that myself, but I’m just not sure if you claim among the effects of meditation anything that is (a) quantifiable by someone else than the meditator and (b) that I’m reasonably sure is an effective benefit.
I’m not quite sure of how to express this, but let me give some concrete examples:
Being less attached to things might pass (a); my friend could observe that I get less angry or sad about things. But it doesn’t quite pass (b), as I’m not quite convinced that would help me to achieve goals. (There are persuasive arguments both pro and con.)
Understanding the limits of my own conceptual apparatus passes (b); I’m quite sure that’s helpful. But it doesn’t pass (a), as my friend can’t quite tell if I correctly found my limits or not.
For contrast, something like “acting for bigger long-term gains rather than short-term ones” passes both (a) and (b). (As long as the effect is strong enough.) I’m aware that it’s not enough for a scientific confirmation of the benefits of meditation (e.g., something else might cause the change). It’s just a minimum to at least attempt the experiment. (Also, note that this minimum would be very different for an experiment that didn’t involve my own mental processes.)
Replies from: DavidM, AdeleneDawner↑ comment by DavidM · 2011-05-16T04:17:58.936Z · LW(p) · GW(p)
The factory & lenses metaphor seems like a good argument for why meditation should work in the sense of allowing a flawed process to discover and improve itself despite its own flaws. But: the key part of that metaphor is that using a flawed lens one can discover the flaws of the lens, by confronting contradictory evidence caused by the flaw; and people are notoriously prone to not reaching the correct conclusion when presented with contradictory evidence.
Well, the metaphor only goes so far. This process does not ask a person to explicitly apply what they learn in meditation. (If it did, the possibility for bias and error would be quite large, as with most things.) Rather, cultivating attention and perception allows the defects to be seen clearly, and one's brain somehow manages to correct them "under the table," leaving it somewhat mysterious how that happens.
This is something I planned to write about in Part 3. To be honest, I find it surprising and somewhat bizarre that this can happen, and that it can happen in such a regular way, in discrete steps. If anything strains my own credulity about the whole process, it's this. (In my experience and the experiences of others it has worked this way, which makes it merely surprising to me. That information probably doesn't help you, though.)
The problem I see with your request is not that you want something that meditation causes which is observable by a third party (there are lots of potential ideas in the comments), but that you want something that meditation causes which is observable by a third party and which your goal structure approves of. Which, in principle, is fine, but I can't say I know what your goal structure is. I have continually emphasized that the value of enlightenment for a person depends on their particular goals. (There are things about it that I think would benefit most people or everyone, but it doesn't help much to say that when people can't conceptualize those things and so can't judge now whether they actually would want them.)
Adelene, who counts as partially enlightened according to my model, describes being able to see multiple senses clearly enough, and close enough to simultaneously, to be able to transcribe her synaesthetic experiences on paper. Would that be a benefit for you? Perhaps it depends on whether you have visual synaesthesia or a fertile imagination.
I suggest that "wide perceptual width" (a side effect of enlightenment) may lead to strong improvements in the ability to observe and describe parts of one's visual field that are not actively being focused on. Would this be a benefit for you? In principle many people might be indifferent to it...apart from visual artists, for whom it might be anything from really important to life-changing. Or police officers, for whom it might one day spell the difference between getting killed and not.
I spelled out some other possibilities in Part 1, and I think there are even more in the comments. And there are others which I haven't mentioned yet, but am working on. And there are even others which I haven't thought of in the first place.
So, I can't help with your question unless I know your goals.
I understand where the desire for third-party verification comes from. But you wrote
your procedure seems like something I’d like to try, but only if I can ask someone else to judge the results.
which makes it sound as if you think this is just a cute way to improve some mental abilities and want to be sure that you can tell whether it worked. I hope my posts so far didn't give you the impression that I think this is a good idea. I would strongly suggest not meditating unless you're prepared for potentially large changes in the way you see yourself and the world. If that's something you're interested in, the side effects can be really awesome. And if you want the side effects badly enough, perhaps that's a good reason, too. But your attitude seems very casual, which in my opinion is likely to lead either to you getting more than you bargained for, or to you quitting at stage 3 because you don't have enough commitment to the end result and thereby causing yourself a lot of pointless, avoidable suffering. (But if I'm misreading you, just say so.)
So, think about it.
And, the clarification that you asked for:
you seem to claim that after achieving enlightenment the changes become permanent, but it’s not clear if that means “as long as you keep meditating an hour a day” or not; people into meditation seem to do it more-or-less until they stop being interested in meditation, but it’s not obvious if they stop because it didn’t work or because they’re done—and, as I said, I pay attention to statements from people like you but not to most practitioners of such methods.)
Let me start off by saying that I don't speak for anyone else, so I can't comment on why specific people stop. If you look at the population of people who have stopped, probably the whole range of reasons that people stop doing any kind of self-cultivation will be represented (e.g. same as for practicing a musical instrument, writing fiction or poetry, drawing, etc.).
Once you reach any stage of enlightenment, it's permanent; no more meditation (and no more anything, except maybe food and air) is required to maintain and upkeep that attainment. In principle a person could completely forget about meditation and everything related to it and go on their way, and still retain all the benefits. (Most likely their attention and perception have been cultivated so much that they can't help but do meditation-like things when going through daily life, so they might continue to make some progress anyway.)
The cognitive side effects of enlightenment, i.e. the benefits that aren't enlightenment but are related to it such as e.g. perceptual improvements, can wax and wane like anything else related to mental functioning, but seem to be pretty stable in the long run without appearing to require maintenance or upkeep either.
Once you reach full enlightenment, there is no more need to meditate in this style ever again. There is nothing left to get out of it, and the only reason I can think of to do it would be to review what it's like so as to explain it to others.
Even fully enlightened people may continue to meditate in other styles for unrelated reasons. Full enlightenment produces a surprising amount of mental pliability, and one can pursue meditation aiming at altered states of consciousness, relaxation, bliss, etc. surprisingly easily and effectively at this point. But that isn't the same process, even though we have the same word ("meditation") for it.
By the way, the previous paragraph is rather easy to test if meditating for pleasure is approved by your goal structure. Even the first stage of enlightenment makes that sort of thing a lot easier. (The process of doing that will probably push you towards further stages of enlightenment, so testing it happens to also be a commitment to working towards full enlightenment.)
Replies from: bogdanb↑ comment by bogdanb · 2011-05-28T23:40:42.890Z · LW(p) · GW(p)
The problem I see with your request is not that you want something that meditation causes which is observable by a third party (there are lots of potential ideas in the comments), but that you want something that meditation causes which is observable by a third party and which your goal structure approves of. Which, in principle, is fine, but I can't say I know what your goal structure is.
Hi David, thanks for taking the time to answer at such length. I think I’ll wait for your series to continue (or for me to read it if it did, I don’t and won’t have a lot of attention to spare for a while) before continuing with my questions.
In the meantime I’ll just leave a few comments about the examples you gave here, in case it helps you understand better what I’m interested in. (Your "goal structure" phrasing touches on, but isn’t quite, what I’m trying to express.)
The things I’ve seen Adelene mention don’t quite interest me. (I mean, they’re interesting, and I probably would like experiencing them on occasion, but it’s not something I’d spend effort for.)
Your concern about my casual attitude is probably unwarranted. I’m just not very concerned with thinking about potentially unpleasant side effects before determining that the potential good effects are worth the effort, which is why you probably got that impression. (As an example, before considering learning to fly I would definitely consider the dangers, but only after deciding that flying would be useful to me at all.)
You suggest that "wide perceptual width" is a side effect of enlightenment and may lead to strong improvements in the ability to observe and describe parts of one’s visual field that are not actively being focused on. That is interesting in the sense I’m trying to describe; I’m quite skeptical that the kind of meditation you describe would have that effect (rather than merely seeming to have that effect to a practitioner), but it is testable enough for my purposes. It’s probably not something I’d invest an hour a day for a year in, even assuming certainty of effect and no side effects, but together with several other things of the kind I might give it more thought.
(That said, I’m not sure exactly what that "may" means; are you not sure that’s an effect, or are you sure that’s an effect but it only happens for some people? In either case, I’d like more details.)
ETA: Forgot to mention: meditating for pleasure is not "disapproved of" by my "goal structure", in the sense that there’s nothing wrong with it, but I already have more things I can do for pleasure than time to do them; "pleasure" is not one of the reasons I might take up meditation for (this century).
↑ comment by AdeleneDawner · 2011-05-15T18:07:51.293Z · LW(p) · GW(p)
A possible experimental procedure:
Have a friend do some neuroscience research and pick a particular multi-part thing that brains do that has been observed via fMRI or other brain scanning methods (example: resolving moral dilemmas like the trolley problem) and that you haven't read about in that context. When you meditate, specifically try to observe your mind doing that thing; pay particular attention to any specific subsystems you see interacting with each other. Note: This is not 'meditate on how X might happen'; it's 'meditate and try to observe X happening', which may be too subtle of a distinction for a beginner; try to make sure you can do the latter rather than the former before starting. (If you're not sure if you can, it's probably safe to assume you can't.) When you're confident you know what's going on in your mind in that area, check the neuroscience literature and see if it matches.
comment by Armok_GoB · 2011-05-05T13:02:35.455Z · LW(p) · GW(p)
I wish I could upvote this more than once; how the expletive does it only have 26 upvotes?
One thing that is strange is that I recognize all of these things very intimately, and this enlightenment they describe seems to explain a lot about what's different between me and most other people... But I always associated it with learning about rationality in general, and have never gotten into the habit of explicitly meditating, although I certainly spend a whole lot of time in vaguely meditation-like mental states. Is there any precedent for achieving this kind of enlightenment through means other than meditation, or is this just my pattern matching returning a false positive (wouldn't be the first time...)?
Replies from: AdeleneDawner, DavidM↑ comment by AdeleneDawner · 2011-05-08T12:07:23.639Z · LW(p) · GW(p)
You're at least not the only one, though in my case it appears to just be my natural state rather than being related to anything I learned.
(My first 'wait, what?' moment relating to meditation was when I was a teenager, newly out of being Christian, just starting to look into new age stuff, not even aware of rationality as a potential thing to be pursued. I got a book on meditation and concluded that I must be misunderstanding it, because all the stuff it said about involuntarily thinking about things and having to put a lot of effort into quieting those thoughts was so foreign to my actual experience.)
Replies from: Armok_GoB↑ comment by DavidM · 2011-05-09T02:39:07.334Z · LW(p) · GW(p)
Curious about your experience and why you think that, perhaps, you have achieved enlightenment or partial enlightenment already. What specifically causes you to think so?
There was a brief discussion of the possibility of enlightenment without meditation in the comments section of Part 2.
Another possibility, which I consider more likely without knowing anything more about your situation, is that you're simply in one of the later stages. As I said, stage two does specifically tend to lead to some sort of overall cognitive change that's for the better. If people don't begin in stage one, the most likely place for them to be is actually stage three (more about this in Part 3).
And there's always simply the possibility for error, which is quite common.
Read Part 2, see what you think, and let us know.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-09T12:36:31.636Z · LW(p) · GW(p)
Stuff like observations of stuff inside my brain and outside my brain being the same kind of thing, and not having any sense of "self" in the way most people describe it. Seeing myself as an algorithm that this brain is approximating, and a bunch of related notions like that, are intuitively obvious in retrospect. Actually, the retrospect part is just an assumption; having always known such things sounds extremely unreasonable, but I don't remember ever not having done so and can't imagine what it'd possibly be like. ... ugh, this explanation sucks and sounds way more preposterous than what I actually mean by it, but it's the closest I can get with words.
That's the biggest one, at least; a bunch of other minor things seem consistent with the experience of being enlightened that you describe as well. The only strange thing is that I don't seem to perceive any vibrations, but then again I've never actually looked for them, and I do seem to instantly understand what exactly you're talking about and what it is that causes me not to see them individually, and them being there seems to be something I obviously know even if I can't see them...
I'm still sceptical though; all of these experiences and memories come flagged as suspect and might have been fabricated/altered/distorted by some psychological phenomena to fit your descriptions better. Wouldn't be the first time my brain did something like that.
I've read Part 2, liked it a lot less than Part 1, and was a bit creeped out by some of the descriptions, especially of stage 3... It made me a lot more wary of trying this whole meditation thing. (Also set off my absurdity heuristic big time, but we all know that one isn't reliable, so I'm trying to ignore that...)
Replies from: DavidM, AdeleneDawner↑ comment by DavidM · 2011-05-11T04:30:50.326Z · LW(p) · GW(p)
Not sure what to make of your situation. Specifically, I don't know what this means:
Stuff like observations of stuff inside my brain and outside my brain being the same kind of thing,
If you mean something like "it intuitively and self-evidently appears to me that some things are 'inside' me (e.g. feelings) and some things are 'outside' me (e.g. physical objects or their sensory representations) but they all seem quite the same on some level," I would specifically say that you are probably not partially enlightened.
About the sense of self, there are various ways it changes through meditation, even before partial enlightenment. A simple, intuitive notion of self is something like "I am the entity that thinks, intends and acts." One that is often attained through meditation and which replaces it is "my mind is comprised of various impersonal processes, and I am the subjectivity that experiences them / there is some kind of subjectivity that experiences them." If I had to suggest what an enlightened person might say along these lines, it might be something like "every mind process is impersonal, and recognizing that means that mind processes no longer appear to be personal or impersonal."
It's definitely possible that some kind of progress through the stages has been going on for you, which could be the cause of some of what you're reporting, even if it hasn't gotten you partial enlightenment yet. It can also just be, as you said, something associated with learning about the mind in an everyday sort of way. (Or both.)
If you think you can 'almost' see vibrations, then you can try to look for them for awhile and see if they make themselves clear. They do become clearer and more obvious when you have more concentration, so you can try to develop that skill and see what happens. However, keep in mind that this is a form of meditation, and if you're wary of stage 3, and haven't been there yet, doing this is a great way to push yourself there. You might as well just do the technique I describe in Part 2.
(In some ways, as a rationalist, you should be more wary of stage 2. Stage 3 sucks, but in stage 2, people are likely to form all sorts of false beliefs because the mind generates some weird thoughts and the pleasurableness / enjoyableness of stage 2 entices many people to give those thoughts way more credence than they deserve. Though it can be interesting retroactively to observe how easy it is to be misled by one's feelings.)
You could also try what I described here as a test, but the same caveat applies.
FYI, if you can manage to see vibrations after putting in only a little bit of effort, but can't pass the cessation-of-consciousness test, I think it's more likely that you're in the beginning or near the beginning of stage two (which does give some insight into the workings of one's mind), and more importantly, that whatever your mind did to get you there without any formal meditation will be something that it will continue to do, which will eventually plop you down in stage three. Whether or not you choose to formally meditate now, I would ask that you keep this in mind, and if one day you realize that your life has been sucking for absolutely no reason, re-read what I wrote about stage three (in Part 2) and see what you think then. If everything I said about meditation is wrong, it's no skin off your back, but if what I've said is true AND there is evidence that you've found your way to stage three by accident, you are likely to be able to save yourself a lot of suffering if you take up meditation then compared to just trying to coast through. Forewarned is forearmed.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-11T12:45:56.686Z · LW(p) · GW(p)
My guess is that meditation trains a lot of different skills, that whatever my brain does trains an overlapping but slightly different set of skills and at different proportional effectiveness, and that the end result is me being all over the place and not really possible to place on the scale.
Hmm, some of my many psychological problems that have been ruining my life for more than a year or so actually sound a lot like how you describe stage 3... Then again, half of the psychological effects or conditions I've ever heard of do, so it's not very strong evidence.
I did not really mean something like "it intuitively and self-evidently appears to me that some things are 'inside' me (e.g. feelings) and some things are 'outside' me (e.g. physical objects or their sensory representations) but they all seem quite the same on some level," more something like "I know that some kind of events take place inside my brain (I call all of these "thoughts" and am confused about how people seem to classify some as "imagery" some as "thought" some as "feeling" etc. Those words are complete synonyms to me.) and some happen outside of my brain, but other than location they don't seem any different and I get information about them through the same channel not sorted into two different piles like most people do. I can sort events by where they happen and put aside those who have the location 'armoks brain' but it's not something my brain does all the time if I don't tell it to."
When I said I had no self I meant it more literally than you describe the meditation-attained one: "my mind is comprised of various automatic processes, there is nothing that 'subjectively experiences' them, and words like 'me' and 'I' are just pragmatically useful labels the usage of which varies with context and which obviously don't correspond to anything in the real world." (Speaking of which, if you ever need an expendable human to be tortured for 3^^^3 years or something, I'll volunteer so that an actual person won't have to do it.)
Replies from: DavidM↑ comment by DavidM · 2011-05-13T15:30:03.733Z · LW(p) · GW(p)
My guess is that meditation trains a lot of different skills, that whatever my brain does trains an overlapping but slightly different set of skills and at different proportional effectiveness, and that the end result is me being all over the place and not really possible to place on the scale.
From my experience, it seems that the core skill related to enlightenment is "second-order recognizing" (with two aspects: speed, and range of phenomena that it has access to), and everything else is downstream from it. Other skills built in meditation seem to be either incidental or merely helpful in developing second-order recognizing.
In light of that, I would not be so quick to assume some kind of personal uniqueness in terms of the model I laid out, especially given that that kind of thinking does seem to be a common human bias.
Hmm, some of my many psychological problems that have been ruining my life for more than a year or so actually sound a lot like how you describe stage 3... Then again, half of the psychological effects or conditions I've ever heard of do, so it's not very strong evidence.
Right, I wouldn't take psychological problems as evidence for being in stage three, unless there was additional evidence for that. Psychological problems are common enough.
more something like "I know that some kind of events take place inside my brain (I call all of these "thoughts" and am confused about how people seem to classify some as "imagery" some as "thought" some as "feeling" etc. Those words are complete synonyms to me.) and some happen outside of my brain, but other than location they don't seem any different and I get information about them through the same channel not sorted into two different piles like most people do.
If you intuitively and self-evidently see some phenomena as happening outside and some phenomena happening inside, which is what "[...]other than location they don't seem any different[...]" means to me, that seems precisely to be sorting phenomena into two different piles. As if they come pre-tagged with "location" data.
As a long-time meditator, I don't recognize phenomena as intuitively or self-evidently "inside" or "outside" or "neither inside nor outside" or "both inside and outside." That entire classificatory scheme has ceased to exist for me. (In some ways I lack the ability to conceptualize what it would mean, although I remember that it used to mean something to me.) There is no location tag, and there is no empty field where the location tag would be. Of course, my model of the world tells me that some phenomena (sensory experience) represent stuff in the external world (albeit produced "inside," through brain activity), while other phenomena (cognition) merely represent the activity of my brain, but that is just a model, something which can be altered for all sorts of reasons, and not the default way that my experience is parsed.
How would you really know what's inside your brain and what's outside your brain, except by applying an explicit detailed model of how the world works? Is that what you're doing when sorting phenomena? Or are you using some more primitive way to sort (e.g. "sort by location tag")? Because sorting by the application of a model is not a low-level cognitive process, and has all the implications for what that sorting is like which follow from not being a low-level cognitive process.
Apart from this, I'm not sure why you think thoughts and imagery and feelings are synonymous. If someone asked you how you felt, do you think "visualizing purple monkeys" could be an appropriate response? (Are you a synaesthete?)
I can sort events by where they happen and put aside those who have the location 'armoks brain' but it's not something my brain does all the time if I don't tell it to."
My guess is that this refers to the explicit sorting by the application of a model; for you, things come tagged by location, and you can choose to use the location tag to sort phenomena, or you can simply not sort.
So, unless I'm wildly misunderstanding you, my guess is that you aren't partially enlightened. But, who knows. Have you tried the cessation-of-consciousness test I described to Adelene?
When I said I had no self I meant it more literally than you describe the meditation-attained one: "my mind is comprised of various automatic processes, there is nothing that 'subjectively experiences' them, and words like 'me' and 'I' are just pragmatically useful labels the usage of which varies with context and which obviously don't correspond to anything in the real world."
As far as I can tell, "me" and "I" correspond to things in the world, or at least, there is a way to interpret them so that they do. They may not correspond to anything ontologically unique, but they definitely do describe features of the functioning of a physical system (your brain / body complex in relation to its environment) as precisely as might be expected from natural language terms.
What you're describing sounds to me like some kind of dissociative state.
(Speaking of which, if you ever need an expendable human to be tortured for 3^^^3 years or something, I'll volunteer so that an actual person won't have to do it.)
In your opinion, what are you missing that would make you an actual person? The feeling of being an actual person formed of processes that cohere? (regarding my "dissociative state" guess.)
I can assure you (with high probability) that most people you run into would consider you an actual person rather than a sequence of unrelated processes, regardless of whether or not you feel like an actual person. I'm sure (with high probability) that you recognize that all the processes in your experience function together in a precise, finely-tuned way, which is how you're managing to have this conversation with me, and handle the rest of your life. So what's missing?
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-13T17:31:10.060Z · LW(p) · GW(p)
[enlightenment skillset stuff]
Hmm, updating on this I'd guess I have a very wide Range of Phenomena, but maybe normal or possibly even worse speed. I'd also guess the incidental skills and effects are probably involved in the stages phenomena.
Also I never said I was alone in this. In fact we already know of two individuals that show these symptoms just in the pool of people who have read this thread.
Right, I wouldn't take psychological problems as evidence for being in stage three, unless there was additional evidence for that. Psychological problems are common enough.
I have a lot more psychological problems than normal. Also I meant the specifics you described, like feelings of things sucking and blaming it on various things that turn out to have been completely unrelated.
[location stuff] Gah, no, that's the OPPOSITE of what I meant. I mean location literally, as in x,y,z,t coordinates. And no, things don't come PRE-sorted; then I would have to sort them. Location doesn't sort into two piles, it sorts into an infinite number of piles arranged in a hierarchy. Examples of location tags would be (a rough code sketch follows the list):
Everything>thisUniverse>earth>home>armoksBrain>visualCortex
Everything>thisUniverse>earth>home>armoksStomach
Everything>thisUniverse>earth>home>desktop>harddrive>documentsFolder
Everything>thisUniverse>earth>USA>EliezersBrain>MoRverse>harrysBrain>modelOfQuirelsBrain>modelOfHarrysBrain>modelOfQuirelsBrain>auditoryCortex
Everything>algebra>sin(x)>thirdInflectionPointToTheLeftFromOrigo
etc.
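To make the sorting idea concrete, here is a rough sketch in Python (my own hypothetical rendering, not anything from the original comment; the helper names are made up). It just treats tags as path strings and groups them by a prefix of chosen depth, which yields a different number of "piles" at each depth rather than a fixed inside/outside split.
```python
# Hypothetical sketch: location tags as hierarchical paths, grouped by prefix.
from collections import defaultdict

tags = [
    "Everything>thisUniverse>earth>home>armoksBrain>visualCortex",
    "Everything>thisUniverse>earth>home>armoksStomach",
    "Everything>thisUniverse>earth>home>desktop>harddrive>documentsFolder",
    "Everything>algebra>sin(x)>thirdInflectionPointToTheLeftFromOrigo",
]

def pile_key(tag: str, depth: int) -> str:
    """Return the first `depth` components of a tag's path."""
    return ">".join(tag.split(">")[:depth])

piles = defaultdict(list)
for tag in tags:
    piles[pile_key(tag, depth=4)].append(tag)  # cut the hierarchy at depth 4

for prefix, members in sorted(piles.items()):
    print(prefix, "->", members)
# Cutting at a different depth yields a different grouping, so "location"
# never reduces to just two piles (inside vs. outside).
```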
The concept of applying tags to things that are not part of my model of the world makes no sense. An outside datastream becoming integrated into the model is what "sensory experience" MEANS. Same thing with like half the concepts you are referencing. "know" is defined as a part of the model that is trusted.
I'm not sure what would be an "appropriate" response, visualizing is an action, output not input, and also "how do you feel" tends to be after longer term trends rather than the exact moment, but if I had been doing nothing but that for hours "I feel purple monkeys." would be a perfectly valid response. It's weird, but that's because the actual state it describes is weird.
["me/I" semantics] Well as I said they can mean different things depending on context, and some of those correspond to things in the real world, but the most common meaning don't.
personhood
My working definition is somehting like "an agent that a correct and fully informed implementation of CEV would assign subjective experience and care about for it's own sake.".
Replies from: AdeleneDawner, DavidM↑ comment by AdeleneDawner · 2011-05-13T18:16:21.487Z · LW(p) · GW(p)
My working definition is something like "an agent that a correct and fully informed implementation of CEV would assign subjective experience to and care about for its own sake."
I strongly suggest that you start tabooing CEV, both in this conversation and in your thoughts. Trying to use a concept that's as poorly defined as that generally is in such a basic way carries a significant risk of leading to some very, very confused ways of thinking.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-13T19:09:44.008Z · LW(p) · GW(p)
That sounds like a good idea, but I have no idea how to actually implement it, since it refers to something I know is defined but can only guess at the definition of.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-05-13T19:14:47.337Z · LW(p) · GW(p)
I expect you can taboo the term in the regular sense even if you can't taboo it in the rationalist sense. (Yes, this will mean re-doing your definition. I expect that that will be beneficial.)
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-13T20:58:28.005Z · LW(p) · GW(p)
So I'd be just saying "Friendly AI" instead? I don't see how that's going to change anything except being even vaguer.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-05-13T21:11:15.149Z · LW(p) · GW(p)
No; the concept of Friendly AI depends on the concept of CEV, so a proper tabooing (in either sense) of the concept of CEV would affect that, too.
You know how your mind returns an 'I don't know what you're talking about' error when asked direct questions about your self, since it doesn't use that concept? What I'm suggesting is that you remove concepts such that it returns that same error when asked direct questions about CEV, and re-answer DavidM's question using only the remaining concepts. You can rebuild the CEV concept, if you want, but I suggest you do so in a way that allows you to rationalist-taboo it.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-13T21:34:29.793Z · LW(p) · GW(p)
I am unable to remove the Friendly AI concept without destroying the concepts of "good", "bad", "value", "person", "worthwhile", "preferable", "conscious", "subjective experience", "humanity", "reality", "meaning", etc.; the list just goes on and on and on. They're all directly or indirectly defined in terms of it. Further, since without those there is no reason for truth to be preferable to falsehood, with this removed any model of my mind won't try to optimize for it and just turns to gibberish.
Replies from: AdeleneDawner, rhollerith_dot_com↑ comment by AdeleneDawner · 2011-05-13T21:50:25.181Z · LW(p) · GW(p)
Ouch. Okay, the above advice is probably too late to be useful at all, then.
If those are all defined in terms of CEV (subjective experience? really? I'm not sure I want to know how you managed that one, nor humanity or reality), then what's left for CEV to be defined in terms of?
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-14T19:02:55.435Z · LW(p) · GW(p)
Math?
Ok, granted, I used a kind of odd definition of "definition" in the above post, but the end result is the same: the model I use to reason about all LW-type things (and most other things as well) consists of exactly two parts: mathematical structures, and the utility function according to which the math matters. The latter is synonymous with CEV as closely as I can determine. Every concept that can't be directly reduced to 100% pure, well-defined math must be caused by the utility function, and thus removing that removes all those concepts. (Obviously this is a simplification, but that's the general structure of it.)
↑ comment by RHollerith (rhollerith_dot_com) · 2011-05-13T23:38:21.727Z · LW(p) · GW(p)
You have an interim definition of personhood, one that does not rely on CEV or FAI, that you can use to distinguish the persons from the non-persons who appear before you, do you not?
↑ comment by DavidM · 2011-05-13T18:22:53.286Z · LW(p) · GW(p)
Hmm, updating on this I'd guess I have a very wide Range of Phenomena, but maybe normal or possibly even worse speed.
What you'd need to know is what counts as normal for the population you think you're part of, and not for people in general. I'm not sure I have that information, apart from this broad generalization:
- In stage 2, range is not very wide, speed is very high.
- In stage 3, range is pretty wide, speed is much less than in stage 2.
- In stage 4, range is extremely wide, speed is variable but not as high as in stage 2.
People who don't meditate seem to have range being narrow and speed being lower than any of the stages, but I'm not completely sure.
Also I never said I was alone in this. In fact we already know of two individuals that show these symptoms just in the pool of people who have read this thread.
Adelene did not assert that she was outside the model (like you did), but only that she thought she was partially enlightened without ever having formally meditated. That is completely consistent with the model. Her results on the cessation-of-consciousness test agree with what the model would predict for such a person. She claims that her everyday experience is similar to stage four (or mode four perception), which agrees with what I asserted about partial enlightenment (in Part 2).
Let me know if you try the cessation-of-consciousness test and are interested in sharing what happened.
About your experience, I'm not sure I'm following. Let me take a step back. You say that "location" means x,y,z coordinates. Before, you wrote
"I know that some kind of events take place inside my brain...and some happen outside of my brain, but other than location they don't seem any different and I get information about them through the same channel not sorted into two different piles like most people do.
If you visualize purple monkeys, what is the location of that and how do you know? Given how you know it, why does that method of knowing result in it seeming different than e.g. the way your feet look, on the basis of location, but not on any other basis?
Location doesn't sort into two piles, it sorts into an infinite number of piles arranged in a hierarchy.
It seems that you're not talking about your actual experience (unless you assert that there is an actual infinity somewhere in your experience)?
I'm not sure what would be an "appropriate" response, visualizing is an action, output not input, and also "how do you feel" tends to be after longer term trends rather than the exact moment,
People can visualize spontaneously. (cf. e.g. hypnagogic imagery, daydreaming, other stuff).
Someone can say "I was happy for hours but all of a sudden I felt sad" and that makes sense.
Could "I feel purple monkeys" be an accurate response to "how do you feel this very second?" in the way that "I feel sad" could be, if you hadn't been visualizing purple monkeys for a long stretch leading up to the question? If so, it might be interesting to you to investigate how your experience differs from most people's, just for the sake of self-understanding.
I'd also guess the incidental skills and effects are probably involved in the stages phenomena.
My guess is that most of how the stages present is downstream from second-order recognizing and unmodeled personal factors, though 'concentration' can make a big difference here when formally meditating.
My working definition is something like "an agent that a correct and fully informed implementation of CEV would assign subjective experience to and care about for its own sake."
According to your working definition, you don't know whether you count as a person, and are very far from knowing.
But this doesn't help, since you previously asserted that you are not one, and seemed to indicate that it has something to do with your ongoing "lack of self" experience.
Assuming what you meant was that you assume or believe with high probability that a good implementation of CEV would not count you as a person, why do you think so?
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-13T19:21:23.325Z · LW(p) · GW(p)
I am feeling very confused right now, and suddenly very uncertain about all this stuff.
I could guess, but the most honest answer to most of these is simply "I don't know."
Replies from: DavidM↑ comment by DavidM · 2011-05-13T20:13:47.835Z · LW(p) · GW(p)
Fair enough. These issues can definitely be confusing.
If you'd like to pick up on this conversation in the future (or restart it), feel free.
Replies from: Armok_GoB↑ comment by AdeleneDawner · 2011-05-09T12:44:11.342Z · LW(p) · GW(p)
Here's a potentially-useful cue: How does your mind handle the question "what do you want"?
Anyone reading along might want to answer this question for themselves before continuing, of course.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-05-09T13:01:55.554Z · LW(p) · GW(p)
"ERROR; further context required Guess at context/translation: 'What utility function does the brain controlling the account User:Armok_GoB implement?' RETURNS: Unknown, currently working under the heuristic to act like it's indistinguishable from the CEV of humanity. "
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-05-09T13:05:40.876Z · LW(p) · GW(p)
Yes, exactly. Mine returns a similar 'insufficient data' error, though my default translation is slightly different.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-05-09T14:29:12.708Z · LW(p) · GW(p)
To clarify this a bit, the interesting bits are that Armok:
- Found the question confusing
- Noticed the confusion and stopped rather than generating a plausible-sounding answer (though the framing of the question makes this much less remarkable than it would otherwise be)
- Rephrased the question in a way that avoids the usual ways of thinking about both 'me' and 'wanting'
It's also somewhat interesting that his response to the question refers primarily to other people's desires, though that's very plausibly (sub-)cultural.
Replies from: Armok_GoB
comment by CronoDAS · 2011-04-30T01:22:50.323Z · LW(p) · GW(p)
I would recommend not starting at all unless you are willing to see it through and not give up just because it seems to have made things temporarily suck: experience has shown that giving up while things suck is a great way to make things suck for a long time. (And experience has shown that commitment helps to avoid this problem.)
Given this warning, I think it might be best if I not start...
Replies from: khafra