Suffering as attention-allocational conflict
post by Kaj_Sotala · 2011-05-18T15:12:08.988Z · LW · GW · Legacy · 63 comments
I previously characterized Michael Vassar's theory on suffering as follows: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." While not too far off the mark, it turns out this wasn't what he actually said. Instead, he said that suffering is a conflict between two (or more) attention-allocation mechanisms in the brain.
I have been successful at using this different framing to reduce the amount of suffering I feel. The method goes like this. First, I notice that I'm experiencing something that could be called suffering. Next, I ask, what kind of an attention-allocational conflict is going on? I consider the answer, attend to the conflict, resolve it, and then I no longer suffer.
An example is probably in order, so here goes. Last Friday, there was a Helsinki meetup with Patri Friedman present. I had organized the meetup, and wanted to go. Unfortunately, I already had other obligations for that day, ones I couldn't back out from. One evening, I felt considerable frustration over this.
Noticing my frustration, I asked: what attention-allocational conflict is this? It quickly became obvious that two systems were fighting it out:
* The Meet-Up System was trying to convey the message: “Hey, this is a rare opportunity to network with a smart, high-status individual and discuss his ideas with other smart people. You really should attend.”
* The Prior Obligation System responded with the message: “You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.”
Now, I wouldn't have needed to consciously reflect on the messages to be aware of them. It was hard not to be aware of them: it felt like my consciousness was in a constant crossfire, with both systems bombarding it with their respective messages.
But there's an important insight here, one which I originally picked up from PJ Eby. If a mental subsystem is trying to tell you something important, then it will persist in doing so until it's properly acknowledged. Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.
Imagine you were in the wilderness, and knew that if you weren't back in your village by dark you probably wouldn't make it. Now suppose a part of your brain was telling you that you had to turn back now, or otherwise you'd still be out when it got dark. What would happen if you just decided that the thought was uncomfortable, successfully pushed it away, and kept on walking? You'd be dead, that's what.
You wouldn't want to build a nuclear reactor that allowed its operators to just override and ignore warnings saying that their current course of action will lead to a core meltdown. You also wouldn't want to build a brain that could just successfully ignore critical messages without properly addressing them, basically for the same reason.
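To make this mechanism concrete, here's a toy sketch in Python (purely illustrative: the Subsystem class and its method names are invented for this example, not a claim about actual brain architecture). All it demonstrates is that a signal which is never acknowledged keeps firing, while one that is genuinely addressed turns off:

```python
# Toy model of "a subsystem repeats its message until acknowledged".
class Subsystem:
    def __init__(self, message):
        self.message = message
        self.acknowledged = False

    def broadcast(self):
        # The message persists until it's addressed; simply ignoring it
        # leaves this method returning it forever.
        return None if self.acknowledged else self.message

    def acknowledge(self):
        # Genuinely attending to the message, rather than pushing it
        # away, is what turns the signal off.
        self.acknowledged = True

meetup = Subsystem("Rare chance to network; you should attend.")
obligation = Subsystem("You already promised to be elsewhere.")

# Both messages keep arriving, experienced as a crossfire...
for system in (meetup, obligation):
    print(system.broadcast())

# ...until the conflict is resolved (here, in favor of the obligation).
meetup.acknowledge()
obligation.acknowledge()
assert all(s.broadcast() is None for s in (meetup, obligation))
```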
So I addressed the messages. I considered them and noted that they both had merit, but that honoring the prior obligation was more important in this situation. Having done that, the frustration mostly went away.
Another example: this is the second time I'm writing this post. The last time, I tried to save it when I'd gotten to roughly this point, only to have my computer crash. Obviously, I was frustrated. Then I remembered to apply the very technique I was writing about.
* The Crash Message: You just lost a bunch of work! You should undo the crash to make it come back!
* The Realistic Message: You were writing that in Notepad, which has no auto-save feature, and the computer crashed just as you were about to save the thing. There's no saved copy anywhere. Undoing the crash is impossible: you just have to write it again.
Attending to the conflict, I noted that the realistic message had it right, and the frustration went away.
It's interesting to note that it probably doesn't matter whether my analysis of the sources of the conflict is 100% accurate. I've previously used some rather flimsy evpsych just-so stories to explain the reasons for my conflicts, and they've worked fine. What's probably happening is that the attention-allocation mechanisms are too simple to actually understand the analysis I apply to the issues they bring up. If they were that smart, they could handle the issue on their own. Instead, they just flag the issue as something that higher-level thought processes should attend to. The lower-level processes are just serving as messengers: it's not their task to evaluate whether the verdict reached by the higher processes was right or wrong.
But at the same time, you can't cheat yourself. You really do have to resolve the issue, or otherwise it will come back. For instance, suppose you didn't have a job and were worried about getting one before you ran out of money. This isn't an issue where you can just say, “oh, the system telling me I should get a job soon is right”, and then do nothing. Genuinely committing to do something does help; pretending to commit to something and then forgetting about it does not. Likewise, you can't say that “this isn't really an issue” if you know it is an issue.
Still, my experience so far seems to suggest that this framework can be used to reduce any kind of suffering. To some extent, it seems to even work on physical pain and discomfort. While simply acknowledging physical pain doesn't make it go away, making a conscious decision to be curious about the pain seems to help. Instead of flinching away from the pain and trying to avoid it, I ask myself, “what does this experience of pain feel like?” and direct my attention towards it. This usually at least diminishes the suffering, and sometimes makes it go away if the pain was mild enough.
An important, related caveat: don't make the mistake of thinking that you could use this to replace all of your leisure with work, or anything like that. Mental fatigue will still happen. Subjectively experienced fatigue is a persistent signal to take a break which cannot be resolved other than by actually taking a break. Your brain still needs rest and relaxation. Also, if you have multiple commitments and are not sure that you can handle them all, then that will be a constant source of stress regardless. You're better off using something like Getting Things Done to handle that.
So far I have described what I call the “content-focused” way to apply the framework. It involves mentally attending to the content of the conflicts and resolving them, and is often very useful. But as we already saw with the example of physical pain, not all conflicts are so easily resolved. A “non-content-focused” approach – a set of techniques that are intended to work regardless of the content of the conflict in question – may prove even more powerful. For those, see this follow-up post.
I'm unsure of exactly how long I have been using this particular framework, as I've been experimenting with a number of related content- and non-content-focused methods since February. But I believe that I consciously and explicitly started thinking of suffering as “conflict between attention-allocation mechanisms” and began applying it to everything maybe two or three weeks ago. So far, either the content- or non-content-focused method has always seemed to at least alleviate suffering: the main problem has been in remembering to use it.
63 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2011-05-18T16:21:10.578Z · LW(p) · GW(p)
Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.
Sounds familiar. When I have an idea or learn some fact of the kind that I would typically want to write down, it will gnaw on me and annoyingly take up attention until I do write it down (even if in this particular case, I don't need to), at which point it distinctly lets go. It's probably a good mechanism that makes sure the data doesn't get forgotten, but it's interesting how on one hand it knows the exact moment when to turn off, and on the other hand I can't just tell it to shut up without fulfilling the script.
(How typical is this?)
Replies from: gwern, Rain
↑ comment by gwern · 2011-05-18T16:37:40.410Z · LW(p) · GW(p)
I think it's very typical. It seems to be a theme in a great many systems, from Getting Things Done to vipassana and other forms of meditation - that things can burden your mind until you devote a little attention to dealing with them.
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2011-05-19T07:36:12.408Z · LW(p) · GW(p)
I think vipassana in most forms primarily involves a distinct low level skill that isn't really related to what Nesov was talking about. I may be wrong. Agree that GTD is an example though.
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2011-05-19T08:29:54.506Z · LW(p) · GW(p)
Nerdy lament about social norms number 310583715083708567: I wish it were socially acceptable to say "I disagree, but don't want to bother explaining why" without sounding like a dick. It is better to give a little information than no information.
Replies from: handoflixue
↑ comment by handoflixue · 2011-05-19T18:23:06.920Z · LW(p) · GW(p)
That social norm always made sense to me: Simple disagreement doesn't provide much information, unless you provide a reason. Even a short reason gives me a "hook" to evaluate why you might disagree, or to do research.
The exception would be if an expert's intuitive evaluation is saying "this seems wrong to me", at which point I have a good reason to dig in to it myself.
Besides, if you don't want to bother explaining why, then either you're trying to outsource the cognitive cost to me (in which case you probably don't care whether I change my mind), or you don't consider it worth the cognitive cost in the first place - either way, there's no reason for me to believe that your disagreement is worth following up on.
Especially here on LessWrong, the "Vote Down" button seems the simplest way to disagree without explaining any further, and avoids the social issue entirely.
I think "I agree" is somewhat more acceptable because it at least adds a little emotional bonus of "yay, the tribe supports me!" whereas "I disagree" is a very mild hostile bump of "eek, the tribe might exile me!" That said, I've seen plenty of communities where "I agree" is considered a taboo statement, and I always find it sort of surprising how often I actually see it around here :)
tl;dr: a chorus of "I agree" / "I disagree" is simply adding noise to communication.
Replies from: michaelsullivan, Will_Newsome, thomblake
↑ comment by michaelsullivan · 2011-05-20T12:41:53.108Z · LW(p) · GW(p)
I disagree that a vote down fulfills this function. A vote down does not say "I disagree", it says "I want to see less of some feature of this comment/article in the Less Wrong stream".
Sometimes that's because I disagree strongly enough to consider it foolishness and not worth discussion. But most of the time, a vote down is for other reasons. I do find that I am much more likely to vote down comments that I disagree with, and I suspect this is true for most/all Less Wrongers. But that's because I am more likely to be looking harder for problems in posts I disagree with due to all the various biases in my thinking. Disagreement alone is insufficient reason for a vote down from me, and I hope that is true for almost everyone here.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-05-20T12:48:37.062Z · LW(p) · GW(p)
Disagreement alone is insufficient reason for a vote down from me, and I hope that is true for almost everyone here.
Seconded. Agreement/disagreement shouldn't be a reason for up- or downvoting.
My personal policy is to never downvote immediate replies to my own comments. There's too much of a risk that I'll downvote something simply because I disagree with it.
↑ comment by Will_Newsome · 2011-05-20T04:55:21.179Z · LW(p) · GW(p)
I disagree. ;) Specifically because knowing who disagrees with me gives me way more evidence than just knowing that someone disagrees with me. Practically speaking I do not consider downvotes evidence that I am wrong (they are generally just evidence that people dislike what I say because it sounds pretentious or because it pattern matches to something that could be wrong), whereas I would consider a simple "I disagree" comment from e.g. Nick Tarleton some evidence that I was wrong and should spend effort finding out why. (This isn't mostly because Nick is a good thinker (though he is) but that I think it's a lot less likely he'll uncharitably misinterpret what I'm trying to say, whereas an "I disagree" comment from Vladimir Nesov is a lot less evidence that I'm wrong even though he is also an excellent thinker.)
Obviously "I disagree" plus a short reason is better and normally not much more difficult, but this would also be a lot easier to do in a community where just "I disagree" was acceptable.
Replies from: handoflixue, wedrifid
↑ comment by handoflixue · 2011-05-20T20:48:20.344Z · LW(p) · GW(p)
nods If it's someone whose name I recognize and whose opinion I value, that tends to get handled similar to an "expert's intuitive evaluation". I don't know how tight-knit the community is here, and what percentage of "I disagree" messages end up triggering that here, but most communities I've seen aren't tight-knit enough for it to be meaningful except possibly when said by a "tribal leader".
You do make a good point about the acceptability of "I disagree" influencing the acceptability of "I disagree because of this brief reason" :)
↑ comment by wedrifid · 2011-05-20T09:18:55.676Z · LW(p) · GW(p)
Practically speaking I do not consider downvotes evidence that I am wrong (they are generally just evidence that people dislike what I say because it sounds pretentious or because it pattern matches to something that could be wrong), whereas I would consider a simple "I disagree" comment from e.g. Nick Tarleton some evidence that I was wrong and should spend effort finding out why. (This isn't mostly because Nick is a good thinker (though he is) but that I think it's a lot less likely he'll uncharitably misinterpret what I'm trying to say, whereas an "I disagree" comment from Vladimir Nesov is a lot less evidence that I'm wrong even though he is also an excellent thinker.)
Strongly agree on every point.
While downvotes always contain evidence, that evidence contains more information about the social reality than the conceptual one. It is useful information, just not necessarily information about facts or accuracy.
↑ comment by thomblake · 2011-05-19T20:09:12.634Z · LW(p) · GW(p)
That said, I've seen plenty of communities where "I agree" is considered a taboo statement, and I always find it sort of surprising how often I actually see it around here
Some of us tried to enforce a noise-cancelling norm of squashing "I agree" type comments early on, but it was overridden by a concern that being generally unpleasant is no good for the community. See Why our kind can't cooperate (and for balance, Well-kept gardens die by pacifism).
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2011-05-20T05:00:40.010Z · LW(p) · GW(p)
"I agree" is useful in cases where people think that their agreement would provide me with a non-negligible update, because they know that I respect their rationality or because they consider themselves experts in the domain in question. For example, an "I agree" from thomblake would provide me with non-negligible evidence if written in response to some speculations about academic philosophy. Of course, most people don't signal agreement for this reason, so it's not really what you were talking about.
Replies from: thomblake
↑ comment by Rain · 2011-05-18T17:42:49.695Z · LW(p) · GW(p)
(How typical is this?)
I call it cognitive load, and I endeavor to structure my day such that I can think about more important things. This involves habit, routine, careful placement of objects for use, and writing down any idea I consider important. After being written, I often forget everything except the reference pointer (where I wrote it, what it was generally about).
comment by gwern · 2011-05-18T16:39:35.416Z · LW(p) · GW(p)
Still, my experience so far seems to suggest that this framework can be used to reduce any kind of suffering. To some extent, it seems to even work on physical pain and discomfort. While simply acknowledging physical pain doesn't make it go away, making a conscious decision to be curious about the pain seems to help. Instead of flinching away from the pain and trying to avoid it, I ask myself, “what does this experience of pain feel like?” and direct my attention towards it. This usually at least diminishes the suffering, and sometimes makes it go away if the pain was mild enough.
The PRISM theory of consciousness seems relevant: http://www.rifters.com/crawl/?p=791
What’s the primitive, bare-bones, nuts-and-bolts thing that consciousness does once we’ve stripped away all the self-aggrandizing bombast? Morsella’s answer is delightfully mundane: it mediates conflicting motor commands to the skeletal muscles.
Morsella sees us as a series of systems, each with its own agenda: feeding, predator avoidance, injury prevention, and so on. Mostly these systems operate on their own, independently. We can’t voluntarily dilate our eyes, for example. We can’t consciously control our digestive processes, nor are we even generally aware of them — peristalsis, like the pupil reflex, is the purview of the smooth muscles (and no, gas production by gut bacteria is not the same thing). But when digestion is finished — when the rectum is full, and you’re ready to take the mother of all dumps, but you’re on the in-laws’ good living-room carpet and your incontinent uncle is hogging the toilet — then, sure as shit, you become conscious of the process. There’s a sphincter under voluntary control that’s just urging you to let go. There are other agendas suggesting that that would be a really bad idea. And I would challenge anyone who has ever been in that position to tell me that that situation is not one in which conscious awareness of one’s predicament is, to put it mildly, heightened.
Replies from: None, Kaj_Sotala
↑ comment by [deleted] · 2011-05-19T15:26:14.082Z · LW(p) · GW(p)
This sounded very interesting, so I looked into the PRISM theory, and it turns out that some of the papers relating to this are available for free online at Morsella's university page here: http://bss.sfsu.edu/emorsella/publications.html
I'm reading some of http://bss.sfsu.edu/emorsella/images/MorsellaPsychRev.pdf right now and it mentions the PRISM specifically.
↑ comment by Kaj_Sotala · 2011-05-24T14:21:54.981Z · LW(p) · GW(p)
Just read the paper. Thank you - it's awesome. It helped me produce some extra insights relating to both this and our lack of strategic thought, which I need to write up as soon as I've cleared them a bit in my mind.
comment by Scott Alexander (Yvain) · 2011-05-19T19:30:30.382Z · LW(p) · GW(p)
Are you saying this is one potential source of suffering, or are you defining suffering as those things which fit this pattern?
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-05-20T11:26:25.864Z · LW(p) · GW(p)
It currently seems to me like everything which is suffering fits this pattern, though I'm not sure if all the things which fit this pattern are suffering.
If I'm working on a complex problem or playing a difficult game, my attention may be drawn to several things at a time, but this doesn't cause suffering. I'm uncertain of whether this is because my attention-allocation systems are sufficiently well coordinated to "take turns" in that situation, or because a certain component of emotional urgency is lacking. I'm guessing the latter, because Buddhist-style detachment (one of the non-content-focused methods mentioned) seems to be very useful in avoiding suffering. On the other hand, multitasking for longer periods does make me feel worse.
comment by jimmy · 2011-05-18T19:20:53.012Z · LW(p) · GW(p)
It's interesting to note that it probably doesn't matter whether my analysis of the sources of the conflict is 100% accurate.
Have you pushed the limits of how flimsy they can be? Can you tell yourself in a serious mental tone of voice "My horoscope said I have to pick X" and have it go away?
Can you do the full analysis and have confident answer without getting it to go away?
My quick mental simulations say "yes" to both and that it's not so much 'having an explanation' as it is deliberately flipping the mental "dismiss alarm" button, which you do only when "you" are comfortable enough to.
Replies from: handoflixue, handoflixue, Kaj_Sotala
↑ comment by handoflixue · 2011-05-19T18:06:48.989Z · LW(p) · GW(p)
Can you tell yourself in a serious mental tone of voice "My horoscope said I have to pick X" and have it go away?
Speaking personally, if I can create some part of myself that "believes" that, then yes, absolutely. I actually find a great deal of benefit from learning "magical" / New Age techniques for exactly that reason.
I've routinely done things because "my future self" was whispering to me telepathically and telling me that it would all work out, or because my "psychic senses" said it would work.
The rest of me thinks this is crazy, but it works, so I let that little part of me continue with its very interesting belief system :)
Replies from: jimmy
↑ comment by jimmy · 2011-05-19T19:07:10.343Z · LW(p) · GW(p)
Speaking personally, if I can create some part of myself that "believes" that, then yes, absolutely. I actually find a great deal of benefit from learning "magical" / New Age techniques for exactly that reason.
Is this something you can explain? I'm looking into this kind of stuff now and trying to find out the basics so that I can put together a 1) maximally effective and 2) epistemically safe method.
It's hard to find people into this kind of stuff that even understand the map-territory distinction, so input from other LWers is valued!
Replies from: mutterc, handoflixue, Armok_GoB
↑ comment by mutterc · 2011-05-19T19:46:41.320Z · LW(p) · GW(p)
I did Tai Chi lessons for a while, and enjoyed the "charging up with chi/The Force" feeling it would give me, from picturing flows of energy through the body and such. Of course the "real" causes of those positive feelings are extra blood oxygenation, meditation, clearing extraneous thoughts, etc.
I was OK with this disconnect between the map and the territory, because there was a linkage between them: the deep breathing, mental focusing, and let's not forget the placebo effect.
I suppose this is not too different in principle to the "mind hacks" bandied about around here.
↑ comment by handoflixue · 2011-05-19T19:42:06.147Z · LW(p) · GW(p)
I'm pretty sure I could explain it, given time, a few false starts, and a patient audience. I've been finding that more and more, the English language and US culture suck as a foundation for trying to explain the processes in my head :)
With that said, here goes Attempt #1 :)
Feel around in your head for a few statements, and compare them. Some of them will feel "factual" like "France exists." Others will instead be assertions that you support - "killing is wrong", for example. Finally, you'll have assertions you don't support - "God exists in Heaven, and will judge us when we die."
The first category, "factual" matters, should have a distinctly different feel from the other two kinds of "beliefs". The beliefs you agree with should also have a distinctly different feel from the ones you disagree with. I often find that "beliefs I agree with" feel a lot like "factual" matters, whereas "beliefs I disagree with" have a very distinct feeling.
You'll probably run into edge cases, or things that don't fit any of these categories; those are still interesting thoughts, but you probably want to ignore them and focus on these simple, vivid categories. If some other set of groupings has a more distinct "feel" to it, or is easier to separate out, feel free to use those. The point is simply to develop a sense of what the ideas in your head feel like, because we tend not to think about that at all.
Next, you need to help yourself hold two perspectives at once: I think Alicorn's City of Lights from her Luminosity sequence is probably a useful framework here. Divide yourself into two selves, one who believes something, and one who doesn't, something like "I should study abroad in Australia" from the shiny story examples :)
Compare how those two parts of you process this, and see how the belief feels differently for each of them. If you can do this at all, then you've demonstrated to yourself that you CAN hold two mutually incompatible stances at the same time.
So, now you know what they feel like, and you know that you can hold two at the same time. I find that's an important framework, because now you can start believing absurd things, with the reassurance that a large part of you will still be perfectly sane, sitting on the sidelines and muttering about how much of a nutter you're being. (Being comfortable with the part of yourself which believes impossible things, and accepting that it'll be called a nutter is also helpful :))
The next step is to learn how to play around with the categorization you do. Try to imagine what it feels like when "France exists" is a belief instead of a fact. Remind yourself that you've never been to France. Remind yourself that millions of people insist they've witnessed God, and this is probably more people than have witnessed France. It doesn't matter if these points are absurd and irrational, they're just a useful framework for trying to imagine that France is all a big hoax, just like God is.
(If you believe in God, or don't believe in France, feel free to substitute appropriately :))
If all three of those steps went well, you should now be able to create a self which believes that France does not exist. Once you've done this, believing in your horoscope should be a reasonably trivial exercise.
Alright, that's Attempt #1. Let me know what was unclear, what didn't work, and hopefully eventually we'll have a working method! =)
↑ comment by Armok_GoB · 2011-05-20T20:16:21.270Z · LW(p) · GW(p)
I wonder if you could use some kind of "Gödelian bomb" referencing decision theory that'll flag the issue as being currently handled and then crash, so that the flag stays up without the issue having to actually be handled. This'll probably be dangerous in different ways, possibly much more so, but epistemically wouldn't be one of them, I think.
It seems fairly likely that the crash itself would be more unpleasant than what you're trying to cure with it, though.
Replies from: Will_Sawin
↑ comment by Will_Sawin · 2011-05-21T23:45:33.551Z · LW(p) · GW(p)
I do not understand this. It seems like if I did it would be interesting. Could you explain further? Perhaps just restating that slowly/carefully/formally might help.
Replies from: Armok_GoB
↑ comment by Armok_GoB · 2011-05-22T18:28:51.976Z · LW(p) · GW(p)
Say you want the X notification to go away.
The notification will go away when you have "sufficiently addressed" it.
You believe that the decision theory D states that if you want the notification to go away it'd probably be best if it went away.
This is not sufficient, since it's too direct and too similar to the "go away because I want you to" move that evolution has specifically guarded against.
On the other hand, if you could prove that for this specific instance the decision theory indeed says the notification should go away, it probably would, since you have high confidence in the decision theory.
Proving something like that would be hard and require a lot of creative ideas for every single thing, so it's not practical.
What might instead be possible is to come up with some sort of algorithm that is in actuality isomorphic to the one that got caught in the filter, but long, indirect, and gradual enough that it hacks its way past it.
The most likely structure for this is some circular justification that ALMOST says that everything it itself proves is true, but takes in just enough evidence each iteration to keep it from falling into the abyss of inconsistency that full self-trust has been proved to be.
So it actually infinitely narrowly avoids being a Gödelian bomb, but it looks a lot like it.
Replies from: Will_Sawin
↑ comment by Will_Sawin · 2011-05-22T19:51:17.544Z · LW(p) · GW(p)
It may not work if you are aware that you are tricking yourself like that, but then again, it also may work.
That is certainly a very interesting idea.
Replies from: Armok_GoB
↑ comment by handoflixue · 2011-05-19T18:14:13.651Z · LW(p) · GW(p)
deliberately flipping the mental "dismiss alarm" button
This actually sparks another thought: When I was a kid, I got very annoyed with the way my body let me know things. I understood that sometimes it would get hungry, or need to use the bathroom, but sometimes I had to wait before I could viably handle these needs. I thus started viewing these states as "mental alarms", and eventually managed to "install a dismiss alarm button." I now refer to it as my internal messaging system: My body sends me a message, and I get a little "unread message" indicator. I'll suffer until I read the message, but then I have no obligation to actually act on it. If I ignore the message, I usually get another one in an hour or so, since my body still has this need.
At first, a dismissed alarm would last ~5 minutes. Now I can actually dismiss my sense of hunger for a couple days if food just doesn't come up. Dismissing an alarm when I have easy access to take care of something (for instance, trying to ignore hunger when someone offers me a nice meal) is much, much harder.
It does run into the failure state that I sometimes forget to do much of anything for hours, because I'm focused on my work and just automatically dismiss all of my alarms. This occasionally results in a couple hours of unproductive work until I pause, evaluate the reason I'm having trouble, and realize I haven't had anything to eat all day :)
Replies from: jimmy, Kaj_Sotala
↑ comment by jimmy · 2011-05-19T19:03:31.208Z · LW(p) · GW(p)
I managed to use a visualization of a "car sickness switch" that helped tremendously, though the switch did keep turning itself on every couple minutes.
It does run into the failure state that I sometimes forget to do much of anything for hours, because I'm focused on my work and just automatically dismiss all of my alarms.
I need to work on being more explicit with this- that happens to me without the interrupt flag ever being set.
Yesterday at the end of our LW meetup one of the attendees was talking about how he hadn't eaten because he got sucked into the conversation, and we were giving him shit about it since we were meeting in the middle of the food court. I even asked myself if I was hungry... "nah, not really". As soon as I got home I got the message "you have a serious caloric deficit, eat a 2000kcal meal".
↑ comment by Kaj_Sotala · 2011-05-20T11:28:49.378Z · LW(p) · GW(p)
Can you use this for non-physical signals, such as purely emotional pain?
Replies from: handoflixue
↑ comment by handoflixue · 2011-05-20T20:44:24.291Z · LW(p) · GW(p)
The "dismiss alarm" button doesn't work as well for pain of either sort - I can temporarily suppress it, but it will keep coming back until I do something to actually resolve it - for physical pain, this is generally pain killers. For emotional pain, some combination of "vegging out" on mindless activities (TV, WOW, etc.).
For mild pain, it's pretty easy to just dismiss the alarm and ignore it. For moderate pain, I usually have to convert it into something else. This is easier to do with physical pain, where I can tweak the sensation directly. I can induce specific emotional states, but it's harder and less stable. For intense pain, I'll usually be unable to function even if I'm doing this, and it will sometimes hit a point where I can't redirect it.
Long-term, persistent pain is also much more exhausting to deal with; this is probably some of why emotional pain is more of an issue for me - it tends to be a lot less fleeting.
↑ comment by Kaj_Sotala · 2011-05-18T20:41:47.900Z · LW(p) · GW(p)
Can you tell yourself in a serious mental tone of voice "My horoscope said I have to pick X" and have it go away?
I haven't tested this, but I'm guessing yes.
Can you do the full analysis and have confident answer without getting it to go away?
Yes. That's when I usually switch to non-content-focused methods.
My quick mental simulations say "yes" to both and that it's not so much 'having an explanation' as it is deliberately flipping the mental "dismiss alarm" button, which you do only when "you" are comfortable enough to.
That does sound pretty plausible.
comment by pjeby · 2011-05-20T15:33:27.228Z · LW(p) · GW(p)
But there's an important insight here, one which I originally picked up from PJ Eby. If a mental subsystem is trying to tell you something important, then it will persist in doing so until it's properly acknowledged. Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.
Actually, on seeing your post title, I thought you were just rephrasing my saying that "suffering is a divided mind". ;-)
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-05-20T17:24:20.148Z · LW(p) · GW(p)
Was I?-) (I don't think I've read you saying exactly this, but then I may just have forgotten.)
Replies from: pjeby
↑ comment by pjeby · 2011-05-21T04:59:10.291Z · LW(p) · GW(p)
I don't think I've read you saying exactly this, but then I may just have forgotten.)
In retrospect, it occurs to me that the vast majority of my discussion of this topic has been in paid-only materials, and more in lecture form than text.
However, Indecision Is Suffering (2006) and The Code of Owners (2007) carry a few tidbits of my early thinking on the subject.
Before I heard about the PRISM model, I was telling the Guild that "consciousness is like an error handler", and that that was why we become more self-conscious when things aren't going well.
(This also affects our perception of time, by the way -- it may be that time seems to crawl under conflict conditions simply because our brains are allocating more clock cycles to conscious processing! Credit for first pointing me in the direction of the conflict-equals-time link goes to a fellow named Stephen Randall, and his book, "Results In No Time".)
Anyway, after hearing about PRISM, it also occurred to me that you could perhaps exploit the physical aspects of consciousness in order to manipulate mental states in various ways... the most interesting of which so far is the use of continuous and fluid movement as a method of establishing or regulating a "flow" state in tasks that would otherwise be high-conflict (and thus high-suffering) activities. (I haven't prepared any materials on that topic yet, though.)
comment by handoflixue · 2011-05-19T18:02:19.488Z · LW(p) · GW(p)
Interesting. I actually use this technique on pain fairly regularly, and have found it works well. I've always mentally modeled it after Dune: Feel the pain, acknowledge it, and let it flow past me.
I've also found synaesthesia useful, although I may be unusual in being able to induce and control it - modelling pain as colors, or converting it into a pleasant sensation, is quite useful, and I've found I get better with practice. My body will often have about the same involuntary reactions - flinching, yelling out - but the emotional experience is quite different, and I can take quite a lot of pain without feeling terribly bothered.
It might also help that I've had some extremely painful medical situations, and didn't have much choice but to learn to deal with it ^^;
comment by Laoch · 2011-05-19T13:14:34.387Z · LW(p) · GW(p)
While simply acknowledging physical pain doesn't make it go away, making a conscious decision to be curious about the pain seems to help. Instead of flinching away from the pain and trying to avoid it, I ask myself, “what does this experience of pain feel like?” and direct my attention towards it. This usually at least diminishes the suffering, and sometimes makes it go away if the pain was mild enough.
Have you heard of MBSR? I'm doing a course in mindfulness meditation at the moment, and what you said above seems to fit very well with the techniques taught to us in the course.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-05-20T11:27:56.945Z · LW(p) · GW(p)
I hadn't heard of that particular course, but the general concept does seem similar. Are there any online materials for it?
Replies from: Laoch
↑ comment by Laoch · 2011-05-20T15:05:33.374Z · LW(p) · GW(p)
All I can provide is two articles on the subject here and here. The website of the course I'm doing is here. I'm doing the course for the techniques mainly, I don't buy into everything said at the sessions naturally, but there are some merits to it making it worth my time. Hope that helps.
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-19T11:14:28.050Z · LW(p) · GW(p)
Great post!
- The Prior Obligation System responded with the message: “You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.”
This is something I'm very familiar with. My Prior Obligation System is active in the extreme, perhaps overactive, to the point that I once cried for half an hour after sleeping through my alarm when I was supposed to cover a morning swim practice for the coach who was out of town. I feel awful when I have two conflicting commitments and have to choose between them. I'm pretty good at scheduling stuff so it doesn't conflict, which leads to me being ridiculously over-scheduled, because the long-term suffering of being exhausted a lot of the time is less than the short-term but much more intense suffering of giving something up. This is possibly something I'll have to address eventually, depending on whether I ever consider "my own happiness" a higher priority than "getting lots of stuff done and not disappointing anyone."
comment by Duke · 2011-05-19T07:17:52.939Z · LW(p) · GW(p)
I tend to treat anger and frustration as resulting from my map not matching the terrain somewhere. I suspect that your frustration is rooted in inaccurate mapping concerning the prior commitment that prevented you from meeting Patri. My guess is that you correctly assumed that there would be a small chance that something “better” than your commitment would pop-up that you would have to miss; but, you failed to properly assess the emotional impact this unlikely scenario would have on you. Now you can update your priors, do some re-mapping and be better prepared emotionally to deal with low-probability/high-annoyingness events.
Also, how similar is the present Patri-hysteria in Finland to the Beatles-hysteria in the 60's?
Replies from: Aleksei_Riikonen
↑ comment by Aleksei_Riikonen · 2011-05-19T08:08:32.813Z · LW(p) · GW(p)
Also, how similar is the present Patri-hysteria in Finland to the Beatles-hysteria in the 60's?
One difference is that I'm aware that the former happened, but not that the latter would have.
(edit: by "former" and "latter" I mean the chronological order of events, not the order in which they were mentioned in the quoted comment :)
comment by lukeprog · 2011-05-24T16:30:15.416Z · LW(p) · GW(p)
I'm sure the neurophysiology of pain has some insight to offer as to whether or not this hypothesis is correct, but I haven't read enough to know. Bud Craig is one of the major researchers in this area. His paper on pain as a homeostatic emotion is here, but it's kinda old now (2003).
comment by sark · 2011-05-31T19:36:05.472Z · LW(p) · GW(p)
Suffering happens all too readily IMHO (or am I misjudging this?) for evolution to not have taken chronic attention-allocational conflict into account and come up with a fix.
To take an example for comparison, is the ratio of chronic to acute pain roughly equal to the ratio of chronic to acute attention-allocational conflict? My intuitions fail me here, but I seem to personally experience more chronic suffering than chronic pain. But then again I was diagnosed with mild depression before and hence not typical.
Replies from: TimFreeman
↑ comment by TimFreeman · 2011-05-31T20:03:47.767Z · LW(p) · GW(p)
Suffering happens all too readily IMHO (or am I misjudging this?) for evolution to not have taken chronic attention-allocational conflict into account and come up with a fix.
What's the problem we're thinking evolution might be trying to fix here?
The problem isn't that suffering feels bad. Evolution isn't trying to make us happy. If the hypothesis in the OP is true and suffering is the instinct that leads us to move away from situations that place conflicting demands on how we allocate attention, and people don't do well in those situations, then suffering might easily be a solution rather than a problem.
Replies from: sark
↑ comment by sark · 2011-05-31T21:23:33.886Z · LW(p) · GW(p)
No, I didn't mean that the badness was bad and hence evolution would want it to go away. Acute suffering should be enough to make us focus on conflicts between our mental subsystems. It's as with pain: acute pain leads you to flinch away from danger, but chronic pain is quite useless and possibly maladaptive, since it leads to needless brooding and wailing and distraction which does not at all address the underlying unsolvable problem and might well exacerbate it.
Replies from: Kaj_Sotala, TimFreeman
↑ comment by Kaj_Sotala · 2011-06-02T12:52:33.829Z · LW(p) · GW(p)
Our ancestors didn't have the benefit of modern medicine, so some causes of chronic pain may have just killed them outright. On the other hand, not all of the things causing chronic pain today were an issue back then. The incidence for both back pains and depression was probably a lot lower, for example.
Fixing the problem requires removing chronic pain without blocking acute pain when it's useful. This problem isn't necessarily trivial. If chronic pain was rare enough, then trade-offs making both chronic and acute pain less likely may simply not have been worth it.
Replies from: sark
↑ comment by sark · 2011-06-02T14:10:08.352Z · LW(p) · GW(p)
Our ancestors didn't have the benefit of modern medicine, so some causes of chronic pain may have just killed them outright. On the other hand, not all of the things causing chronic pain today were an issue back then.
I was actually using pain as an analogy for suffering. I know that chronic pain simply wasn't as much of an issue back then, which was why I compared chronic pain to chronic suffering. If chronic suffering was as rare as chronic pain back then (they both sure seem more common now), then there is no issue.
Are the current attention-allocational conflicts us modern people experience somehow more intractable? Do our built in heuristics which usually spring into action when noticing the suffering signal fail in such vexing attention-allocational conflicts?
Why do we need to have read your post, then employed this quite conscious and difficult process of trying to figure out the attention-allocational conflict? Why didn't the suffering just do its job without us needing to apply theory to figure out its purpose and only then manage to resolve the conflict?
Fixing the problem requires removing chronic pain without blocking acute pain when it's useful.
I guess you can look at it as a type I - type II error tradeoff. But you could also simply improve your cognitive algorithms which respond to a suffering signal.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-06-02T20:33:20.682Z · LW(p) · GW(p)
Why do we need to have read your post, then employed this quite conscious and difficult process of trying to figure out the attention-allocational conflict? Why didn't the suffering just do its job without us needing to apply theory to figure out its purpose and only then manage to resolve the conflict?
It's a good question. I don't have a good answer for it, other than "I guess suffering was more adaptive in the EEA".
↑ comment by TimFreeman · 2011-05-31T23:15:24.834Z · LW(p) · GW(p)
Acute suffering should be enough to make us focus on conflicts between our mental subsystems.
I get your point, and I agree. At the moment I believe suffering fails to focus our attention in the right place because evolution hasn't had either the time or the capacity to give us the exact correct instincts.
I vaguely recall an experiment where someone (I don't recall who) made a horse suffer in the sense we're describing here. They trained it to do X when it was shown an ellipse with the vertical direction longer, and do Y when it was shown an ellipse with the horizontal direction longer, and gradually showed it ellipses that were more and more circular, so it had no way to decide which one to do. It did the "brooding and wailing and distraction" you're talking about.
comment by Academian · 2011-07-04T12:33:44.416Z · LW(p) · GW(p)
Data point: This characterization of suffering agrees with my experience, and I've eliminated a lot of suffering from my life by resolving the associated attentional conflicts. I also share the experience that it is applicable to many different kinds of suffering, including both mental and physical.
comment by brevitae · 2011-06-01T07:29:25.723Z · LW(p) · GW(p)
I've also been thinking about how to resolve 2 conflicting systems, as of late.
Seems like there are 2 paths to this:
Alternation: Take turns; each half gets its fair share of time. Speed up or slow down the frequency of alternation as best suited to each situation. Digital.
Synthesis: Hegelian Dialectic. Put the 2 together, and break them into parts/colors/spectrum. Find the matching contextual patterns in both, and use that to form a new greater thing. Leave the individual content patterns alone. Turn the black and white into a grayscale gradient, some individuality, some shared. Analog.
Also, as for actual physical pain, I've been playing with this:
- If your back hurts, imagine and feel pleasure on your front (sternum).
- If your neck hurts on the left, invert it, and imagine/feel pleasure on the right. In general: Invert it. Flip the bits entirely. Same thing (contextually), but different thing (contentually).
comment by wstrinz · 2011-05-24T02:02:32.495Z · LW(p) · GW(p)
I get the feeling I may just not be completing a full application of the definition here, but where does this apply to serious, terrible, "let's just imagine they threw you in hell for a few days" suffering? Sure, one can say that it's mostly pain being imagined, and the massive overload of a sensor not designed for such environments is really what we're bothered by, but is there a way that the part of this we usually talk about as 'suffering' fits into the attention-allocation narrative? Or are we talking about two different things here?
All the same, I find this fascinating and am going to experiment with it in my daily life. Looking forward to the non-content focused post.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-05-24T14:23:50.641Z · LW(p) · GW(p)
serious, terrible, "let's just imagine they threw you in hell for a few days" suffering
Can you give a more concrete example?
Replies from: wstrinz
↑ comment by wstrinz · 2011-05-24T15:19:26.562Z · LW(p) · GW(p)
Sure, how about being in a village taken over by the Khmer Rouge, or a concentration camp in Nazi Germany? Someplace where you don't necessarily die quickly, but have to endure a long and very unpleasant time with some amount of psychological or physical pain.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-05-25T12:18:14.850Z · LW(p) · GW(p)
Well, to take the concentration camp example. Every day, you'll encounter various painful things, such as malnutrition, both physical and mental violence from the guards, generally unpleasant living conditions, seeing your companions killed, and so on. Each of these causes a You Should Really Stop This From Happening reaction from your brain, countered by a system saying I Have No Way Of Stopping This. On top of the individual daily events, your attention will also be constantly drawn to the fact that for as long as you stay here, these events will continue, and you'll suffer from also having your attention drawn to the fact that you can't actually get out.
Some of the coping mechanisms that've been identified in concentration camp inmates include strategies such as trying to find meaning in the experience, concentrating on day-to-day survival, fatalism and emotional numbing, and dreaming of revenge. Each of these could plausibly be interpreted as a cognitive/emotional strategy where the system sending the impossible-to-satisfy "you need to get out of here" message was quieted and the focus was shifted to something more plausible, therefore somewhat reducing the suffering.
comment by bgaesop · 2011-05-22T10:20:56.201Z · LW(p) · GW(p)
And here I thought using this as a pain management technique only worked because I'm masochistic! It actually is genuinely fascinating to learn this is common to people who don't share that trait. Though, actually, come to think of it, you never explicitly said whether you do or not. If it's not prying, are you?
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-05-22T11:30:48.845Z · LW(p) · GW(p)
I'm not masochistic with regards to physical pain, though I do have the occasional fantasies with a masochistic emotional component.
comment by CronoDAS · 2011-05-19T05:16:32.962Z · LW(p) · GW(p)
Genuinely committing to do something does help; pretending to commit to something and then forgetting about it does not.
I often find myself failing to follow through on things that I've told people that I've committed myself to doing. I don't like having broken a declared commitment, so as a result I've become reluctant to declare myself committed to doing anything. This seems like the wrong solution to the problem. Any advice? (Note: Regardless of how good the advice is, I suspect I won't take it.)
comment by pnrjulius · 2012-04-04T21:07:57.887Z · LW(p) · GW(p)
Whoever wrote this apparently has never actually SUFFERED. They've never been in chronic pain, or watched a family member die of disease. If they had, they couldn't possibly think that suffering is just a matter of attention. The examples given of "suffering" are so trivial as to be outright insulting: Really? You think having to decide which event to go to is a good example of suffering?
(You might have a better argument if you were saying that meditation techniques, which are often at least partly attention-based, can be used to raise your hedonic set point so that even things like starvation don't feel like pain anymore. But that's not because starvation is an attentional conflict; in fact it's really a bug in human nature that we'd expect to get selected out. A being that can meditate itself to ecstasy is a being that never does anything useful and may as well just die.)
Something like "frustration" or "dilemma" might have to do with attention. But no, the agony of a full-blown migraine, or of weeks without food, or of feeling cancer eat you alive; no, that simply has nothing whatsoever to do with attention. You can focus on the pain all you want, and it will still hurt. You can try to focus on something else instead; it won't work, because your brain will keep pulling you back to the pain. That feeling of hopelessness isn't a result of what you're focusing on; it's a result of the fact that you've searched for solutions to this overwhelming problem and none of your attempts have worked. The pain isn't going to go away until the problem itself is fixed.
And maybe it can't be fixed: Sometimes suffering is the last thing you experience before the evolutionary "game over" of death.
The evolutionary causes of suffering are actually fairly transparent: Suffering is the constant threat your body holds over you for failing to meet crucial fitness objectives. The threat wouldn't be credible unless it were sometimes actually carried out. It's an interesting question as to why evolution spends more time motivating us through pain avoidance than through pleasure seeking---the pain of starvation is about a thousand times more intense than the joy of a good meal---but that doesn't change the basic principle that suffering is what our evolution uses to motivate us.
Replies from: thomblake
↑ comment by thomblake · 2012-04-04T21:30:28.343Z · LW(p) · GW(p)
It sounds like you're using the word "suffering" to mean something like "really extreme pain/discomfort". This is especially apparent where you seem to equate pain and suffering in the last sentence.
That isn't what suffering means. Suffering does not need to be linked to extremes of pain and discomfort. If the word is used in that sense sometimes, just be aware that it is not being used that way in this post, or in most academic discourse about suffering.