Building up to an Internal Family Systems model
post by Kaj_Sotala · 2019-01-26T12:25:11.162Z · LW · GW · 86 comments
Introduction
Internal Family Systems (IFS) is a psychotherapy school/technique/model which lends itself particularly well to being used alone or with a peer. For years, I had noticed that many of the kinds of people who put a lot of work into developing their emotional and communication skills, some within the rationalist community and some outside it, kept mentioning IFS.
So I looked at the Wikipedia page about the IFS model, and bounced off, since it sounded like nonsense to me. Then someone brought it up again, and I thought that maybe I should reconsider. So I looked at the WP page again, thought “nah, still nonsense”, and continued to ignore it.
This continued until I participated in CFAR mentorship training last September, and we had a class on CFAR’s Internal Double Crux [LW · GW] (IDC) technique. IDC clicked really well for me, so I started using it a lot and also facilitating it for some friends. However, once we started using it on more emotional issues (as opposed to just things with empirical facts pointing in different directions), we started running into some weird things which it felt like IDC couldn’t quite handle… things which reminded me of how people had been describing IFS. So I finally read up on it, and have been successfully applying it ever since.
In this post, I’ll try to describe and motivate IFS in terms which are less likely to give people in this audience the same kind of “no, that’s nonsense” reaction as I initially had.
Epistemic status
This post is intended to give an argument for why something like the IFS model could be true and a thing that works. It’s not really an argument that IFS is correct. My reason for thinking in terms of IFS is simply that I was initially super-skeptical of it (more on the reasons for my skepticism later), but then started encountering things which it turned out IFS predicted - and I only found out about IFS predicting those things after I familiarized myself with it.
Additionally, I now feel that IFS gives me significantly more gears [LW · GW] for understanding the behavior of both other people and myself, and it has been significantly transformative in addressing my own emotional issues. Several other people who I know report it having been similarly powerful for them. On the other hand, aside from a few isolated papers with titles like “proof-of-concept” or “pilot study”, there seems to be conspicuously little peer-reviewed evidence in favor of IFS, meaning that we should probably exercise some caution.
I think that, even if not completely correct, IFS is currently the best model that I have for explaining the observations that it’s pointing at [LW · GW]. I encourage you to read this post in the style of learning soft skills [LW · GW] - trying on this perspective, and seeing if there’s anything in the description which feels like it resonates with your experiences.
But before we talk about IFS, let’s first talk about building robots. It turns out that if we put together some existing ideas from machine learning and neuroscience, we can end up with a robot design that pretty closely resembles IFS’s model of the human mind.
What follows is an intentionally simplified story, which is simpler than either the full IFS model or a full account that would incorporate everything that I know about human brains. Its intent is to demonstrate that an agent architecture with IFS-style subagents might easily emerge from basic machine learning principles, without claiming that all the details of that toy model would exactly match human brains. A discussion of what exactly IFS does claim in the context of human brains follows after the robot story.
Wanted: a robot which avoids catastrophes
Suppose that we’re building a robot that we want to be generally intelligent. The hot thing these days seems to be deep reinforcement learning, so we decide to use that. The robot will explore its environment, try out various things, and gradually develop habits and preferences as it accumulates experience. (Just like those human babies.)
Now, there are some problems we need to address. For one, deep reinforcement learning works fine in simulated environments where you’re safe to explore for an indefinite duration. However, it runs into problems if the robot is supposed to learn in a real life environment. Some actions which the robot might take will result in catastrophic consequences, such as it being damaged. If the robot is just doing things at random, it might end up damaging itself. Even worse, if the robot does something which could have been catastrophic but narrowly avoids harm, it might then forget about it and end up doing the same thing again!
How could we deal with this? Well, let’s look at the existing literature. Lipton et al. (2016) proposed what seems like a promising idea for addressing the part about forgetting. Their approach is to explicitly maintain a memory of danger states - situations which are not the catastrophic outcome itself, but from which the learner has previously ended up in a catastrophe. For instance, if “being burned by a hot stove” is a catastrophe, then “being about to poke your finger in the stove” is a danger state. Depending on how cautious we want to be and how many preceding states we want to include in our list of danger states, “going near the stove” and “seeing the stove” can also be danger states, though then we might end up with a seriously stove-phobic robot.
In any case, we maintain a separate storage of danger states, in such a way that the learner never forgets about them. We use this storage of danger states to train a fear model: a model which is trying to predict the probability of ending up in a catastrophe from some given novel situation. For example, maybe our robot poked its robot finger at the stove in our kitchen, but poking its robot finger at stoves in other kitchens might be dangerous too. So we want the fear model to generalize from our stove to other stoves. On the other hand, we don’t want it to be stove-phobic and run away at the mere sight of a stove. The task of our fear model is to predict exactly how likely it is for the robot to end up in a catastrophe, given some situation it is in, and then make it increasingly disinclined to end up in the kinds of situations which might lead to a catastrophe.
This sounds nice in theory. On the other hand, Lipton et al. are still assuming that they can train their learner in a simulated environment, and that they can label catastrophic states ahead of time. We don’t know in advance every possible catastrophe our robot might end up in - it might walk off a cliff, shoot itself in the foot with a laser gun, be beaten up by activists protesting technological unemployment, or any number of other possibilities.
So let’s take inspiration from humans. We can’t know beforehand every bad thing that might happen to our robot, but we can identify some classes of things which are correlated with catastrophe. For instance, being beaten or shooting itself in the foot will cause physical damage, so we can install sensors which indicate when the robot has taken physical damage. If these sensors - let’s call them “pain” sensors - register a high amount of damage, we consider the situation to have been catastrophic. When they do, we save that situation and the situations preceding it to our list of dangerous situations. Assuming that our robot has managed to make it out of that situation intact and can do anything in the first place, we use that list of dangerous situations to train up a fear model.
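To make this concrete, here is a minimal sketch of what such a danger-state memory and fear model might look like in code. Everything here is an illustrative assumption of mine rather than a detail from Lipton et al.: the pain threshold, the window of preceding situations, and the classifier interface are all stand-ins.

```python
import numpy as np
from collections import deque

PAIN_THRESHOLD = 0.8       # illustrative: damage level that counts as catastrophic
HISTORY_LENGTH = 5         # how many preceding situations to mark as danger states

danger_memory = []                            # permanent, never-forgotten storage
recent_states = deque(maxlen=HISTORY_LENGTH)  # rolling window of recent situations

def observe(state, pain_signal):
    """Record the current situation; if pain crosses the threshold, save the
    situations that led up to it as danger states."""
    recent_states.append(state)
    if pain_signal > PAIN_THRESHOLD:
        danger_memory.extend(recent_states)   # these states preceded a catastrophe

def train_fear_model(fear_model, safe_states):
    """Train a model to predict the probability of catastrophe from a novel
    situation, generalizing from the danger memory (e.g. from our stove to
    other stoves) without firing on everything."""
    X = np.array(danger_memory + safe_states)
    y = np.array([1.0] * len(danger_memory) + [0.0] * len(safe_states))
    fear_model.fit(X, y)      # any supervised learner with a fit/predict interface
    return fear_model
```

The important property is just that `danger_memory` sits apart from ordinary experience, so the learner can never forget it no matter how much other training happens.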
At this point, we notice that this is starting to remind us of our experience with humans. Consider, for example, the infamous Little Albert experiment. A human baby was allowed to play with a laboratory rat, but each time that he saw the rat, a researcher made a loud scary sound behind his back. Soon Albert started getting scared whenever he saw the rat - and then he got scared of furry things in general.
Something like Albert’s behavior could be implemented very simply using something like Hebbian conditioning to get a learning algorithm which picks up on some features of the situation, and then triggers a panic reaction whenever it re-encounters those same features. For instance, it registers that the sight of fur and loud sounds tend to coincide, and then it triggers a fear reaction whenever it sees fur. This would be a basic fear model, and a “danger state” would be “seeing fur”.
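As a toy illustration of that kind of fear model, here is what a Hebbian association between stimulus features and a fear response might look like. The feature encoding, learning rate, and number of pairings are all made up for the example.

```python
import numpy as np

class HebbianFearModel:
    """Strengthens the association between whatever features are currently
    active and a fear response, whenever distress co-occurs with them."""

    def __init__(self, n_features, learning_rate=0.1):
        self.weights = np.zeros(n_features)
        self.lr = learning_rate

    def update(self, features, distress):
        # Hebbian rule: features co-active with distress get linked to fear
        self.weights += self.lr * distress * features

    def fear_response(self, features):
        # How strongly do the current features predict danger?
        return float(self.weights @ features)

# Little Albert, schematically: feature 0 stands for "sees fur"
model = HebbianFearModel(n_features=3)
sees_fur = np.array([1.0, 0.0, 0.0])
for _ in range(10):                    # repeated pairings of fur and a scary noise
    model.update(sees_fur, distress=1.0)
print(model.fear_response(sees_fur))   # now well above zero: fur alone triggers fear
```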
Wanting to keep things simple, we decide to use this kind of an approach as the fear model of our robot. Also, having read Consciousness and the Brain [? · GW], we remember a few basic principles about how those human brains work, which we decide to copy because we’re lazy and don’t want to come up with entirely new principles:
- There’s a special network of neurons in the brain, called the global neuronal workspace. The contents of this workspace are roughly [LW(p) · GW(p)] the same as the contents of consciousness.
- We can thus consider consciousness a workspace which many different brain systems have access to. It can hold a single “chunk” of information at a time.
- The brain has multiple different systems doing different things. When a mental object becomes conscious (that is, is projected into the workspace by a subsystem), many systems will synchronize their processing around analyzing and manipulating that mental object.
So here is our design:
- The robot has a hardwired system scanning for signs of catastrophe. This system has several subcomponents. One of them scans the “pain” sensors for signs of physical damage. Another system watches the “hunger” sensors for signs of low battery.
- Any of these “distress” systems can, alone or in combination, feed a negative reward signal into the global workspace. This tells the rest of the system that this is a bad state, from which the robot should escape.
- If a certain threshold level of “distress” is reached, the current situation is designated as catastrophic. All other priorities are suspended and the robot will prioritize getting out of the situation. A memory of the situation and the situations preceding it is saved to a dedicated storage.
- After the experience, the memory of the catastrophic situation is replayed in consciousness for analysis. This replay is used to train up a separate fear model which effectively acts as a new “distress” system.
- As the robot walks around its environment, sensory information about the surroundings will enter its consciousness workspace. When it plans future actions, simulated sensory information about how those actions would unfold enters the workspace. Whenever the new fear model detects features in either kind of sensory information which it associates with the catastrophic events, it will feed “fear”-type “distress” into the consciousness workspace.
So if the robot sees things which remind it of poking at a hot stove, it will be inclined to go somewhere else; if it imagines doing something which would cause it to poke at the hot stove, then it will be inclined to imagine doing something else.
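Schematically, the key point is that the same fear model gets consulted twice: once on actual sensory input, and once on imagined input during planning. Here is a rough sketch, where `simulate` stands for a hypothetical world model and `fear_model` is the kind of model trained above - both are assumptions of this toy design, not anything from the literature.

```python
def distress_from_perception(fear_model, observation):
    """Fear fed into the workspace by what the robot actually sees right now."""
    return fear_model.fear_response(observation)

def distress_from_imagination(fear_model, simulate, state, candidate_action):
    """Fear fed into the workspace by an imagined action: roll the world model
    forward and run the *same* fear model on the simulated observation."""
    imagined_observation = simulate(state, candidate_action)
    return fear_model.fear_response(imagined_observation)

def plan(fear_model, simulate, state, candidate_actions):
    # Prefer whichever imagined future triggers the least fear; a full agent
    # would trade this off against the expected reward of each action.
    return min(candidate_actions,
               key=lambda a: distress_from_imagination(fear_model, simulate, state, a))
```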
Introducing managers
But is this actually enough? We've now basically set up an algorithm which warns the robot when it sees things which have previously preceded a bad outcome. This might be enough for dealing with static tasks, such as not burning yourself at a stove. But it seems insufficient for dealing with things like predators or technological unemployment protesters, who might show up in a wide variety of places and actively try to hunt you down. By the time you see a sign of them, you're already in danger. It would be better if we could learn to avoid them entirely, so that the fear model would never even be triggered.
As we ponder this dilemma, we surf the web and run across this blog post summarizing Saunders, Sastry, Stuhlmüller & Evans (2017). They are also concerned with preventing reinforcement learning agents from running into catastrophes, but have a somewhat different approach. In their approach, a reinforcement learner is allowed to do different kinds of things, which a human overseer then allows or blocks. A separate “blocker” model is trained to predict which actions the human overseer would block. In the future, if the learner ever tries to take an action which the “blocker” predicts the human overseer would disallow, the blocker stops that action. In effect, the system consists of two separate subagents, one subagent trying to maximize rewards and the other subagent trying to block non-approved actions.
Since our robot has a nice modular architecture into which we can add various subagents which are listening in and taking actions, we decide to take inspiration from this idea. We create a system for spawning dedicated subprograms which try to predict and block actions which would cause the fear model to be triggered. In theory, this is unnecessary: given enough time, even standard reinforcement learning should learn to avoid the situations which trigger the fear model. But again, trial-and-error can take a very long time to learn exactly which situations trigger fear, so we dedicate a separate subprogram to the task of pre-emptively figuring it out.
Each fear model is paired with a subagent that we’ll call a manager. While the fear model has associated a bunch of cues with the notion of an impending catastrophe, the manager learns to predict which situations would cause the fear model to trigger. Despite sounding similar, these are not the same thing: one indicates when you are already in danger, the other is trying to figure out what you can do to never end up in danger in the first place. A fear model might learn to recognize signs which technological unemployment protesters commonly wear, whereas a manager might learn the kinds of environments where the fear model has noticed protesters before: for instance, near the protester HQ.
Then, if a manager predicts that a given action (such as going to the protester HQ) would eventually trigger the fear model, it will block that action and promote some other action. We can use the interaction of these subsystems to try to ensure that the robot only feels fear in situations which already resemble the catastrophic situation so much as to actually be dangerous. At the same time, the robot will be unafraid to take actions in situations which could lead to a danger zone, but are themselves safe to be in.
As an added benefit, we can recycle the manager component to also do the same thing as the blocker component in the Saunders et al. paper originally did. That is, if the robot has a human overseer telling it in strict terms not to do some things, it can create a manager subprogram which models that overseer and likewise blocks the robot from doing things which the model predicts that the overseer would disapprove of.
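Under these assumptions, a manager can be sketched as a classifier over (state, action) pairs, trained on whether the fear model later triggered - structurally the same as the blocker, just with the fear model standing in for the human overseer. The feature encoding and blocking threshold below are illustrative choices of mine, not details from the paper.

```python
import numpy as np

class Manager:
    """Learns to predict which (state, action) pairs would later trigger the
    fear model, and blocks them pre-emptively."""

    def __init__(self, predictor, block_threshold=0.5):
        self.predictor = predictor          # any classifier with fit/predict_proba
        self.block_threshold = block_threshold

    @staticmethod
    def encode(state, action):
        # Illustrative encoding: concatenate state and action feature vectors
        return np.concatenate([state, action])

    def train(self, experiences):
        # experiences: [(state, action, fear_triggered_later), ...] gathered by
        # watching whether the fear model fired downstream of each action
        X = np.array([self.encode(s, a) for s, a, _ in experiences])
        y = np.array([fired for _, _, fired in experiences])
        self.predictor.fit(X, y)

    def blocks(self, state, action):
        p_fear = self.predictor.predict_proba([self.encode(state, action)])[0, 1]
        return p_fear > self.block_threshold
```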
Putting together a toy model
If the robot does end up in a situation where the fear model is sounding an alarm, then we want to get it out of that situation as quickly as possible. It may be worth spawning a specialized subroutine just for this purpose. Technological unemployment activists could, among other things, use flamethrowers that set the robot on fire. So let’s call these subprograms, dedicated to escaping from the danger zone, firefighters.
So how does the system as a whole work? First, the different subagents act by sending into the consciousness workspace various mental objects, such as an emotion of fear, or an intent to e.g. make breakfast. If several subagents are submitting identical mental objects, we say that they are voting for the same object. On each time-step, one of the submitted objects is chosen at random to become the contents of the workspace, with each object having a chance to be selected that’s proportional to its number of votes. If a mental object describing a physical action (an “intention”) ends up in the workspace and stays chosen for several time-steps, then that action gets executed by a motor subsystem.
Depending on the situation, some subagents will have more votes than others. E.g. a fear model submitting a fear object gets a number of votes proportional to how strongly it is activated. Besides the specialized subagents we’ve discussed, there’s also a default planning subagent, which is just taking whatever actions (that is, sending to the workspace whatever mental objects) it thinks will produce the greatest reward. This subagent only has a small number of votes.
Finally, there’s a self-narrative agent which is constructing a narrative of the robot’s actions as if it was a unified agent, for social purposes and for doing reasoning afterwards. After the motor system has taken an action, the self-narrative agent records this as something like “I, Robby the Robot, made breakfast by cooking eggs and bacon”, transmitting this statement to the workspace and saving it to an episodic memory store for future reference.
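Here is a rough sketch of a single time-step of this workspace mechanism. The specifics - the `submit` interface, the three-step persistence requirement, intentions tagged as strings - are arbitrary choices of mine for the sake of a runnable example.

```python
import random
from collections import Counter

PERSISTENCE_STEPS = 3   # an intention must hold the workspace this long to execute

def execute_motor_action(intention):
    print(f"motor system executes: {intention}")   # stub for the motor subsystem

def workspace_step(subagents, workspace_history):
    """One time-step of the consciousness workspace."""
    # Each subagent submits a (mental_object, votes) pair; identical objects
    # submitted by several subagents pool their votes.
    ballot = Counter()
    for agent in subagents:
        mental_object, votes = agent.submit(workspace_history)
        ballot[mental_object] += votes

    # One object is chosen at random, weighted by its total vote count.
    objects, weights = zip(*ballot.items())
    chosen = random.choices(objects, weights=weights, k=1)[0]
    workspace_history.append(chosen)

    # An intention that has held the workspace for several consecutive
    # time-steps gets executed by the motor subsystem.
    recent = workspace_history[-PERSISTENCE_STEPS:]
    if (len(recent) == PERSISTENCE_STEPS
            and all(o == chosen for o in recent)
            and isinstance(chosen, str) and chosen.startswith("intention:")):
        execute_motor_action(chosen)
```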
Consequences of the model
Is this design any good? Let’s consider a few of its implications.
First, in order for the robot to take physical actions, the intent to do so has to be in its consciousness for a long enough time for the action to be taken. If there are any subagents that wish to prevent this from happening, they must muster enough votes to bring into consciousness some other mental object replacing that intention before it’s been around for enough time-steps to be executed by the motor system. (This is analogous to the concept of the final veto in humans, where consciousness is the last place to block pre-consciously initiated actions before they are taken.)
Second, the different subagents do not see each other directly: they only see the consequences of each other’s actions, as that’s what’s reflected in the contents of the workspace. In particular, the self-narrative agent has no access to information about which subagents were responsible for generating which physical action. It only sees the intentions which preceded the various actions, and the actions themselves. Thus it might easily end up constructing a narrative which creates the internal appearance of a single agent, even though the system is actually composed of multiple subagents.
Third, even if the subagents can’t directly see each other, they might still end up forming alliances. For example, if the robot is standing near the stove, a curiosity-driven subagent might propose poking at the stove (“I want to see if this causes us to burn ourselves again!”), while the default planning system might propose cooking dinner, since that’s what it predicts will please the human owner. Now, a manager trying to prevent a fear model agent from being activated will eventually learn that if it votes for the default planning system’s intention to cook dinner (which it saw earlier), then the curiosity-driven agent is less likely to get its intentions into consciousness. Thus, no poking at the stove, and the manager’s and the default planning system’s goals end up aligned.
Fourth, this design can make it really difficult for the robot to even become aware of the existence of some managers. A manager may learn to support any other mental processes which block the robot from taking specific actions. It does this by voting in favor of mental objects which orient behavior towards anything else. This might manifest as something subtle, such as a mysterious lack of interest towards something that sounds like a good idea in principle, or just repeatedly forgetting to do something, as the robot always seems to get distracted by something else. The self-narrative agent, not having any idea of what’s going on, might just explain this as “Robby the Robot is forgetful sometimes” in its internal narrative.
Fifth, the default planning subagent here is doing something like rational planning, but given its weak voting power, it’s likely to be overruled if other subagents disagree with it (unless some subagents also agree with it). If some actions seem worth doing, but there are managers which are blocking them and the default planning subagent doesn’t have an explicit representation of those managers, this can manifest as all kinds of procrastinating behaviors and numerous failed attempts for the default planning system to “try to get itself to do something”, using various strategies. But as long as the managers keep blocking those actions, the system is likely to remain stuck.
Sixth, the purpose of both managers and firefighters is to keep the robot out of a situation that has been previously designated as dangerous. Managers do this by trying to pre-emptively block actions that would cause the fear model agent to activate; firefighters do this by trying to take actions which shut down the fear model agent after it has activated. But the fear model agent activating is not actually the same thing as being in a dangerous situation. Thus, both managers and firefighters may fall victim to Goodhart’s law [LW · GW], doing things which block the fear model while being irrelevant for escaping catastrophic situations.
For example, “thinking about the consequences of going to the activist HQ” is something that might activate the fear model agent, so a manager might try to block just thinking about it. This has the obvious consequence that the robot can’t think clearly about that issue. Similarly, once the fear model has already activated, a firefighter might Goodhart by supporting any action which helps activate an agent with a lot of voting power that’s going to think about something entirely different. This could result in compulsive behaviors which were effective at pushing the fear aside, but useless for achieving any of the robot’s actual aims.
At worst, this could cause loops of mutually activating subagents pushing in opposite directions. First, a stove-phobic robot runs away from the stove as it was about to make breakfast. Then a firefighter trying to suppress that fear, causes the robot to get stuck looking at pictures of beautiful naked robots, which is engrossing and thus great for removing the fear of the stove. Then another fear model starts to activate, this one afraid of failure and of spending so much time looking at pictures of beautiful naked robots that the robot won’t accomplish its goal of making breakfast. A separate firefighter associated with this second fear model has learned that focusing the robot’s attention on the pictures of beautiful naked robots even more is the most effective action for keeping this new fear temporarily subdued. So the two firefighters are allied and temporarily successful at their goal, but then the first one - seeing that the original stove fear has disappeared - turns off. Without the first firefighter’s votes supporting the second firefighter, the fear manages to overwhelm the second firefighter, causing the robot to rush into making breakfast. This again activates its fear of the stove, but if the fear of failure remains strong enough, it might overpower its fear of the stove so that the robot manages to make breakfast in time...
Hmm. Maybe this design isn’t so great after all. Good thing we noticed these failure modes, so that there aren’t any mind architectures like this going around being vulnerable to them!
The Internal Family Systems model
But enough hypothetical robot design; let’s get to the topic of IFS. The IFS model hypothesizes the existence of three kinds of “extreme parts” in the human mind:
- Exiles are said to be parts of the mind which hold the memory of past traumatic events, which the person did not have the resources to handle. They are parts of the psyche which have been split off from the rest and are frozen in time of the traumatic event. When something causes them to surface, they tend to flood the mind with pain. For example, someone may have an exile associated with times when they were romantically rejected in the past.
- Managers are parts that have been tasked with keeping the exiles permanently exiled from consciousness. They try to arrange a person’s life and psyche so that exiles never surface. For example, managers might keep someone from reaching out to potential dates due to a fear of rejection.
- Firefighters react when exiles have been triggered, and try to either suppress the exile’s pain or distract the mind from it. For example, after someone has been rejected by a date, they might find themselves drinking in an attempt to numb the pain.
- Some presentations of the IFS model simplify things by combining Managers and Firefighters into the broader category of Protectors, and thus only talk about Exiles and Protectors.
Exiles are not limited to being created from the kinds of situations that we would commonly consider seriously traumatic. They can also be created from things like relatively minor childhood upsets, as long as the child didn’t feel like they could handle the situation.
IFS further claims that you can treat these parts as something like independent subpersonalities. You can communicate with them, consider their worries, and gradually persuade managers and firefighters to give you access to the exiles that have been kept away from consciousness. When you do this, you can show them that you are no longer in the situation which was catastrophic before, and now have the resources to handle it if something similar was to happen again. This heals the exile, and also lets the managers and firefighters assume better, healthier roles.
As I mentioned in the beginning, when I first heard about IFS, I was turned off by it for several different reasons. For instance, here were some of my thoughts at the time:
- The whole model about some parts of the mind being in pain, and other parts trying to suppress their suffering. The thing about exiles was framed in terms of a part of the mind splitting off in order to protect the rest of the mind against damage. What? That doesn’t make any evolutionary sense! A traumatic situation is just sensory information for the brain, not literal brain damage: it wouldn’t have made any sense for minds to evolve in a way that caused parts of them to split off, forcing other parts of the mind to try to keep them suppressed. Why not just… never be damaged in the first place?
- That whole thing about parts being personalized characters that you could talk to. That… doesn’t describe anything in my experience.
- Also, how does just talking to yourself fix any trauma or deeply ingrained behaviors?
- IFS talks about everyone having a “True Self”. Quote from Wikipedia: “IFS also sees people as being whole, underneath this collection of parts. Everyone has a true self or spiritual center, known as the Self to distinguish it from the parts. Even people whose experience is dominated by parts have access to this Self and its healing qualities of curiosity, connectedness, compassion, and calmness. IFS sees the therapist's job as helping the client to disentangle themselves from their parts and access the Self, which can then connect with each part and heal it, so that the parts can let go of their destructive roles and enter into a harmonious collaboration, led by the Self.” That… again did not sound particularly derived from any sensible psychology.
Hopefully, I’ve already answered my past self’s concerns about the first point. The model itself talks in terms of managers protecting the mind from pain, exiles being exiled from consciousness in order for their pain to remain suppressed, etc. Which is a reasonable description of the subjective experience of what happens. But the evolutionary logic - as far as I can guess - is slightly different: to keep us out of dangerous situations.
The story of the robot describes the actual “design rationale”. Exiles are in fact subagents which are “frozen in the time of a traumatic event”, but they didn’t split off to protect the rest of the mind from damage. Rather, they were created as an isolated memory block to ensure that the memory of the event wouldn’t be forgotten. Managers then exist to keep the person away from such catastrophic situations, and firefighters exist to help escape them. Unfortunately, this setup is vulnerable to various failure modes, similar to those that the robot is vulnerable to.
With that said, let’s tackle the remaining problems that I had with IFS.
Personalized characters
IFS suggests that you can experience the exiles, managers and firefighters in your mind as something akin to subpersonalities - entities with their own names, visual appearances, preferences, beliefs, and so on. Furthermore, this isn’t inherently dysfunctional, nor indicative of something like Dissociative Identity Disorder. Rather, even people who are entirely healthy and normal may experience this kind of “multiplicity”.
Now, it’s important to note right off that not everyone has this to a major extent: you don’t need to experience multiplicity in order for the IFS process to work. For instance, my parts feel more like bodily sensations and shards of desire than subpersonalities, but IFS still works super-well for me.
In the book Internal Family Systems Therapy, Richard Schwartz, the developer of IFS, notes that if a person’s subagents play well together, then that person is likely to feel mostly internally unified. On the other hand, if a person has lots of internal conflict, then they are more likely to experience themselves as having multiple parts with conflicting desires.
I think that this makes a lot of sense, assuming the existence of something like a self-narrative subagent. If you remember, this is the part of the mind which looks at the actions that the mind-system has taken, and then constructs an explanation for why those actions were taken. (See e.g. the posts on the limits of introspection [LW · GW] and on the Apologist and the Revolutionary [LW · GW] for previous evidence for the existence of such a confabulating subagent with limited access to our true motivations.) As long as all the exiles, managers and firefighters are functioning in a unified fashion, the most parsimonious model that the self-narrative subagent might construct is simply that of a unified self. But if the system keeps being driven into strongly conflicting behaviors, then it can’t necessarily make sense of them from a single-agent perspective. Then it might naturally settle on something like a multiagent approach and experience itself as being split into parts.
Kevin Simler, in Neurons Gone Wild, notes how people with strong addictions seem particularly prone to developing multi-agent narratives:
This American Life did a nice segment on addiction a few years back, in which the producers — seemingly on a lark — asked people to personify their addictions. "It was like people had been waiting all their lives for somebody to ask them this question," said the producers, and they gushed forth with descriptions of the 'voice' of their inner addict:
"The voice is irresistible, always. I'm in the thrall of that voice."
"Totally out of control. It's got this life of its own, and I can't tame it anymore."
"I actually have a name for the voice. I call it Stan. Stan is the guy who tells me to have the extra glass of wine. Stan is the guy who tells me to smoke."
This doesn’t seem like it explains all of it, though. I’ve frequently been very dysfunctional, and have always found very intuitive the notion of the mind being split into parts. Yet I mostly still don’t seem to experience my subagents as anywhere near as person-like as some others clearly do. I know at least one person who ended up finding IFS because of having all of these talking characters in their head, and who was looking for something that would help them make sense of it. Nothing like that has ever been the case for me: I did experience strongly conflicting desires, but they were just that, strongly conflicting desires.
I can only surmise that it has something to do with the same kinds of differences which cause some people to think mainly verbally, others mainly visually, and others yet in some other hard-to-describe modality. Some fiction writers spontaneously experience their characters as real people who speak to them and will even bother the writer when at the supermarket, and some others don’t.
It’s been noted that the mechanisms which we use to model ourselves and other people overlap - not very surprisingly, since both we and other people are (presumably) humans. So it seems reasonable that some of the mechanisms for representing other people would sometimes also end up spontaneously recruited for representing internal subagents or coalitions of them.
Why should this technique be useful for psychological healing?
Okay, suppose it’s possible to access our subagents somehow. Why would just talking with these entities in your own head, help you fix psychological issues?
Let’s consider that a person having exiles, managers and firefighters is costly in the sense of constraining that person’s options. If you never want to do anything that would cause you to see a stove, that limits quite a bit of what you can do. I strongly suspect that many forms of procrastination and failure to do things we’d like to do are mostly a manifestation of overactive managers. So it’s important not to create those kinds of entities unless the situation really is one which should be designated as categorically unacceptable to end up in.
The theory behind IFS holds that not all painful situations turn into trauma: just the ones in which we felt helpless, like we didn’t have the necessary resources for dealing with the situation. This makes sense, since if we were capable of dealing with it, then the situation can’t have been that catastrophic. The aftermath of the immediate event is important as well: a child who ends up in a painful situation doesn’t necessarily end up traumatized, if they have an adult who can put the event in a reassuring context afterwards.
But situations which used to be catastrophic and impossible for us to handle before, aren’t necessarily that any more. It seems important to have a mechanism for updating that cache of catastrophic events and for disassembling the protections around it, if the protections turn out to be unnecessary.
How does that process usually happen, without IFS or any other specialized form of therapy?
Often, by talking about your experiences with someone you trust. Or writing about them in private or in a blog.
In my post about Consciousness and the Brain [? · GW], I mentioned that once a mental object becomes conscious, many different brain systems synchronize their processing around it. I suspect that the reason why many people have such a powerful urge to discuss their traumatic experiences with someone else, is that doing so is a way of bringing those memories into consciousness in detail. And once you’ve dug up your traumatic memories from their cache, their content can be re-processed and re-evaluated. If your brain judges that you now do have the resources to handle that event if you ever end up in it again, or if it’s something that simply can’t happen anymore, then the memory can be removed from the cache and you no longer need to avoid it.
I think it’s also significant that, while something like just writing about a traumatic event is sometimes enough to heal, often it’s more effective if you have a sympathetic listener who you trust. Traumas often involve some amount of shame: maybe you were called lazy as a kid and are still afraid of others thinking that you are lazy. Here, having friends who accept you and are willing to nonjudgmentally listen while you talk about your issues, is by itself an indication that the thing that you used to be afraid of isn’t a danger anymore: there exist people who will stay by your side despite knowing your secret.
Now, when you are talking to a friend about your traumatic memory, you will be going through cached memories that have been stored in an exile subagent. A specific memory circuit - one of several circuits specialized for the act of holding painful memories - is active and outputting its contents into the global workspace, from which they are being turned into words.
Meaning that, in a sense, your friend is talking directly to your exile.
Could you hack this process, so that you wouldn’t even need a friend, and could carry this process out entirely internally?
In my earlier post [? · GW], I remarked that you could view language as a way of joining two people’s brains together. A subagent in your brain outputs something that appears in your consciousness, you communicate it to a friend, it appears in their consciousness, subagents in your friend’s brain manipulate the information somehow, and then they send it back to your consciousness.
If you are telling your friend about your trauma, you are in a sense joining your workspaces together, and letting some subagents in your workspace communicate with the “sympathetic listener” subagents in your friend’s workspace.
So why not let a “sympathetic listener” subagent in your workspace hook up directly with the traumatized subagents that are also in your own workspace?
I think that something like this happens when you do IFS. You are using a technique designed to activate the relevant subagents in a very specific way, which allows for this kind of a “hooking up” without needing another person.
For instance, suppose that you are talking to a manager subagent which wants to hide the fact that you’re bad at something, and starts reacting defensively whenever the topic is brought up. Now, one way by which its activation could manifest, is feeding those defensive thoughts and reactions directly into your workspace. In such a case, you would experience them as your own thoughts, and possibly as objectively real. IFS calls this “blending” [LW · GW]; I’ve also previously used the term “cognitive fusion” [LW · GW] for what’s essentially the same thing.
Instead of remaining blended, you then use various unblending / cognitive defusion techniques that highlight the way by which these thoughts and emotions are coming from a specific part of your mind. You could think of this as wrapping extra content around the thoughts and emotions, and then seeing them through the wrapper (which is obviously not-you), rather than experiencing the thoughts and emotions directly (which you might experience as your own). For example, the IFS book Self-Therapy suggests this unblending technique (among others):
Allow a visual image of the part [subagent] to arise. This will give you the sense of it as a separate entity. This approach is even more effective if the part is clearly a certain distance away from you. The further away it is, the more separation this creates.
Another way to accomplish visual separation is to draw or paint an image of the part. Or you can choose an object from your home that represents the part for you or find an image of it in a magazine or on the Internet. Having a concrete token of the part helps to create separation.
I think of this as something like: you are taking the subagent in question, routing its responses through a visualization subsystem, and then you see a talking fox or whatever. And this is then a representation that your internal subsystems for talking with other people can respond to. You can then have a dialogue with the part (verbally or otherwise) in a way where its responses are clearly labeled as coming from it, rather than being mixed together with all the other thoughts in the workspace. This lets the content coming from the sympathetic-listener subagent and the exile/manager/firefighter subagent be kept clearly apart, allowing you to consider the emotional content as an external listener would, preventing you from drowning in it. You’re hacking your brain so as to work as the therapist and the client at the same time.
The Self
IFS claims that, below all the various parts and subagents, there exists a “true self” which you can learn to access. When you are in this Self, you exhibit the qualities of “calmness, curiosity, clarity, compassion, confidence, creativity, courage, and connectedness”. Being at least partially in Self is said to be a prerequisite for working with your parts: if you are not, then you are not able to evaluate their models objectively. The parts will sense this, and as a result, they will not share their models properly, preventing the kind of global re-evaluation of their contents that would update them.
This was the part that I was initially the most skeptical of, and which made me most frequently decide that IFS was not worth looking at. I could easily conceptualize the mind as being made up of various subagents. But then it would just be numerous subagents all the way down, without any single one that could be designated the “true” self.
But let’s look at IFS’s description of how exactly to get into Self. You check whether you seem to be blended with any part. If you are, you unblend with it. Then you check whether you might also be blended with some other part. If you are, you unblend from it also. You then keep doing this until you can find no part that you might be blended with. All that’s left are those “eight Cs”, which just seem to be a kind of a global state, with no particular part that they would be coming from.
I now think that “being in Self” represents a state where no particular subagent is getting a disproportionate share of voting power, and everything is processed by the system as a whole. Remember that in the robot story, catastrophic states were situations in which the organism should never end up. A subagent kicking in to prevent that from happening is a kind of a priority override to normal thinking. It blocks you from being open and calm and curious because some subagent thinks that doing so would be dangerous. If you then turn off or suspend all those priority overrides, then the mind’s default state absent any override seems to be one with the qualities of the Self.
This actually fits at least one model of the function of positive emotions pretty well. Fredrickson (1998) suggests that an important function of positive emotions is to make us engage in activities such as play, exploration, and savoring the company of other people. Doing these things has the effect of building up skills, knowledge, social connections, and other kinds of resources which might be useful for us in the future. If there are no active ongoing threats, then that implies that the situation is pretty safe for the time being, making it reasonable to revert to a positive state of being open to exploration.
The Internal Family Systems Therapy book makes a somewhat big deal out of the fact that everyone, even most traumatized people, ultimately has a Self which they can access. It explains this in terms of the mind being organized to protect against damage, and with parts always splitting off from the Self when it would otherwise be damaged. I think the real explanation is much simpler: the mind is not accumulating damage, it is just accumulating a longer and longer list of situations not considered safe.
As an aside, this model feels like it makes me less confused about confidence. It seems like people are really attracted to confident people, and that to some extent it’s also possible to fake confidence until it becomes genuine. But if confidence is so attractive and we can fake it, why hasn’t evolution just made everyone confident by default?
Turns out that it has. The reason why faked confidence gradually turns into genuine confidence is that by forcing yourself to act in confident ways which felt dangerous before, your mind gets information indicating that this behavior is not as dangerous as you originally thought. That gradually turns off those priority overrides that kept you out of Self originally, until you get there naturally.
The reason why being in Self is a requirement for doing IFS, is the existence of conflicts between parts. For instance, recall the stove-phobic robot having a firefighter subagent that caused it to retreat from the stove into watching pictures of beautiful naked robots. This triggered a subagent which was afraid of the naked-robot-watching preventing the robot from achieving its goals. If the robot now tried to do IFS and talk with the firefighter subagent that caused it to run away from stoves, this might bring to mind content which activated the exile that was afraid of not achieving things. Then that exile would keep flooding the mind with negative memories, trying to achieve its priority override of “we need to get out of this situation”, and preventing the process from proceeding. Thus, all of the subagents that have strong opinions about the situation need to be unblended from, before integration can proceed.
IFS also has a separate concept of “Self-Leadership”. This is a process where various subagents eventually come to trust the Self, so that they allow the person to increasingly remain in Self even in various emergencies. IFS views this as a positive development, not only because it feels nice, but because doing so means that the person will have more cognitive resources available for actually dealing with the emergency in question.
I think that this ties back to the original notion of subagents being generated to invoke priority overrides for situations which the person originally didn’t have the resources to handle. Many of the subagents IFS talks about seem to emerge from childhood experiences. A child has many fewer cognitive, social, and emotional resources for dealing with bad situations, in which case it makes sense to just categorically avoid them, and invoke special overrides to ensure that this happens. A child’s cognitive capacities, models of the world, and abilities to self-regulate are also less developed, so she may have a harder time staying out of dangerous situations without having some priority overrides built in. An adult, however, typically has many more resources than a child does. Even when faced with an emergency situation, it can be much better to be able to remain calm and analyze the situation using all of one’s subagents, rather than having a few of them take over all the decision-making. Thus, it seems to me - both theoretically and practically - that developing Self-Leadership is really valuable.
That said, I do not wish to imply that it would be a good goal to never have negative emotions. Sometimes blending with a subagent, and experiencing resulting negative emotions, is the right thing to do in that situation. Rather than suppressing negative emotions entirely, Self-Leadership aims to get to a state where any emotional reaction tends to be endorsed by the mind-system as a whole. Thus, if feeling angry or sad or bitter or whatever feels appropriate to the situation, you can let yourself feel so, and then give yourself to that emotion without resisting it. As a result, negative emotions become less unpleasant to experience, since there are fewer subagents trying to fight against them. Also, if it turns out that being in a negative emotional state is no longer useful, the system as a whole can just choose to move back into Self.
Final words
I’ve now given a brief summary of the IFS model, and explained why I think it makes sense. This is of course not enough to establish the model as true. But it might help in making the model plausible enough to at least try out.
I think that most people could benefit from learning and doing IFS on themselves, either alone or together with a friend. I’ve been saying that exiles/managers/firefighters tend to be generated from trauma, but it’s important to realize that these events don’t need to be anything immensely traumatic. The kinds of ordinary, normal childhood upsets that everyone has had can generate these kinds of subagents. Remember, just because you think of a childhood event as trivial now, doesn’t mean that it felt trivial to you as a child. Doing IFS work, I’ve found exiles related to memories and events which I thought left no negative traces, but actually did.
Remember also that it can be really hard to notice the presence of some managers: if they are doing their job effectively, then you might never become aware of them directly. “I don’t have any trauma so I wouldn’t benefit from doing IFS” isn’t necessarily correct. Rather, the cues that I use for detecting a need to do internal work are:
- Do I have the qualities associated with Self, or is something blocking them?
- Do I feel like I’m capable of dealing with this situation rationally, and doing the things which feel like good ideas on an intellectual level?
- Do my emotional reactions feel like they are endorsed by my mind-system as a whole, or is there a resistance to them?
If not, there is often some internal conflict which needs to be addressed - and IFS, combined with some other practices such as Focusing and meditation [LW · GW], has been very useful in learning to solve those internal conflicts.
Even if you don’t feel convinced that doing IFS personally would be a good idea, I think adopting its framework of exiles, managers and firefighters is useful for better understanding the behavior of other people. Their dynamics will be easier to recognize in other people if you’ve had some experience recognizing them in yourself, however.
If you want to learn more about IFS, I would recommend starting with Self-Therapy by Jay Earley. In terms of What/How/Why books [LW · GW], my current suggestions would be:
- How: Self-Therapy by Jay Earley.
- What: Internal Family Systems Therapy, by Richard Schwartz
- Why: The Power of Focusing, by Ann Weiser Cornell (technically not about IFS, but AWC’s variant of Focusing gets very close to IFS, and is excellent for conveying the right mindset for it)
This post was written as part of research supported by the Foundational Research Institute. Thank you to everyone who provided feedback on earlier drafts of this article: Eli Tyre, Elizabeth Van Nostrand, Jan Kulveit, Juha Törmänen, Lumi Pakkanen, Maija Haavisto, Marcello Herreshoff, Qiaochu Yuan, and Steve Omohundro.
86 comments
comment by Qiaochu_Yuan · 2019-01-29T01:29:11.129Z · LW(p) · GW(p)
Thanks for writing this! I am very excited that this post exists. I think what this model suggests about procrastination and addiction alone (namely, that they're things that managers and firefighters are doing to protect exiles) is already huge, and resonates strongly with my experience.
In the beginning of 2018 I experienced a dramatic shift that I still don't quite understand; my sense of it at the time was that there was this crippling fear / shame that had been preventing me from doing almost anything, that suddenly lifted (for several reasons, it's a long story). That had many dramatic effects, and one of the most noticeable ones was that I almost completely stopped wanting to watch TV, read manga, play video games, or any of my other addiction / procrastination behaviors. It became very clear that the purpose of all of those behaviors was numbing and distraction ("general purpose feeling obliterators" used by firefighters, as waveman says in another comment) from how shitty I felt all the time, and after the shift I basically felt so good that I didn't want or need to do that anymore.
(This lasted for a while but not forever; I crashed hard in September (long story again) before experiencing a very similar shift again a few weeks ago.)
Another closely related effect is that many things that had been too scary for me to think about became thinkable (e.g. regrettable dynamics in my romantic relationships), and I think this is a crucial observation for the rationality project. When you have exile-manager-firefighter dynamics going on and you don't know how to unblend from them, you cannot think clearly about anything that triggers the exile, and trying to make yourself do it anyway will generate tremendous internal resistance in one form or another (getting angry, getting bored, getting sleepy, getting confused, all sorts of crap), first from managers trying to block the thoughts and then from firefighters trying to distract you from the thoughts. Top priority is noticing that this is happening and then attending to the underlying emotional dynamics.
↑ comment by Kaj_Sotala · 2019-01-29T16:50:18.191Z · LW(p) · GW(p)
things that had been too scary for me to think about became thinkable (e.g. regrettable dynamics in my romantic relationships), and I think this is a crucial observation for the rationality project. When you have exile-manager-firefighter dynamics going on and you don't know how to unblend from them, you cannot think clearly about anything that triggers the exile, and trying to make yourself do it anyway will generate tremendous internal resistance in one form or another (getting angry, getting bored, getting sleepy, getting confused, all sorts of crap), first from managers trying to block the thoughts and then from firefighters trying to distract you from the thoughts. Top priority is noticing that this is happening and then attending to the underlying emotional dynamics.
Yes!
Valentine has also written some good stuff on this, in e.g. The Art of Grieving Well:
I think the first three so-called “stages of grief” — denial, anger, and bargaining — are avoidance behaviors. They’re attempts to distract oneself from the painful emotional update. Denial is like trying to focus on anything other than the hurt foot, anger is like clutching and yelling and getting mad at the situation, and bargaining is like trying to rush around and bandage the foot and clean up the blood. In each case, there’s an attempt to keep the mind preoccupied so that it can’t start the process of tracing the pain and letting the agonizing-but-true world come to feel true. It’s as though there’s a part of the psyche that believes it can prevent the horror from being real by avoiding coming to feel as though it’s real. [...]
In every case, the part of the psyche driving the behavior seems to think that it can hold the horror at bay by preventing the emotional update that the horror is real. The problem is, success requires severely distorting your ability to see what is real, and also your desire to see what’s real. This is a cognitive black hole — what I sometimes call a “metacognitive blindspot” — from which it is enormously difficult to return.
This means that if we want to see reality clearly, we have to develop some kind of skill that lets us grieve well — without resistance, without flinching, without screaming to the sky with declarations of war as a distraction from our pain.
We have to be willing to look directly and unwaveringly at horror.
and also in Looking into the Abyss:
It would be bad if pain weren’t automatically aversive and we had to consciously remember to avoid things that cause it. Instead, we have a really clever automatic system that notices when something is bad or dangerous, grabs our conscious attention to make us change our behavior, and often has us avoiding the problem unconsciously thereafter.
But because pain is an interpretation rather than a sensation, avoiding it acts as an approximation of avoiding things that are actually bad for us.
This can result in some really quirky behavior beyond things like dangerously bending at the waist. For instance, moving or touching ourselves seems to distract us from painful sensations. So if the goal is to decrease conscious experience of pain, we might find ourselves automatically clutching or rubbing hurt body parts, rocking, or pounding our feet or fists in response to pain. Especially the latter actions probably don't help much with the injury, but they push some of the pain out of mind, so many of us end up doing this kind of behavior without really knowing why.
Writhing in agony strikes me as a particularly loud example: if some touch and movement can block pain, then maybe more touch and movement can block more pain. So if you’re in extreme pain and the goal is to get away from it, large whole-body movements seem to make sense. (Although I think there might be other reasons we do this too.)
To me, this looks like a Red Queen race, with the two “competitors” being the pain system and the “distract from pain” reflex. First the pain system tries to get our attention and change our behavior (protect a body part, get help, etc.). This is unpleasant, so the look-away reflex grabs onto the nearest available way to stop the experience of pain, and muddles some of the sensation that’s getting labeled as pain. The pain system still perceives a threat, though, so it turns up the volume so to speak. And then the look-away reflex encourages us to look even more wildly for a way out, which causes pain’s volume to go up even more….
The bit about a Red Queen race sounds to me exactly like the description of an exile/firefighter dynamic, though of course there's a deeper point there: some things are painful enough to trigger a firefighter response even where no dedicated firefighter existed previously. Probably everyone has some "generic" firefighters built right into the psyche, which act as our default response to anything sufficiently uncomfortable - similar to the part in my robot design which mentioned that
If a certain threshold level of “distress” is reached, the current situation is designated as catastrophic. All other priorities are suspended and the robot will prioritize getting out of the situation.
even before I started talking about specialized firefighters dedicated to keeping some specific exiles actually exiled. And in the context of something like physical pain or fear of a predator, just having a firefighter response that seeks to minimize the amount of experienced distress signal makes sense. The presence of the distress signal is directly correlated with the extent of the danger or potential threat, so "minimize the presence of this signal" works as an optimization criterion which is in turn directly correlated with optimizing for survival.
But when we get to things like "thinking about romantic success" or "thinking about existential risk", it's no longer neatly the case that avoiding the distress of thinking about those things also helps us avoid the underlying dangers...
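To make the escalation loop concrete, here's a minimal toy sketch of it in Python - the threshold, gains, and variable names are all invented for illustration, not taken from the actual robot design:

```python
# Toy sketch of the "Red Queen race": a pain system that escalates until its
# message is acted on, versus a generic "firefighter" that only optimizes for
# reducing the felt distress signal. All numbers are made up.
DISTRESS_THRESHOLD = 5.0  # level at which the situation is marked catastrophic

def firefighter(felt_distress: float) -> float:
    # The "look-away" reflex only sees the felt signal, not the threat itself,
    # so it just tries to suppress whatever signal is currently present.
    return 0.5 * felt_distress

def simulate(threat: float, steps: int = 6) -> None:
    raw_pain = threat
    for t in range(steps):
        felt = max(0.0, raw_pain - firefighter(raw_pain))
        # The threat itself was never addressed, so the pain system
        # turns up the volume.
        raw_pain += 0.8 * threat
        status = "CATASTROPHIC" if felt > DISTRESS_THRESHOLD else "ok"
        print(f"t={t}: raw={raw_pain:.1f} felt={felt:.1f} [{status}]")

simulate(threat=2.0)
```

Even with constant suppression, the felt signal keeps creeping up, because suppression does nothing about what the signal is tracking.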
comment by Raemon · 2019-03-05T21:14:51.055Z · LW(p) · GW(p)
Curated.
The Internal Family Systems model has seen a lot of discussion in various rationalist and rationalist-adjacent places, but:
a) usually among people who were already familiar with it,
b) usually with a vague disclaimer of it being a fake-framework, without delving into the details of where the limits of the framework lie, or how to contextualize it in a broader reductionist worldview.
I think it's been a long time coming for someone to write up a comprehensive case for why the model is worth taking seriously, placing it in terms that can be concretely reasoned about, built off of, and/or falsified.
comment by Hazard · 2020-12-04T22:37:00.313Z · LW(p) · GW(p)
Really what I want is for Kaj's entire sequence to be made into a book. Barring that, I'll settle for nominating this post.
Replies from: Raemon↑ comment by Raemon · 2020-12-04T23:03:41.327Z · LW(p) · GW(p)
I endorse people using nominations to specify things that don't-quite-fit into the schema we laid out. I think nominating an entire sequence is a reasonable thing to do, and figuring out how to fit that into our overall review/publishing system is an important question. I don't know of a better way to do that other than to just encourage people to spell out what they wish to happen, and then... see what ad-hoc systems we can think of while processing that.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2020-12-04T23:04:27.881Z · LW(p) · GW(p)
e.g. that happened with embedded agency and to some extent babble and prune
comment by David_Chapman · 2019-01-26T17:58:15.520Z · LW(p) · GW(p)
Have you read Minsky's _Society of Mind_? It is an AI-flavored psychological model of subagents that draws heavily on psychotherapeutic ideas. It seems quite similar in flavor to what you propose here. It inspired generations of students at the MIT AI Lab (although attempts to code it never worked out).
Replies from: Kaj_Sotala, eggsyntax, Kenny↑ comment by Kaj_Sotala · 2019-01-28T09:53:28.734Z · LW(p) · GW(p)
I looked at the beginning of it a bit before writing this post, but at least that part gave the impression that its subagents were very low-level (IIRC, it started with an example of building a tower of blocks, or taking some similar physical action, using many different subagents), and overall it had a strong vibe of '80s AI, so it didn't feel like the most useful thing to be reading.
↑ comment by eggsyntax · 2024-03-22T00:47:08.984Z · LW(p) · GW(p)
It inspired generations of students at the MIT AI Lab (although attempts to code it never worked out).
Do you happen to recall where you got that information? I've wondered occasionally what later became of Minsky's approach; it's intuitively pretty compelling. I'd love to find a source of info on follow-up work.
Replies from: eggsyntax↑ comment by eggsyntax · 2024-03-22T20:58:58.627Z · LW(p) · GW(p)
Here's one later discussion I found, from 2003, by Push Singh at MIT's Media Lab. It attempts to summarize the implementable parts of the book, and talks about its history and more recent developments.
A couple of interesting things:
- Unlike David's source, it says that 'Despite the great popularity of the book The Society of Mind, there have been few attempts to implement very much of the theory.'
- It says that Minsky's The Emotion Machine, forthcoming at the time, is in part a sequel to SoM. I haven't read it, so I can't vouch for the accuracy of that statement.
comment by tadrinth · 2019-01-28T17:43:29.630Z · LW(p) · GW(p)
I've been attempting to use IFS for years without having read much more than brief summaries of it. This post put me on a much firmer footing with it and I was able to much more clearly categorize a bunch of things that have been happening over the past six months or so. Then over the weekend I had a low-level background internal screaming going on, and while my first couple rounds of attempts at resolving it only helped a little, I was finally able to isolate the issue and fix what turned out to be a massive misalignment. I have not felt this aligned in years.
So thank you very, very much for writing this.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2019-01-29T16:25:29.947Z · LW(p) · GW(p)
Whoa, glad you found it that useful! Thank you for letting me know. :)
I do recommend reading at least Self-Therapy too, it mentions a number of details which I left out of this explanation, and which might be useful to know about when addressing future issues.
comment by avturchin · 2019-01-26T19:17:39.605Z · LW(p) · GW(p)
My 2 cents:
1 cent: It seems that sub-personalities do not actually exist, but are created by the human mind at the moment of query. The best way to explain this is to look at improvisational theatre, as described in Valentine's post Intelligent social web [LW · GW]. A consequence of this non-actual existence of the subpersonalities is that we could have different expectations about the types of personalities, and still get therapeutically useful and consistent-sounding results. For example, some people try to cure psychological problems by having a person remember trauma-associated past lives. The human mind is very good at creating expected narratives, and plausible-sounding stories about past lives can be immediately created by many people. I know this because I personally experimented with that practice and heard dozens of "past lives" stories, which obviously didn't provide any historically checkable information, but just recombined some background knowledge.
2nd cent: In the similar "voice dialogue" method, which I have practised, the same types of subpersonalities are postulated under slightly different names; e.g. "exiles" are called "suppressed subpersonalities". However, in voice dialogue there is an overarching subpersonality, the Controller, which works as an OS for the different program-subpersonalities and regulates when and how such subpersonalities can be called into action. The Controller is also the sum of all the protector-firefighters. It can be called up by a special procedure. Again, it doesn't actually exist.
Replies from: mr-hire, waveman, lcmgcd↑ comment by Matt Goldenberg (mr-hire) · 2019-01-27T12:17:27.171Z · LW(p) · GW(p)
I've come to a similar conclusion: that subagents are something like belief clusters, which are themselves a closer-to-the-metal leaky abstraction of what's actually going on. However, I'm open to the idea that Kaj's model is the right one here.
Replies from: avturchin↑ comment by avturchin · 2019-01-27T16:18:03.904Z · LW(p) · GW(p)
In fact, different people have different levels of schizotypy - or maybe it would be better to call it fractionedness of mind. At one end are purely monolithic humans, and at the other are people with genuine multiple personality disorder, which is very rare.
↑ comment by waveman · 2019-01-27T03:04:49.197Z · LW(p) · GW(p)
It seems that sub-personalities do not actually exist, but are created by the human mind at the moment of query.
This is one good way to rationalize them. It doesn't really much matter whether this is true or not.
Replies from: pjeby↑ comment by pjeby · 2019-10-13T04:36:07.064Z · LW(p) · GW(p)
It doesn't really much matter whether this is true or not.
I think it matters from the perspective that if subagents are simulated at query time, then a non-subagent model should be able to produce similar results to IFS, with fewer complications.
In my own experience comparing subagent-oriented approaches (e.g. IFS, Core Transformation) with non-subagent ones, the non-subagent ones generally require less work to figure out what is going on, because simulating parts that want to hide or deflect stuff is more energy-intensive and frustrating than just helping someone notice that they are hiding or deflecting things.
For example, when I segregate my own desires into parts, it increases the odds of an argument or of parts withholding information or motives, vs. presupposing that all my desires are mine and that I have good reasons even for doing apparently self-destructive things.
That being said, I can think of all kinds of situations where IFS as a metaphor would be superior to more direct approaches... but they all involve people for whom the subagent metaphor is an easier introduction to metacognition, and/or the stuff being dealt with is traumatic enough that you really want to keep it mostly out of consciousness until necessary. I mostly don't work with either group, so for me it's vastly more efficient to simply point to the hiding or deflecting and say, "that thing you just did there: don't do that," than to make up new parts for each thing being avoided.
I can take that stance, though, because I work with people who are highly motivated to change and have put me in a position to give them that type of feedback. But IFS therapists need to be able to work with people who don't always have that degree of trust and compliance with them, or who aren't as willing to accept the existence of their less-acceptable thoughts and desires. In that situation, you're going to have complications no matter what, so you might as well let people pretend they have subagents.
I'm actually kind of surprised that IFS seems so popular in rationalist-space, as I would've thought rationalists more likely to bite the bullet and accept the existence of their unendorsed desires as a simple matter of fact. In retrospect, I suppose that the ability to project (at least temporarily) those desires onto imaginary subagents would be helpful, at least as "training wheels" of a sort... and that the kind of people drawn to rationalism might be extra-likely to want to disavow all their "irrational"-seeming desires!
The reason I call it "training wheels" and dissociation is because imagined subagents allow you to disavow those desires from really being "you". If it's "5-year-old you" who has that desire (for example), then that can be more acceptable than admitting that you are the one who had -- and still has -- that desire. (My approach to this kind of thing is to first target whatever judgmental beliefs say the desire is only acceptable if you're a five-year-old, if at all; because if you don't reject the desire, then disavowing it is no longer required, and you don't need to project it onto an imaginary agent.)
To put it another way, conscious negotiation between subagents is a kludge. The brain already has systems for mediating between desires, and they function normally except when people also have judgmental beliefs that reject those desires' validity and try to squash them. The underlying decision system still tries to run them, but runs into problems because the conscious mind has arranged its life in such a way as to not leave any opportunity for them to manifest, and rejects explicitly pursuing them. So they end up coming out in dysfunctional ways that allow for continued deniability.
Or to put it another way, if you desire something, and you also desire that you not desire it, your brain resolves the conflict by making the desire's fulfillment appear to be outside your control. Believing that there is a "subagent" or "part" in play, allows you to maintain this facade... which is useful if you're a therapist and don't want to piss off your unsophisticated clients who will think you're insulting them if you tell them all their desires are theirs, period.
But if you're working on yourself, there is IMO little use for maintaining this facade, since if you're going to negotiate successfully, you're ultimately going to have to accept the validity of the desires... so you might as well bite the bullet and start from there in the first place!
On the other hand, if I reflect further, there is actually one area where I do find subagent metaphors of a sort to be kind of useful, and that is when I feel "taken over" by some past experience that I'm flashing back to. In such cases it's helpful to see it as being temporarily possessed by a ghost of my past self, in order to detach from it, and step back into the "everyday self" that can weigh things rationally without being consumed by a past emotion. But these don't involve any negotiations; they're more like, "is this thing I'm feeling actually happening now?" or "is this the most useful mindset to be in right now?"
So it's less subparts and more, "who am I acting as right now, and who would I like to be acting as?" I don't assume that these aspects of me have agency of their own, because they are roles that I can play or not play, hats I can put on or take off at will. I think that's the closest thing to anything subagenty that I've actually found useful, personally, and IMO it's a more empowering metaphor than seeing oneself as a collection of squabbling not-really-you parts.
Replies from: Kaj_Sotala, mr-hire, Qiaochu_Yuan, DaystarEld, elityre↑ comment by Kaj_Sotala · 2019-10-14T16:34:46.355Z · LW(p) · GW(p)
I'm actually kind of surprised that IFS seems so popular in rationalist-space, as I would've thought rationalists more likely to bite the bullet and accept the existence of their unendorsed desires as a simple matter of fact.
Some reasons for the popularity of IFS which seem true to me, and independent of whether you accept your desires:
- It's the main modality that rationalists happen to know which lets you do this kind of thing at all. The other popular one is Focusing, which isn't always framed in terms of subagents, but in terms of the memory reconsolidation model [LW · GW] it basically only does accessing; de- and reconsolidation will only happen to the extent that the accessing happens to trigger the brain's spontaneous mismatch detection systems. (Also the Bio-Emotive Framework has gotten somewhat popular of late, but that's a very recent development.)
- Rationalists tend to really like reductionism, in the sense of breaking complex systems down into simpler parts that you can reason about. IFS is good at giving you various gears [LW · GW] about how minds operate, and e.g. turning previously incomprehensible emotional reactions into a completely sensible chain of parts triggering each other. (And this doesn't feel substantially different than thinking in terms of e.g. schemas the way Coherence Therapy does; one is subagent-framed and the other isn't, but one's predictions seem to be essentially the same regardless of whether you think of schemas setting off each other or IFS-parts doing it.)
- Many people have natural experiences of multiplicity, e.g. having the experience of an internal critic which communicates in internal speech; if your mind tends to natively represent things as subagents already, then it's natural to be drawn to an approach which lets you use the existing interface. On the other hand, even if someone doesn't experience natural multiplicity, especially if they've dealt with severely traumatized people [LW · GW], they are likely to have experienced something like part-switching in others.
- IFS seems to offer some advantages that non-subagent approaches don't; as an example, I noticed a bug earlier today and used Coherence Therapy's "what does my brain predict would happen if I acted differently" technique to access a schema's prediction... but then I noticed that I was getting too impatient, trying to disprove that belief before I had established sufficient access to it, so I switched to treating the schema as a subagent that I could experience compassion and curiosity towards, and that helped deal with the sense of urgency. In general, the "internal compassion" frame seems to help with a lot of things, such as just wanting to rush into solutions, or figuring that some particular bug isn't so important to fix; and knowing about the qualities of Self and having a procedure for getting there is often helpful for putting those kinds of meta-problems to the side.
That said, I do agree that sometimes simulating subagents seems to get in the way; I've had some IFS sessions where I did make progress, but it felt like the process wasn't quite cutting reality at the joints, and I suspect that something like Coherence Therapy might have produced results quicker... and I also agree that
and that the kind of people drawn to rationalism might be extra-likely to want to disavow all their "irrational"-seeming desires!
is a thing. In my IFS training, it was said that "Self-like parts" (parts which pretend to be Self, and which mostly care about making the mind-system stable and bringing it under control) tend to be really strongly attracted to any ideology or system which claims to offer a sense of control. I suspect that many of the people who are drawn to rationalism are indeed driven by a strong part/schema which strongly dislikes uncertainty, and likes the promise of e.g. objectively correct methods of thinking and reasoning that you can just adopt. This would go hand in hand with wanting to reject some of your desires entirely.
Replies from: pjeby, elityre↑ comment by pjeby · 2019-10-14T19:14:02.013Z · LW(p) · GW(p)
I don't think IFS is good reductionism, though. That is, presupposing subagents in general is not a reduction in complexity from "you're an agent". That's not actually reducing anything! It's just multiplying entities contra Occam.
Now, if IFS said, "these are specific subagents that basically everyone has, which bias us towards learning specific types of evolutionarily advantaged behavior", then that would actually be a reduction.
If IFS said, "brains have modules for these types of mental behavior", (e.g. hiding, firefighting, etc.), then that would also be a reduction.
But dividing people into lots of mini-people isn't a reduction.
The way I reduce the same landscape of things is to group functional categories of mental behavior as standard modules, and treat the specific things people are reacting to and the actual behaviors as data those modules operate on. This model doesn't require any sort of agency, because it's just rules and triggers. (And things like "critical voices" are just triggered mental behavior or memories, not actual agents, which is why they can often be disrupted by changing the way they sound -- e.g. making them seductive or squeaky -- while keeping the content the same. If there were an "agent" desiring to criticize, this technique wouldn't make any sense.)
As for compassion, the equivalent in what I'm doing would be the connect stage in collect/connect/correct:
- "Collect" is getting information about the problem, determining a specific trigger and automatic emotional response (the thing that we will test post-reconsolidation to ensure we got it)
- "Connect" is surfacing the inner experience and memory or belief that drives the response, either as a prediction or learned evaluation
- "Correct" is the reconsolidation part: establishing contradiction and generating new predictions, before checking if the automatic response from "Collect" changed
All of these require one to be able to objectively observe and communicate inner experience without any meta-level processing (e.g. judging, objecting, explaining, justifying, etc.), but compassion towards a "part" is not really necessary for that, just that one suppress commentary. (There are some specific techniques that involve compassion in the "Correct" phase, but that's really part of creating a contradiction, not part of eliciting information.)
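If it helps, here is a purely structural sketch of that loop - my own illustration, with placeholder strings standing in for what are really human conversational steps:

```python
# Purely structural sketch of collect/connect/correct (my own framing;
# the strings are hypothetical placeholders, not a real protocol script).

def collect() -> tuple[str, str]:
    trigger = "being asked about my progress"   # specific cue
    response = "spike of anxiety"               # automatic emotional response
    return trigger, response

def connect(trigger: str) -> str:
    # Surface the prediction or learned evaluation driving the response.
    return f"if {trigger}, then I will be judged inadequate"

def correct(belief: str) -> str:
    # Establish a contradiction and generate new predictions.
    counterexample = "times I reported progress and was met with interest"
    return f"'{belief}' vs. '{counterexample}'"

trigger, old_response = collect()
contradiction = correct(connect(trigger))
print(contradiction)
# Post-test: re-present the trigger from collect() and check whether the
# automatic response recorded there has changed.
```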
With respect to trauma and DID, I will only say that again, the subagent model is not reduction, because it doesn't break things down into simpler elements. (In contrast, the concept of state-specific memory is a simpler element that can be used in modeling trauma and DID.)
Replies from: Kaj_Sotala, Kaj_Sotala↑ comment by Kaj_Sotala · 2019-10-14T20:41:43.003Z · LW(p) · GW(p)
(adding to my other comment)
dividing people into lots of mini-people isn't a reduction.
And like, the post you're responding to just spent several thousand words building up a version of IFS which explicitly doesn't have "mini-people" and where the subagents are much closer to something like reinforcement learning agents which just try to prevent/achieve something by sending different objects to consciousness, and learn based on their success in doing so...
Replies from: pjeby↑ comment by pjeby · 2019-10-15T19:31:58.902Z · LW(p) · GW(p)
And like, the post you're responding to just spent several thousand words building up a version of IFS
The presented model of Exiles, Managers, Firefighters, etc. all describes "parts" doing things, but the same ideas can be expressed without using the idea of "parts", which makes that idea redundant.
For example, here is a simpler description of the same categories of behavior:
Everyone experiences things that are so painful, we never want to experience them again, even as a possibility. Since we're not willing to experience them, behaviors that allow us to keep those experiences from consciousness are negatively reinforced. What gets reinforced varies depending on our previous experience, but typically we will learn to deny, deflect, rationalize, distract, or come up with long term goals (e.g. "I will be so perfect that nobody will ever reject me again") in order to avoid the painful experience being even a theoretical possibility.
Voila! The same three things (Exile, Firefighter, Manager), described in less text and without the need for a concept of "parts". I'm not saying this model is right and the IFS model is wrong, just that IFS isn't very good at reductionism and fails Occam's razor because it literally multiplies entities beyond necessity.
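In fact, the description above is simple enough to run as a toy reinforcement loop. This is just my own illustration - the behavior names, efficacies, and learning rate are all invented - but it shows how one learning rule can generate the whole repertoire without any parts:

```python
import random

# Toy sketch: one learning rule that reinforces whatever most reliably keeps
# a painful experience out of consciousness. All names and numbers invented.
EFFICACY = {"deny": 0.9, "deflect": 0.6, "rationalize": 0.5,
            "distract": 0.7, "pursue_perfection": 0.8}
values = {b: 0.0 for b in EFFICACY}  # learned value of each behavior

for episode in range(1000):
    # Mostly exploit the highest-valued behavior, explore occasionally.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    pain_before = random.uniform(0.5, 1.0)     # memory threatens to surface
    pain_after = pain_before * (1 - EFFICACY[action])
    relief = pain_before - pain_after          # relief is the reinforcer
    values[action] += 0.1 * (relief - values[action])

# Whichever behavior blocks the experience best becomes the habitual strategy.
print(max(values, key=values.get))
```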
From this discussion and the one on reconsolidation, I would hazard a guess that to the extent IFS is more useful than some non-parts-based (non-partisan?) approach, it is because one's treatment of the "parts" (e.g. with compassion) can potentially trigger a contradiction and therefore reconsolidation. (I would hypothesize, though, that in most cases this is a considerably less efficient way to do it than directly going after the actual reconsolidation.)
Also, as I mentioned earlier, there are times when the UTE (thing we're Unwilling To Experience) is better kept conceptually dissociated rather than brought into the open, and in such a case the model of "parts" is a useful therapeutic metaphor.
But "therapeutic metaphor" and "reductionist model" are not the same thing. IFS has a useful metaphor -- in some contexts -- but AFAICT it is not a very good model of behavior, in the reductionist sense of modeling.
a version of IFS which explicitly doesn't have "mini-people" and where the subagents are much closer to something like reinforcement learning agents which just try to prevent/achieve something by sending different objects to consciousness, and learn based on their success in doing so...
If I try to steelman this argument, I have to taboo "agent", since otherwise the definition of subagent is recursive and non-reductionistic. I can taboo it to "thing", in which case I get "things which just try to prevent/achieve something", and now I have to figure out how to reduce "try"... is that try iteratively? When do they try? How do they know what to try?
As far as I can tell, the answers to all the important questions for actual understanding are pure handwavium here. And the numerical argument still stands, since new "things" are proposed for each group of things to prevent or achieve, rather than (say) a single "thing" whose purpose is to "prevent" other things, and one whose purpose is to "achieve" them.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2019-10-17T12:54:31.368Z · LW(p) · GW(p)
Voila! The same three things (Exile, Firefighter, Manager), described in less text and without the need for a concept of "parts".
If it was just that brief description, then sure, the parts metaphor would be unnecessary. But the IFS model contains all kinds of additional predictions and applications which make further use of those concepts.
For example, firefighters are called that because "they are willing to let the house burn down to contain the fire"; that is, when they are triggered, they typically act to make the pain stop, without any regard for consequences (such as loss of social standing). At the same time, managers tend to be terrified of exactly the kind of lack of control that's involved with a typical firefighter response. This makes firefighters and managers typically polarized - mutually opposed - with each other.
Now, it's true that you don't need to use the "part" expression for explaining this. But if we only talked about various behaviors getting reinforced, we wouldn't predict that the system simultaneously considers a loss of social standing to be a bad thing, and that it also keeps reinforcing behaviors which cause exactly that thing. Now, obviously it can still be explained in a more sophisticated reinforcement model, in which you talk about e.g. differing prioritizations in different situations, and some behavioral routines kicking in under different circumstances...
...but if at the end, this comes down to there being two distinct kinds of responses depending on whether you are trying to avoid a situation or are already in it, then you need names for those two categories anyway. So why not go with "manager" and "firefighter" while you're at it?
And sure, you could call it, say, "a response pattern" instead of "part" - but the response pattern is still physically instantiated in some collection of neurons, so it's not like "part" would be any less correct, or worse at reductionism. Either way, you still get a useful model of how those patterns interact to cause different kinds of behavior.
From this discussion and the one on reconsolidation, I would hazard a guess that to the extent IFS is more useful than some non-parts-based (non-partisan?) approach, it is because one's treatment of the "parts" (e.g. with compassion) can potentially trigger a contradiction and therefore reconsolidation. [...] But "therapeutic metaphor" and "reductionist model" are not the same thing. IFS has a useful metaphor -- in some contexts -- but AFAICT it is not a very good model of behavior, in the reductionist sense of modeling.
I agree that the practical usefulness of IFS is distinct from the question of whether it's a good model of behavior.
That said, if we are also discussing the benefits of IFS as a therapeutic method, then what you said is one aspect of what I think makes it powerful. Another is its conception of Self and unblending from parts.
I have had situations where for instance, several conflicting thoughts are going around my head, and identifying with all of them at the same time feels like I'm being torn into several different directions. However, then I have been able to unblend from each part, go into Self, and experience myself as listening to the concerns of the parts while being personally in Self; in some situations, I have been able to facilitate a dialogue and then feel fine.
IFS also has the general thing of "fostering Self-Leadership", where parts are gradually convinced to remain slightly on the side as advisors, while keeping Self in control of things at all times. The narrative is something like, this can only happen if the Self is willing to take the concerns of _all_ the parts into account. The system learns to increasingly give the Self leadership, not because they would agree that the Self’s values would be better than theirs, but because they come to trust the Self as a leader which does its best to fulfill the values of all the parts. And this trust is only possible because the Self is the only part of the system which doesn’t have its own agenda, except for making sure that every part gets what it wants.
This is further facilitated by there being distinctive qualities of being in Self, and IFS users developing a "parts detector" which lets them notice when parts have been triggered, helping them unblend and return back to Self.
I'm not saying that you couldn't express unblending in a non-partisan way. But I'm not sure how you would use it if you didn't take the frame of parts and unblending from them. To be more explicit, by "use it" here I mean "be able to notice when you have been emotionally triggered, and then get some distance from that emotional reaction in the very moment when you are triggered, being able to see the belief in the underlying schema but neither needing to buy into it nor needing to reject it".
(But of course, as you said, this is a digression to whether IFS is a useful mindhacking tool, which is distinct from the question of whether it's good reductionism.)
If I try to steelman this argument, I have to taboo "agent", since otherwise the definition of subagent is recursive and non-reductionistic. I can taboo it to "thing", in which case I get "things which just try to prevent/achieve something", and now I have to figure out how to reduce "try"...
I said a few words about my initial definition of agent in the sequence introduction [LW · GW]:
One particular family of models that I will be discussing, will be that of multi-agent theories of mind. Here the claim is not that we would literally have multiple personalities. Rather, my approach will be similar in spirit to the one in Subagents Are Not A Metaphor:
Here’s are the parts composing my technical definition of an agent:
1. Values
This could be anything from literally a utility function to highly framing-dependent. Degenerate case: embedded in lookup table from world model to actions.
2. World-Model
Degenerate case: stateless world model consisting of just sense inputs.
3. Search Process
Causal decision theory is a search process. “From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: lookup table from world model to actions.
Note: this says a thermostat is an agent. Not figuratively an agent. Literally technically an agent. Feature not bug.
This is a model that can be applied naturally to a wide range of entities, as seen from the fact that thermostats qualify. And the reason why we tend to automatically think of people - or thermostats - as agents, is that our brains have evolved to naturally model things in terms of this kind of an intentional stance; it’s a way of thought that comes natively to us.
Given that we want to learn to think about humans in a new way, we should look for ways to map the new way of thinking into a native mode of thought. One of my tactics will be to look for parts of the mind that look like they could literally be agents (as in the above technical definition of an agent), so that we can replace our intuitive one-agent model with intuitive multi-agent models without needing to make trade-offs between intuitiveness and truth. This will still be a leaky simplification, but hopefully it will be a more fine-grained leaky simplification, so that overall we’ll be more accurate.
I don't think that the distinction between "agent" and "rule-based process" really cuts reality at joints; an agent is just any set of rules that we can meaningfully model by taking an intentional stance. A thermostat can be called a set of rules which adjusts the heating up when the temperature is below a certain value, and adjusts the heating down when the temperature is above a certain value; or it can be called an agent which tries to maintain a target temperature by adjusting the heating. Both make the same predictions, they're just different ways of describing the same thing.
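As a quick illustration of that equivalence, here's a minimal sketch of my own (the numbers are arbitrary) of the same thermostat written both as bare if-then rules and in the values/world-model/search vocabulary of the technical definition above - the two descriptions make identical predictions:

```python
# The same thermostat, described two ways; both make identical predictions.

def thermostat_rules(temp: float, target: float = 21.0) -> str:
    # Pure if-then description.
    if temp < target:
        return "heat_on"
    return "heat_off"

class ThermostatAgent:
    """Intentional-stance description: values + world model + search."""
    def __init__(self, target: float = 21.0):
        self.target = target  # values: the preferred temperature

    def act(self, sensed_temp: float) -> str:
        world_model = sensed_temp  # degenerate world model: just sense input
        # Degenerate search: pick the action whose predicted outcome
        # moves the world closest to the valued state.
        predicted = {"heat_on": world_model + 1.0, "heat_off": world_model - 1.0}
        return min(predicted, key=lambda a: abs(predicted[a] - self.target))

assert all(
    thermostat_rules(t) == ThermostatAgent().act(t)
    for t in [15.0, 20.0, 25.0]
)
```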
Or as I discussed in "Integrating disagreeing subagents [? · GW]":
The frame that I’ve had so far is that of the brain being composed of different subagents with conflicting beliefs. On the other hand, one could argue that the subagent interpretation isn’t strictly necessary for many of the examples that I bring up in this post. One could just as well view my examples as talking about a single agent with conflicting beliefs.
The distinction between these two frames isn’t always entirely clear. In “Complex Behavior from Simple (Sub)Agents [LW · GW]”, mordinamael presents a toy model where an agent has different goals. Moving to different locations will satisfy the different goals to a varying extent. The agent will generate a list of possible moves and picks the move which will bring some goal the closest to being satisfied.
Is this a unified agent, or one made up of several subagents?
One could argue for either interpretation. On one hand, mordinamael's post frames the goals as subagents, and they are in a sense competing with each other. On the other hand, the subagents arguably don't make the final decision themselves: they just report expected outcomes, and then a central mechanism picks a move based on their reports.
This resembles the neuroscience model I discussed in my last post [? · GW], where different subsystems in the brain submit various action “bids” to the basal ganglia. Various mechanisms then pick a winning bid based on various criteria - such as how relevant the subsystem’s concerns are for the current situation, and how accurate the different subsystems have historically been in their predictions.
Likewise, in extending the model from Consciousness and the Brain [LW · GW] for my toy version of the Internal Family Systems model [LW · GW], I postulated a system where various subagents vote for different objects to become the content of consciousness. In that model, the winner was determined by a system which adjusted the vote weights of the different subagents based on various factors.
So, subagents, or just an agent with different goals?
Here I would draw an analogy to parliamentary decision-making. In a sense, a parliament as a whole is an agent. Various members of parliament cast their votes, with “the voting system” then “making the final choice” based on the votes that have been cast. That reflects the overall judgment of the parliament as a whole. On the other hand, for understanding and predicting how the parliament will actually vote in different situations, it is important to model how the individual MPs influence and broker deals with each other.
Likewise, the subagent frame seems most useful when a person’s goals interact in such a way that applying the intentional stance - thinking in terms of the beliefs and goals of the individual subagents - is useful for modeling the overall interactions of the subagents.
For example, in my toy Internal Family Systems model, I noted that reinforcement learning subagents might end up forming something like alliances. Suppose that a robot has a choice between making cookies, poking its finger at a hot stove, or daydreaming. It has three subagents: “cook” wants the robot to make cookies, “masochist” wants to poke the robot’s finger at the stove, and “safety” wants the robot to not poke its finger at the stove.
By default, “safety” is indifferent between “make cookies” and “daydream”, and might cast its votes at random. But when it votes for “make cookies”, then that tends to avert “poke at stove” more reliably than voting for “daydream” does, as “make cookies” is also being voted for by “cook”. Thus its tendency to vote for “make cookies” in this situation gets reinforced.
We can now apply the intentional stance to this situation, and say that “safety” has "formed an alliance" with “cook”, as it correctly “believes” that this will avert masochistic actions. If the subagents are also aware of each other and can predict each other's actions, then the intentional stance gets even more useful.
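Here's a toy simulation of that alliance dynamic - a sketch of my own with invented numbers, not code from the original toy model:

```python
import random

# Three subagents vote on actions; each subagent's voting tendencies are
# reinforced according to how well the collectively chosen action served
# its concern. Names follow the example above; numbers are invented.
ACTIONS = ["make_cookies", "poke_stove", "daydream"]

prefs = {  # prefs[agent][action]: learned tendency to vote for that action
    "cook":      {"make_cookies": 1.0, "poke_stove": 0.0, "daydream": 0.0},
    "masochist": {"make_cookies": 0.0, "poke_stove": 1.0, "daydream": 0.0},
    "safety":    {"make_cookies": 0.5, "poke_stove": 0.0, "daydream": 0.5},
}

def reward(agent: str, chosen: str) -> float:
    if agent == "cook":
        return 1.0 if chosen == "make_cookies" else 0.0
    if agent == "masochist":
        return 1.0 if chosen == "poke_stove" else 0.0
    return 1.0 if chosen != "poke_stove" else -1.0  # "safety" only opposes poking

for _ in range(2000):
    votes = {
        agent: random.choices(
            ACTIONS, weights=[max(prefs[agent][a], 0.01) for a in ACTIONS]
        )[0]
        for agent in prefs
    }
    counts = {a: sum(v == a for v in votes.values()) for a in ACTIONS}
    top = max(counts.values())
    chosen = random.choice([a for a, c in counts.items() if c == top])
    for agent, vote in votes.items():
        # Reinforce the vote each agent actually cast, based on its own outcome.
        prefs[agent][vote] += 0.05 * (reward(agent, chosen) - prefs[agent][vote])

# "safety" learns to vote for cookies -- the "alliance" with "cook" -- because
# that vote averts "poke_stove" more reliably than voting "daydream" does.
print({a: round(w, 2) for a, w in prefs["safety"].items()})
```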
Of course, we could just as well apply the purely mechanistic explanation and end up with the same predictions. But the intentional explanation often seems easier for humans to reason with, and helps highlight salient considerations.
Replies from: pjeby
↑ comment by pjeby · 2019-10-17T19:11:23.025Z · LW(p) · GW(p)
For example, firefighters are called that because “they are willing to let the house burn down to contain the fire”; that is, when they are triggered, they typically act to make the pain stop, without any regard for consequences (such as loss of social standing). At the same time, managers tend to be terrified of exactly the kind of lack of control that’s involved with a typical firefighter response. This makes firefighters and managers typically polarized - mutually opposed - with each other.
In my experience, this distinction merely looks like normal reinforcement: you can be short-term reinforced to do things that are against your interests in the long-term. This happens with virtually every addictive behavior; in fact, Dodes’ theory of addiction is that people feel better the moment they decide to drink, gamble, etc., and it is that decision that is immediately reinforced, while the downsides of the action are still distant. (Indeed, he notes that people often make that decision hours in advance of the actual behavior.)
If we only talked about various behaviors getting reinforced, we wouldn't predict that the system simultaneously considers a loss of social standing to be a bad thing, and that it also keeps reinforcing behaviors which cause exactly that thing.
On the contrary, contradictions in reinforced behavior are quite normal and expected. Timing and certainty are quite powerful influencers of reinforcement. Also, imitation learning is a thing: we learn from caretakers what to monitor ourselves about and when to punish ourselves... but this has no bearing on what we’ve also been reinforced to actually do. (Think "belief in belief" vs "actual belief". We profess beliefs verbally about what's important that are distinct from what we actually, implicitly value or reward.)
So, you can easily get a person who keeps doing something they think is bad and punish themselves for, because they learned from their parents that punishing themselves was a good thing to do. Not because the punishment has any impact on their actual behavior, but because the act of self-punishing is reinforced, either because it reduces the frequency of outside punishment, or because we have hardware whose job it is to learn what our group punishes, so we can punish everyone else for it.
Anyway, it sounds like you think reinforcement has to have some kind of global coherence, but state-dependent memory and context-specific conditioning show that reinforcement learning doesn't have any notion of global coherence. If you were designing a machine to act like a human, you might try to build in such coherence, but evolution isn’t required to be coherent. (Indeed, reconsolidation theory shows that coherence testing can only happen with local contradiction, as there's no global automatic system checking for contradictions!)
you need names for those two categories anyway. So why not go with “manager” and “firefighter” while you’re at it? And sure, you could call it, say, “a response pattern” instead of “part” - but the response pattern is still physically instantiated in some collection of neurons, so it’s not like “part” would be any less correct, or worse at reductionism.
Because the categories are not two classes of things, but two classes of behavior. If we assume the brain has machinery for them, it is more parsimonious to assume that the brain has two modules or modes that bias behavior in a particular direction based on a specific class of stimuli, with the specific triggers being mediated through the general-purpose learning machinery of the cortex. To assume that there is dedicated neural machinery for each instance of these patterns is not consistent with the ability to wipe them out via reconsolidation.
That is, I'm pretty sure you can't wipe out physical skills or procedural memory by trivial reconsolidation, but these other types of behavior pattern can be. That suggests that there is not individual hardwired machinery for each instance of a "part" in the IFS model, such that parts do not have physically dedicated storage or true parallel operation like motor skills have.
Compare to, say, Satir's parts model, where the parts were generic roles like Blamer, Placater, etc. We can easily imagine dedicated machinery evolved to perform these functions (and present in other species besides us), with the selection criteria and behavioral details being individually learned. In such a model, one only needs one "manager" module, one "firefighter" module, and so on, to the extent that the behaviors are actually an evolved pattern and not merely an emergent property of reinforced behavior.
I personally believe we have dedicated systems for punishing, protesting, idealistic virtue signalling, ego defense, and so on. These are not "parts" in the sense that they weren't grown to cope with specific situations, but are more like mental muscles that sit there waiting to be trained as to when and how to act -- much like the primate circuitry for learning whether to fear snakes, and which ones, and how to respond to their presence.
An important difference, then, between a model that treats parts as real, vs one that treats "parts" as triggers wired to pre-existing mental muscles, is that in the mental muscles model, you cannot universally prevent that pattern from occurring. There is always a part of you ready to punish or protest, to virtue signal or ego-defend. In addition, it is not possible in such a model for that muscle to ever learn to do something else. All you can do is learn to not use that muscle, and use another one instead!
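To show the shape of the model I mean, here's a minimal sketch - the module names follow the examples above, everything else is invented - where the modules are fixed and only the learned triggers can change:

```python
# Toy sketch: fixed, unmodifiable modules ("mental muscles") plus individually
# learned triggers. Only the triggers can be relearned (e.g. via
# reconsolidation); the modules themselves persist.

MODULES = {
    "signal_distress": lambda ctx: f"express helplessness about {ctx}",
    "punish":          lambda ctx: f"criticize whoever is responsible for {ctx}",
    "autonomy":        lambda ctx: f"take a concrete step on {ctx}",
}

# Learned rules: "when this, then that". These are the only mutable part.
triggers = [
    ("deadline_approaching", "signal_distress"),
    ("made_a_mistake", "punish"),
]

def respond(situation: str, context: str) -> str:
    for condition, module in triggers:
        if condition == situation:
            return MODULES[module](context)
    return MODULES["autonomy"](context)  # default when nothing fires

print(respond("deadline_approaching", "the report"))
# A reconsolidation-style change rewrites the trigger, not the module:
triggers[0] = ("deadline_approaching", "autonomy")
print(respond("deadline_approaching", "the report"))
```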
This distinction alone is huge when you look at IFS' Exiles. If you have an "exile" that is struggling to be capable of doing something, but only knows how to be in distress, it's helpful to realize that it's just the built-in mental muscle of "seeking care via distress", and that it will never be capable of doing anything else. It's not the distressed "part"'s job to do things or be capable of things, and never was. That's the job of the "everyday self" -- the set of mental muscles for actual autonomy and action. But as long as someone's reinforced pattern is to activate the "distress" muscle, then they will feel horrible and helpless and not able to do anything about it.
Resolving this challenge doesn’t require that one “fix” or “heal” a specific “part”, and this is actually a situation where it’s therapeutically helpful to realize there is no such thing as a “part”, and therefore nothing to be healed or fixed! Signaling distress is just something brains do, and it’s not possible for the part of your brain that signals distress to do anything else. You have to use a different part of the brain to do anything else.
The same thing goes for inner criticism: thinking of it as originating from a "part" suggests the idea that perhaps one can somehow placate this part to make it stop criticizing, when it is in fact just triggering the mental muscle of social punishment, aimed at one's self. The hardware for criticizing and put-downs will always be there, and can't be gotten rid of. But one can reconsolidate the memories that tell it who's an acceptable target! (And as a side effect, you'll become less critical of people doing things similar to you, and less triggered by the behavior in others. Increased compassion comes about automatically, not as a practiced, "fake-it-till-you-make-it" process!)
I’m not saying that you couldn’t express unblending in a non-partisan way. But I’m not sure how you would use it if you didn’t take the frame of parts and unblending from them. To be more explicit, by “use it” here I mean “be able to notice when you have been emotionally triggered, and then get some distance from that emotional reaction in the very moment when you are triggered, being able to see the belief in the underlying schema but neither needing to buy into it nor needing to reject it”.
I think I've just presented such an expression. Unblending doesn't require that you have an individual part for every possible occurrence of behavior, only that you realize that your brain has dedicated machinery for specific classes of behavior. Indeed, I think this is a cleaner way to unblend, since it does not lend itself to stereotyped thoughts of agent-like behavior, such as trying to make an exile feel better or convince a manager you have things under control. It's validating to realize that as long as you are using the mental muscles of distress or self-punishment or self-promotion to try to accomplish something, it never would have worked, because those muscles do not do anything except the preprogrammed thing they do.
When you try to negotiate with parts, you're playacting a complicated way to do something that's much simpler, and hoping that you'll hit the right combination of stimuli to accidentally accomplish a reconsolidation you could've targeted directly in a lot less time.
In IFS, you're basically trying to provide new models of effective caretaker behavior, in the hope that the person's brain figures out what rules this new behavior contradicts, and then reconsolidate. But if you directly reconsolidate the situationally-relevant memories of their actual caretaker's behaviors, you can create an immediate change in how it feels to be one's self, instead of painstakingly building up a set of rote behaviors and trying to make them feel natural.
I don't think that the distinction between "agent" and "rule-based process" really cuts reality at joints; an agent is just any set of rules that we can meaningfully model by taking an intentional stance
Except that if you actually want to predict how a thermostat behaves, using the brain's built-in model of "thing with an intentional stance" makes your model worse. If you model the thermostat as a "thing that 'wants' the house at a certain temperature", then you'll be confused when somebody sticks an ice cube or a teapot underneath it, or when the temperature sensor breaks.
That's why the IFS model is bad reductionism: calling things agents brings in connotations that are detrimental to its use as an actual predictive model. To the extent that IFS works, it works through accidental side-effects of the therapeutic behavior, rather than by directly targeting reconsolidation of the underlying rules.
For example, when you try to do "self-leadership", what you're doing is trying to model that behavior through practice while counter-reinforcement is still in place. It's far more efficient to delete the rules that trigger conflicting behavior before you try to learn self-leadership, so that you aren't fighting your reinforced behaviors to do so.
So, at least in my experience, the failure of IFS to carve reality at the actual joint of "preprogrammed modules + individually-learned triggers" makes it more complex, more time-consuming, less effective, and more likely to have unintended side-effects than approaches that directly target the "individually-learned triggers".
In my own approach, rather than nominalizing behavior into parts, I try to elicit rules -- "when this, then that" -- and then go after emotionally significant examples, to break down implicit predictions and evaluations in the memory.
For example: my mother yelling at me that I need to take things seriously, when I was being too relaxed (by her standards) about something important I needed to do. Though not explicitly stated, my mother's presuppositions in this memory are that "taking things seriously" requires being stressed about them, and further, that if you don't do this, then you won't accomplish your goal or be a responsible person - because obviously, if you cared at all, you would be freaked out.
To reconsolidate, I first establish that these things aren't actually true, and then predict a mother who realizes these things aren't true, and ask how she would have behaved if she didn't believe those things. In my imagination, I realize that she probably would've told me she wanted me to work on the thing for an hour a day before dinner, and that she wanted me to show her what I did, so she can track my progress. Then I imagine how my life would've been different, growing up with that example.
Boom! Reconsolidation. My whole outlook on accomplishing long-term goals changes instantly from "be stressed until it's done, or else you're a bad person" to "work on it regularly and keep an eye on your progress". I don't have to practice "self-leadership", because I now feel different when I think about long-term goals than I did before. Instead of triggering the less-useful muscles of self-punishment, the ones I need are triggered instead.
But if I had tried to model the above pattern as parts... I'm not sure how that would have gone. Probably would've made little progress trying to persuade a "manager" based on my mother to act differently if I couldn't surface the assumptions involved, because any solution that didn't involve me being stressed would mean I was a bad person.
Sure, in the case of IFS, we can assume that it's the therapist's job to be aware of these things and surface the assumptions. But that makes the process dependent on the experiences (and assumptions!) of the therapist... and presumably, a sufficiently-good therapist could use any modality and still get the result they're after, eventually. So what is IFS adding in that case?
Further, when compared to reconsolidation targeting specific schemas, the IFS process is really indirect. You're trying to get the brain to learn a new implicit pattern alongside a broken one, hoping the new example(s) won't simply be filtered into meaninglessness or non-existence when processed through the existing schemas. In contrast, direct reconsolidation goes directly to the source of the issue, and replaces the old implicit pattern with a new one, rather than just giving examples and hoping the brain picks up on the pattern.
(Also notice that in practice, a lot of things IFS calls "parts" as if they were aspects of the client, are in fact mental models of other people, i.e. "what would mom or dad do in this situation?", as a proxy for "what should I do in this situation?". Changing the model of what the other people would do or should have done then immediately changes one's sense of what "I" should do also.)
Anyway, the main part of IFS that I have found useful is merely knowing which behaviors are a good idea for caregivers to exemplify, as this is valuable in knowing what parts of one's schemas are broken and what they should be changed to. But the actual process of changing them in IFS is really suboptimal compared to directly targeting those schema... which is more evidence suggesting that IFS as a theory is incorrect, in spite of its successes.
Replies from: Kaj_Sotala, elityre, elityre↑ comment by Kaj_Sotala · 2019-10-20T12:13:23.046Z · LW(p) · GW(p)
The content of this and the other comment thread [LW(p) · GW(p)] seems to be overlapping, so I'll consolidate (pun intended) my responses to this one. Before we go on, let me check that I've correctly understood what I take to be your points.
Does the following seem like a fair summary of what you are saying?
Re: IFS as a reductionist model:
- Good reductionism involves breaking down complex things into simpler parts. IFS "breaks down" behavior into mini-people inside our heads, each mini-person being equally complex as a full psyche. This isn't simplifying anything.
- Talking about subagents/parts or using intentional language causes people to assign things properties that they actually don't have. If you say that a thermostat "wants" the temperature to be something in particular, or that a part "wants" to keep you safe, then you will predict its behavior to be more flexible and strategic than it really is.
- The real mechanisms behind emotional issues aren't really doing anything agentic, such as strategically planning ahead for the purpose of achieving a goal. Rather they are relatively simple rules which are used to trigger built-in subsystems that have evolved to run particular kinds of action patterns (punishing, protesting, idealistic virtue signalling, etc.). The various rules in question are built up / selected for using different reinforcement learning mechanisms, and define when the subsystems should be activated (in response to what kind of a cue) and how (e.g. who should be the target of the punishing).
- Reinforcement learning does not need to have global coherence. Seemingly contradictory behaviors can be explained by e.g. a particular action being externally reinforced or becoming self-reinforcing, all the while it causes globally negative consequences despite being locally positive.
- On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
- The assumption of dedicated hardware for each instance of an action pattern is multiplying entities beyond necessity. The kinds of reinforcement learning systems that have been described can generate the same kinds of behaviors with much less dedicated hardware. You just need the learning systems, which then learn rules for when and how to trigger a much smaller number of dedicated subsystems.
- The assumption of dedicated hardware for each instance of an action pattern also contradicts the reconsolidation model, because if each part was a piece of built-in hardware, then you couldn't just entirely change its behavior through changing earlier learning.
- Everything in IFS could be described more simply in terms of if-then rules, reinforcement learning etc.; if you do this, you don't need the metaphor of "parts", and you also have a more correct model which does actual reduction to simpler components.
Re: the practical usefulness of IFS as a therapeutic approach:
- Approaching things from an IFS framework can be useful when working with clients with severe trauma, or other cases when the client is not ready/willing to directly deal with some of their material. However, outside that context (and even within it), IFS has a number of issues which make it much less effective than a non-parts-based approach.
- Thinking about experiences like "being in distress" or "inner criticism" as parts that can be changed suggests that one could somehow completely eliminate those. But while triggers to pre-existing brain systems can be eliminated or changed, those brain systems themselves cannot. This means that it's useless to try to get rid of such experiences entirely. One should rather focus on the memories which shape the rules that activate such systems.
- Knowing this also makes it easier to unblend, because you understand that what is activated is a more general subsystem, rather than a very specific part.
- If you experience your actions and behaviors being caused by subagents with their own desires, you will feel less in control of your life and more at the mercy of your subagents. This is a nice crutch for people with denial issues who want to disclaim their own desires, but not a framework that would enable you to actually have more control over your life.
- "Negotiating with parts" buys into the above denial, and has you do playacting inside your head without really getting into the memories which created the schemas in the first place. If you knew about reconsolidation, you could just target the memories directly, and bypass all of the extra hassle.
- "Developing self-leadership" involves practicing a desired behavior so that it could override an old one; this is what Unlocking the Emotional Brain calls a counteractive strategy, and is fragile in all the ways that UtEB describes. It would be much more effective to just use a reconsolidation-based approach.
- IFS makes it hard to surface the assumptions behind behavior, because one is stuck in the frame of negotiating with mini-people inside one's head, rather than looking at the underlying memories and assumptions. Possibly an experienced IFS therapist can help look for those assumptions, but then one might as well use a non-parts-based framework.
- Even when the therapist does know what to look for, the fact that IFS does not have a direct model of evidence and counterevidence makes it hard to find the interventions which will actually trigger reconsolidation. Rather one just acts out various behaviors which may trigger reconsolidation if they happen to hit the right pattern.
- Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model [LW(p) · GW(p)], and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.
Replies from: pjeby
↑ comment by pjeby · 2019-10-20T22:48:40.276Z · LW(p) · GW(p)
Excellent summary! There are a couple of areas where you may have slightly over-stated my claims, though:
IFS "breaks down" behavior into mini-people inside our heads, each mini-person being equally complex as a full psyche.
I wouldn't say that IFS claims each mini-person is equally complex, only that the reduction here is just a separation of goals or concerns, and does not reduce the complexity of having agency. And this is particularly important because it is the elimination of the idea of smart or strategic agency that allows one to actually debug brains.
Compare to programming: when writing a program, one intends for it to behave in a certain way. Yet bugs exist, because the mapping of intention to actual rules for behavior is occasionally incomplete or incorrectly matched to the situation in which the program operates.
But, so long as the programmer thinks of the program as acting according to the programmer's intention (as opposed to whatever the programmer actually wrote), it is hard for that programmer to actually debug the program. Debugging requires the programmer to discard any mental models of what the program is "supposed to" do, in order to observe what the program is actually doing... which might be quite wrong and/or stupid.
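A trivial, purely hypothetical example of the kind of gap I mean:

```python
# Intention: charge a 10% late fee only on overdue invoices.
def total_due(amount, days_overdue):
    if days_overdue >= 0:      # bug: should be "> 0", so invoices that are
        amount *= 1.10         # exactly on time get the fee as well
    return round(amount, 2)

print(total_due(100, 0))  # the programmer expects 100, but gets 110.0
```

Debugging starts the moment you stop reading the condition as "overdue invoices get the fee" (the intention) and read it as what it literally says.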
In the same way, I believe that ascribing "agency" to subsets of human behavior is a similar instance of being blinded by an abstraction that doesn't match the actual thing. We're made up of lots of code, and our problems can be considered bugs in the code... even if the behavior the code produces was "working as intended" when it was written. ;-)
On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
I don't claim that IFS assumes dedicated per-instance hardware; but it seems kind of implied. My understanding is that IFS at least assumes that parts are agents that 1) do things, 2) can be conversed with as if they were sentient, and 3) can be reasoned or negotiated with. That's more than enough to view it as not reducing "agency".
But the article that we are having this discussion on does try to model a system with dedicated agents actually existing (whether in hardware or software), so at least that model is introducing dedicated entities beyond necessity. ;)
Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.
Technically, it's possible to change people without intentionally using reconsolidation, or any technique that works by directly attempting it. It happens by accident all the time, after all!
And it's quite possible for an IFS therapist to notice the filtering or distortions taking place, if they're skilled and paying attention. Presumably, they would assign it to a part and then engage in negotiation or an attempt to "heal" said part, which then might or might not result in reconsolidation.
So I'm not claiming that IFS can't work in such cases, only that to work, it requires an observant therapist. But such a good therapist could probably get results with any therapy model that gave them sufficient freedom to notice and address the issue, no matter what terminology was used to describe the issue, or the method of addressing it.
As the authors of UTEB put it:
Transformational change of the kind addressed here—the true disappearance of long-standing, distressing emotional learning—of course occurs at times in all sorts of psychotherapies that involve no design or intention to implement the transformation sequence by creating juxtaposition experiences.
After all, reconsolidation isn't some super-secret special hack or unintended brain exploit, it's how the brain normally updates its predictive models, and it's supposed to happen automatically. It's just that once a model pushes the prior probability of something high (or low) enough, your brain starts throwing out each instance of a conflicting event, even if considered collectively they would be reason to make a major update in the probability.
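In odds form, with completely invented numbers, that failure mode looks something like this:

```python
prior_odds = 99.0        # e.g. 99:1 odds on some entrenched belief
likelihood_ratio = 0.5   # each conflicting event, taken alone, is weak evidence
events = 20

# What a calibrated update would do: the weak evidence compounds.
print(prior_odds * likelihood_ratio ** events)  # ~0.0001 -- the belief should flip

# What the filtering brain does: judge each event against the prior alone,
# notice it isn't decisive, and throw it away.
odds = prior_odds
for _ in range(events):
    if odds * likelihood_ratio > 1.0:  # "still obviously true; ignore this one"
        continue
    odds *= likelihood_ratio           # never reached while the prior dominates
print(odds)  # still 99.0 -- no update ever happens
```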
Replies from: Kaj_Sotala, Kaj_Sotala
↑ comment by Kaj_Sotala · 2019-10-29T15:01:19.778Z · LW(p) · GW(p)
Here's my reply [LW · GW]! Got article-length, so I posted it separately.
↑ comment by Kaj_Sotala · 2019-10-22T08:44:32.443Z · LW(p) · GW(p)
Thanks for the clarifications! I'll get back to you with my responses soon-ish.
↑ comment by Eli Tyre (elityre) · 2019-10-18T23:05:33.659Z · LW(p) · GW(p)
This is a great comment, and I'm glad you wrote it. I'm rereading it several times over to try and get a handle on everything that you're saying here.
In particular, I really like the "muscle" vs. "part" distinction. I've been pondering lately, when I should just squash an urge or desire, and when I should dialogue with it, and this distinction brings some things into focus.
I have some clarifying questions though:
For example, when you try to do "self-leadership", what you're doing is trying to model that behavior through practice while counter-reinforcement is still in place. It's far more efficient to delete the rules that trigger conflicting behavior before you try to learn self-leadership, so that you aren't fighting your reinforced behaviors to do so.
I don't know what you mean by this at all. Can you give (or maybe point to) an example?
---
But if I had tried to model the above pattern as parts... I'm not sure how that would have gone. Probably would've made little progress trying to persuade a "manager" based on my mother to act differently if I couldn't surface the assumptions involved, because any solution that didn't involve me being stressed would mean I was a bad person.
Sure, in the case of IFS, we can assume that it's the therapist's job to be aware of these things and surface the assumptions. But that makes the process dependent on the experiences (and assumptions!) of the therapist... and presumably, a sufficiently-good therapist could use any modality and still get the result they're after, eventually. So what is IFS adding in that case?
This is fascinating. When I read your stressing out example, my thought was basically "wow. It seems crazy-difficult to surface the core underlying assumptions".
But you think that this is harder, in the IFS framework. That is amazing, and I want to know more.
In practice, how do you go about eliciting the rules and then emotionally significant instances?
Maybe in the context of this example, how do you get from "I seem to be overly stressed about stuff" to the memory of your mother yelling at you?
---
You're trying to get the brain to learn a new implicit pattern alongside a broken one, hoping the new example(s) won't simply be filtered into meaninglessness or non-existence when processed through the existing schemas. In contrast, direct reconsolidation goes directly to the source of the issue, and replaces the old implicit pattern with a new one, rather than just giving examples and hoping the brain picks up on the pattern.
I'm trying to visualize someone doing IFS or IDC, and connect it to what you're saying here, but so far, I don't get it.
What are the "examples"? Instances that are counter to the rule / schema of some part? (e.g. some part of me believes that if I ever change my mind about something important, then no one will love me, so I come up with an example of when this isn't or wasn't true?)
---
but state-dependent memory and context-specific conditioning show that reinforcement learning doesn't have any notion of global coherence.
Given that, doesn't it make sense to break down the different parts of an RL policy into parts? If different parts of a policy are acting at cross purposes, it seems like it is useful to say "part 1 is doing X-action, and part 2 is doing Y-action."
...But you would say that it is even better to say "this system, as a whole is doing both X-action, and Y-action"?
Replies from: pjeby
↑ comment by pjeby · 2019-10-19T03:42:01.655Z · LW(p) · GW(p)
I don't know what you mean by this at all. Can you give (or maybe point to) an example?
So, let's take the example of my mother stressing over deadlines. Until I reconsolidated that belief structure... or hell, since UTEB seems to call it a "schema", let's just call it that. I had a schema that said I needed to be stressed out if the goal was serious. I wasn't aware of that, though: it just seemed like "serious projects are super stressful and I never know what to do", except wail and grind my teeth (figuratively speaking) until stuff gets done.
Now, I was aware I was stressed, and knew this wasn't helpful, so I did all sorts of things to calm down. People (like my wife) would tell me everything was fine, I was doing great, go easier/don't be so hard on yourself, etc. I would try practicing self-compassion, but it didn't do anything, except maybe momentarily, because structurally, being not-stressed was incompatible with my schema.
In fact, a rather weird thing happened: the more I managed to let go of judgments I had about how well I was doing, and the better I got at being self-compassionate, the worse I felt. It wasn't the same kind of stress, but it was actually worse, despite being differently flavored. It was like, "you're not taking this seriously enough" (and implicitly, "you're an awful person").
As it happened, the reason I got better at self-compassion was not because I was practicing it as a mode of operation, but because I used my own mindhacking methods to remove the reasons I had for self-judgment. In truth, over the last decade or two I have tried a ridiculous number of self-help and/or therapist-designed exercises intended to send love or compassion to parts or past selves or inner children etc., and what they all had in common was that they almost never clicked for me... and the few times they did, I ended up developing alternative techniques to produce the same kind of result without trying to fake the love, compassion, or care that almost never felt real to me.
In retrospect, it's easy to see that the reason those particular things clicked is that in trying to understand the perspective from which the exercise(s) were written, I stumbled on contradictions to my existing schema, and thus fixed another way in which I was judging myself (and thus unable to have self-compassion).
Anyway, my point is that most counteractive interventions (to use the term from UTEB) involve a therapist modeling (and coaching the client to enact) helpful carer behavior. If the client's problem is merely that they aren't familiar with that type of behavior, then this is merely adding a new skill to their repertoire, and might work nicely.
But, if the person comes from a background where they not only didn't receive proper care, but were actively taught, say, that they were not worth being cared for, that they were bad or selfish for having normal human needs, etc., then this type of training will be counterproductive, because it goes against the client's schemas, where being good and safe means repressing needs, judging themselves, etc.
As a result, their schema creates either negative reinforcement or neutralizing strategies. They don't do their assignments, they stop coming to therapy. Or they develop ways to neutralize the contradiction between the schema and the new experience, e.g. by defining it as "unreal", "you're being nice because that's your job", etc.
Or, there's the neutralizing strategy I used for many years, which was to frame things in my head as, "okay, so I'm going to be nice to my weak self so that it can shape up and do what it's supposed to now". (This one has been popular with some of my clients, too, as it allows you to keep punishing and diminishing yourself in the way you're used to, while technically still completing the exercises you're supposed to!)
So these are things that traditional therapists call all sorts of things, like transference and resistance and so on. But these are basically ways to say in effect, "the therapy is working but the client isn't".
This is fascinating. When I read your stressing out example, my thought was basically "wow. It seems crazy-difficult to surface the core underlying assumptions".
But you think that this is harder, in the IFS framework. That is amazing, and I want to know more.
In practice, how do you go about eliciting the rules and then emotionally significant instances?
Maybe in the context of this example, how do you get from "I seem to be overly stressed about stuff" to the memory of your mother yelling at you?
The overall framework I call "Collect, Connect, Correct", and it's surprisingly similar to the "ABC123V" framework described in UTEB. (Actually, I guess it shouldn't be that surprising, since the results they describe from their framework are quite similar to the kind I get.)
In the first stage, I collect information about the when/where/how of the problem, and attempt to pin down a repeatable emotional response, i.e. think about X, get emotional reaction Y. If it's not repeatable, it's not testable, which makes things a lot harder.
In the case of being stressed, the way that I got there was that I was lying down one afternoon, trying to take a nap and not being able to relax. When I'd think of trying to let go and actually sleep, I kept thinking something along the lines of, "I should be doing something, not relaxing".
A side note: my description of this isn't going to be terribly reliable, due to the phenomenon I call "change amnesia" (which UTEB alludes to in case studies, but doesn't give a name, at least in the chapters I've read so far). Change amnesia is something that happens when you alter a schema. The meanings that you used to ascribe to things stop making sense, and as a result it's hard to get your mind into the same mindset you used to have, even if it was something you were thinking just minutes before making the change!
So, despite the fact I still remember lying there and trying to go to sleep (as the UTEB authors note, autobiographical memory of events isn't affected, just the meanings associated with them), I am having trouble reconstructing the mindset I was in, because once I changed the underlying schema, that mindset became alien to me.
Anyway, what I do remember was that I had identified a surface level idea. It was probably something like, "I should be doing something", but because those words don't make me feel the sense of urgency they did before, it's hard to know if I am correctly recalling the exact statement.
But I do remember that the statement was sufficiently well-formed to use The Work on. The Work is a simple process for actually performing reconsolidation, the "123" part of UTEB's ABC123V framework, or the "Correct" in my Collect-Connect-Correct framework.
But when I got to question 4 of the Work, there was an objection raised in my mind. I was imagining not thinking I should be doing something (or whatever the exact statement was), and got a bad feeling or perhaps a feeling that it wasn't realistic, something of that sort. A reservation or hesitation in this step of the Work corresponds to what UTEB describes as an objection from another schema, and as with their method, so too does mine call for switching to eliciting the newly-discovered schema, instead of continuing with the current one.
So at either that level, or the next level up of "attempt reconsolidation, spot objection, switch", I had the image or idea come up of my mother being upset with me for not being stressed, and I switched from The Work to my SAMMSA model.
SAMMSA stands for "Surface, Attitude, Model, Mirror, Shadow, Assumptions", and it is a tool I developed to identify and correct implicit beliefs encoded as part of an emotionally significant memory. It's especially useful in matters relating to self-image and self-esteem, because AFAICT we learn these things almost entirely through our brain's interpretation of other people's behavior towards us.
In the specific instance, the "surface" is what my mother said and did. The Attitude was impatience and anger. The Model was, "when there is something important to be done, the right thing to do is be stressed". The Mirror was, "if I don't get you to do this, then you will never learn to take things seriously; you'll grow up to be careless". The Shadow (injected to my self image) was the idea that: "you're irresponsible/uncaring". And the Assumptions (of my mother) were ideas like "I'm helpless/can't do anything to make this work", "somebody needs to do something", and "it's a serious emergency for things to not be getting done, or for there to be any problems in the doing".
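If it helps to see that stack all at once, here is the same breakdown as a toy data structure (just a restatement of the example above; nothing here is a formal part of the method):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SAMMSA:
    surface: str            # what the other person observably said and did
    attitude: str           # the emotional tone they carried
    model: str              # their implied model of how the world works
    mirror: str             # what they projected about you, from their view
    shadow: str             # the identity-level belief injected into your self-image
    assumptions: List[str] = field(default_factory=list)  # their background beliefs

memory = SAMMSA(
    surface="mother's words and actions over the unfinished project",
    attitude="impatience and anger",
    model="when something important must be done, the right thing is to be stressed",
    mirror="if I don't push you, you'll never learn to take things seriously",
    shadow="you're irresponsible/uncaring",
    assumptions=[
        "I'm helpless/can't do anything to make this work",
        "somebody needs to do something",
        "it's a serious emergency for things to not be getting done",
    ],
)
```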
The key with a stack like this is to fix the Shadow first, unless the Assumptions get in the way. Shadow beliefs are things that say what a person is not, and by implication never will be. They tend to lock into place all the linked beliefs, behaviors, and assumptions, like a lynchpin to the schema that formed around them.
The contradiction, then, is to first remember and realize that I did care, as a matter of actual fact, and was not intentionally being irresponsible or "bad". I wanted to get the thing done, and just didn't know how to go about it. Then, I imagined "how would my mother have acted if she knew that for a fact?" At which point I then imagine growing up with her acting that way... which I was surprised to realize could be as simple as telling me to work on it daily and checking my progress. (I did initially have to work through an objection that she couldn't just leave me to it, and couldn't just tell me to work on it and not follow up, but these were pretty straightforward to work out.)
I think I also had some trouble during parts of this due to some of the Assumptions, so I had to deal with a couple of those via The Work. I may also be misremembering the order I did these bits in. (Order isn't super important as long as you test things to make sure that none of the beliefs seem "real" any more, so you can clean up any that still do.)
Notice, here, the difference between how traditional therapy (IFS included) treats the idea of compassion or loving but firm caregivers, etc., vs the approach I took here. I do not try to act out being compassionate to my younger self or to my self now. I don't try to construct in my mind some idealized parental figure. Instead, what I did was identify what was broken in (my mental model of) my mother's beliefs and behavior, and correct that in my mental model of my mother, which is where my previous behavior came from.
This discovery was the result of studying a metric f**k-ton of books on developmental psychology, self-compassion, inner child stuff, shadow psychology, and even IFS. :) I had discovered that sometimes I could change things by reimagining parental behavior more in line with the concepts from those books, but not always. Trying to divine the difference, I finally noticed that the issue was that sometimes I simply could not, no matter how hard I tried, make a particular visualization of caring behavior feel real, and thus trigger a memory mismatch to induce reconsolidation.
What I discovered was that for such visualizations, my brain was subtly twisting the visualizations in such a way as to match a deeper schema -- like the idea that I was incompetent or uncaring or unlovable! -- so that even though the imagined parent was superficially acting different, the underlying schema remained unchanged. It was like they were thinking, "well, I guess I'm supposed to be nice like this in order to be a good parent to this loser". (I'm being flippant here since change amnesia has long since wiped most of the specifics of these scenarios from easy recollection.)
I dubbed this phenomenon "false belief change", and found my clients did it, too. I initially had a more intuitive and less systematic way of figuring out how to get past it, but in order to teach other people to do it I gradually worked out the SAMMSA mnemonic and framework for pulling out all the relevant bits, and later still came to realize that there are only three fundamental failures of trust that define Shadows, which helps a lot in rapidly pinning them down.
That's why, despite this huge wall of text, changing how I feel about important, "serious" projects is something that took maybe 20-30 minutes, including the sobbing and shaking afterward.
(Yeah, that's a thing that happens, usually when I'm realizing all the sh** I've gone through in my life that was completely unnecessary. I assume it's a sort of "accelerated grief" happening when you notice stuff like, "oh hey, I've spent months and years stressing out when I could've just worked on it each day and checked on my progress... so much pain and missed opportunities and damaged relationships and..." yeah. It can be intense to do something like that, if it's been something that affected your life a lot.)
As I said above, I did also have to tackle some of the Assumptions, like not being able to do anything and needing somebody else to do it, that any problem equals an emergency, and so on. These didn't take very long though, with the schema's core anchor having been taken out. I think I did one assumption before the shadow, and the rest after, but it's been a while. Most of the time, Assumptions don't really show up until you've at least started work on fixing the shadow, either blocking it directly, or showing up when you try to imagine what it would've been like to grow up with the differently-thinking parent.
When I work with clients, the actual SAMMSA process and reconsolidation is similarly something that can be done in 20-30 minutes, but it may take a couple hours to get up to that point, as the earlier Collect and Connect phases can take a while, getting up to the point where you can surface a relevant memory. I was lucky with the "going to sleep" problem because it was something I had immediate access to: a problem that was actually manifesting in practice. In contrast, with clients it usually takes some time to even pin down the equivalent of "I was trying to get to sleep and kept thinking I should be doing something", especially since most of the time the original presenting problem is something quite general and abstract.
I also find that individuals vary considerably in how easy it is for them to get to emotionally relevant memories; recently I've had a couple of LessWrong readers take up my free session offer, and have been quite surprised at how quickly they were able to surface things. (As it turned out, they both had prior experience with Focusing, which helps a whole heck of a lot!)
The UTEB book describes some things that sound similar to what I do to stimulate access to such memories, e.g. their phrase of "symptom deprivation" describes something kind of similar in function and intent to some of my favorite "what if?" questions to ask. And I will admit that there is some degree of art and intuition to it that I have not put into a formal framework (at least yet). But since I tend to develop frameworks in response to trying to teach things, it hasn't really come up. Example and osmosis has generally sufficed for getting people to get the hang of doing this kind of inward access, once their meta-issue with it (if any) gets pinned down.
What are the "examples"? Instances that are counter to the rule / schema of some part? (e.g. some part of me believes that if I ever change my mind about something important, then no one will love me, so I come up with an example of when this isn't or wasn't true?)
I think I've answered this above, but in case I haven't: IFS has the therapist and/or client act out examples of caring behavior, compassion, "self-leadership", etc. They do this by paying attention, taking parts' needs seriously, and so on. My prediction is that for some people, some of the time, this would produce results similar to those produced by reconsolidation. Specifically, in the cases where someone doesn't have a schema silently twisting everything into a "false belief change", but the behavior they're shown or taught does contradict one of their problematic schema.
But if the person is internally reframing everything to, "this is just the stupid stuff I have to do to take care of these stupid needy parts", then no real belief change is taking place, and there will be almost no lasting benefit past the immediate reconciliation of the current conflict being worked on, if it's even successfully resolved in the first place.
So, I understand that this isn't what all IFS sources say they are doing. I'm just saying that, whatever you call the process of enacting these attitudes and behaviors in IFS, the only way I would expect it to ever produce any long-term effects is as the result of it being an example that triggers a contradiction in the client's mental model, and therefore reconsolidation. (And thereby producing "transformative" change, as the UTEB authors call it, as opposed to "counteractive" change, where somebody has to intentionally maintain the counteracting behavior over time in order to sustain the effect.)
Given that, doesn't it make sense to break down the different parts of an RL policy into parts? If different parts of a policy are acting at cross purposes, it seems like it is useful to say "part 1 is doing X-action, and part 2 is doing Y-action."
...But you would say that it is even better to say "this system, as a whole is doing both X-action, and Y-action"?
I don't know what you mean by "parts" here. But I do focus on the smallest possible things, because it helps to keep an investigation empirically grounded. The only reason I can go from "not wanting to go to sleep" to "my mother thinks I'm irresponsible" with confidence I'm not moving randomly or making things up, is because each step is locally verifiable and reproducible.
It's true that there are common cycles and patterns of these smaller elements, but I stick as much as possible to dealing in repeatable stimulus-response pairs, i.e., "think about X, get feeling or impression Y". Or "adjust the phrasing of this idea until it reaches maximum emotional salience/best match with inner feeling". All of these are empirical, locally-verifiable, and theory-free phenomena.
In contrast, "parts" are something I've struggled to work with in a way that allows that kind of definitiveness. In particular, I never found my "parts" to have repeatable behavior, let alone verifiable answers to questions. I could never tell if what I seemed to be getting was real, or was just me imagining/making stuff up. In contrast, the modality of "state an idea or imagine an action, then notice how I feel" was eminently repeatable and verifiable. I was able to quickly learn the difference betwen "having a reaction" and "wondering if I'm reacting", and was then able to test different change techniques to see what they did. If something couldn't change the way I automatically responded, I considered it a dud, because I wanted to change me on the inside, not just how I act on the outside. I wanted to feel differently, and once I settled on using this "test-driven" approach, I began to be able to, for the first time in my life.
So if psychology is alchemy, testing automatic emotional responses is my stab at atomic theory, and I'm working on sketches of parts of the periodic table. (With the caveat that given myself as the primary audience, and my client list being subject to major selection effects, it is entirely possible that the scope of applicability of my work is just smart-but-maybe-too-sensitive, systematically-thinking people with certain types of inferiority complexes. But that worry is considerably reduced by the stuff I've read so far in UTEB, whose authors' audience does not appear as limited, and whose approach seems fairly congruent with my own.)
↑ comment by Eli Tyre (elityre) · 2019-10-18T23:21:15.120Z · LW(p) · GW(p)
This distinction alone is huge when you look at IFS' Exiles. If you have an "exile" that is struggling to be capable of doing something, but only knows how to be in distress, it's helpful to realize that it's just the built-in mental muscle of "seeking care via distress", and that it will never be capable of doing anything else. It's not the distressed "part"'s job to do things or be capable of things, and never was. That's the job of the "everyday self" -- the set of mental muscles for actual autonomy and action. But as long as someone's reinforced pattern is to activate the "distress" muscle, then they will feel horrible and helpless and not able to do anything about it.
I wonder how much of this discussion comes down to a different extensional referent of the word "part".
According to my view, I would call "the reinforced pattern to activate the 'distress' muscle [in some specific set of circumstances]" a part. That's the thing that I would want to dialogue with.
In contrast, I would not call the "distress muscle" itself a part, because (as you say) the distress muscle doesn't have anything like "beliefs" that could update.
In that frame, do you still have an objection?
Replies from: pjeby
↑ comment by pjeby · 2019-10-19T04:05:12.758Z · LW(p) · GW(p)
According to my view, I would call "the reinforced pattern to activate the 'distress' muscle [in some specific set of circumstances]" a part. That's the thing that I would want to dialogue with.
And I don't understand how you could "dialogue" with such a thing, except in the metaphorical sense where debugging is a "dialogue" with the software or hardware in question. I don't ask a stimulus-response pattern to explain itself; I dialogue with the client or with my inner experience by trying things or running queries, and the answers I get back are whatever the machine does in response.
I don't pretend that the behavior pattern is a coherent entity with which I can have a conversation in English, as for me that approach has only ever resulted in confusion, or at best some occasionally good but largely irreproducible results.
And I specifically coach clients not to interpret those responses they get, but just to report the bare fact of what is seen or felt or heard, because the purpose is not to have a conversation but to conduct an investigation or troubleshooting process.
A stimulus-response pattern doesn't have goals or fears; goals or fears are things we have, that we get from our SR rules as emergent properties. That's why treating them as intentional agents makes no sense to me: they're what our agency is made of, but they themselves are not a class of thing that could even comprehend such a thing as the notion of agency.
Schemas are mental models, not utilitarian agents... not even in a theoretical sense! Humans don't weigh utility; we have an action planner system that queries our predictive model for "what looks like something good to do in this situation", and whatever comes back fastest tends to win, with emotionally weighted stuff or stuff tagged by certain mental muscles getting wired into faster routes.
To put it another way, I think the thing you're thinking you can dialogue with is actually a spandrel of sorts, and it's a higher-level unit than what I work with. IFS, in ascribing intention, necessarily has to look at more complex elements than raw, miniscule, almost "atomic" stimulus-response patterns, because that's what's required if you want to make a coherent-sounding model of an entire cycle of symptoms.
In contrast, for me the top-down view of symptom cycles is merely a guide or suggestion to begin an empirical investigation of specific repeatable responses. The larger pattern, after all, is made of things: it doesn't just exist on its own. It's made of smaller, simpler things whose behaviors are much more predictable and repeatable. The larger behavior cycles inevitably involve countless minor variations, but the rules that generate the cycles are much more deterministic in nature, making them more amenable to direct hacking.
↑ comment by Kaj_Sotala · 2019-10-14T19:40:57.703Z · LW(p) · GW(p)
If IFS said, "brains have modules for these types of mental behavior", (e.g. hiding, firefighting, etc.), then that would also be a reduction.
I'm not sure why IFS's exile-manager-firefighter model doesn't fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something -> exile being triggered and getting anxious -> gaming firefighter seeking to suppress the anxiety with a game -> inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as "little people". They're basically just simple trigger-action rules too, like "if there is something that Kaj should be doing and he isn't getting around to doing it, start ramping up an increasing level of reminders".
There's also Janina Fisher's model [LW(p) · GW(p)] of IFS parts being linked to various specific defense systems. The way I read the first quote in the linked comment, she does conceptualize IFS parts as something like state-dependent memory; for exiles, this seems like a particularly obvious interpretation even when looking at the standard IFS descriptions of them, which talk about them being stuck at particular ages and events.
but compassion towards a "part" is not really necessary for that, just that one suppress commentary.
Certainly one can get the effect without compassion too, but compassion seems like a particularly effective and easy way of doing it. Especially given that in IFS you just need to ask parts to step aside until you get to Self, and then the compassion is generated automatically.
Replies from: pjeby
↑ comment by pjeby · 2019-10-15T19:04:31.898Z · LW(p) · GW(p)
I'm not sure why IFS's exile-manager-firefighter model doesn't fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something -> exile being triggered and getting anxious -> gaming firefighter seeking to suppress the anxiety with a game -> inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as "little people".
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is "count(subject matter) times count(strategies)" instead of "count(subject matter) plus count(strategies)". By simple math, a formulation which uses brain modules for strategies plus rules they operate on, is fewer entities than one entity for every rule+strategy combo.
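With invented counts, the arithmetic:

```python
subjects, strategies = 10, 5                 # hypothetical counts
parts_model = subjects * strategies          # one "part" per subject+strategy combo
modules_plus_rules = subjects + strategies   # shared modules, learned rules on top
print(parts_model, modules_plus_rules)       # 50 entities vs. 15
```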
And that's not even looking at the brain as a whole. If you model "inner criticism" as merely reinforcement-trained internal verbal behavior, you don't need even one dedicated brain module for inner criticism, let alone one for each kind of thing being criticized!
Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they're reinforced. So you get "firefighting" for free as a side-effect of the brain being able to learn from reinforcement, without needing to posit a firefighting agent for each kind of deflecting behavior.
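Here's a toy loop (invented numbers, stand-in environment) showing a "firefighter" falling out of nothing but negative reinforcement:

```python
import random

values = {"open a game": 0.0, "tidy the desk": 0.0, "stare at the wall": 0.0}

def pain_after(action):
    # Stand-in environment: only gaming reliably blots out the distress.
    return 0.2 if action == "open a game" else 0.9

for step in range(200):
    pain = 0.9  # the underlying trigger keeps firing
    # Noisy greedy policy, so every action gets sampled early on.
    action = max(values, key=lambda a: values[a] + random.uniform(0, 0.3))
    relief = pain - pain_after(action)       # negative reinforcement signal
    values[action] += 0.1 * (relief - values[action])

print(max(values, key=values.get))  # -> "open a game", with no agent posited
```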
And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior. Turning a human from one agent into multiple agents doesn't reduce anything.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2019-10-17T11:05:18.408Z · LW(p) · GW(p)
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is "count(subject matter) times count(strategies)" instead of "count(subject matter) plus count(strategies)". By simple math, a formulation which uses brain modules for strategies plus rules they operate on, is fewer entities than one entity for every rule+strategy combo.
It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about, are basically the same as what IFS calls parts. You didn't seem to object to the description of schemas; does your objection also apply to them?
IFS in general is very vague about how exactly the parts are implemented on a neural level. It's not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.
Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they're reinforced. So you get "firefighting" for free as a side-effect of the brain being able to learn from reinforcement
I agree that reinforcement learning definitely plays a role in which parts/behaviors get activated, and discussed that in some of my later posts [1 [LW · GW] 2 [LW · GW]]; but there need to be some innate hardwired behaviors which trigger when the organism is in sufficient pain. An infant which needs help cries; it doesn't just try out different behaviors until it hits upon one which gets it help and which then gets reinforced.
And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit together with your description; e.g. a desire to keep playing a game can get "stuck on" way past the time when it has stopped being beneficial. Such as when I've slept in between and I just feel a need to continue the game as the first thing in the morning, and there isn't any pain to distract myself from anymore, but the compulsion will produce pain. This is not consistent with a simple "behaviors get reinforced" model, but it is more consistent with a "parts can get stuck on after they have been activated" model.
And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior.
Not sure what you mean by agency?
Replies from: pjeby
↑ comment by pjeby · 2019-10-17T19:44:08.616Z · LW(p) · GW(p)
It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about, are basically the same as what IFS calls parts. You didn't seem to object to the description of schemas; does your objection also apply to them?
AFAICT, there's a huge difference between UTEB's "schema" (a "mental model of how the world functions", in their words) and IFS' notion of "agent" or "part". A "model" is passive: it merely outputs predictions or evaluations, which are then acted on by other parts of the brain. It doesn't have any goals, it just blindly maps situations to "things that might be good to do or avoid". An "agent" is implicitly active and goal-seeking, whereas a model is not. "Model" implies a thing that one might change, whereas an "agent" might be required to change itself, if a change is to happen.
UTEB also describes the schema as "wordlessly [defining] how the world is" -- which is quite coherent (no pun intended) with my own models of mindhacking. I'm actually looking forward to reading UTEB in full, as the introduction makes it sound like the models I've developed of how this stuff works, are quite similar to theirs.
(Indeed, my own approach is specifically targeted at changing implicit mental models of "how things are" or "how the world is", because that changes lots of behaviors at once, and especially how one feels or relates to the world. So I'm curious to know if they've found anything else I might find useful.)
IFS in general is very vague about how exactly the parts are implemented on a neural level. It's not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.
What I'm arguing against is a model where patterns of behavior (verbs) are nominalized as nouns. It's bad enough to think that one has, say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.
To put it another way, if there are "agents" (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life. But if you instead have mental models that predict certain behaviors would be a good idea, and so you feel drawn or pushed towards them, then that is a model that still validates your experience, but doesn't require you to fight or negotiate or whatever. Reconsolidation allows you to be more you, by gaining more choices.
But that's a values argument. You're asking what I'm against, and I'm not "against" IFS per se. What I am saying, and have been saying, is that nominalizing behavior patterns as "parts" or "agents" is bad reductionism, independent of its value as a therapeutic metaphor.
Over the course of this conversation, I've actually become slightly more open to the use of parts as a metaphor in casual conversation, if only as a stepping stone to discarding it in favor of learned rules and mental muscles.
But, the reason I'm slightly more open to it is exactly the same reason I oppose it!
Specifically, using terms like "part" or "agent" encourages automatic, implicit, anthropomorphic projection of human-like intention and behavior.
This is both bad reductionism and good metaphor. (Well, in the short term, anyway.) As a metaphor, it has certain immediate effects, including retaining disidentification with the problem (and therefore validation of one's felt lack of agency in the problem area).
But as reductionism, it fails for the very same reason, by not actually reducing the complexity of what is being modeled, due to sneaking in those very same connotations.
Unfortunately, even as a metaphor, I think it's short-term good, but long-term bad. I have found that people love to make things into parts, precisely because of the good feelings of validation and disidentification, and they have to be weaned off of this in order to make any progress at direct reconsolidation.
In contrast, describing learned rules and mental muscles seems to me to help people with unblending, because of the realization that there's nothing there -- no "agent", not even themselves(!), who is actually "deciding" or pursuing "goals". There's nothing there to be blended with, if it's all just a collection of rules!
But that's a discussion about a different topic, really, because as I said from the outset, my issue with IFS is that it's bad reductionism. And I think this article's attempt at building IFS's model from the bottom up fails at reductionism because it's specifically trying to justify "parts", rather than looking at what is the minimal machinery needed to produce the observations of IFS, independent of its model. (The article also pushes a viewpoint from design, rather than evolution, further weakening its argument.)
For example, I read Healing The Fragmented Selves Of Trauma Survivors a little over a year ago, and found in it a useful refinement: Fisher described five "roles" that parts play, and one of them was something I'd not accounted for in my rough list of "mental muscles". But the very fact that you can exhaustively enumerate the roles that parts "play", strongly suggests that the so-called roles are in fact the thing represented in our hardware, not the "parts"!
In other words, IFS has it precisely backwards: parts don't "play roles", mental modules play parts. When viewed from an evolutionary perspective, going the other way makes no sense, especially given that the described functions (fight/vigilance, flight/escape, freeze/fear, submit/shame, attach/needy), are things that are pretty darn universal in mammals.
And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit together with your description; e.g. a desire to keep playing a game can get "stuck on" way past the time when it has stopped being beneficial. Such as when I've slept in between and I just feel a need to continue the game as the first thing in the morning, and there isn't any pain to distract myself from anymore, but the compulsion will produce pain. This is not consistent with a simple "behaviors get reinforced" model, but it is more consistent with a "parts can get stuck on after they have been activated" model.
I think you are confusing reinforcement and logic. Reinforcement learning doesn't work on logic, it works on discounted rewards. The gaming behavior can easily become intrinsically motivating, due to it having been reinforced by previously reducing pain. (We can learn to like something "for its own sake" precisely because it has helped us avoid pain in the past, and if it produces pleasure, too, all the better!)
However, your anticipation that "continuing to play will cause me pain", will at best be a discounted future event without the same level of reinforcement power... assuming that that's really you thinking that at all, and not simply an internal verbal behavior being internally reinforced by a mental model of such worrying being what a "good" or "responsible" person would do! (i.e., internal virtue-signalling)
It is quite possible in my experience to put one's self through all sorts of mental pain... and still have it feel virtuous, because then at least I care about the right things and am trying to be a responsible person... which then excuses my prior failure while also maintaining hope I can succeed in the future.
And despite these virtue-signaling behaviors seeming to be about the thing you're doing or not doing, in my experience they don't really include thinking about the actual problem, and so have even less impact on the outward behavior than one would expect from listening to the supposed subject matter of the inner verbalization(s).
So yeah, reinforcement learning is 100% consistent with the failure modes you describe, once you include:
- negative reinforcement (that which gets us away from pain is reinforced)
- secondary reinforcement (that which is reinforced, becomes "inherently" rewarding)
- discounted reinforcement (that which is near in time and space has more impact than that which is far)
- social reinforcement (that which signals virtue may be more reinforcing than actual virtue, due to its lower cost)
- verbal behavior (what we say to ourselves or others is subject to reinforcement, independent of any actual meaning ascribed to the content of those verbalizations!)
- imitative reinforcement (that which we see others do is reinforced, unless our existing learning tells us the behavior is bad, in which case it is punished instead)
All of these, I believe, are pretty well-documented properties of reinforcement learning, and more than suffice to explain the kinds of failure modes you've brought up. Given that they already exist, with all but verbal behavior being near-universal in the animal kingdom, a parsimonious model of human behavior needs to start from these, rather than designing a system from the ground up to account for a specific theory of psychotherapy.
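To see how the first three of those combine into the "stuck on the next morning" pattern, here's a toy calculation (all numbers invented):

```python
game_value = 0.0     # cached value of playing
LEARNING_RATE = 0.2

# Evenings with anxiety present: playing relieves it (negative reinforcement),
# and the relief gets cached into the action itself (secondary reinforcement).
for _ in range(30):
    relief = 0.8
    game_value += LEARNING_RATE * (relief - game_value)

# Next morning: no pain left to escape, but the cached value still fires, and
# the known future cost is steeply discounted (discounted reinforcement).
future_cost, discount = 0.9, 0.1
morning_urge = game_value - discount * future_cost
print(round(game_value, 2), round(morning_urge, 2))  # 0.8 0.71 -> still compelling
```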
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2019-10-18T23:48:04.377Z · LW(p) · GW(p)
What I'm arguing against is a model where patterns of behavior (verbs) are nominalized as nouns.
Cool. That makes sense.
It's bad enough to think that one has, say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.
Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of "akrasia" and they'll conceptualize it, more or less, as "my system 1 is stupid and doesn't understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing."
And then I might suggest that they try on the frame where "the akrasia part", is actually an intelligent "agent" trying to optimize for their own goals (instead of a foreign, stupid entity, that they have to subdue). If the akrasia was actually right, why would that be?
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
[I'm obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
That is, in practice, the part, or subagent framing helps at least some people to own their desires more, not less.
[I do want to note that you explicitly said, "What I am saying, and have been saying, is that nominalizing behavior patterns as "parts" or "agents" is bad reductionism, independent of its value as a therapeutic metaphor."]
---
To put it another way, if there are "agents" (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life.
This doesn't seem right in my personal experience, because the "agents" are all me. I'm conceptualizing the parts of myself as separate from each other, because it's easier to think about that way, but I'm not disowning or disassociating from any of them. It's all me.
Replies from: pjeby
↑ comment by pjeby · 2019-10-19T04:33:48.453Z · LW(p) · GW(p)
Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of "akrasia" and they'll conceptualize it, more or less, as "my system 1 is stupid and doesn't understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing."
So my response to that is to say, "ok, let's get empirical about that. When does this happen, exactly? If you think about working harder right now, what happens?" Or, "What happens if you don't work harder at your job?"
In other words, I immediately try to drop to a stimulus-response level, and reject all higher-level interpretive frameworks, except insofar as they give me ideas of where to drop my depth charges, so to speak. :)
And then I might suggest that they try on the frame where "the akrasia part", is actually an intelligent "agent" trying to optimize for their own goals (instead of a foreign, stupid entity, that they have to subdue). If the akrasia was actually right, why would that be?
I usually don't bring that kind of thing up until a point has been reached where the client can see that empirically. For example, if I've asked them to imagine what happens if they get their wish and are now working harder at their job... and they notice that they feel awful or whatever. And then I don't need to address the intentionality at all.
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
And sometimes, the real problem has nothing to do with the work and everything to do with a belief that they aren't a good person unless they work more, so it doesn't matter how terrible it is... but also, the very fact that they're guilty about not working more may be precisely the thing they're avoiding by not working!
In other words, sometimes an intentional model fails because brains are actually pretty stupid, and have design flaws such that trying to view them as having sensible or coherent goals simply doesn't work.
For example, our action planning subsystem is really bad at prioritizing between things we feel good about doing vs. things we feel bad about not doing. It wants to avoid the things we feel bad about not doing, because when we think about them, we feel bad. That part of our brains doesn't understand things like "logical negation" or "implicative reasoning", it just processes things based on their emotional tags. (i.e., "bad = run away")
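A caricature of that planner (mine, not a model of the actual neuroscience):

```python
# Candidate thoughts, each carrying only an emotional tag; the planner has no
# access to logical structure like "feeling bad about X means X is important".
options = {
    "work on the overdue report": "bad",   # thinking of it feels bad...
    "watch a video": "good",
    "make some tea": "good",
}

def plan(options):
    # "bad = run away": negatively tagged options are simply suppressed, even
    # though the bad feeling logically means "do this one first".
    candidates = [o for o, tag in options.items() if tag != "bad"]
    return candidates[0] if candidates else "freeze"

print(plan(options))  # -> "watch a video"; the report never gets picked
```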
[I'm obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
And I'm also not saying I never do anything that's a modeling of intention. But I get there bottom-up, not top-down, and it only comes up in a few places.
Also, most of the intentional models I use are for things that pass through the brain's intention-modeling system: i.e., our mental models of what other people think/thought about us!
For example, the SAMMSA pattern is all about pulling that stuff out, as is the MTF pattern ("meant to feel/made to feel" - a subset of SAMMSA dealing with learnings of how others intend for us to feel in certain circumstances).
The only other place I use quasi-intentional frames is in describing the evolutionary function or "intent" of our brain modules. For example, distress behavior is "intended" to generate caring responses from parents. But this isn't about what the person intends, it's about what their brain is built to do. When you were a crying baby, "you" didn't even have anything that qualifies as intention yet, so how could we say you had a part with that intention?
And even then, I'm treating it as, "in this context, this behavior pattern would produce this result" (producing reinforcement or gene propagation), not "this thing is trying to produce this result, so it has this behavior pattern in this context." Given the fact that my intention is always to reduce to the actual "wires" or "lines of code" producing a problem, intention modeling is going in the wrong direction most of the time.
My analogy about confusing a thermostat with something hot or cold underneath speaks to why: unlike IFS, I don't assume that parts have positive, functional intentions, even if they arose out of the positive "design intentions" of the system as a whole. After all, the plan for achieving that original "intention" may no longer be valid! (insofar as there even was one to begin with.)
That's why I don't think of the thermostat as being something that "wants" temperature, because it would distract me from actually looking at the wall and the wiring and the sensors, which is the only way I can be certain that I'm always getting closer to a solution rather than guessing or going in circles. (That is, by always working with things I can test, like a programmer debugging a program. Rerunning it and inspecting, putting in different data values and seeing how the behavior changes, and so on.)
↑ comment by Eli Tyre (elityre) · 2019-10-18T23:06:26.802Z · LW(p) · GW(p)
it feels like the "internal compassion" frame seems to help with a lot of things such as just wanting to rush into solutions
+1.
↑ comment by Matt Goldenberg (mr-hire) · 2019-10-17T15:56:56.361Z · LW(p) · GW(p)
This is confusing Dissociation and Integration. I made a 2x2 that helps disambiguate.
http://mattgoldenberg.net/wp-content/uploads/2019/10/2x2s-Integration-vs.-Association-2x2.jpg
Replies from: pjeby
↑ comment by pjeby · 2019-10-17T20:32:32.076Z · LW(p) · GW(p)
Interesting diagram. I don't really understand it, though, because to me it looks like Focusing is on the wrong side, since Focusing deals in a unified "felt sense" rather than disparate parts -- at least to my understanding of it.
Actually, I'm generally confused because without the mental state used by Focusing, Core Transformation, the Work, and Sedona don't work properly, if at all. So I don't understand how it could be separate. Similarly, I can see how CBT could be considered dissociated, but not Focusing.
Anyway, when I referred to "dissociating", above, I meant it in the casual sense of people wanting to dis-associate, as in, "I'm not with him..." Not the technical sense of a dissociative experience or D.I.D., though one can also have the desire to detach or disconnect from one's experience in a dissociative way.
In general, I was using the term to suggest something like, "the spectrum of ways people try to make an experience unreal or to deny its significance", which includes a variety of strategies including disavowal, denial, and deflection, as well as actual dissociation in the technical sense.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-10-19T14:33:32.653Z · LW(p) · GW(p)
Focusing focuses on a single "felt sense", rather than an integrated system of felt senses that aren't viewed as separate.
In general I think you're quite confused about how most people use the parts terminology if you think felt senses aren't referring to parts. A part typically represents a "belief cluster" plus a visual, kinesthetic, or auditory representation of that belief cluster, often anthropomorphized. Note that parts can be different sizes, and you can have a "felt sense" related to a single belief, or to clusters of beliefs.
Actually, I'm generally confused, because Core Transformation, The Work, and Sedona don't work properly, if at all, without the mental state used by Focusing. So I don't understand how it could be separate. Similarly, I can see how CBT could be considered dissociated, but not Focusing.
You're confusing dissociation and integration here again, so I'll just address the dissociation part. Note that all the things I'm saying here are ORTHOGONAL to the issue of "parts".
Yes, focusing is in one sense embodied and experiential as opposed to something like CBT. However, this stuff exists on a gradient, and in focusing the embodiment is explicitly dissociated from and viewed as other. Here's copypasta from twitter:
Here's a quote from http://focusing.org that points towards a dissociative stance: " When some concern comes, DO NOT GO INSIDE IT. Stand back, say "Yes, that’s there. I can feel that, there." Let there be a little space between you and that."
I've heard an acquaintance describe a session with Anne Weiser-Cornell where they kept trying to say "this is my feeling" and she kept correcting to "this feeling in my body", which again is more of a dissociative stance.
Now, is focusing looking to CAUSE dissociation? No, it's using dissociation as a tool because oftentimes people get so caught up in the trees they can't see the forest. For those people, that small bit of dissociation is useful.
Similarly, tools that are associated are often useful for people who tend to view themselves as "other". If people tend to dissociate, it can be useful to realize that this is "me".
> Anyway, when I referred to "dissociating", above, I meant it in the casual sense of people wanting to dis-associate, as in, "I'm not with him..." Not the technical sense of a dissociative experience .
Me as well. I still maintain that viewing things as parts rather than a whole is orthogonal to whether you view yourself as associated (Core Transformation) or dissociated (Focusing) from a part, or associated or dissociated (CT Charting, DID) from a whole.
↑ comment by pjeby · 2019-10-19T16:46:28.093Z · LW(p) · GW(p)
I've heard an acquaintance describe a session with Anne Weiser-Cornell where they kept trying to say "this is my feeling" and she kept correcting to "this feeling in my body", which again is more of a dissociative stance.
I was under the impression that IFS calls that "unblending", just as ACT calls it "de-fusing". I personally view it more as a stance of detachment, or of curious, neutral observation. But I don't object to someone saying "I feel X", because that's already one step removed from "X"!
If somebody says, "everything is awful" they're blended or fused or whatever you want to call it. They're taking the map as equivalent to the territory. Saying, "It feels like everything is awful" or "I feel awful" is already one level of detachment, and an okay place to start from.
In common psychotherapy, I believe the term "dissociation" is usually associated with much greater levels of detachment than this, unless you're talking about NLP. The difference in degree is probably why ACT and IFS and others have specialized terms like "unblending" to distinguish between this lesser level of detachment, and the type of dissociative experience that comes with say, trauma, where people experience themselves as not even being in their body.
Honestly, if somebody is so "in their head" that they don't experience their feelings, I have to go the opposite route of making them more associated and less detached, and I have plenty of tools for provoking feelings in order to access them. I don't want complete dissociation from feelings, nor complete blending with them, and ISTM that almost everything on your chart is actually targeted at that same sweet spot or "zone" of detached-but-not-too-detached. In touch with your experience, but neither absorbed by it nor turning your back on it.
Anyway, I think maybe I understand the terms you're using now, and hopefully you understand the ones I'm using. Within your model I still don't know what you'd call what I'm doing, since my "Collect" and "Connect" phases would seem to be in the quadrant with Focusing, while my "Correct" phase explicitly uses The Work and variations on it. And my model doesn't have a notion of parts outside of mental muscles or a metaphorical description of the emergent properties of rules and schemas, such that I sort-of deny the existence of parts or an integrated whole!
To the extent that people have denial of, or disidentification from, certain aspects of themselves, I consider that itself to be a behavior. It can be modeled as a simple conditioned response to the idea of the thing, as seen through a relevant predict/evaluate model. In Core Transformation and IFS, the approach is to treat this as if there is actually a part that has been exiled, but in my view it makes more sense to focus on the schema driving the rejection than to act as if either metaphorical "part" actually exists.
The difference between my approach and most psychological models outside of NLP is that I don't even view people as agents, let alone any of their parts as such! I teach them to look at the actuality of what their brain is doing (or at least the parts of it we are able to observe), and it is much more like rooting a cellphone or hacking a website or debugging a program you didn't write (and whose source code you don't have!) than anything involving interaction between agents. The only "conversation" is one of probing the system and seeing what responses you get, and for me that applies to techniques found in both your "embodied self" and "dissociated parts" quadrants.
Which is why I find the diagram confusing. Because if I understand your model, The Work and Sedona should be in the "dissociated parts" quadrant, if you consider Focusing to be there. Or conversely, Focusing should be in the "embodied self" quadrant. Or alternately, the thing that I'm doing with people and calling Focusing isn't what you mean by Focusing, because all three of those techniques, AFAICT in practice, require precisely the same amount of detachment from one's feelings in order to operate, and none of the three require either the assumption or rejection of the idea of a part existing as a persistent entity, vs. simply responding to present experience, regardless of whether you treat that experience as a metaphorical "part".
I mean, even Sedona requires you to at least have the amount of detachment to say things like, "And could I judge that a little less?" You can't do that if you're fused with the thing. Same for The Work, as you can't consider whether a belief is true, without first defusing enough to consider it to be a "belief" rather than simply "how things are".
I suppose that's probably where our communication is breaking down, because the divisions on your diagram seem kind of academic, in that they don't tell me anything useful about actually doing those techniques successfully. Detachment is effectively required by all of the techniques on the diagram, at least to the extent of not being fused with one's experience. It's necessary to observe the experience as a thing other than the observer or "reality" in order to even conceive of performing some sort of operation upon it, if that makes sense.
So, technically, doesn't that make everything on that chart dissociative, in your use of the term? I mean, the unit I work with is in size and shape a lot like CBT and similar therapies' notion of ANTs, except I deal with them on an emotional/embodied basis rather than analyzing the logical content, and in practical terms I use methods from Focusing and The Work, so I don't see where my approach actually belongs on your diagram, other than "everywhere". ;-)
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-10-20T01:09:38.498Z · LW(p) · GW(p)
so I don't see where my approach actually belongs on your diagram, other than "everywhere". ;-)
I think a proper method should be everywhere. There's not a "correct" box, only a correct box for a given person at a given time in a given situation.
↑ comment by Qiaochu_Yuan · 2019-10-13T21:04:29.916Z · LW(p) · GW(p)
Wow, thank you for writing this. This really clarified something for me that I'm in the process of digesting.
↑ comment by DaystarEld · 2019-10-13T21:03:07.483Z · LW(p) · GW(p)
I will note that, in my own practice, IFS and subagents are never presented as "separate from you," but rather "parts of you." What you're describing sounds more like what Narrative Therapy sometimes does, in externalizing and personifying the Anger or Addiction or whatever, and then working to better understand its influences on you and your ability to influence it and so on, though the framing on that can also vary greatly between one practitioner and another.
Insofar as some people use IFS to "other" their internal desires or behaviors, this feels like it's naturally determined by the "client" more than anything. Some people just find the idea of breaking themselves down into sub-agents or "child vs teenage vs adult self" really clicks with the way they relate to their competing desires and goals, without quite giving up "responsibility" for them... but that opens up a new conversation about how important the sense of "responsibility" for our flaws actually is toward addressing them, which also probably depends a lot on how motivated the client is toward change.
Replies from: pjeby↑ comment by pjeby · 2019-10-13T23:15:32.136Z · LW(p) · GW(p)
I will note that, in my own practice, IFS and subagents are never presented as "separate from you," but rather "parts of you."
Yes, but "part of you" can still be disowning/deflection. It allows one to remain disidentified from the "part", i.e., "oh, it's just that part of me, it's not really me". It allows you to disclaim endorsement of the "part's" values.
Some people just find the idea of breaking themselves down into sub-agents or "child vs teenage vs adult self" really clicks with the way they relate to their competing desires and goals, without quite giving up "responsibility" for them... but that opens up a new conversation about how important the sense of "responsibility" for our flaws actually is toward addressing them, which also probably depends a lot on how motivated the client is toward change.
I can see how it might work for some people. I just avoid it because the clients I work with usually have a metric ton of stuff they're other-ing or judging themselves about or disavowing, so dealing with that issue is already on the critical path for getting done what they came to me for. (And the people who come to me talking about how wonderful IFS is, frequently seem to be the ones with the worst denial issues, so that's probably why I get a bit passionate about explaining why, at least for them, it's a really bad idea to keep doing that.)
But yeah, any modality can be abused by anybody in order to keep themselves from changing, and all self-help advice can be trivially weaponized for self-destruction.
After all, somebody could easily take what I'm saying about IFS and turn it into ammunition to punish themselves more, because they need to "take responsibility" for all their awful, awful parts. ;-)
That being said, I don't say that people need to "take responsibility", just that they need to admit the truth about what they want. It's okay to wish you didn't want something you want, but trying to pretend you don't want it or that it's not you who wants it isn't always a viable coping strategy, and in fact is often crazy-making.
That is, the brain's decision-making system appears to be able to handle, "I want this but it's not a good idea", much better and more sanely than it handles "I want to not know that I want this"! The latter is just begging to end up with compulsive behaviors outside of conscious control (because if they could control the behavior, it would mean that they're the one who's doing the wanting).
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-10-19T00:05:12.766Z · LW(p) · GW(p)
And the people who come to me talking about how wonderful IFS is, frequently seem to be the ones with the worst denial issues
Huh. This does not resonate with my experience, but I will henceforth be on the lookout for this.
Replies from: pjeby↑ comment by pjeby · 2019-10-19T17:03:47.875Z · LW(p) · GW(p)
Huh. This does not resonate with my experience, but I will henceforth be on the lookout for this.
To be fair, I doubt that my sample size of such individuals is statistically significant. But in the few times a client has brought up IFS and either enthusiastically extolled it or seemed to want me to validate it as something they should try, it seemed to me to be related to either the person's schema of helplessness (i.e., these parts are doing this to me), or of denial (i.e., I would be successful if I could just fix all these broken parts!), which IMO are both treating the parts metaphor as a way to support and sustain the very dysfunctions that were causing their problems in the first place.
In general, I suspect people are naturally attracted to the worst possible modes of therapy for fixing their problems, at least if they know anything about the therapy in question!
(And I include myself in that, since I've avoided therapy generally since a bad experience with it in college, and for a long time avoided any self-help modality that involved actually being self-compassionate or anything other than supporting my "fix my broken stuff so I can get on with life" attitude. It's possible that with the right approach and therapist I could potentially have changed faster, once you count all the time I spent researching and developing my methods, all the failures and blind alleys. But I'm happy with the outcome, since more people are being helped than just me, and getting people out of the kinds of pain I suffered is rewarding in its own way.)
↑ comment by Eli Tyre (elityre) · 2019-10-18T23:06:47.118Z · LW(p) · GW(p)
presupposing that all my desires are mine and that I have good reasons even for doing apparently self-destructive things
I've always disliked the term "subagent", but this sentence seems to capture what I mean when I'm talking about psychological "parts".
So I think I agree with you about the ontological status of parts, but I can't tell if you're making some bolder claim.
What are you imagining would be the case if IFS was literally true, and subagents were real, instead of "just a metaphor"?
. . .
In fact, I dislike the word "subagent", because it imports implications that might not hold. A part might be agent-like, but it also might be closer to an urge or a desire or an impulse.
To my understanding the key idea of the "parts" framing, is that I should assume, by default, that each part is acting from a model, a set of beliefs about the world or my goals. That is, my desire/ urge / reflex, is not "mindless": it can update.
Overall this makes your comment read to me as "these things are not really [subagents], they're just reactions that have [these specific properties of subagents]."
Replies from: pjeby
↑ comment by pjeby · 2019-10-19T04:53:27.008Z · LW(p) · GW(p)
What are you imagining would be the case if IFS was literally true, and subagents were real, instead of "just a metaphor"?
Well, for one thing, that they would intelligently shift their behavior to achieve their outcomes, rather than stupidly continuing things that don't work any more. That would be one implication of agency.
Also, if IFS were literally true, and "subagents" were the atomic unit of behavior, then the UTEB model shouldn't work, and neither should mine or many other modalities that operate on smaller, non-intentional units.
In fact, I dislike the word "subagent", because it imports implications that might not hold. A part might be agent-like, but it also might be closer to an urge or a desire or an impulse.
Ah! Now we're getting somewhere. In my frame, an urge, desire or impulse is a reaction. The "response" in stimulus-response. Which is why I want to pin down "when does this thing happen?", to get the stimulus part that goes with it.
To my understanding the key idea of the "parts" framing, is that I should assume, by default, that each part is acting from a model, a set of beliefs about the world or my goals. That is, my desire/ urge / reflex, is not "mindless": it can update.
I see it differently: we have mental models of the world, that contain "here are some things that might be good to do in certain situations", where "things to do" can include "how you should feel, so as to bias towards a certain category of behaviors that might be helpful based on what we know". (And the actions or feelings listed in the model can be things other people did or felt!)
In other words, the desire or urge is the output of a lookup table, and the lookup table can be changed. But both the urge and the lookup table are dumb, passive, and prefer not to update if at all possible. (To the extent that information processed through the lookup table will be distorted to reinforce the validity of what's already in the lookup table.)
Even in the cases where somebody makes a conscious decision to pursue a goal, (e.g. a child thinking "I'll be good so my parents will love me", or "I'll be perfect so nobody can reject me"), that's just slapping an urge or desire into the lookup table, basically. It doesn't mean we pursue it in any systematic or even sane way!
So, what you're seeing as a coherent "part", I see as a collection of assorted interacting machinery that, when it works, could maybe be seen as an intelligent goal-seeking agent... but mostly is dumb machinery subject to all kinds of weird breakage scenarios, turning us all into neurotic f**kups, full of hypocrisy and akrasia. ;-)
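As a toy sketch of this "dumb machinery" framing (stimuli, feelings, and responses all invented for illustration), the whole thing can be thought of as a passive table plus retrieval:

```python
# Toy sketch of the "dumb lookup table" framing: an urge or feeling is
# just the output of a stimulus -> (feeling, action) table, and changing
# a "part" means editing an entry, not negotiating with an agent.
# All stimuli and entries are invented for illustration.

lookup_table = {
    "criticism from boss": ("shame", "go quiet and withdraw"),
    "deadline approaching": ("dread", "open social media"),
}

def react(stimulus):
    # Passive retrieval: no goal-seeking, no planning, just lookup.
    return lookup_table.get(stimulus, ("neutral", "carry on"))

print(react("criticism from boss"))   # ('shame', 'go quiet and withdraw')

# An "update" after new evidence (e.g. criticism turned out survivable):
lookup_table["criticism from boss"] = ("mild concern", "ask a clarifying question")
print(react("criticism from boss"))   # ('mild concern', 'ask a clarifying question')
```

"Changing a part" is then just editing a row, which is a very different operation from negotiating with an agent.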
↑ comment by lukehmiles (lcmgcd) · 2020-02-19T19:31:32.875Z · LW(p) · GW(p)
As with real and fake memories, I think if you’re careful then you can mainly deal with real ones
comment by johnpeterwest · 2020-12-15T09:33:34.082Z · LW(p) · GW(p)
Wow. So glad I ended up on a Goodreads review for the main IFS book where this article was recommended. Just wanted to say thank you for the metaphor presented, really helpful.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2020-12-15T10:54:06.759Z · LW(p) · GW(p)
Glad it was of use! :)
comment by ioannes (ioannes_shade) · 2019-01-26T17:55:12.940Z · LW(p) · GW(p)
So I finally read up on it, and have been successfully applying it ever since.
Could you give some examples of where you've been applying IFS and how it's been helpful in those situations?
Replies from: Kaj_Sotala, waveman↑ comment by Kaj_Sotala · 2019-01-28T11:18:44.634Z · LW(p) · GW(p)
So I find IFS, Focusing, IDC, and some aspects of TMI-style meditation to basically have blended together into one big hybrid technique for me; they all feel like different aspects of what's essentially the same skill of "listening to what your subagents want and bringing their desires into alignment with each other"; IFS has been the thing that gave me the biggest recent boost, but it's not clear to me that I'm always doing "entirely pure IFS", even though I think there's nearly always a substantial IFS component. (Probably most important has been the part about getting into Self, which wasn't a concept I explicitly had before this.)
That said, a few examples. I already mentioned a few in an earlier post [LW · GW]:
My experience is that usually if I have an unpleasant emotion, I will try to do one of two things: either reject it entirely and push it out of my mind, or buy into the story that it’s telling and act accordingly. Once I learned the techniques for getting into Self, I got the ability to sort of… just hang out with the emotion, neither believing it to be absolutely true nor needing to show it to be false. And then if I e.g. had feelings of social anxiety, I could keep those feelings around and go into a social situation anyway, making a kind of mental move that I might describe as “yes, it’s possible that these people all secretly hate me; I’m going to accept that as a possibility without trying to add any caveats, but also without doing anything else than accepting its possibility”.
The consequence has been that this seems to make the parts of my mind with beliefs like “doing this perfectly innocuous thing will make other people upset” actually update their beliefs. I do the thing, the parts with this belief get to hang around and observe what happens, notice that nobody seems upset at me, and then they are somewhat less likely to bring up similar concerns in the future.
In terms of global workspace theory, my model here is that there’s a part of the mind that’s bringing up a concern that should be taken into account in decision-making. The concern may or may not be justified, so the correct thing to do is to consider its possibility, but not necessarily give it too much weight. Going into Self and letting the message stay in consciousness this way seems to make it available for decision-making, and often the module that’s bringing it up is happy to just have its message received and evaluated; you don’t have to do anything more than that, if it’s just holding it up as a tentative consideration to be evaluated.
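(A very loose toy sketch of this model, in the spirit of the robot design from the post - module names, options, and weights are all invented for illustration - might treat a concern as a weighted consideration posted to the workspace rather than a command:)

```python
# Toy global-workspace sketch: a module posts a concern into the
# workspace; decision-making weighs it as one consideration among many
# instead of suppressing it or obeying it outright. All module names,
# options, and weights are invented for illustration.

workspace = []

def post_concern(source, message, weight):
    # A module broadcasts a concern into consciousness.
    workspace.append((source, message, weight))

def decide(options):
    # Each option's score is its base appeal minus the weight of any
    # concern that mentions it: concerns inform, they don't veto.
    def score(option):
        penalty = sum(w for _, msg, w in workspace if option in msg)
        return options[option] - penalty
    return max(options, key=score)

post_concern("social-anxiety module", "going to the party might get you judged", 0.3)

options = {"going to the party": 0.8, "staying home": 0.4}
print(decide(options))  # concern is weighed but doesn't dominate: "going to the party"
```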
If I had to name one single biggest object-level benefit from IFS, it would be this one: a gradual reduction of my remaining unfounded social anxieties, which is still ongoing but seems to be pretty well on track to eliminating all of them.
This ties into the more meta-level thing that there's less and less of a feeling that negative emotions are something that I need to avoid, or that I would need to fight against my own mind. Now I don't claim to be entirely Zen at all times, and there's still stuff like stress or exhaustion that can make me feel miserable, but at least assuming relatively "normal" conditions... there's increasingly the feeling that if I find myself experiencing procrastination, or feeling bad about something, then that involves some subagents not being in agreement about what to do, and I can just fix that. (Again, this is not to say that this process would cause me to only feel positive emotions at all times: sometimes feeling a negative emotion is the mind-system's endorsed response to a situation. But then when the system as a whole agrees with it, it doesn't feel bad in the same way.)
There are a bunch of examples of minor fixes along the lines of the example from the same post:
E.g. a while back I was having a sense of loneliness as I lay down for a nap. I stepped into the part’s perspective to experience it for a while, then unblended; now I felt it as a black ice hockey puck levitating around my lower back. I didn’t really do anything other than let it be there, and maintained a connection with it. Gradually it started generating a pleasant warmth, and then the visualization transformed into a happy napping cartoon fox, curled up inside a fireball that it was using as its blanket. And then I was no longer feeling lonely.
This has gotten to the slightly annoying point that I often find myself "no longer being able" to say things like "I have a mental block/emotional aversion against doing X" or "I feel bad because Y", because if I have a good enough handle on the situation to be able to describe it in such detail, then I can often just fix it right away, without needing to talk about it to someone else. Recent fixes in this category include:
- Recognizing that I should get more exercise and getting a subscription to the nearby gym, after living within a five-minute walk of it for almost a year and never getting around to visiting it before.
- Managing to actually write my previous post [? · GW] in this sequence, which felt like a relatively boring thing to do since I was just summarizing someone else's work; several blocks came up in the process of doing that, which I then dealt with one at a time, until I could just finish it relatively painlessly.
- Emotional issues relating to things like being too sensitive to the pain of others, to the point of being frequently upset about various specific things in the world which are horrible, and having difficulties setting my own boundaries if it felt like I could sacrifice some of my well-being in order to make someone else better off.
Some exceptions to the "I can just fix it when I'm feeling bad" thing include:
- if the issue is actually caused by someone else, e.g. someone else is acting in a way which is preventing me from achieving my needs
- the problem is caused by a physical issue that I have, such as being hungry, low on sleep, or having such a low level of physical arousal that I get stuck on low-activation energy behaviors
- there's something else in the external environment that causes an actual concrete problem that I don't have e.g. the skills to deal with myself, so can't just institute an internal fix
Also, I used to think that I'd lost out: when I had the chance to experience some things, I failed to take that chance, didn't get them, and now it's too late. For instance, a chance to focus on my studies free of stress, or experiencing a happy and lasting relationship when young and growing up together with a close partner.
But after doing some IFS and TMI work around those things, I've sometimes been spontaneously experiencing the same kinds of Self-like emotional sensations ("felt senses", to use the Focusing term) that I previously thought that I would only have had if I'd gotten those things.
So I suspect that my "I had the chance to experience X, but lost it because of life circumstance Y" better translates to "I previously had access to a certain aspect of being in Self, which frequently happened in the context of X, but had that access blocked after Y". Examples:
1) A chance to focus on my studies free of stress. When I graduated high school, I was really into learning and studying, and excited about the possibility of spending several years in university doing just that. And for a while it was like that, and I really enjoyed it. But then I burned out, and the rest of it was just desperately trying to catch up on my studies under a lot of stress; I have never again had that opportunity to focus on nothing but studying, free to think about nothing else.
Except about a month ago I started reading a textbook, with that study time squarely sandwiched between a dozen other things I should be doing, and... that felt sense of being able to just focus on studies and nothing else was there again. Apparently it didn't require the freedom to spend years at a time just studying; being able to time-box a few hours from a day was enough. But of course, a few hours hadn't been enough to bring that feeling back before. Now it felt more like just enjoying learning, in a way which I hadn't remembered for a long time.
So apparently there was something like, previously being able to just focus on the pleasure of learning had been one way to get myself into Self, but afterwards there had been a priority override which had been left active and blocked that access. After I did things to address that override, I could get into Self that way again, and it turned out that feeling this way wasn't a unique opportunity specific to one part of my life which I had now forever lost.
2) The relationship thing is harder to explain, but there's something analogous to the study thing in that... I recalled experiencing a feeling of openness and optimism towards another person, specifically in the context of my first crushes and with my first girlfriend, which I had never quite experienced the same way afterwards. And the way I interpreted that was something like, that was the experience you get when you consider potential or actual partners with the unique openness of being young, when I was still quite naive about things but also not particularly cynical or jaded.
And there was an implicit notion of... I didn't dissect this so explicitly until recently, but I think that a part of me was making the assumption that if I'd ended up in a lasting relationship with someone back then, then that relationship would somehow have preserved that felt sense of openness, which I didn't experience as surviving into my later relationships. Of course, I didn't explicitly think that it would have preserved that felt sense. Rather it was more that the memory of that felt sense was associated with my memory of how I experienced romance back then, and the combination of those memories was associated with a sense of loss of what could have been.
Until about a month ago, when that felt sense of openness and optimism towards a person suddenly popped up while I was talking with 1) my housemate about random stuff for 15 minutes and 2) an old acquaintance on the bus for 5 minutes. And it also lingered around generally in a milder form when I wasn't even in anyone's company, just doing stuff by myself.
So I think that my mind had recalled that there was a really nice felt sense associated with my teenage crushes, and made the assumption that if I'd managed to get into a lasting relationship back then, that would have preserved the felt sense in question. But actually 1) the relationship itself wasn't the point, the nice felt sense was, and 2) the felt sense wasn't solely about romantic relationships in the first place; it was about having a particular quality of Self which had since then gotten blocked due to some ongoing override.
(I still haven't permanently addressed this override; it seems like it came back since then, and those specific sensations of Self have again been missing. But I expect to eventually be able to figure out how to integrate the specific managers and exiles which are behind those sensations being blocked.)
A somewhat different framing of this would be in terms of emotional unclogging. Something like: as a teenager there were some aspects of me that were less clogged, though I still needed the context of a romantic relationship to unclog them enough to access those aspects. Afterwards access to those aspects of me got more clogged, so that I couldn't access them even in the context of a relationship anymore, so I thought that I'd lost my chance of ever experiencing those feelings again. And then I did some more unclogging work with IFS and related techniques, and suddenly I started having access to those feelings even when talking with somewhat random people.
↑ comment by waveman · 2019-01-27T03:01:24.219Z · LW(p) · GW(p)
I am not OP but I can give an example.
As background there are some activities that are general purpose feeling obliterators and thus are commonly used by firefighters: binge-eating, drinking alcohol, drugs, sex, TV, video games...
I have been fighting with my weight for many (26!) years. I did lose a lot of weight, but was still stuck at BMI 26 and could not lose that last 7 kg. Using the IFS process I identified the firefighters which used eating to make various feelings go away:
Social stress, anxiety about food being available (from when I was young; this part is "Jimmi"), feelings of emotional deprivation (childhood situation), feelings of frustration when I could not understand something, feeling tired, feeling frightened (childhood situation).
Once I connected with these protectors and made friends with them, connected (with their permission) with the original exiles, and established that the problems have solutions, I have been able to stick to my diet for 50 days straight and lose 2.5 kg in less than two months. This takes me almost halfway to my target.
As an example of how much has changed, I have had a packet of chocolate biscuits (https://en.wikipedia.org/wiki/Tim_Tam) in my refrigerator for the last few weeks with no drama at all about being tempted to eat them.
Why do I have a packet of tim-tams in the fridge?
This is a possibly interesting aspect of the IFS process. Having satisfied all the exiles that their problem is solved, you are supposed to check in with them every day for a week. You should also check in with the protectors every day, to make sure that they are happy too and that they like the new roles they have chosen for themselves.
Well, on the second check-in the character Jimmi (above) said that he accepted, in theory, that nowadays I can always get the food I need, but he wanted actual proof. So we went and bought various foods that 8-year-old Jimmi liked. Thus the tim-tams. This then satisfied him. But I ate them as part of my diet, e.g. this morning I had two tim-tams as the carb/fat portion of my breakfast. They were delicious!
I give this as an example of where thinking of the parts as characters can sometimes help. How you rationalize them is less important.
LWers can get too hung up on the theory of things. "I know it works in practice but does it work in theory" as one economist said.
All models are wrong but some are useful. I find this one useful.
As OP pointed out, IFS is very useful for understanding other people. Additionally if you model someone's bad behavior as a part flaring up, it can help you to be more compassionate.
comment by Unnamed · 2021-01-25T08:25:36.284Z · LW(p) · GW(p)
The back-and-forth (here and elsewhere) between Kaj & pjeby was an unusually good, rich, productive discussion, and it would be cool if the book could capture some of that. Not sure how feasible that is, given the sprawling nature of the discussion.
comment by Multicore (KaynanK) · 2020-12-02T12:52:29.667Z · LW(p) · GW(p)
Nomination for 2019 review:
I originally tried to read Self-Therapy, but bounced off of it because it was aimed too much at people with major life-impacting traumas. This post was much more approachable, and I liked the robot metaphor. Since reading it, I started to notice the ways in which my own mind is behaving like a manager or firefighter with respect to embarrassing incidents in the past.
comment by rk · 2019-02-20T14:55:19.261Z · LW(p) · GW(p)
I came back to this post because I was thinking about Scott's criticism of subminds where he complains about "little people who make you drink beer because they like beer".
I'd already been considering how your robot model is nice for seeing why something submind-y would be going on. However, I was still confused about thinking about these various systems as basically people who have feelings and should be negotiated with, using basically the same techniques I'd use to negotiate with people.
Revisiting, the "Personalized characters" section was pretty useful. It's nice to see it more as a claim that '[sometimes for some people] internal processes may be represented using social machinery' than 'internal agents are like fighting people'.
comment by rk · 2019-01-26T14:53:00.411Z · LW(p) · GW(p)
I really enjoyed this post and starting with the plausible robot design was really helpful for me accessing the IFS model. I also enjoyed reflecting on your previous objections as a structure for the second part.
The part with repeated unblending sounds reminiscent of the "Clearing a space" stage of Focusing, in which one acknowledges and sets slightly to the side the problems in one's life. Importantly, you don't "go inside" the problems (I take 'going inside' to be more-or-less experiencing the affect associated with the problems). This seems pretty similar to stopping various protectors from placing negative affect into consciousness.
I noticed something at the end that it might be useful to reflect on: I pattern matched the importance of childhood traumas to woo and it definitely decreased my subjective credence in the IFS model. I'm not sure to what extent I endorse that reaction.
One thing I'd be interested in seeing expanded on: you mention you think that IFS would benefit most people. What do you mean by 'benefit' in this case? That it would increase their wellbeing? Their personal efficacy? Or perhaps that it will increase at least one of their wellbeing and personal efficacy but not necessarily both for any given person?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2019-01-28T11:23:08.046Z · LW(p) · GW(p)
I really enjoyed this post and starting with the plausible robot design was really helpful for me accessing the IFS model. I also enjoyed reflecting on your previous objections as a structure for the second part.
Thanks, that's very nice and specific feedback. :)
The part with repeated unblending sounds reminiscent of the "Clearing a space" stage of Focusing, in which one acknowledges and sets slightly to the side the problems in one's life.
Yeah, these feel basically like the same kind of thing. I find that Focusing and IFS have basically blended into some hybrid technique for me, with it being hard to tell the difference anymore.
you mention you think that IFS would benefit most people. What do you mean by 'benefit' in this case? That it would increase their wellbeing? Their personal efficacy? Or perhaps that it will increase at least one of their wellbeing and personal efficacy but not necessarily both for any given person?
Possibly combined with other related practices, such as Focusing: elimination of internal conflicts, increased well-being due to improved access to Self, better ability to do things which feel worth doing. The personal examples in my other comment [LW(p) · GW(p)] may give a better idea.
comment by plex (ete) · 2021-01-08T16:07:50.548Z · LW(p) · GW(p)
I've read a lot of books in the self-help/therapy/psychology cluster, but this is the first which gives a clear and plausible model of why the mental structure they're all working with (IFS exiles, EMDR unprocessed memories, trauma) has enough fitness-enhancing value to evolve despite the obvious costs.
comment by Peter Chang · 2020-08-16T16:52:34.600Z · LW(p) · GW(p)
I'm a little late to the party, but I just read through and did the exercises in Self-Therapy last week, and I'm feeling very excited about how many components of the model "clicked" with me. Reading this post gave me insights into why those components resonated with me, so thank you very much for taking the time to write up this supremely helpful post!
The one aspect of the model that I've been having a lot of trouble with (which I view as problematic, since the entire model essentially hinges on this practice) is having an "organic" conversation with different parts. After identifying a part that I want to work with, I immediately intellectualize that part and build a predictive model of how the part might respond to some inquiries that I have in mind.
As a result, I don't often have the sort of emotional catharsis that I observe in the myriad transcripts of how Jay Earley uses this model with his patients in the book. More often, the process goes like this for me: I identify some part A and try my best to personify it. I know the basic questions I will ask him, and I will think of his/her possible responses. Since part A isn't an "organic" character independent of my thought process, I can't spontaneously produce the "other side" of the conversation, and hence it feels more like I'm talking to myself than with another person. Thus, I am uncertain whether I will be able to uncover some deep, subconscious trauma through this process, since I am heavily intellectualizing it.
For example, the thought process behind trying to address the trailhead of procrastination goes as follows:
- Is Procrastination its own part? Maybe so. I'll give him a character. I had a roommate ("John") who had a lot of issues with procrastination, so his visual image feels appropriate.
- I'll try talking with John. "Hey John, what are you afraid will happen if you stop procrastinating?"
- No response.
- Of course, there is no response; John only exists in my imagination! It's foolish to expect a spontaneous response from a part of myself.
- Let's see. What would John possibly respond to a question like that? Why do I procrastinate?
- I think I procrastinate because I am scared of commitments. If I am distracted and explore different topics on LessWrong, I will be able to avoid commitments. Okay, that seems like a reasonable response that John may have.
- John: "I am afraid if I don't protect you, you will commit to a career that you will end up resenting."
- Okay, good. Now I have to learn about the exile that John is protecting.
- "Fair enough. I hear your concern. Would it be okay for you to step aside for a few minutes so I can get to know the exile that you are protecting?"
- How would John respond? I don't get any spontaneous reaction to the question, so I'll think about this. Hopefully he will say yes. Since I want to help myself get better, John being an extension of myself, would also want to help myself get better.
- John: "Yes, I'll step aside."
- I visualize John getting up from his couch and walking away. Now, where would the exile be? Probably under the cushion that he was lying on. I lift up the cushion.
- No spontaneous "discovery" of an exile hiding under the cushion.
- Who could reasonably be hiding under the protection of procrastination? Maybe I had a childhood trauma where I felt a lot of anxiety over having to commit to a particular choice. Let's see. My dad had to leave the country for a year when I was six years old, and I had to decide whether I wanted to stay with my mom or my dad. That was probably a traumatic experience. So, the exile is probably my six-year-old self. Okay.
- I imagine a six-year-old Peter hiding underneath the cushion.
- "Hey, Peter!"
- No spontaneous response.
- How would a six-year-old wounded child respond to this? Let's think....
And so on... If anyone here has been benefitting from IFS, I'd really appreciate any tips!
Even without the organic discovery of trauma and experiencing a spontaneous catharsis, it's been very helpful to try to fit my experience into the IFS framework, but I would love to see if I'm doing anything wrong and if I can implement IFS better as I continue practicing it! Thank you.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2020-08-16T20:57:25.979Z · LW(p) · GW(p)
Happy to hear that the post was useful to you!
After identifying a part that I want to work with, I immediately intellectualize that part and build a predictive model of how the part might respond to some inquiries that I have in mind
First piece of advice: don't do that. :-) I feel pretty comfortable saying that this approach is guaranteed not to produce any results. Intellectualizing parts will basically only give you the kind of information that you could produce by intellectual analysis, and for intellectual analysis you don't need IFS in the first place. Even if your guesses are right, they will not produce the kind of emotional activation [LW · GW] that's necessary for change.
A few thoughts on what to do instead...
Is Procrastination its own part? Maybe so. I'll give him a character. I had a roommate ("John") who had a lot of issues with procrastination, so his visual image feels appropriate.
It sounds (correct me if I'm wrong) like you are giving the part a visual appearance by thinking of the nature of the problem, and choosing an image which seems suitably symbolic of it; then you try to interact with that image.
In that case, you are picking a mental image, but the image isn't really "connected" to the part, so the interaction is not going to work. What you want to do is to first get into contact with the part, and then let a visual image emerge on its own. (An important note: parts don't have to have a visual appearance! I expect that one could do IFS even if one had aphantasia. If you try to get a visual appearance and nothing comes up, don't force it, just work with what you do have.)
So I would suggest doing something like this:
- Think of some concrete situation in which you usually procrastinate. If you have a clear memory of a particular time, let that memory come back to mind. Or you could imagine that you are about to do something that you've usually been procrastinating on. Or you could just pick something that you've been procrastinating on and try doing it right now, to get that procrastination response.
- Either way, what you are going for are the kinds of thoughts, feelings, and bodily sensations that are normally associated with you procrastinating. Pay particular attention to any sensations in your body. Whatever it is that you are experiencing, try describing it out loud. For example: "when I think of working on my project, I get an unpleasant feeling in my back... it's a kind of nervous energy. And when I try to focus my thoughts on what I'm supposed to do, I... my attention just keeps kind of sliding off to something else."
- The ellipses in that example are to highlight that there's no rush. Take your time settling into the sensations. Often, if you start with a coarse description, such as "an unpleasant feeling", you might get more details if you just keep your attention on it and see whether you could describe it more precisely: "... it's a kind of nervous energy".
- You're not thinking about parts yet. You're just imagining yourself in a situation and then describing whatever sensations and thoughts are coming up.
- If you find yourself describing everything very quickly, you are probably not paying attention to the actual sensations. If you find yourself pausing, looking for the right word, finding a word that's almost it but having an even better one lurking on the tip of your tongue... then you're much more likely doing it right.
- Sometimes you don't get bodily sensations, but you might get various thoughts, mental images, or desires. That's fine too. Describe them in a similar way.
- If you find yourself being too impatient to do this properly, working with a friend whose only job is to sit there and listen often helps. You can think of yourself as doing your best to communicate the experience to your friend.
- Once you have a good handle on the sensations, you can let your attention rest on them and ask yourself, "if these sensations had a visual appearance, what would it be?".
- Don't try to actively come up with an answer. Just keep your attention on the sensations, ask yourself the question, and see if any visual image emerges on its own. If you get a sense of something but it's vague, you can try saying a few words of what you do manage to make out and see if that brings out additional details.
- "Ask yourself" here doesn't mean that you would need to address any external entity, or do anything else special. Rather, just... kind of let your mind wonder about the question, and see if any answer emerges.
- The image doesn't need to look like anything in particular. It doesn't need to be a human, or even a living being. Though it can be! But it can be a swirling silver vortex, or a wooden duck, or whatever feels right.
- If no visual image emerges, don't sweat it, and don't try to force one. Just stay with the sensations.
- At this point, you can see if you could give this bundle of sensations and (maybe) images a name. Again, don't think about it too intellectually, just see if there would be anything that fits your experience. If you had a nervous energy in your back, maybe it's called "nervousness". If the mental image you got was of a swirling silver vortex in your back, maybe it's "silver vortex".
- Now you can start doing things like seeing if you could communicate with this part, check how you feel towards it, etc.
- When you are asking the part questions, its answers don't need to actually be any kind of mental speech. For instance, if you ask it what it is trying to do, you might get a vague intuition, a flash of memory, or a mental image. The answer might feel cryptic at first. If so, you can again describe it out loud, and wait to see if more details emerge.
- If you think you have a hunch of what it's about, you can try asking the part whether you've understood correctly. Asking verbally is one way, but you can also just kind of... hold up your current understanding against the part, and see whether you get a sense of it resonating.
- If the part tells you that you did understand it correctly, you can then use the same approach to ask it whether you've understood everything about this, or whether there are still more pieces that you are missing.
- Generally avoid the temptation to go into intellectual analysis to figure out what this is about. (You can ask any intellectualizing parts to move aside.) Often there's an emotional logic which will make sense in retrospect, but which is impossible to figure out on an intellectual level beforehand. If you - say - get a particular memory which you recognize but don't understand how it's related to this topic, just stay with the memory, maybe describe it out loud, and see whether more details would emerge.
- It's okay if you don't figure it out during one session. Let your brain process it.
- You might arrive at something like a "classic IFS" situation, where a part has a distinct anthropomorphic appearance and you are literally having a conversation with it. Or your parts might be nothing like this, and be just a bundle of sensations whose "answers" consist of more sensations and memories coming to your mind. Either one is fine.
- Throughout the process, the main thing is to work with that which comes naturally, and not try to force anything. (If you do feel a desire to force things into a particular shape, or guide the process to happen in a particular way, that's a part. See what it's trying to do and whether it would be willing to move aside.)
↑ comment by Peter Chang · 2020-08-17T04:21:15.922Z · LW(p) · GW(p)
Thank you so much for your detailed response!
That makes a lot of sense. I think I need to focus on working with my "impatience" part before I can truly get into the kind of patient and tolerant Self that you are describing.
I think I might have gotten a bit derailed due to my experience training for memory competitions. I had to come up with 2700+ very specific visual images of characters each corresponding to a pair of playing cards, and so I've developed this sometimes-annoying habit of quickly making a tenuous association between any information I process and some figure familiar to me.
Paying careful attention to the relatively-reliable physical sensations that are triggered with particular trailheads and starting from there sounds like a great idea.
Thanks again!
comment by MrAnalogy@gmail.com · 2020-05-06T14:21:50.290Z · LW(p) · GW(p)
Seems like directly entering a Catastrophic situation (burning your hand on a hot stove) without going through Distress would lead to a more severe Manager (or Exile), like PTSD. I.e., a soldier walking into a firefight vs. being shot by a sniper with no warning. Related: losing a limb suddenly vs. having it amputated (with advance warning) seems to make it more likely you'd have Phantom Limb pain, b/c your mind never registered the limb was missing.
comment by ioannes (ioannes_shade) · 2019-08-05T16:30:41.841Z · LW(p) · GW(p)
I'm finding it fruitful to consider the "exiles" discussion in this post alongside Hunting the Shadow.
Replies from: Kaj_Sotala, Kaj_Sotala↑ comment by Kaj_Sotala · 2019-08-05T18:03:32.553Z · LW(p) · GW(p)
It doesn't really fit nicely into the simplified version of IFS that I presented in this post, but in the context of Hunting the Shadow, it's worth noting that some protector parts can get exiled too.
↑ comment by Kaj_Sotala · 2019-08-07T12:36:46.965Z · LW(p) · GW(p)
(I now talk about exiled protectors a bit in "Subagents, neural Turing machines, thought selection, and blindspots [LW · GW]"; quite relevant for the topic of hunting one's shadow, if I may say so myself)
comment by Kenny · 2019-02-20T16:17:46.294Z · LW(p) · GW(p)
This is a great post; particularly in how you narrate bouncing off of it and then building a model by which it or something like it is plausible.
I actually had the luck of having an in-person demonstration of this (IFS-style therapy) from someone in the LW/rationalist community years ago and I've been discussing it and recommending it to others ever since.
comment by sampe · 2019-02-17T13:28:20.382Z · LW(p) · GW(p)
Wow, this is all very interesting.
I have been using this framework for a bit and I think I have found some important clues about some exile-manager-firefighter dynamics in myself. Although I'm just starting and I still have to clarify my next steps, I feel hopeful that this is the right direction.
There are some things which I would like to know more about. Feel free to answer any.
Which agent should the sympathetic listener be talking to? The manager, the exile, or both?
Assuming that one correctly identifies which thoughts (and ultimately, which situations) a manager deems dangerous, and that one successfully does cognitive defusion, to what extent is it feasible, in your opinion, to have the manager (the exile) update by just talking to them vs by experiencing the dangerous situation again but positively? To what extent is it possible that, even when a sympathetic listener talks with the manager/exile, they still don't update easily until they directly see some experiences which contradict what they believe? Which things make updating by talking/experiencing harder/easier?
↑ comment by Kaj_Sotala · 2019-02-17T16:41:02.498Z · LW(p) · GW(p)
Glad to hear it's been of use!
Which agent should the sympathetic listener be talking to? The manager, the exile, or both?
First with any of the managers which might be protecting the exiles. Eventually they might give access to the exile, but it's important to not try to rush through them. You only go to the exile after the managers have agreed to give you access to it: bypassing them risks causing damage because the managers had concerns which weren't taken into account. (Self-Therapy has detailed instructions on this.) You might e.g. end up exposing an exile in a situation where you don't have the resources to handle it, and then instead of healing the exile, you end up worsening the original trauma. That will also have the added effect of making your managers less likely to trust you with access to the exile again.
Though sometimes I've had exiles pop up pretty spontaneously, without needing to negotiate with managers. In those situations I've just assumed that all managers are fine with this, since there's no sense of a resistance to contacting the exile. If that happens then it's probably okay, but if it feels like any managers are getting in the way, then address their concerns as much as possible. (As the instructor said in an IFS training I did: "to go fast, you need to go slow".)
IFS also recommends checking back with the managers after healing the exile, so that they can see that the exile is actually healed now and that they can behave differently in the future. Also, you may want to keep checking back with the exile for a while afterwards, to ensure that it's really been healed.
Assuming that one correctly identifies which thoughts (and ultimately, which situations) a manager deems dangerous, and that one successfully does cognitive defusion, to what extent is it feasible, in your opinion, to have the manager (the exile) update by just talking to them vs by experiencing the dangerous situation again but positively?
Depends. I think either is possible, but I don't have a hard and fast rule: usually I've just gone with whatever felt more right. But I'd guess that the cases where you can get parts to update just by talking to them are ones where you've already accumulated plenty of evidence about how things are, and the relevant parts just need to become aware of it. E.g. if you had some challenge which was very specifically about your childhood environment, then it shouldn't be too hard to let your parts know that you're no longer in that environment.
On the other hand, for some issues (e.g. social anxiety), the parts might have kept you from ever testing the safety of most situations. For instance, if you're scared of talking to strangers, then you generally won't be talking to strangers. And when you do, you will have parts screaming at you to get out of that situation, which makes it intrinsically unpleasant and won't let you experience it as safe. In that case, you won't actually have collected the evidence needed for making the update, so you need to first persuade the parts to agree that collecting it is sufficiently safe. Then you can go out and get it.
Replies from: Elo, sampe
↑ comment by Elo · 2019-02-17T19:00:30.155Z · LW(p) · GW(p)
One of the skills here is an open-minded flow of discussion between parts.
To get to an open-minded discussion, the agents who are shutting down discussions need to form an agreement to discuss. That means no distraction, no sleepiness, no anxiety around the conversation.
This open discussion can be done one part at a time, or at the global level of a "discussions are safe" paradigm.
If "discussions are safe", then it's possible to ask the question, "what can't we talk about?" and find content/parts there. (There are still things I don't need to talk about very much, but I have no problem with them or with talking about them. For example, I prefer to look in an optimistic direction and point my mind there, but I have no problem digging up all the fears, doubts and discomforts if that's needed.)
comment by [deleted] · 2019-02-02T23:53:27.914Z · LW(p) · GW(p)
Really enjoyed the post, thanks!
I started the Earley book and it's definitely a struggle. I can usually handle "soft skills" books like this one without getting frustrated by the vague, hand-wavy models (I really enjoyed Gendlin's Focusing, for example), but this one's been especially hard. That said, having your model in mind while I'm reading has kept me going, as I'm using it as a sort of Rosetta stone for some of Earley's claims.
comment by avturchin · 2019-01-31T20:52:45.609Z · LW(p) · GW(p)
When I first read the post, I expected that "family systems" was related to Hellinger's family constellations: a different method of psychotherapy which assumes a completely different set of "subagents" to define the human mind and its problems. Hellinger's constellation method assumes that a person's actual family relations have the biggest impact on their wellbeing (and motivation), and that this family structure is somehow internalised. The family structure can be invoked by a group of people (assigned by a psychotherapist) playing the roles of "father", "mother", etc., and this group can then be reorganised to be healthier.
https://en.wikipedia.org/wiki/Family_Constellations
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2019-01-31T22:07:56.228Z · LW(p) · GW(p)
Wow. I didn't expect to see a therapy approach based on morphic fields.
Replies from: avturchin
↑ comment by avturchin · 2019-02-01T09:33:04.676Z · LW(p) · GW(p)
I don't think its rational part is based on any "morphic fields". If a person thinks that her mother is a god, that her father was a devil, and suppresses any thoughts about her grandfather, that is an expected (but damaged) family structure imprinted in her brain, and she will repeat it when she tries to build her own relationships. The best way to learn more about family constellations is just to try them in a local group - at least in my case, it helped me to resolve a long conflict with my mother. A less effective option may be to read Bert Hellinger's early books: they provide the theory, but without some experience it may look a little strange.
comment by MrAnalogy@gmail.com · 2020-05-20T13:43:26.317Z · LW(p) · GW(p)
A visceral, real-world example:
Workers who are killed because they can't let go of their tools, since the tools have become part of their identity. I suspect there is a Part (in IFS parlance) that tells them "this is your identity".
From the book Range (highly recommended):
In four separate fires in the 1990s, twenty-three elite wildland firefighters refused orders to drop their tools and perished beside them. Even when Rhoades eventually dropped his chainsaw, he felt like he was doing something unnatural. Weick found similar phenomena in Navy seamen who ignored orders to remove steel-toed shoes when abandoning a ship, and drowned or punched holes in life rafts; fighter pilots in disabled planes refusing orders to eject; and Karl Wallenda, the world-famous high-wire performer, who fell 120 feet to his death when he teetered and grabbed first at his balance pole rather than the wire beneath him. He momentarily lost the pole while falling, and grabbed it again in the air. “Dropping one’s tools is a proxy for unlearning, for adaptation, for flexibility,” Weick wrote. “It is the very unwillingness of people to drop their tools that turns some of these dramas into tragedies.”
comment by Demon_Vanveen · 2020-12-08T18:51:00.602Z · LW(p) · GW(p)
Gensler is a practical/applied framework of Freud, whose influence continues to grow in the humanities (outside of the psychology department, wherever that chimera sits). Most of the commentary above would benefit from a basic understanding of primary Freud (Interpretation of Dreams, Ego and Id, Basic Introduction, Civilization and its Discontents). The key to Freud is his dogged insistence on the importance of non-empirical structures (metaphor, analogy) to human thought. My personal belief is that these are incidental artifacts of the development of language ~200-300kya, but on the other hand they have come both to define and to enable the entire consciousness system.
Not only can you talk to these "agents," you should. You must--it's the only tool you have available.
The water that "science of mind" swims in is Freud's, like it or not. It's not particularly clean or clear, but a swimmer can't ignore it. In short, a lot of the work was done a long time ago, and we are living in a world that is the direct result of that work. For this issue, there's no greater What/How/Why.
comment by Mikhail Zybin · 2020-12-02T07:52:55.593Z · LW(p) · GW(p)
This is very similar to Lifespan Integration Therapy, which I had in April 2020. The logic of this therapy is to connect you with your memories and dissolve past traumas. I think I greatly benefited from it, because I have stopped being afraid of certain moments of my life associated with having depression.
In general, I am reading this sequence because one of my dreams is to understand what consciousness and enlightenment are. There are few gears in my current models of these phenomena.
comment by MrAnalogy@gmail.com · 2020-05-19T17:25:44.308Z · LW(p) · GW(p)
A psychologist told me that the newer "version" of this is Coherence Therapy. I've only just started to read up on this.
I've gotten enormous benefit just from being aware of my "parts", without even distinguishing between the roles they play. Just realizing that they aren't having the effect they THINK they are.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2020-05-19T18:18:21.751Z · LW(p) · GW(p)
See my later post [LW · GW] for a discussion of coherence therapy and its connection to IFS. :)
Replies from: MrAnalogy@gmail.com
↑ comment by MrAnalogy@gmail.com · 2020-05-21T15:09:12.085Z · LW(p) · GW(p)
Yep, read that GREAT post.
Any other suggestions for a starting point on Coherence Therapy?
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2020-05-26T11:55:58.941Z · LW(p) · GW(p)
Thanks! I got some value out of this training guide, though it's primarily aimed at people who already have some therapy training.