Kaj's shortform feed
post by Kaj_Sotala · 2018-03-31T13:02:47.793Z · LW · GW · 35 comments
Similar to other people's shortform feeds, short stuff that people on LW might be interested in, but which doesn't feel like it's worth a separate post. (Will probably be mostly cross-posted from my Facebook wall.)
35 comments
Comments sorted by top scores.
comment by Kaj_Sotala · 2019-12-28T20:34:49.744Z · LW(p) · GW(p)
Occasionally I find myself nostalgic for the old, optimistic transhumanism of which e.g. this 2006 article is a good example. After some people argued that radical life extension would increase our population too much, the author countered that oh, that's not an issue, here are some calculations showing that our planet could support a population of 100 billion with ease!
In those days, the ethos seemed to be something like... first, let's apply a straightforward engineering approach to eliminating aging, so that nobody who's alive needs to worry about dying from old age. Then let's get nanotechnology and molecular manufacturing to eliminate scarcity and environmental problems. Then let's re-engineer the biosphere and human psychology for maximum well-being, such as by using genetic engineering to eliminate suffering and/or making it a violation of the laws of physics to try to harm or coerce someone.
So something like "let's fix the most urgent pressing problems and stabilize the world, then let's turn into a utopia". X-risk was on the radar, but the prevailing mindset seemed to be something like "oh, x-risk? yeah, we need to get to that too".
That whole mindset used to feel really nice. Alas, these days it feels like it was mostly wishful thinking. I haven't really seen that spirit in a long time; the thing that passes for optimism these days is "Moloch hasn't entirely won (yet [LW · GW])". If "overpopulation? no problem!" felt like a prototypical article to pick from the Old Optimistic Era, then Today's Era feels more described by Inadequate Equilibria and a post saying "if you can afford it, consider quitting your job now so that you can help create aligned AI before someone else creates unaligned AI and kills us all [LW · GW]".
Today's philosophy seems more like "let's try to ensure that things won't be quite as horrible as they are today, and if we work really hard and put all of our effort into it, there's a chance that maybe we and all of our children won't die." Most of the world-saving energy seems to have gone into effective altruism, where people work on issues like making the US prison system suck less or distributing bednets to fight malaria. (Causes that I thoroughly support, to be clear, but also ones where the level of ambition seems quite a bit lower than in "let's make it a violation of the laws of physics to try to harm people".)
I can't exactly complain about this. Litany of Tarski and all: if the Old Optimistic Era was hopelessly naive and over-optimistic, then I wish to believe that it was hopelessly naive and over-optimistic, and believe in the more realistic predictions instead. And it's not clear that the old optimism ever actually achieved much of anything in the way of its grandiose goals, whereas more "grounded" organizations such as GiveWell have achieved quite a lot.
But it still feels like there's something valuable that we've lost.
↑ comment by ozziegooen · 2019-12-28T21:48:25.577Z · LW(p) · GW(p)
For what it's worth, I get the sense that the Oxford EA research community is pretty optimistic about the future, but generally seem to believe the risks are just more pragmatic to pay attention to.
Anders Sandberg is doing work on the potential of humans (or related entities) expanding through the universe. The phrase "Cosmic Endowment" gets said here and there. Stuart Armstrong recently created a calendar [LW · GW] of the year 12020.
I personally have a very hard time imagining exactly what things will be like post-AGI or what we could come up with now that would make them better, conditional on it going well. It seems like future research could figure a lot of those details out. But I'm in some ways incredibly optimistic about the future. This model [LW · GW] gives a very positive result, though also a not very specific one.
↑ comment by ozziegooen · 2019-12-28T21:53:12.386Z · LW(p) · GW(p)
I think my personal view is something like, "Things seem super high-EV in expectation. In many ways, we as a species seem to be in a highly opportunistic setting. Let's generally try to be as careful as possible to make sure we don't mess up."
Note that high-EV does not mean high-probability. It could be that we have a 0.1% chance of surviving, as a species, but if we do, there would be many orders of magnitude net benefit. I use this not because I believe we have a 0.1% chance, but rather because I think it's a pretty reasonable lower bound.
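(As an illustrative back-of-the-envelope version of that reasoning, with numbers invented purely for the example rather than taken from ozziegooen: suppose survival has probability $p = 0.001$ and a surviving civilization is worth $10^6$ times today's value $V_{\text{now}}$, while extinction is worth roughly zero. Then

$$\mathbb{E}[V] = p \cdot 10^{6}\, V_{\text{now}} + (1-p) \cdot 0 \approx 1000\, V_{\text{now}},$$

so even a 0.1% survival probability leaves the expected value three orders of magnitude above the status quo.)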
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-12-28T21:28:45.002Z · LW(p) · GW(p)
I think that although the new outlook is more pessimistic, it is also more uncertain. So, yes, maybe we will become extinct, but maybe we will build a utopia.
↑ comment by Gordon Seidoh Worley (gworley) · 2019-12-29T17:25:51.152Z · LW(p) · GW(p)
It likely reflects a broader, general trend towards pessimism in our culture. Futurism was similarly pessimistic in the 1970s, and turned more generally optimistic in the 1980s. Right now we're in a pessimistic period, but as things change in the future we can probably expect more optimism, including within futurism, if the zeitgeist becomes more optimistic.
comment by Kaj_Sotala · 2019-11-12T09:23:05.314Z · LW(p) · GW(p)
Here's a mistake which I've sometimes committed and gotten defensive as a result, and which I've seen make other people defensive when they've committed the same mistake.
Take some vaguely defined, multidimensional thing that people could do or not do. In my case it was something like "trying to understand other people".
Now there are different ways in which you can try to understand other people. For me, if someone opened up and told me of their experiences, I would put a lot of effort into really trying to understand their perspective, to try to understand how they thought and why they felt that way.
At the same time, I thought that everyone was so unique that there wasn't much point in trying to understand them by any *other* way than hearing them explain their experience. So I wouldn't really, for example, try to make guesses about people based on what they seemed to have in common with other people I knew.
Now someone comes and happens to mention that I "don't seem to try to understand other people".
I get upset and defensive because I totally do, this person hasn't understood me at all!
And in one sense, I'm right - it's true that there's a dimension of "trying to understand other people" that I've put a lot of effort into, in which I've probably invested more than other people have.
And in another sense, the other person is right - while I was good at one dimension of "trying to understand other people", I was severely underinvested in others. And I had not really even properly acknowledged that "trying to understand other people" had other important dimensions too, because I was justifiably proud of my investment in one of them.
But from the point of view of someone who *had* invested in those other dimensions, they could see the aspects in which I was deficient compared to them, or maybe even compared to the median person. (To some extent I thought that my underinvestment in those other dimensions was *virtuous*, because I was "not making assumptions about people", which I'd been told was good.) And this underinvestment showed in how I acted.
So the mistake is that if there's a vaguely defined, multidimensional skill and you are strongly invested in one of its dimensions, you might not realize that you are deficient in the others. And if someone says that you are not good at it, you might understandably get defensive and upset, because you can only think of the evidence which says you're good at it... while not even realizing the aspects that you're missing out on, which are obvious to the person who *is* better at them.
Now one could say that the person giving this feedback should be more precise and not make vague, broad statements like "you don't seem to try to understand other people". Rather they should make some more specific statement like "you don't seem to try to make guesses about other people based on how they compare to other people you know".
And sure, this could be better. But communication is hard; and often the other person *doesn't* know the exact mistake that you are making. They can't see exactly what is happening in your mind: they can only see how you behave. And they see you behaving in a way which, to them, looks like you are not trying to understand other people. (And it's even possible that *they* are deficient in the dimension that *you* are good at, so it doesn't even occur to them that "trying to understand other people" could mean anything else than what it means to them.)
So they express it in the way that it looks to them, because before you get into a precise discussion about what exactly each of you means by that term, that's the only way in which they can get their impression across.
It's natural to get defensive when someone says that you're bad at something you thought you were good at. But the things we get defensive about are also things that we frequently have blindspots around. Now if this kind of thing seems to happen to me again, I try to make an effort to see whether the skill in question might have a dimension that I've been neglecting.
Once I've calmed down and stopped being defensive, that is.
(see also this very related essay by Ferrett)
comment by Kaj_Sotala · 2019-09-26T13:08:28.693Z · LW(p) · GW(p)
The essay "Don't Fight Your Default Mode Network" is probably the most useful piece of productivity advice that I've read in a while.
Basically, "procrastination" during intellectual work is actually often not wasted time, but rather your mind taking the time to process the next step. For example, if I'm writing an essay, I might glance at a different browser tab while I'm in the middle of writing a particular sentence. But often this is actually *not* procrastination; rather it's my mind stopping to think about the best way to continue that sentence. And this turns out to be a *better* way to work than trying to keep my focus completely on the essay!
Realizing this has changed my attention management from "try to eliminate distractions" to "try to find the kinds of distractions which don't hijack your train of thought". If I glance at a browser tab and get sucked into a two-hour argument, then that still damages my workflow. The key is to try to shift your pattern towards distractions like "staring into the distance for a moment", so that you can take a brief pause without getting pulled into anything different.
> I kept coming back to the Real Work about 1-20 minutes later. Mostly on the short end of that range. And then it didn’t feel like there was an obstacle to continuing anymore. I’d feel like I was holding a complete picture of what I was doing next and why in my head again. There’s a sense in which this didn’t feel like an interruption to Real Work I was doing.
>
> While writing this, I find myself going blank every couple of sentences, staring out the window, half-watching music videos. Usually for less than a minute, and then I feel like I have the next thing to write. Does this read like it was written by someone who wasn’t paying attention?
comment by Kaj_Sotala · 2024-02-02T10:49:39.990Z · LW(p) · GW(p)
I only now made the connection that Sauron lost because he fell prey to the Typical Mind Fallacy [LW · GW] (assuming that everyone's mind works the way your own does). Gandalf in the book version of The Two Towers:
> The Enemy, of course, has long known that the Ring is abroad, and that it is borne by a hobbit. He knows now the number of our Company that set out from Rivendell, and the kind of each of us. But he does not yet perceive our purpose clearly. He supposes that we were all going to Minas Tirith; for that is what he would himself have done in our place. And according to his wisdom it would have been a heavy stroke against his power.
>
> Indeed he is in great fear, not knowing what mighty one may suddenly appear, wielding the Ring, and assailing him with war, seeking to cast him down and take his place. That we should wish to cast him down and have no one in his place is not a thought that occurs to his mind. That we should try to destroy the Ring itself has not yet entered into his darkest dream. In which no doubt you will see our good fortune and our hope. For imagining war he has let loose war, believing that he has no time to waste; for he that strikes the first blow, if he strikes it hard enough, may need to strike no more. So the forces that he has long been preparing he is now setting in motion, sooner than he intended. Wise fool. For if he had used all his power to guard Mordor, so that none could enter, and bent all his guile to the hunting of the Ring, then indeed hope would have faded: neither Ring nor Bearer could long have eluded him.
comment by Kaj_Sotala · 2019-08-28T14:41:03.500Z · LW(p) · GW(p)
I was thinking of a friend and recalled some pleasant memories with them, and it occurred to me that I have quite a few good memories about them, but I don't really recall them very systematically. I just sometimes remember them at random. So I thought, what if I wrote down all the pleasant memories of my friend that I could recall?
Not only could I then occasionally re-read that list to get a nice set of pleasant memories, that would also reinforce associations between them, making it more likely that recalling one - or just being reminded of my friend in general - would also bring to mind all the others.
(This was in part inspired by Steve Andreas's notion of building a self-concept. There you build self-esteem by taking memories of yourself where you exhibited some positive quality, and intentionally associate them together under some heading such as "lovable" or "intelligent", so that they become interconnected exemplars of a quality that you have rather than being isolated instances.)
So I did, and that usual thing happened where I started out with just three or so particularly salient memories, but then in the process of writing them down my mind generated a few more, until I had quite a long list. It felt really good; now I want to write similar lists about all my close friends.
Interestingly I noticed that the majority of the memories on my list were ones where I'd helped my friend and they'd been happy as a result, rather than the other way around. This does say something about me finding it easier to help people than to ask for help, but might also be related to the finding that I've heard quoted, that giving a gift makes people happier than receiving one.
↑ comment by eigen · 2019-08-28T22:10:35.254Z · LW(p) · GW(p)
This is a great idea!
I also had somewhat of an inclination to do this when I first read about Anki in Michael Nielsen's Augmenting Cognition, where he speaks about using Anki to store memories and friends' characteristics, such as food preferences (he talks about this in the section "The challenges of using Anki to store facts about friends and family").
I did not do this because I did not want to mix Anki with personal stuff, but I found another, similar solution: MONICA, a "Personal Relationship Manager". The good thing about it is that it's open source and easy to set up. I used it for a bit and found that it was very easy to use and had all the things one may want.
I ended up not sticking with the app at the time, but considering this post and the fact that people love it when you remember facts about them (and I'd also like to remember things about them!), I may pick it up again.
comment by Kaj_Sotala · 2018-04-05T19:10:43.642Z · LW(p) · GW(p)
For a few weeks or so, I've been feeling somewhat amazed at how much less suffering there seems to be associated with different kinds of pain (emotional, physical, etc.), seemingly as a consequence of doing meditation and related practices. The strength of pain, as measured by something like the intensity of it as an attention signal, seems to be roughly the same as before, but despite being equally strong, it feels much less aversive.
To clarify, this is not during some specific weird meditative state, but feels like a general ongoing adjustment even when I feel otherwise normal (or otherwise like shit).
I can't help but wonder whether the difference in intuitions for/against suffering-focused ethics is a consequence of different people's brains being naturally differently configured with regard to their pain:suffering ratio. That is, some people will experience exactly the same amount of pain, unpleasant emotions etc. during their life as others, but for some people the same intensity of pain will translate to a different intensity of suffering. And then we will have people who say things like "life *is* suffering and possibly a net negative for many if not most" as well as people who say things like "suffering isn't any big deal and a pretty uninteresting thing to focus on", staring at each other in mutual incomprehension.
↑ comment by Wei Dai (Wei_Dai) · 2018-04-05T19:49:05.252Z · LW(p) · GW(p)
Interesting, I wonder if there is a way to test it, given that it seems hard to measure the pain:suffering ratio of a person directly...
Is there a form of meditation that makes pain more aversive? Then we can have people who say “suffering isn’t any big deal and a pretty uninteresting thing to focus on” do that, and see if they end up agreeing with suffering-focused ethics?
↑ comment by Kaj_Sotala · 2018-04-06T09:32:51.112Z · LW(p) · GW(p)
While this is a brilliant idea in the sense of being a novel way to test a hypothesis, trying to reprogram people's brains so as to make them experience more suffering strikes me as an ethically dubious way of doing the test. :)
I wouldn't expect just a one-off meditation session where they experienced strong suffering to be enough, but rather I would expect there to be a gradual shift in intuitions after living with an altered ratio for a long enough time.
↑ comment by daozaich · 2018-04-06T16:28:34.149Z · LW(p) · GW(p)
Regarding measurement of pain:suffering ratio
A possible approach would be to use self-reports (the thing that doctors always ask about, pain scale 1-10) vs revealed preferences (how many painkillers were requested? What trade-offs for pain relief do patients choose?).
Obviously this kind of relation is flawed on several levels: reported pain scale depends a lot on personal experience (very painful events permanently change the scale, à la "I am in so much pain that I cannot walk or concentrate, but compared to my worst experience... let's say 3?"). Revealed preferences depend a lot on how much people care about the alternatives (e.g. if people have bad health insurance or really important stuff to do, they might accept a lot of subjective suffering in order to get out of hospital one day early). Likewise, time preference might enter a lot into revealed preference.
Despite these shortcomings, that's where I would start thinking about what such a ratio would mean. If one actually did a study with new questionnaires, one should definitely ask patients for some examples in order to gauge their personal pain scale, and combine actual revealed preferences with answers to hypothetical questions ("how much money would pain relief be worth to you? How much risk of death? How many days of early hospital release?" etc.), even if the offer is not actually on the table.
↑ comment by Kaj_Sotala · 2018-04-06T19:47:30.591Z · LW(p) · GW(p)
Apparently there have been a few studies on something like this: "[Long-Term Meditators], compared to novices, had a significant reduction of self-reported unpleasantness, but not intensity, of painful stimuli, while practicing Open Monitoring."
comment by Kaj_Sotala · 2019-09-25T12:46:02.271Z · LW(p) · GW(p)
This paper (Keno Juechems & Christopher Summerfield: Where does value come from? Trends in Cognitive Sciences, 2019) seems interesting from an "understanding human values" perspective.
Abstract: The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion, we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how we can account for the context sensitivity of valuation. We summarise both old and new theories proposing that animals track current and desired internal states and seek to minimise the distance to a goal across multiple value dimensions. We suggest that this framework readily accounts for canonical phenomena observed in the fields of psychology, behavioural ecology, and economics, and recent findings from brain-imaging studies of value-guided decision-making.
Some choice quotes:
We suggest that, during learning, humans form new setpoints pertaining to cognitive goals. For example, we might represent current and desired states on axes pertaining to financial stability, moral worth, or physical health as well as hunger, thirst, or temperature. [...] This theory proposes that current states and goals are encoded in a multidimensional ‘value map’. Motivated behaviour can then be seen as an attempt to minimise the maximum distance to setpoints in this value space. Repurposing this framework for cognitive settings, agents commit to policies that focus on purposively driving the current state towards setpoints on a particular goal dimension, such as caching resources, building a shelter, obtaining a mate, or enhancing professional status. In doing so, their ultimate goal is to maintain equilibrium among all goal states, achieving what might be popularly characterised as a state of ‘wellbeing’. [...]
More generally, we argue that some of the most complex and abstract decisions that humans make might be better described by a process that optimises over states, rather than rewards. For example, consider a high-school student choosing a career path. Under the (model-based) RL framework, the student must consider an impossibly large number of potential futures and select whichever is going to be most rewarding. This appears to imply the devotion of disproportionate levels of computational resources to the search problem. The approach advocated here implies that they first select a goal state (e.g., become a lawyer) and then takes actions that minimise distance to that goal. For example, they seek to go to law school; to maximise their chances of acceptance, they first study hard for their exams; this in turn influences decisions about whether to socialise with friends. This explanation appears to accord better with our common sense intuition of how the complex choices faced by humans are made. However, the computations involved may build upon more phylogenetically ancient mechanisms. For example, one of the most prominent theories of insect navigation proposes that, to reach their home base, central-place foragers, such as honey bees (and desert ants), initially encode an egocentric snapshot of their base and, subsequently, on the return journey, use a similarity-matching process to gradually reach their goal [28]. This implies that they are similarly performing gradient descent over states, akin to the process proposed here. [...]
An appealing aspect of this framework is that it provides a natural way to understand the affective states that pervade our everyday mental landscape, including satisfaction (goal completion), frustration (goal obstruction), and disappointment (goal abandonment), which have largely eluded computational description thus far [14]. [...]
The natural world is structured in such a way that some states are critical for survival or have substantial impact on long-run future outcomes. For example, the student introduced above might work hard to pass their exams in the knowledge that it will open up interesting career opportunities. These states are often attained when accumulated resources reach, or fall below, a critical threshold. Behavioural ecologists have argued that the risky foraging behaviour of animals adapts to satisfy a ‘budget rule’ that seeks to maintain energetic resources at aspirational levels that safely offset future scarcity. For example, birds make risky foraging choices at dusk to accrue sufficient energy to survive a cold night [45]. This view is neatly accommodated within the framework proposed here, in that the aspiration level reflects the setpoint against which current resource levels are compared, and the driver of behaviour is the disparity between current state and goal.
This framework of having multiple axes representing different goals, and trying to minimize the sum of distances to their setpoints, also reminds me a bit of moridinamael's Complex Behavior from Simple (Sub)Agents [LW · GW].
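As a toy illustration of that setpoint picture (my own sketch, not code from the paper; the goal dimensions, setpoints, and action effects below are invented for the example), an agent that greedily picks whichever action most reduces its largest shortfall across goal dimensions could look like this:

```python
import numpy as np

# Hypothetical goal dimensions, setpoints, and action effects, invented for illustration.
setpoints = np.array([0.8, 0.7, 0.6, 0.5])   # desired levels: hunger, warmth, social, finances
state     = np.array([0.3, 0.6, 0.2, 0.5])   # current levels on the same dimensions

actions = {
    "eat":         np.array([0.4, 0.0, 0.0, -0.05]),
    "call_friend": np.array([0.0, 0.0, 0.3, 0.0]),
    "work":        np.array([-0.1, 0.0, 0.0, 0.2]),
}

def drive(s):
    """The quantity being minimised: distance to the farthest-away setpoint."""
    return np.max(np.maximum(setpoints - s, 0.0))

def choose_action(s):
    # Greedy one-step policy: pick the action whose predicted next state
    # has the smallest maximum shortfall across goal dimensions.
    return min(actions, key=lambda a: drive(s + actions[a]))

for _ in range(5):
    a = choose_action(state)
    state = state + actions[a]
    print(f"{a:12s} state={np.round(state, 2)} drive={drive(state):.2f}")
```

Swapping `np.max` for `np.sum` in `drive` gives the "sum of distances" variant mentioned above.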
comment by Kaj_Sotala · 2019-11-26T13:01:38.893Z · LW(p) · GW(p)
Recent papers relevant to earlier posts in my multiagent sequence [? · GW]:
Understanding the Higher-Order Approach to Consciousness. Richard Brown, Hakwan Lau, Joseph E. LeDoux. Trends in Cognitive Sciences, Volume 23, Issue 9, September 2019, Pages 754-768.
Reviews higher-order theories (HOT) of consciousness and their relation to global workspace theories (GWT) of consciousness, suggesting that HOT and GWT are complementary. Consciousness and the Brain [LW · GW], of course, is a GWT theory, whereas HOT theories suggest that some higher-order representation is (also) necessary for us to be conscious of something. I read the HOT models as being closely connected to introspective awareness [LW · GW]; e.g. the authors suggest a connection between alexithymia (unawareness of your emotions) and abnormalities in brain regions related to higher-order representation.
While the HOT theories seem to suggest that you need higher-order representation of something to be conscious of a thing, I would say that you need higher-order representation of something in order to be conscious of having been conscious of something. (Whether being conscious of something without being conscious of being conscious of it can count as being conscious of it, is of course an interesting philosophical question.)
Bridging Motor and Cognitive Control: It’s About Time! Harrison Ritz, Romy Frömer, Amitai Shenhav. Trends in Cognitive Sciences, in press.
I have suggested [LW · GW] that control of thought and control of behavior operate on similar principles; this paper argues the same.
We often describe our mental states through analogy to physical actions. We hold something in mind or push it out of our thoughts. An emerging question in cognitive control is whether this relationship runs deeper than metaphor, with similar cognitive architectures underpinning our ability to control our physical actions and our mental states. For instance, recent work has shown that analogous control processes serve to optimize performance and regulate brain dynamics for both motor and cognitive actions [1,2]. A new study by Egger and colleagues [3] provides important new clues that the mechanisms supporting motor and cognitive control are more similar than previously shown.
These researchers tested whether the control of internal states exhibits a signature property of the motor system: the reliance on an internal model to guide adjustments of control [4]. To control one’s actions, a person needs to maintain an internal model of their environment (e.g., potential changes in terrain or atmosphere) and of their own motor system (e.g., how successful they are at executing a motor command [5]). This model can be used to generate online predictions about the outcome of an action and to course-correct when there is a mismatch between that prediction and the actual outcome. This process is thought to be implemented via interactions between: (i) a simulator that makes predictions, (ii) an estimator that learns the current state, and (iii) a controller that implements actions. This new study investigated whether neural activity during the control of cognitive processes reflected this same three-part architecture.
To answer this question, Egger and colleagues recorded neural activity while monkeys performed an interval reproduction task (Figure 1). The monkeys observed two samples of a time interval and then timed a saccade to reproduce this interval. Previous work has shown that population-level neural activity in the dorsomedial frontal cortex (DMFC) during similar tasks systematically scales with the timing of an action [6]. If action timing in this task depends on an internal model, then this temporal scaling should already be present in DMFC activity prior to receiving a cue to respond. If the monkeys were not relying on an internal model, and the activity instead reflected the passive measurement of time (‘open-loop’ control), then DMFC activity during the second interval should not exhibit such temporal scaling.
The monkeys’ behavior and neural activity demonstrated that they combined prior knowledge about the average interval duration with their perception of the current interval duration [7]. This behavior was well-captured by a near-optimal Bayesian algorithm that updated predictions in a way that was biased towards the average interval. By independently varying the duration of the two sample intervals, the authors were further able to show that the monkeys incorporated both samples into their duration estimate.
Signatures of this biased updating process were also observed in DMFC neural activity. Replicating previous studies, individual neurons in the DMFC demonstrated ramping activity during the reproduction of an interval, with faster ramping when the monkey reproduced shorter intervals [6]. Critically, neural activity during the second sample interval exhibited the predicted simulation profile: neurons demonstrated interval-dependent ramping during this epoch, prior to the response cue.
Further support for an internal model hypothesis was found across different measures of neural activity, and in their relationship with subsequent behavior. Temporal scaling was evident not only at the level of DMFC single neurons but also in the population-level neural dynamics across this region. Unlike the transient single-unit responses, the rate of change in these population dynamics scaled consistently with interval length throughout the second sample interval. These dynamics reflected the same Bayesian biases observed in monkeys’ behavior: an initial bias towards the average interval duration that became less biased with more samples. Critically, these population dynamics also predicted when the monkey would saccade on the upcoming response interval, and did so above and beyond what would be predicted by the lengths of the sampled time intervals alone. Collectively, these findings are consistent with the DMFC implementing an internal model to optimize the learning of task goals and the control of neural population dynamics.
This study provides evidence that DMFC mediates the influence of prior predictions and incoming sensory evidence on planned actions, and lays the groundwork for critical tests of this proposed mechanism using causal manipulations (i.e., stimulation or inactivation). Such causal tests can also help to rule out alternative accounts of neural dynamics during the sample intervals, for instance, whether they reflect a simulated motor plan (as the authors infer) or an interval expectation (e.g., predicting the onset of the interval cue [8]). Nevertheless, by elaborating on the neuronal dynamics within DMFC during a task that requires online adjustments of learning and control, this study builds on a growing literature that implicates regions along this dorsomedial wall in the control of motor and cognitive commands [9,10].
More generally, this research provides compelling new evidence that motor and cognitive control share a common computational toolbox. Past work has suggested that both forms of control serve similar objectives (achieving a goal state within a dynamic, uncertain, and noisy environment) and that they are also both constrained by some underlying cost, limiting the amount of control that individuals can engage at a given time. As a consequence, decisions about how to allocate one’s control are sensitive to whether the reward for goal achievement outweighs these costs [10]. To the extent computational and neural architecture for motor and cognitive control allocation mirror one another, the behavior and neural dynamics observed in the current task should demonstrate sensitivity to performance incentives for both forms of control.
In spite of their abundant bodies of research, the obstacles to bridging our understanding of motor and cognitive control have been similarly abundant, including limitations of tasks, measurement tools, and model organisms. This study demonstrates how a combination of computational modeling and measures of neural dynamics in the monkey can be leveraged towards this goal and, in doing so, provides a valuable path forward in mapping the joints between these two domains of control.
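(As a purely illustrative toy of the three-part architecture described above — simulator, estimator, and controller — applied to interval reproduction; the prior, noise level, and ramping rule are my own invented placeholders, not the actual model from Egger and colleagues:)

```python
import random

PRIOR_MEAN = 0.8     # assumed average interval in seconds (invented prior)
PRIOR_WEIGHT = 0.3   # how strongly estimates are pulled toward the prior

def estimator(samples):
    """Blend the observed intervals with the prior; with more samples,
    the estimate relies less on the prior (Bayesian-style shrinkage)."""
    obs_mean = sum(samples) / len(samples)
    w = PRIOR_WEIGHT / len(samples)
    return w * PRIOR_MEAN + (1 - w) * obs_mean

def simulator(target, dt=0.01):
    """Ramp an internal state toward a threshold at a rate set by the target
    interval; shorter targets mean faster ramping, as in the DMFC recordings."""
    level, elapsed = 0.0, 0.0
    while level < 1.0:
        level += dt / target
        elapsed += dt
    return elapsed

def controller(samples):
    """Use the estimator's belief to drive the simulator and time the response."""
    return simulator(estimator(samples))

# Two noisy observations of a 0.7 s interval
observed = [0.7 + random.gauss(0, 0.05) for _ in range(2)]
print(f"reproduced interval: {controller(observed):.2f} s (biased toward the {PRIOR_MEAN} s prior)")
```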
From Knowing to Remembering: The Semantic–Episodic Distinction. Louis Renoult, Muireann Irish, Morris Moscovitch, and Michael D. Rugg. Trends in Cognitive Sciences, in press.
In Book summary: Unlocking the Emotional Brain [LW · GW] and Building up to an Internal Family Systems model [LW · GW], I referenced models under which a particular event in a person's life gives rise to a generalized belief schema, and situations which re-activate that belief schema may also partially re-activate recollection of the original event, and vice versa; if something reminds you of a situation you experienced as a child, you may also to some extent reason in the kinds of terms that you did when you were a child and in that situation. This paper discusses connections between episodic memories (e.g., "I remember reading 1984 in Hyde Park yesterday") and semantic memories (e.g. "1984 was written by George Orwell"), and how activation of one may activate another.
What underlies the overlap between the semantic and recollection networks? We propose that the answer lies in the fact that the content of an episodic memory typically comprises a conjunction of familiar concepts and episode-specific information (such as sensory and spatial context), much as the episodic interpretation of concept cells suggests. Thus, recollection of a prior episode entails the reinstatement not only of contextual information unique to the episode, but also of the conceptual processing that was engaged when the recollected event was experienced (see also [66]). From this perspective, ‘recollection success effects’ in cortical members of the core recollection network do not reflect processing that supports episodic memory per se, but rather, the reinstatement of the conceptual processing that invariably underpins our interactions with the world in real-time (e.g., [10,67,68]). [...]
Although the proposal that recollection success effects in the core network reflect the reinstatement of conceptual processing is both parsimonious and, we contend, consistent with the available evidence, it lacks direct support. fMRI studies examining the neural correlates of successful recollection have invariably used meaningful experimental items, such as concrete words, or pictures of objects, and have typically done so in the context of study tasks that require or encourage semantic elaboration. To our knowledge, with the exception of [89], there are no published studies in which recollection effects were contrasted according to the amount of semantic or conceptual processing engaged during encoding (although see [90] for a study in which encoding was manipulated but the subsequent memory test did not allow identification of items recognized on the basis of recollection rather than on familiarity). In [89], the memory test required a discrimination between unstudied items and items subjected to semantic or nonsemantic study. Retrieval effects in the core network were not fully explored, but intriguingly, one member of the network (left parahippocampal cortex) was reported to demonstrate a greater recollection effect (operationalized as greater activity for correct than incorrect source judgments) for semantically than nonsemantically studied items. This finding is consistent with the present proposal, but it remains to be established whether, as predicted by the proposal, recollection-related activity within the core network as a whole covaries with the amount of semantic processing accorded a recollected episode when it was first experienced. [...]
Thus far, we have discussed episodic and semantic memories without reference to the possibility that their content and neural underpinnings might vary over time. However, there is a long-standing literature documenting that memory representations can be highly dynamic, shifting their dependence from the hippocampus and adjacent regions of the medial temporal lobe (MTL) to other neocortical regions, a phenomenon often referred to as ‘systems consolidation’ [64,65,91–93]. In recent years, systems consolidation has become increasingly intertwined with the construct of memory ‘semanticization’ and schematization, processes by which semantic knowledge and schemas [83] emerge from episodic memory or assimilate aspects of it.
Early studies and theories of memory consolidation, beginning with Ribot and reiterated for almost a century, typically did not distinguish between episodic and semantic memory [65,94–96]. Among the first to realize the importance of the episodic–semantic distinction for theories of memory consolidation were Kinsbourne and Wood [97]. They proposed that traumatic amnesia affected only episodic memory, regardless of the age of the memory, and left semantic and schematic memory relatively preserved. Cases in which remote episodic memories appeared to be preserved were attributed to semanticization or schematization through repeated re-encoding (see remote memory), allowing them to achieve the status of personal facts [98,99].
In an important development of the ‘standard’ model of consolidation, McClelland et al. proposed that the hippocampus maintains episodic representations of an event while communicating with (‘instructing’) the neocortical system to incorporate information about the event into its knowledge structure [100]. It was argued that, to protect the cortical network from catastrophic interference, learning had to be slow, thus providing a principled explanation for the extended time period that systems consolidation was assumed to take. Of importance, the model proposes that, in the process of incorporating an episodic memory into a semantic network, the episodic component, initially dependent on the hippocampus, is lost. This represents an important point of divergence from the standard model, in which episodic information is retained in the neocortex along with semantic information (see later).
Incorporating the original idea of Kinsbourne and Wood [97] and the complementary learning perspective [100], ‘multiple trace theory’ (MTT) [101] proposed that the hippocampus supports episodic memories for as long as they exist. By contrast, the theory proposed that semantic memories depend upon the neocortex, which extracts statistical regularities across distinct episodes. Thus, hippocampal damage should have a profound effect on retention and retrieval of episodic memories of any vintage, while leaving semanticized and schematized memories relatively intact.
While receiving empirical support [64,102] (see also [65,103,104] for examples of convergent findings from studies of experimental animals), MTT has also been subjected to several critiques (e.g., [93,105–108]). However, the essence of the theory resonates with the recurring theme of the present review that episodic and semantic memory are intertwined, yet retain a measure of functional and neural distinctiveness. Since its inception, MTT has been extended [65,104,109] to propose that episodic memories can become transformed to more semantic or schematic versions with time and experience (see ‘Episodic and Semantic Memory in Neurodegenerative Disorders’ section); indeed, in some cases, both the original and the semanticized or schematic version of a memory coexist and engage in dynamic interaction with one another. According to this Trace Transformation Theory, the specific neocortical regions supporting transformed memories differ depending on the kind of information that is retained and retrieved. Correspondingly, for complex events, the transformed memories might depend either on event schemas, or on the gist of the event [110–113]. Increased activation of the vmPFC, believed to be implicated in processing schemas [83], and decreased hippocampal activation have both been reported as details are lost and memories become more gist-like and schematic [83,102,110,113], particularly for memories that are congruent with existing schemas [114,115]. Even when details of remote memories are retained, along with continuing hippocampal activation, there is increased vmPFC activation over time [116,117]. Which memory of an event (e.g., its semanticized or schematic version or the detailed episodic memory of the original event) predominates at retrieval will depend on a variety of factors, such as contextual factors and processing demands (see ‘Semantic memory: Neural Underpinnings’ and ‘Episodic Memory: Neural Underpinnings’ sections), in addition to the availability of one or the other type of information (see also [118]). Thus, retrieval of complex memories depends on the coordinated activation of different combinations of regions (‘process-specific assemblies’ [64,119,120]) belonging to neural networks underlying episodic and semantic memory.
The neuroimaging evidence reviewed to date strongly suggests that successful recollection necessitates the reinstatement not only of sensory-perceptual contextual information characteristic of the original experience, but also the semantic representations and conceptual processing that occurred during that experience. Rather than viewing episodic and semantic memory as dichotomous or mutually exclusive entities, the marked neural overlap between these forms of memory suggests that we must move towards considering the dynamic interplay of sensory-perceptual and conceptual elements during reinstatement of a recollected experience. One way in which we could test this proposal is to examine how progressive neural insult of key structures implicated in episodic and semantic memory impacts related putative functions, including event recollection and event construction.
comment by Kaj_Sotala · 2018-03-31T13:03:56.527Z · LW(p) · GW(p)
Hypothesis: basically anyone can attract a cult following online, provided that they
1) are a decent writer or speaker
2) are writing/speaking about something which may or may not be particularly original, but does provide at least some value to people who haven't heard of this kind of stuff before
3) devote a substantial part of their message to confidently talking about how their version of things is the true and correct one, and how everyone who says otherwise is deluded/lying/clueless
There's a lot of demand for the experience of feeling like you know something unique that sets you apart from all the mundane, unwashed masses.
(This isn't necessarily a bad thing. As long as the content that's being peddled is something reasonable, then these people's followers may get a lot of genuine value from being so enthusiastic about it. Being really enthusiastic almost by definition means that you are going to invest a lot more into internalizing and using the thing, than does someone who goes "meh, that's old hat" and then never actually does anything with the thing. A lot depends on how sensible the content is - this method probably works equally well with content that's a net harm to buy into, as it does with content that's a net good. But of course, the fact that it works basically regardless of what the content is, means that a lot of the content in question will be bad.)
↑ comment by Matt Goldenberg (mr-hire) · 2019-08-29T22:15:25.388Z · LW(p) · GW(p)
Other common marketing advice that fits into this:
- Set up a "bad guy" that you're against
- If you're in a crowded category, either:
  - Create a new category (e.g. rationality)
  - Set yourself up as an alternative to number one in the category (Pepsi)
  - Become number one in the category (JetBlue?)
- It's better to provide value that takes away a pain (painkillers) than that adds something that was missing (vitamins)
↑ comment by eigen · 2019-08-29T20:11:27.559Z · LW(p) · GW(p)
I'd really like to read more about what you think of this. Other closely related features they need:
- Content that is well formatted (The Sequences are a great example of this, as is The Codex). Of course, blogs are also a good basic format, since they allow incremental reading.
- Length of the posts? Maybe? I think there may be a case to be made for length helping to generate that cult following, since it's directly related to the amount of time invested by people reading. There are many examples where posts could be summarized in a few paragraphs but instead go long! (But of course there's a reason they do so.)
comment by Kaj_Sotala · 2024-09-07T12:53:28.810Z · LW(p) · GW(p)
Some time back, Julia Wise published the results of a survey asking parents what they had expected parenthood to be like and to what extent their experience matched those expectations. I found those results really interesting and have often referred to them in conversation, and they were also useful to me when I was thinking about whether I wanted to have children myself.
However, that survey was based on only 12 people's responses, so I thought it would be valuable to get more data. So I'm replicating Julia's survey, with a few optional quantitative questions added. If you have children, you're welcome to answer here: https://forms.gle/uETxvX45u3ebDECy5
I'll publish the results at some point when it looks like there won't be many more responses.
↑ comment by Sherrinford · 2024-09-07T16:36:58.594Z · LW(p) · GW(p)
The link goes to a Facebook page telling me that I am about to leave Facebook. Is that intentional?
↑ comment by Kaj_Sotala · 2024-09-07T19:36:02.196Z · LW(p) · GW(p)
Oh oops, it wasn't. Fixed, thanks for pointing it out.
comment by Kaj_Sotala · 2018-05-13T09:35:53.131Z · LW(p) · GW(p)
So I was doing insight meditation and noticing inconsistencies between my experience and my mental models of what things in my experience meant (stuff like "this feeling means that I'm actively and consciously spending effort... but wait, I don't really feel like it's under my control, so that can't be right"), and feeling like parts of my brain were getting confused as a result...
And then I noticed that if I thought of a cognitive science/psychology-influenced theory of what was going on instead, those confused parts of my mind seemed to grab onto it, and maybe replace their previous models with that one.
Which raised the obvious question of, wait, am I just replacing one set of flawed assumptions with another?
But that would explain the thing which Scott writes about in https://slatestarcodex.com/2018/04/19/gupta-on-enlightenment/ , where e.g. a Muslim who gets enlightened will adopt an Islamic framework to explain it and experience it as a deep truth. Insight meditation involves making the mind confused about what's going on, and when a mind gets confused, it will grab onto the first coherent explanation it finds.
But if you're aware of that, and don't mistake your new set of assumptions for a universal truth, then you can keep investigating your mind and uncovering new inconsistencies in your models, successively tearing each one apart in order to replace them with ever-more accurate ones.
comment by Kaj_Sotala · 2024-12-01T19:30:20.574Z · LW(p) · GW(p)
What could plausibly take us from now to AGI within 10 years?
A friend shared the following question on Facebook:
> So, I've seen multiple articles recently by people who seem well-informed that claim that AGI (artificial general intelligence, aka software that can actually think and is creative) is less than 10 years away, and I find that baffling, and am wondering if there's anything I'm missing. Sure, modern AI like ChatGPT are impressive - they can do utterly amazing search engine-like things, but they aren't creative at all.
>
> The clearest example of this I've seen comes from people's experiences with AI writing code. From what I've read, AI can do exceptionally well with this task, but only if there are examples of the needed sort of code online that it can access or was trained on, and if it lacks this, its accuracy is quite bad with easy problems and essentially non-existent with problems that are at all difficult. This clearly says to me that current AI are glorified very impressive search engines, and that's nowhere near what I'd consider AGI and doesn't look like it could become AGI.
>
> Am I missing something?
I replied with some of my thoughts as follows:
I have also been a little confused by the shortness of some of the AGI timelines that people have been proposing, and I agree that there are types of creativity that they're missing, but saying that they're not creative at all sounds too strong. I've been using Claude as a co-writer partner for some fiction and it has felt creative to me. Also see e.g. the example of this conversation that someone had with it.
In 2017 I did a small literature review on human expertise, which to me suggested that expertise can broadly be divided into two interacting components: pattern recognition and mental simulation. Pattern recognition is what current LLMs do, essentially. Mental simulation is the bit that they're missing - if a human programmer is facing a novel programming challenge, they can attack it from first principles and simulate the program execution in their head to see what needs to be done.
The big question would then be something like "how hard would it be to add mental simulation to LLMs". Some indications that it wouldn't necessarily be that hard:
* In humans, while they are distinct capabilities, the two also seem to be intertwined. If I'm writing a social media comment and I try to mentally simulate how it will be received, I can do it because I have a rich library of patterns about how different kinds of comments will be received by different readers. If I write something that triggers a pattern-detector that goes "uh-oh, that wouldn't be received well", I can rewrite it until it passes my mental simulation. That suggests that there would be a natural connection between the two.
* There are indications that current LLMs may already be doing something like internal simulation though not being that great at it. Like in the "mouse mastermind" vignette, it certainly intuitively feels like Claude has some kind of consistent internal model of what's going on. People have also e.g. trained LLMs to play games like Othello and found that the resulting network has an internal representation of the game board ( https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world [LW · GW] ).
* There have also been various attempts at explicitly combining an LLM-based component with a component that does something like simulation. E.g. DeepMind trained a hybrid LLM-theorem prover system that reached silver medal-level performance on this year's International Mathematics Olympiad ( https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/ ), where the theorem prover component maintains a type of state over the math problem as it's being worked on.
* Iterative improvements like chain-of-thought reasoning are also taking LLMs in the direction of being able to apply more novel reasoning in domains such as math. Mathematician Terry Tao commented the following about giving the recent GPT-o1 model research-level math tasks to work on:
> The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, (static simulation of a) graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent (static simulation of a) graduate student. It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of "(static simulation of a) competent graduate student" is reached, at which point I could see this tool being of significant use in research level tasks.
* There have also been other papers trying out various techniques such as "whiteboard of thought" ( https://whiteboard.cs.columbia.edu/ ) where an LLM, when being presented with visual problems in verbal format, explicitly generates visual representations of the verbal description to use as an aid in its reasoning. It feels like a relatively obvious idea would be to roll out these kinds of approaches into future LLM architectures, teaching them to generate "mental images" of whatever task they were told to work on. This could then be used as part of an internal simulation.
* There's an evolutionary argument that the steps from "pure pattern recognition" to "pattern recognition with mental simulation added" might be relatively simple and not require that much in the way of fundamental breakthroughs, since evolution managed to find it in humans, and in humans those abilities seem to be relatively continuous with each other. So we might expect all of these iterative improvements to take us pretty smoothly toward AGI.
↑ comment by Seth Herd · 2024-12-01T21:33:19.075Z · LW(p) · GW(p)
Here's my brief pitch, starting with your point about simulation:
The strength and flexibility of LLMs probably opens up several more routes toward cognitive completeness and what we'd consider impressive creativity.
LLMs can use chain-of-thought sequential processing to do a type of mental simulation. If they are prompted to, or if they "prompt themselves" in a chain of thought system, they can access a rich world model to simulate how different actions are likely to play out. They have to put everything in language, although visual and other modalities can be added either through things like the whiteboard of thought, or by using CoT training directly on those modalities in multimodal foundation models. But language already summarizes a good deal of world models across many modalities, so those improvements may not be necessary.
The primary change that will make LLMs more "creative" in your friend's sense is letting them think longer and using strategy and training to organize that thinking. There are two cognitive capacities needed to do this. There is no barrier to progress in either direction; they just haven't received much attention yet.
LLMs don't have any episodic memory, "snapshot" memory for important experiences. And they're severely lacking in executive functioning, the capacity to keep ourselves on track and strategically direct our cognition. A human with those impairments would be of very little use for complex tasks, let alone for doing novel work we'd consider deeply creative.
Both of those things seem actually pretty easy to add. Vector-based databases aren't quite good enough to be very useful, but they will be improved. One route is a straightforward, computationally-efficient improvement based on human brain function that I won't mention even though work is probably underway on it somewhere. And there are probably other equally good routes.
The chain-of-thought training applied to o1, r1, Marco o1, and QwQ (and probably soon a whole bunch more) improves organization of chains of thought, adding some amount of executive function. Scaffolding in prompts for things like "where are you in the task? Is this making progress toward the goal? Should we try a different approach?" etc is also possible. This will work better when combined with episodic memory; a human without it couldn't organize their progress through a complex task - but LLMs now have large context windows that are like better-than-human working memory systems, so better episodic memory might not even be necessary for dramatic improvements.
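(A purely illustrative sketch of that kind of scaffolding combined with a simple episodic memory — the `llm()` stub, the memory store, and the check-in prompts here are hypothetical placeholders rather than any existing system's API:)

```python
from dataclasses import dataclass, field

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def store(self, note: str) -> None:
        self.episodes.append(note)

    def recall(self, k: int = 3) -> str:
        # Toy retrieval: return the most recent notes; a real system might
        # use embedding-based similarity search over a vector database.
        return "\n".join(self.episodes[-k:])

def llm(prompt: str) -> str:
    """Stand-in for a call to some language model; returns a canned reply here."""
    return "Reviewed progress; took the next step. TASK COMPLETE"

def run_task(goal: str, max_steps: int = 10) -> str:
    memory = EpisodicMemory()
    for step in range(max_steps):
        # Executive-function-style check-in prompts, as described above.
        prompt = (
            f"Goal: {goal}\n"
            f"Notes from earlier steps:\n{memory.recall()}\n"
            "Where are you in the task? Is this making progress toward the goal? "
            "Should we try a different approach? Then take the next step."
        )
        thought = llm(prompt)
        memory.store(f"Step {step}: {thought}")
        if "TASK COMPLETE" in thought:
            break
    return memory.recall(k=max_steps)

print(run_task("Draft a short literature review on interval timing"))
```

A real version would replace the toy recall with learned retrieval and let the model decide what is worth storing.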
This is spelled out a little more in Capabilities and alignment of LLM cognitive architectures [LW · GW], although that isn't as clear or compelling as I'd like. It looks to me like progress is happening apace in that direction.
That's just one route to "Real AGI" [LW · GW] from LLMs/foundation models. There are probably others that are just as easy. Foundation models can now do almost everything humans can do in the short term. Making their cognition cumulative like ours seems more like unblocking their existing capacities and using them more strategically and effectively than like adding any real new cognitive abilities.
Continuous learning, through better episodic memory and/or fine-tuning on facts and skills judged to be useful, is another low-hanging fruit.
Hoping that we're more than a decade from transformative AGI now seems wildly optimistic to me. There could be dramatic roadblocks I haven't foreseen, but most of those would just push it past three years. It could take more than a decade, but banking on that leaves us unprepared for the very short timelines that now seem fairly likely.
While the short timelines are scary, there are also large advantages to this route to AGI, including a relatively slow takeoff and the way that LLMs are almost an oracle AI trained largely to follow instructions. But that's another story.
That's a bit more than I meant to write; I've been trying to refine an intuitive explanation of why we may be within spitting distance of real, transformative AGI, and that served as a useful prompt.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2024-12-02T10:21:20.678Z · LW(p) · GW(p)
Hoping that we're more than a decade from transformative AGI now seems wildly optimistic to me. There could be dramatic roadblocks I haven't foreseen, but most of those would just push it past three years.
Self-driving cars seem like a useful reference point. Back when cars got unexpectedly good performance at the 2005 and 2007 DARPA grand challenges, there was a lot of hype about how self-driving cars were just around the corner now that they had demonstrated having the basic capability. 17 years later, we're only at this point (Wikipedia):
As of late 2024, no system has achieved full autonomy (SAE Level 5). In December 2020, Waymo was the first to offer rides in self-driving taxis to the public in limited geographic areas (SAE Level 4),[7] and as of April 2024 offers services in Arizona (Phoenix) and California (San Francisco and Los Angeles). [...] In July 2021, DeepRoute.ai started offering self-driving taxi rides in Shenzhen, China. Starting in February 2022, Cruise offered self-driving taxi service in San Francisco,[11] but suspended service in 2023. In 2021, Honda was the first manufacturer to sell an SAE Level 3 car,[12][13][14] followed by Mercedes-Benz in 2023.
And self-driving capability should be vastly easier than general intelligence. Like self-driving, transformative AI also requires reliable worst-case performance rather than just good average-case performance [LW · GW], and there's usually a surprising amount of detail involved that you need to sort out before you get to that point.
Replies from: sharmake-farah↑ comment by Noosphere89 (sharmake-farah) · 2024-12-02T18:24:58.294Z · LW(p) · GW(p)
I admit, at this point I'd call self-driving cars a solved or nearly-solved problem, at least by Waymo. The big reason why self-driving cars are only now taking off is basically regulatory and liability issues, and I consider much of the self-driving slowdown to be evidence that regulation can work to slow a technology down substantially.
↑ comment by Kaj_Sotala · 2024-12-02T11:41:51.595Z · LW(p) · GW(p)
(Hmm I was expecting that this would get more upvotes. Too obvious? Not obvious enough?)
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-12-02T17:58:26.483Z · LW(p) · GW(p)
It seems to me that o1 and deepseek already do a bunch of the "mental simulation" kind of reasoning, and even previous LLMs did so a good amount if you prompted them to think in chain-of-thoughts, so the core point fell a bit flat for me.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2024-12-02T19:12:48.059Z · LW(p) · GW(p)
Thanks, that's helpful. My impression from o1 is that it does something that could be called mental simulation for domains like math where the "simulation" can in fact be represented with just writing (or equations more specifically). But I think that writing is only an efficient format for mental simulation for a very small number of domains.
comment by Kaj_Sotala · 2022-11-26T08:26:23.394Z · LW(p) · GW(p)
A morning habit I've had for several weeks now is to put some songs on, then spend 5-10 minutes letting the music move my body as it wishes. (Typically this turns into some form of dancing.)
It's a pretty effective way to get my energy / mood levels up quickly, can recommend.
It's also easy to effectively timebox it if you're busy: "I will dance for exactly two songs" serves as its own timer, and is often all I have the energy for before I've had breakfast. (Today Spotify shuffled in Nightwish's Moondance as the third song, and boy, I did NOT have the blood sugar for that. It sucked me in effectively enough that I did the first 30 seconds, but then I quickly stopped it once the pace slowed down and it momentarily released its grip on me.)
comment by Kaj_Sotala · 2019-09-07T11:25:12.298Z · LW(p) · GW(p)
Janina Fisher's book "Healing the Fragmented Selves of Trauma Survivors" has an interesting take on Internal Family Systems [LW · GW]. She conceptualizes trauma-related parts (subagents) as being primarily associated with the defensive systems of Fight/Flight/Freeze/Submit/Attach.
Here's how she briefly characterizes the various systems and related behaviors:
- Fight: Vigilance. Angry, judgmental, mistrustful, self-destructive, controlling, suicidal, needs to control.
- Flight: Escape. Distancer, ambivalent, cannot commit, addictive behavior or being disorganized.
- Freeze: Fear. Frozen, terrified, wary, phobic of being seen, agoraphobic, reports panic attacks.
- Submit: Shame. Depressed, ashamed, filled with self-hatred, passive, "good girl," caretaker, self-sacrificing.
- Attach: Needy. Desperate, craves rescue & connection, sweet, innocent, wants someone to depend on.
Here's how she describes a child-like part connected to an "attach" system coming to existence:
... research has demonstrated the propensity of the brain to develop neural networks holding related neural pathways that consistently “fire” together, and these neural systems often encode complex systems of traits or systems (Schore, 2001) that represent aspects of our personalities or ways of being. For example, if neural pathways activating the proximity drive fire consistently in the presence of the attachment figure, along with neural pathway holding feelings of loneliness and yearning for comfort and a neural network holding the tendency to believe that “she loves me—she would never hurt me,” the result might be a neural system representing a young child part of the personality with a toddler’s yearning for comfort and closeness along with the magical thinking that the attachment figure will be safe and loving, yet also the uneasy feeling that something is not right. Such neural systems can be complex with a subjective sense of identity or can be a simpler collection of traits associated with different roles played by the individual.
Here is how she relates various trauma symptoms to these systems:
The paradoxical quality of these symptoms is rarely captured by traditional diagnostic models. Clients report symptoms of major depression (the submit part), anxiety disorders (freeze), substance abuse and eating disorders (flight), anger management or self-harm issues (fight), and they alternately cling to others or push them away (the characteristic symptoms of disorganized or traumatic attachment).
And here's how she describes something that in traditional IFS terms would be described as polarized parts:
Aaron described the reasons for which he had come: “I start out by getting attached to women very quickly—I immediately think they’re the ‘one.’ I’m all over them, can’t see them enough … until they start to get serious or there’s a commitment. Then I suddenly start to see everything I didn’t see before, everything that’s wrong with them. I start feeling trapped with someone who’s not right for me—I want to leave, but I feel guilty—or afraid they’ll leave me. I’m stuck. I can’t relax and be happy, but I can’t get out of it either.”
Aaron was describing an internal struggle between parts: between an attachment-seeking part that quickly connected to any attractive woman who treated him warmly and a hypervigilant, hypercritical fight part that reacted to every less-than-optimal quality she possessed as a sign of trouble. His flight part, triggered by the alarms of the fight part, then would start to feel trapped with what felt like the “wrong person,” generating impulses to get out—an action that his submit and cry for help parts couldn’t allow. Guilt and shame for the commitment he’d promised (the submit part’s contribution) and fear of loss (the input from his traumatically attached part) kept him in relationships that his fight and flight parts resisted with equal intensity. Without a language to differentiate each part and bring it to his awareness, he ruminated constantly: should he leave? Or should he stay? Was she enough? Or should he get out now? Often, suicide seemed to him the most logical solution to this painful dilemma, yet at the same time “he” dreamed of having a family with children and a loving and lovely wife. “He” didn’t approve of his wandering eye, yet “he” couldn’t stop trolling for prospective partners. Who was “he”? The suicidal part’s threat to end it all was in direct conflict with his wish for a wife and family; the “trolling for women” part was at odds with the person he wanted to be and believed he should and could be.
comment by Kaj_Sotala · 2018-07-09T06:14:58.144Z · LW(p) · GW(p)
Huh. I woke up feeling like meditation has caused me to no longer have any painful or traumatic memories; or rather, all the same memories are still around, but my mind no longer flinches away from them if something happens to make me recall them.
Currently trying to poke around my mind to see whether I could find any memory that would feel strongly aversive, but at most I can find ones that feel a little bit unpleasant.
Obviously I can't yet tell whether some will return to being aversive. But given that this seems to be a result of giving my mind the chance to repeatedly observe that flinching away from things is itself what makes them unpleasant, I wouldn't be too surprised if I'd managed to successfully condition it to stop doing that for the memories. Though I would expect there to be setbacks the next time something particularly painful happens, or when I'm just generally feeling bad.
Replies from: Elo