Posts

A mechanistic model of meditation 2019-11-06T21:37:03.819Z · score: 86 (26 votes)
On Internal Family Systems and multi-agent minds: a reply to PJ Eby 2019-10-29T14:56:19.590Z · score: 35 (13 votes)
Book summary: Unlocking the Emotional Brain 2019-10-08T19:11:23.578Z · score: 164 (62 votes)
Against "System 1" and "System 2" (subagent sequence) 2019-09-25T08:39:08.011Z · score: 73 (23 votes)
Subagents, trauma and rationality 2019-08-14T13:14:46.838Z · score: 67 (34 votes)
Subagents, neural Turing machines, thought selection, and blindspots 2019-08-06T21:15:24.400Z · score: 58 (18 votes)
On pointless waiting 2019-06-10T08:58:56.018Z · score: 43 (22 votes)
Integrating disagreeing subagents 2019-05-14T14:06:55.632Z · score: 86 (23 votes)
Subagents, akrasia, and coherence in humans 2019-03-25T14:24:18.095Z · score: 87 (25 votes)
Subagents, introspective awareness, and blending 2019-03-02T12:53:47.282Z · score: 60 (20 votes)
Building up to an Internal Family Systems model 2019-01-26T12:25:11.162Z · score: 153 (55 votes)
Book Summary: Consciousness and the Brain 2019-01-16T14:43:59.202Z · score: 94 (31 votes)
Sequence introduction: non-agent and multiagent models of mind 2019-01-07T14:12:30.297Z · score: 88 (33 votes)
18-month follow-up on my self-concept work 2018-12-18T17:40:03.941Z · score: 57 (16 votes)
Tentatively considering emotional stories (IFS and “getting into Self”) 2018-11-30T07:40:02.710Z · score: 39 (11 votes)
Incorrect hypotheses point to correct observations 2018-11-20T21:10:02.867Z · score: 75 (30 votes)
Mark Eichenlaub: How to develop scientific intuition 2018-10-23T13:30:03.252Z · score: 68 (28 votes)
On insecurity as a friend 2018-10-09T18:30:03.782Z · score: 38 (20 votes)
Tradition is Smarter Than You Are 2018-09-19T17:54:32.519Z · score: 68 (24 votes)
nostalgebraist - bayes: a kinda-sorta masterpost 2018-09-04T11:08:44.170Z · score: 16 (7 votes)
New paper: Long-Term Trajectories of Human Civilization 2018-08-12T09:10:01.962Z · score: 27 (13 votes)
Finland Museum Tour 1/??: Tampere Art Museum 2018-08-03T15:00:05.749Z · score: 20 (6 votes)
What are your plans for the evening of the apocalypse? 2018-08-02T08:30:05.174Z · score: 24 (11 votes)
Anti-tribalism and positive mental health as high-value cause areas 2018-08-02T08:30:04.961Z · score: 26 (10 votes)
Fixing science via a basic income 2018-08-02T08:30:04.380Z · score: 30 (14 votes)
Study on what makes people approve or condemn mind upload technology; references LW 2018-07-10T17:14:51.753Z · score: 21 (11 votes)
Shaping economic incentives for collaborative AGI 2018-06-29T16:26:32.213Z · score: 47 (13 votes)
Against accusing people of motte and bailey 2018-06-03T21:31:24.591Z · score: 83 (27 votes)
AGI Safety Literature Review (Everitt, Lea & Hutter 2018) 2018-05-04T08:56:26.719Z · score: 37 (10 votes)
Kaj's shortform feed 2018-03-31T13:02:47.793Z · score: 13 (3 votes)
Helsinki SSC March meetup 2018-03-26T19:27:17.850Z · score: 12 (2 votes)
Is the Star Trek Federation really incapable of building AI? 2018-03-18T10:30:03.320Z · score: 29 (8 votes)
My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms 2018-03-08T07:37:54.532Z · score: 283 (107 votes)
Some conceptual highlights from “Disjunctive Scenarios of Catastrophic AI Risk” 2018-02-12T12:30:04.401Z · score: 63 (18 votes)
On not getting swept away by mental content 2018-01-25T20:30:03.750Z · score: 23 (7 votes)
Papers for 2017 2018-01-04T13:30:01.406Z · score: 32 (8 votes)
Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering 2018-01-03T14:39:18.024Z · score: 1 (1 votes)
Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering 2018-01-03T13:57:55.979Z · score: 16 (6 votes)
Fixing science via a basic income 2017-12-08T14:20:04.623Z · score: 38 (11 votes)
Book review: The Upside of Your Dark Side: Why Being Your Whole Self–Not Just Your “Good” Self–Drives Success and Fulfillment 2017-12-04T13:10:06.995Z · score: 27 (8 votes)
Meditation and mental space 2017-11-06T13:10:03.612Z · score: 26 (7 votes)
siderea: What New Atheism says 2017-10-29T10:19:57.863Z · score: 12 (3 votes)
Postmodernism for rationalists 2017-10-17T12:20:36.139Z · score: 24 (1 votes)
Anti-tribalism and positive mental health as high-value cause areas 2017-10-17T10:20:03.359Z · score: 30 (10 votes)
You can never be universally inclusive 2017-10-14T11:30:04.250Z · score: 34 (10 votes)
Meaningfulness and the scope of experience 2017-10-05T11:30:03.863Z · score: 35 (14 votes)
Social Choice Ethics in Artificial Intelligence (paper challenging CEV-like approaches to choosing an AI's values) 2017-10-03T17:39:00.683Z · score: 8 (3 votes)
LW2.0 now in public beta (you'll need to reset your password to log in) 2017-09-23T12:00:50.345Z · score: 2 (2 votes)
Nobody does the thing that they are supposedly doing 2017-09-23T10:40:06.155Z · score: 69 (34 votes)
Debiasing by rationalizing your own motives 2017-09-03T12:20:10.405Z · score: 1 (1 votes)

Comments

Comment by kaj_sotala on A Practical Theory of Memory Reconsolidation · 2019-11-15T11:04:50.324Z · score: 6 (3 votes) · LW · GW

I've been thinking about that top image for a day now and find it really good and helpful. It gets at both "parts may use opposite strategies for achieving the same goals" and the hierarchical nature of goals. I'm going to start using that image or something like it from now on when explaining these things.

My only suggestion would be to reverse the order: I had difficulties reading it until I figured out that the logic goes from the right to the left, and not the other way around.

Comment by kaj_sotala on A Practical Theory of Memory Reconsolidation · 2019-11-14T16:08:18.510Z · score: 3 (1 votes) · LW · GW

I really like the content in this post and the three following ones, but would prefer them to be in a single post, as length-wise there doesn't seem to be a reason for them to be separate and the transitions interrupt the flow of reading. It also makes me confused about how to upvote them: if I like them all, not upvoting them all feels wrong, but it also feels like it's awarding triple karma for one article's worth of content and that feels wrong too. (Fortunately in this case, 3/4 of the posts felt like I wanted to upvote them [for a total of 9 karma] and if they had been a single article I might have strong-upvoted it [for a total of 10 karma], so that felt like an acceptable solution.)

Comment by kaj_sotala on Evolution of Modularity · 2019-11-14T09:39:05.598Z · score: 16 (6 votes) · LW · GW

There is also the suggestion that having connection costs imposes modularity:

We investigate an alternate hypothesis [than the MVG one] that has been suggested, but heretofore untested, which is that modularity evolves not because it conveys evolvability, but as a byproduct from selection to reduce connection costs in a network (figure 1) [9,16]. Such costs include manufacturing connections, maintaining them, the energy to transmit along them and signal delays, all of which increase as a function of connection length and number [9,17–19]. The concept of connection costs is straightforward in networks with physical connections (e.g. neural networks), but costs and physical limits on the number of possible connections may also tend to limit interactions in other types of networks such as genetic and metabolic pathways. For example, adding more connections in a signalling pathway might delay the time that it takes to output a critical response; adding regulation of a gene via more transcription factors may be difficult or impossible after a certain number of proximal DNA binding sites are occupied, and increases the time and material required for genome replication and regulation; and adding more protein–protein interactions to a system may become increasingly difficult as more of the remaining surface area is taken up by other binding interactions. Future work is needed to investigate these and other hypotheses regarding costs in cellular networks. The strongest evidence that biological networks face direct selection to minimize connection costs comes from the vascular system [20] and from nervous systems, including the brain, where multiple studies suggest that the summed length of the wiring diagram has been minimized, either by reducing long connections or by optimizing the placement of neurons [9,17–19,21–23]. Founding [16] and modern [9] neuroscientists have hypothesized that direct selection to minimize connection costs may, as a side-effect, cause modularity. [...]

Given the impracticality of observing modularity evolve in biological systems, we follow most research on the subject by conducting experiments in computational systems with evolutionary dynamics [4,11,13]. Specifically, we use a well-studied system from the MVG investigations [13,14,27]: evolving networks to solve pattern-recognition tasks and Boolean logic tasks (§4). [...]

After 25 000 generations in an unchanging environment (L-AND-R), treatments selected to maximize performance and minimize connection costs (P&CC) produce significantly more modular networks than treatments maximizing performance alone (PA).
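
To make the P&CC setup concrete, here's a minimal toy sketch in Python (my own construction for illustration, not the authors' actual code; the network model, the task, and the ALPHA penalty weight are all arbitrary choices, and the paper's explicit modularity measurements are omitted). One treatment selects on performance alone (PA), the other also penalizes the number of connections (P&CC):

```python
import random

# Toy sketch of the two selection treatments (illustrative only).
# Networks: 4 inputs -> 4 hidden threshold units -> 1 output.
# Task: output 1 iff the left input pair AND the right input pair
# each contain at least one 1 (an L-AND-R-style problem).

random.seed(0)
N_IN, N_HID = 4, 4
GENOME_LEN = N_IN * N_HID + N_HID  # input->hidden plus hidden->output weights
ALPHA = 0.02                       # connection-cost penalty (arbitrary)

CASES = [([a, b, c, d], int((a or b) and (c or d)))
         for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]

def act(x):
    return 1 if x > 0 else 0

def evaluate(genome):
    """Return (task performance, number of nonzero connections)."""
    correct = 0
    for inputs, target in CASES:
        hid = [act(sum(w * i for w, i in
                       zip(genome[h * N_IN:(h + 1) * N_IN], inputs)))
               for h in range(N_HID)]
        out = act(sum(w * h for w, h in zip(genome[N_IN * N_HID:], hid)))
        correct += (out == target)
    return correct / len(CASES), sum(1 for w in genome if w != 0)

def fitness(genome, penalize_cost):
    perf, cost = evaluate(genome)
    return perf - (ALPHA * cost if penalize_cost else 0.0)

def evolve(penalize_cost, pop_size=50, generations=200):
    pop = [[random.choice([-1, 0, 1]) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, penalize_cost), reverse=True)
        survivors = pop[:pop_size // 2]   # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(GENOME_LEN)] = random.choice([-1, 0, 1])
            children.append(child)        # one point mutation per child
        pop = survivors + children
    return evaluate(max(pop, key=lambda g: fitness(g, penalize_cost)))

for label, pc in [("PA   (performance alone)", False),
                  ("P&CC (performance & connection cost)", True)]:
    perf, cost = evolve(pc)
    print(f"{label}: performance={perf:.2f}, connections={cost}")
```

The point of the cost term is that connections which don't pay for themselves get pruned; the paper's claim is that this pruning pressure is what tends to leave modular wiring behind as a side effect.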

Comment by kaj_sotala on Kaj's shortform feed · 2019-11-12T09:23:05.314Z · score: 12 (4 votes) · LW · GW

Here's a mistake which I've sometimes committed and gotten defensive as a result, and which I've seen make other people defensive when they've committed the same mistake.

Take some vaguely defined, multidimensional thing that people could do or not do. In my case it was something like "trying to understand other people".

Now there are different ways in which you can try to understand other people. For me, if someone opened up and told me of their experiences, I would put a lot of effort into really trying to understand their perspective, to try to understand how they thought and why they felt that way.

At the same time, I thought that everyone was so unique that there wasn't much point in trying to understand them by any *other* way than hearing them explain their experience. So I wouldn't really, for example, try to make guesses about people based on what they seemed to have in common with other people I knew.

Now someone comes and happens to mention that I "don't seem to try to understand other people".

I get upset and defensive because I totally do, this person hasn't understood me at all!

And in one sense, I'm right - it's true that there's a dimension of "trying to understand other people" that I've put a lot of effort into, in which I've probably invested more than other people have.

And in another sense, the other person is right - while I was good at one dimension of "trying to understand other people", I was severely underinvested in others. And I had not really even properly acknowledged that "trying to understand other people" had other important dimensions too, because I was justifiably proud of my investment in one of them.

But from the point of view of someone who *had* invested in those other dimensions, they could see the aspects in which I was deficient compared to them, or maybe even compared to the median person. (To some extent I thought that my underinvestment in those other dimensions was *virtuous*, because I was "not making assumptions about people", which I'd been told was good.) And this underinvestment showed in how I acted.

So the mistake is that if there's a vaguely defined, multidimensional skill and you are strongly invested in one of its dimensions, you might not realize that you are deficient in the others. And if someone says that you are not good at it, you might understandably get defensive and upset, because you can only think of the evidence which says you're good at it... while not even realizing the aspects that you're missing out on, which are obvious to the person who *is* better at them.

Now one could say that the person giving this feedback should be more precise and not make vague, broad statements like "you don't seem to try to understand other people". Rather they should make some more specific statement like "you don't seem to try to make guesses about other people based on how they compare to other people you know".

And sure, this could be better. But communication is hard; and often the other person *doesn't* know the exact mistake that you are making. They can't see exactly what is happening in your mind: they can only see how you behave. And they see you behaving in a way which, to them, looks like you are not trying to understand other people. (And it's even possible that *they* are deficient in the dimension that *you* are good at, so it doesn't even occur to them that "trying to understand other people" could mean anything else than what it means to them.)

So they express it in the way that it looks to them, because before you get into a precise discussion about what exactly each of you means by that term, that's the only way in which they can get their impression across.

It's natural to get defensive when someone says that you're bad at something you thought you were good at. But the things we get defensive about are also things that we frequently have blindspots around. Now if this kind of thing seems to happen to me again, I try to make an effort to see whether the skill in question might have a dimension that I've been neglecting.

Once I've calmed down and stopped being defensive, that is.

(see also this very related essay by Ferrett)

Comment by kaj_sotala on [Team Update] Why we spent Q3 optimizing for karma · 2019-11-08T09:41:27.973Z · score: 8 (4 votes) · LW · GW

To offer another data point, my vote weights are also 3 / 10, and it hasn't occurred to me to think about these things. I just treat my "3" as a "1", and usually only strong-upvote if I get a clear feeling of "oh wow, I want to reward this extra hard" (i.e. my rule is something like "if I feel any uncertainty about whether this would deserve a strong upvote, then it doesn't").

Comment by kaj_sotala on Total horse takeover · 2019-11-07T14:55:36.735Z · score: 5 (2 votes) · LW · GW

This kind of reminds me of how it goes when I get lucid dreams where I'm in control. They've sounded really great in theory, but my experience is that if I'm in control, then nothing happens without me controlling it. E.g. if I want some other person in my dream, I have to decide everything they say and do. This usually gets tedious within about three seconds and I just want to wake up.

Comment by kaj_sotala on The Curse Of The Counterfactual · 2019-11-06T17:02:30.410Z · score: 5 (2 votes) · LW · GW

Thank you! This was useful advice (and you were right, I hadn't really understood those aspects of the Work).

That sense of urgency and anxiety that came up around the end had continued to re-trigger itself, so I tried the approach in your comment after reading it. Roughly, the belief seemed to be something like "without this anxiety, I will get stuck doing useless things" - which felt kinda true, but I was not super-convinced that the feeling was particularly helpful concerning that problem... still, I had no clear counterevidence, and lacking it I would have gone down an IFS route previously.

But then I went through the steps until I got to "who would I be if I didn't believe that this feeling is necessary for me to stop doing useless things and for actually getting work done in time?"

... huh. A moment of confusion; felt like a novel possibility. Then felt like it would be a big relief... mostly. I think there was some reconsolidation. But also some unease, some objection I didn't quite uncover.

While I didn't manage to get a firm grip of the next objection, the shift was enough to make the anxiety temporarily subside - which by itself was more than I'd managed to do with all the Focusing and IFS that I'd been throwing at it for the last couple of years.

And for the last two days, the anxiety has felt different. Now it has actually been good at pushing me to work, rather than stopping me from getting anything done. Got quite a bit done, and also didn't worry about what the optimal thing was.

I think it would still be better not to need anxiety as a driver in the first place, so I still want to dig into that soon, but these two days were already a big improvement over Monday. So thank you!

Comment by kaj_sotala on The Curse Of The Counterfactual · 2019-11-04T15:16:02.875Z · score: 3 (1 votes) · LW · GW

So if you have a problem like that, I'd appreciate your feedback on the content. (Are the instructions clear? Were you able to apply the technique? What happened afterwards?) Thanks!

The instructions were clear to me; I thought the explanation of the felt sense was one of the best that I've seen. Though a friend of mine who I also showed the explanation to had this question: "Is counting the left turns just an example of a task which requires stopping and thinking, or an actual example of the felt sense? To me it's a visual memory exercise during which I can just count the turns 'aloud', without any particular felt sense."

I have a talk that I need to prepare, and I found myself having difficulties starting on it. In particular, there are several different things that I could be doing which the presentation might benefit from, but I'm having difficulty choosing which one to focus on. So I figured that this would be a good opportunity to try out your ebook.

I read your first question (before I read any of the text explaining the first question), answered it, and got a result that seemed promising. Then I felt like I should read the explanations for the two questions, so I read those, at which point I had lost the felt sense of the original answer. Fortunately I had written my answer down, so I could read it, recapture the felt sense, and proceed with the second question. (I skipped the troubleshooting section, figuring it would mostly be the kind of stuff I already knew.) I wrote down a "stack trace" starting from there (I usually don't do mindhacking while writing down my progress, but maybe I should do more of it; it seemed beneficial by itself).

---

"What bad thing am I thinking about, or expecting will happen?"

I expect that I will start doing the wrong thing. There are lots of things I could potentially be pursuing or working on, but only a limited amount of time, so I might pick one that feels fun and easy to work on, but isn't necessarily the most productive. But I also don't want to pick the tedious-feeling one.

Also I'm afraid that even when I do pick one thing, I will remain uncertain of whether it was the best approach, so I can't properly concentrate on it and will just keep switching tasks. That makes me want to focus on whatever feels the easiest. But again the easiest one isn't necessarily the best, so I again feel worried about making the choice for the wrong reasons.

"What do I want instead?"

To be able to pick the right thing, and work on it with confidence until I have what I need.

An objection comes up: it's impossible to know the right thing for certain (and thus to always pick the right thing).

Another objection: the "work on it with confidence until I have what I need" produces a mental image which is associated with drudgery; working on something that I don't really care about because I'm so focused on it.

Let me try to apply the questions recursively to the objections. the second objection feels more serious (and is an old friend), so let's start with that.

Objection: mental image of drudgery.

"What bad thing am I thinking about, or expecting will happen?"

I'm getting fear, a sense of mild panic. a feeling of being trapped. suffocating. not getting to do anything meaningful, while being forced to do things that I hate and which are draining life out of me. a literal sense of life going to waste, precious minutes that I could be spending on anything becoming forever lost. memories are coming up of various times which felt like that. a sense of urgency.

this feels like it needs a different approach than just the questions; I've already tried IFS on this before, but never really gotten anywhere. let me try The Work on this. for that I think I need to boil down my reactions to a more concise form. hmm.

[at this point I basically drifted away from just doing the two questions, so this transcript basically stops being feedback for the book at this point, but included here for the sake of completeness since I wrote it down anyway]

"Each second that I spend doing something I don't want, is forever wasted." does that fit to what I was getting before?

Kinda. But there's something off about it; "doing something I don't want" isn't quite the right thing. Let's try again...

It feels like there's... an expectation that I myself will always make choices which cause me to do things that feel meaningless. It feels dangerous for me to commit to a course of action, because I will always commit to an action which is soul-draining. Huh.

Is that true? If I commit to something which is meaningful... then I will never stay on track. I will be... pulled away from it somehow. Memories of times when that has been the case. There's a sense of... an almost physical sense of getting pulled to the side, whenever I try to do something which is genuinely meaningful.

can I verbalize that prediction? "It's impossible for me to do anything meaningful. Each time that I try, I will be pulled to the side, away from the meaningful thing."

Is that true?

It feels like it has always been true in the past. Is it necessarily true in the future? Logic says it could change, but my mind seems to predict that it will continue happening.

Can I absolutely know that it's true?

No, I don't think I can. Or I guess that if I lived my whole life, and reflected on the question on my deathbed, then I could know that it was true, if the pattern had continued up until then. But even then I would probably have logical doubts. And it seems kinda silly to expect that it would always happen. The more I think of it, the less likely it seems.

How do you react, what happens, when you believe that thought?

There's that bodily sensation of being pulled again. A sense of struggling to go one way, and it being a constant fight. A resignation that things can't be easy, that I can't ever rest, a wish to just be free of all the things that keep pulling me aside. A sense of tension. Feels like there's a set of reins around my shoulder, pulling me backwards, and I want to just yank at them enough that whoever-is-holding-them drops them and I'm free. I get a sense of a shape, of some ominous figure who is holding them. Looks like a cartoonish anthropomorphic Death.

... I feel like the reins are pulling me to my death. That's what they are. Stealing minutes, hours of my life. I only have a limited time here, and it's not enough that most of the universe's lifetime will be spent with me non-existing; I'm not even allowed to properly exist here.

Should I move on to the next question? I have a sense that I haven't quite understood this yet. But I also have a sense of urgency that I should be making progress, that I've spent quite a while brainhacking and that's not producing any results yet, I should actually start working eventually.

I did an IFS move and asked the sense of urgency to move aside for a moment. It wants a guarantee that I will actually start working after this, and that the whole day won't go to waste from a work perspective. A reasonable worry. After this I will go on a walk, actually decide what I will work on, and then work on it. Can I stick to that? It's not convinced that I can, and neither am I.

Wait, is this sense of urgency exactly the same issue I was just working on? Let me see.

It feels like it's... something which is trying to help me stay on track? That doesn't feel quite right. But it has a similar sense of almost physical pull.

It's feeling very strong now. Sense of anxiety that's making it hard to focus on the actual beliefs and predictions in it. I get - as I've gotten many times - a sense of myself as a teenager, doing something on my computer when I was supposed to be doing something else. A school assignment?

Or maybe just something that I genuinely wanted to do... but getting stuck on instant messenger despite no external pressure, failing to do something that I had been looking forward to.

Huh. I had previously been assuming that this pressure is something that tries to motivate me to do something else. But there's an expectation that... the pressure will keep me in place, preventing me from doing anything?

Because it feels like, I have the pressure in that memory as well... building up, becoming stronger, keeping me more in place.

A particular memory. Complaining online that I wasn't able to do the things I wanted during my vacation. Someone misunderstanding and trying to reassure me by saying that it was vacation, I was free to do whatever I wanted. I had the desire to say that "but these were things that I wanted to do, and I didn't get around to doing them", but never did.

I get a sense of this feeling significant. That I never did say that. Let me imagine saying that. What did I want to have happen in response?

The other person realizing their mistake. Asking me why that happened. Helping me figure it out, suggest solutions. Maybe know something about executive function issues, help me figure it out back when I was still a teenager.

Help me live a life where I wouldn't have been as ashamed of my executive dysfunction issues as I was, and would have understood it to be normal...

Was this sense of pressure... actually a desire to get help? To be noticed? To be understood?

Let me go back to the visualization... yeah, I wanted understanding for my inability to accomplish what I wanted. And an earlier memory comes up, of when something similar happened and it *was* about a school assignment... and I felt ashamed. Or at least embarrassed.

Trying to give my younger selves compassion... it works to some extent. but that sense of urgency comes up again, making me want to rush this. I thought the sense of urgency was what I was working on right now? feels like there's something about this that I'm still missing...

I go back to that teenage me in a chair again. I get an image of... NaNoWriMo? Specifically the year when I was working on my Verani story... of how I had charted out how the story should go but it just felt so dry to try to write it using that method, and I didn't get anything good written.

It feeling dry, but me still needing to come up with words... only having a limited amount of time to do so. Feeling that the sense of creativity and enjoyment is actively blocked by the sense of urgency... that this just feels like sandpaper.

I wanted to enjoy writing fiction. But I couldn't. I could never focus on it. And when I tried to use NaNoWriMo as a way to pressure myself into writing, that felt bad too... like a sense of drudgery.

Getting confused about where this is going or what I should do next, but... I guess this feels connected to that earlier fear, that if I commit to something, then I will just commit to something that feels meaningless? Like it did with Nano... and like it did with school and studies after I'd burned out. And other times...

What's the belief here?

Maybe something like... "if I commit to something, then I will constantly keep wanting to do something else." That's similar to the "it is dangerous for me to commit to a course of action, because I will always commit to an action which is soul-draining" that I got earlier, but slightly different.

Is that true? Again, it has often been true. Hmm. Considering the answers that I get, it feels like this combines the "I can't do anything meaningful" and "everything I commit to will be drudgery" answers from before: I can't do anything that would feel meaningful, because I keep thinking about something else that needs to be done; and I can't commit to anything boring-but-necessary either, because I keep thinking of more fun things.

So when I'm doing something tedious-but-necessary, I keep thinking of more meaningful things; when I'm doing something meaningful, I keep thinking about something tedious-but-necessary.

... feels like I'm a little off again. hmmh.

But okay, "if I ever commit to something, then I will constantly keep wanting to do something else" feels true. Maybe I'll take that as a statement which I'll keep it in my head for an extended UtEB-style integration, see how my mind reacts to that.

[at this point, I'd been at this for about an hour and a half, and was starting to feel tired. the feeling of urgency that I'd triggered felt difficult to just be with, and I was also starting to get a sense that my answers were starting to run around in circles. So I went out on a walk and to get some food; experience suggests that the best way to let the sense of urgency unwind is by not thinking about work for a while, so I haven't gone back to trying to work on the presentation yet.]

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-11-04T15:08:41.846Z · score: 5 (2 votes) · LW · GW

Thank you, I'm happy to hear that there are no more bad feelings. :)

So if, for example, we don't see ourselves as worthless, then experiencing ourselves as "being" or love or okayness is a natural, automatic consequence.

Cool, I've had basically the same model for a while. (Related: my hypothesis is that all the talk about people having difficulty "finding meaning" these days seems somewhat misplaced; if things seem meaningless, it's because someone is suffering from objections to their sense of meaning. If those objections were dealt with, then they would pretty quickly gravitate towards things that felt naturally meaningful.)

Interestingly, now that you've mentioned TTS (indirectly, by linking to your posts referencing it), it reminds me that TTS actually includes something rather like a reconsolidation-oriented approach to quality changes.

Yeah, I don't remember TTS in detail either, but upon reading UtEB it felt like "oh, TTS was a special case of explicitly targeted reconsolidation".

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-11-04T14:49:01.881Z · score: 5 (2 votes) · LW · GW

Thanks! Though in all honesty, now that a few days have passed since I wrote the comment... I've been paying more attention to what I actually do currently, and it feels more UtEB-ish in style, so it might not have been correct to say that UtEB didn't enable me to do that much new.

... maybe. It might also be the case that since I never experienced my parts as particularly anthropomorphic, a large part of my IFS has actually always been working directly on the level of memories. And the reason why I thought that UtEB wasn't telling me that many new things in terms of concrete practice was that my "IFS" had actually been more Coherence Therapy all along.

I'm just confused about what exactly I have been doing, now. :-)

Comment by kaj_sotala on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T08:03:40.254Z · score: 7 (5 votes) · LW · GW

I think that DeepMind realized they'd need another breakthrough to do what they did to Go, and decided to throw in the towel while making it look like they were claiming victory.

Do we know that DeepMind is giving up on StarCraft now? I'd been assuming that this was a similar kind of intermediate result as the MaNa/TLO matches, and that they would carry on with development.

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-31T15:13:36.688Z · score: 20 (6 votes) · LW · GW

Thank you to you and Vaniver for clarifying. I'm sorry for misunderstanding and misrepresenting your position (and for everything else); from now on, I will explicitly interpret all of your UtEB-flavored statements as referencing a generic UtEB-style system, rather than any personal system of yours.

So to answer this: I wouldn't say that any of the three points in your earlier characterization of my position (meta-schema existing, IFS being able to arrive at the same place as other models, or IFS' frame of positive intention) quite captures the reason why I'm arguing in favor of IFS.

Rather, I do think that IFS provides the best package of applications... of the ones that I personally have encountered so far. I'm not claiming that it would be impossible for a system to be better, or that there would be any in-principle reason that would make IFS (or parts-based systems in general) intrinsically superior to any others. I'm sure that for every thing that IFS does, there's a way to do it some other way.

Still, of the ones that I have tried, I have found IFS to have the best combination of flexibility, power, and ease-of-learning so far. Maybe I'm just totally ignorant about this, and you tell me of a better one, and then I look into that and start promoting it instead. (I would be very grateful if you did!)

But to compare it with other systems that I've tried / looked into:

  • More mainstream therapies, such as Cognitive Behavioral Therapy and Acceptance and Commitment Therapy. Read a few books on these and definitely found them valuable, but they felt mostly counteractive, and thus limited in their usefulness.
  • Focusing. Useful, but it was often unclear what exactly I should do with the results that came up, and I didn't always get clear answers. Felt relatively laborious.
  • Core Transformation. The first mindhacking technique that I thought *really* worked, and I kept using it for a long time, but it seemed to just completely fail with some serious issues. Also felt like I had to keep repeating uses of it, or the results would fade.
  • Steve Andreas's self-concept editing. Very transformative on a specific set of issues that made me feel like there was something fundamentally wrong with me. However, didn't seem to work on other kinds of issues, such as putting excessive probability on other people thinking bad things of me and deciding to shun me as a result.
  • Internal Double Crux. Found this very useful for dealing with internal conflicts, but it couldn't seem to deal with conflicts involving what IFS would call extreme parts; trying to deal with them produced odd stuff which IDC didn't tell me how to address. This made me look into IFS, which did.
  • Meditation. Seems useful at spontaneously bringing up and healing some issues, but not good for targeted investigation and healing of any particular one.
  • Coherence Therapy. So far I've only read UtEB, which was more focused on giving examples of it than really documenting the system. I've ordered an actual manual, but what I mostly got from UtEB (besides an improved theoretical understanding) was a few extra tools, which generally didn't enable me to do that much new. Coherence Therapy also seems harder to apply, because it requires figuring out what would count as counterevidence for a particular schema, whereas IFS seems to reliably achieve reconsolidation without needing to understand this on an equally explicit level.

So for each of these systems, either they have relatively narrow applicability or they require a lot of experience to use effectively.

In comparison, IFS has felt like it has broad applicability and like it is relatively easy to learn. Of course, I had the benefit of having worked with several similar systems before, which no doubt made it quicker for me to figure out. But a lot of people do seem to take to IFS intuitively. And it feels like IFS is unusually versatile in at least two respects.

First, it's broad in what kinds of situations it can be used for. It gives you a set of skills that lets you

1) Dig into the core of schemas which are based on an incorrect generalization from the original evidence (solving a "problem" which isn't actually a problem in the first place), the way that e.g. Coherence Therapy does

2) Take schemas which are responding to a correct problem with a counterproductive strategy, and update their strategy on the fly (either before going to a situation that would trigger them, or right after they've triggered)

3) Take schemas whose information may or may not be correct, and unblend from them enough that any new information in the situation allows their information to reconsolidate.

4) Mediate internal conflicts arising from schemas which are both correct but contradictory until you reconcile them, the way Internal Double Crux does

Having a collection of four entirely different contexts in which essentially the same skills can be used seems to make it a lot more flexible than the other, more specialized systems.

Second, my experience is that if you manage to access the original memories behind a schema and do it from a place of Self (two criteria which can admittedly often be tricky to get right), then reconsolidation will basically always happen. Like I suggested in the OP, my model is that IFS does this by essentially hacking how extreme reactions are neurally encoded and exploiting the fact that the problem in any schema bottoms out at "and then I would feel so horrible as to be totally unbearable", allowing you to reconsolidate that by witnessing it from Self.

This means that IFS works on pretty much any issue where you manage to get that far. IME, something like (say) self-concept editing works on things that make you feel like a horrible person, but not on (say) things where you are afraid of being left alone through no intrinsic fault of your own. But if you have an extreme fear of either one, it's because your brain has a schema which predicts that one of them happening or being the case will cause unbearable suffering. IFS lets you reconsolidate that prediction regardless of what the exact flavor of the problem is. This also seems to be in contrast to, say, Coherence Therapy, which AFAICT targets the belief one step earlier in the chain. That is, it targets "being a horrible person" or "being left alone", rather than the "and that would cause unbearable suffering" which follows from that. As a result, Coherence Therapy requires figuring out the exact nature of the counterevidence needed, whereas in IFS just witnessing the schema's prediction from Self acts as universal counterevidence to the prediction of unbearable suffering.

(Probably obvious to you, but just to make it explicit for the other readers; this is not the same as wireheading. You can heal the extreme fear of being left alone, while still strongly preferring not to be left alone and working towards preventing that. What does get fixed is having such a strong fear that you can't reason about it rationally, and extreme reaction patterns which at worst contribute to the very problem they are trying to prevent.)

Of course, none of this means that IFS would be perfect or that I would have managed to fix all of my issues with it alone... but even granting the weaknesses which we've discussed so far, the combination of it being versatile, powerful, and relatively easy to learn makes it the best overall system that I've found. It's also the one which seems to deliver the most "bang for the buck", if I were forced to choose just one system to teach to other people.

But again, if there's some even better system out there, I'd be happy to be pointed to it! I just haven't found one yet.

Also, I don't know whether this intrinsically requires thinking in terms of parts. Probably you could do all of the same things with a more UtEB-style approach as well. For example, I'm guessing that the reason why the NLP phobia cure procedure works is that it has that same element of "realizing that you can recall/re-experience this without it being unbearable" as witnessing something from Self does; and there you work on the level of memories rather than parts.

But at least my feeling has been that the "parts interface" gives you a kind of natural UI from which all of these things flow relatively naturally; if you successfully teach someone the basic set of IFS skills for doing one thing, then it's just a short step towards learning the other things as well. Whereas if you were thinking in more mechanistic terms, you would need to figure out more explicitly how to implement each piece. E.g. UtEB only talked about the kind of stuff you do in conventional therapy sessions, and didn't say anything about on-the-fly updating or using the system for decision-making, suggesting that their framework didn't lend itself to those applications being easily invented.

But then again, maybe this is discussed in some CT manual which I haven't read yet. And for honesty's sake, one close friend of mine who has been using IFS for as long as I have seems recently to have been finding the UtEB approach more effective. So it's certainly possible that I'm wrong about all of this. And again, I would certainly like to be shown a system which was even better than IFS is. :-)

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-31T08:43:06.662Z · score: 7 (3 votes) · LW · GW

Oh. I'm sorry that turning this into a post made you feel uncomfortable and like I was singling you out for public criticism. It wasn't intended as either, but now that you pointed it out, it's kinda obvious that I should have realized that you might and that I should have asked you first.

This post was intended as a compliment in spirit: that I thought your points were important enough that I wanted to highlight them in a more public manner.

I'm not sure if I've ever told you, but I have both high respect for your work in general and gratitude for your contributions personally. First, you've been talking in a lot of detail about mind hacking theory which has only become obvious to me many years later; probably a lot of what I'm currently slowly puzzling together is stuff that's already obvious to you.

Second, because you mentioned Core Transformation on LW a long time ago, I bought the book at the time; then I didn't get around to reading it for a really long time, until I happened to hit a personal bottom. At that point I finally read it and got a lot of help from it. That then started a chain of events which caused me to find more of the Andreases' work, which in turn helped me finally fix the cause of a decades-old depression. None of that might have happened without those old comments of yours, so I'm very grateful for them.

So a part of the motivation for making this post was that I felt that you'd made a lot of good comments and criticisms, and you certainly know what you are talking about, so I wanted to highlight some of those points.

To try to trace back the exact chain of thought that led me to make this post:

  • I started writing a response, and noticed it was getting pretty long, an article's worth on its own
  • I thought something like "pjeby has made several good points on reductionism here; I agree with them, but from the fact that he seems to think I don't, I guess that's not obvious. In case it's just illusion of transparency speaking, I should take these criticisms and more publicly indicate where I agree with them, so that it won't just be buried in the comments of an old post."
  • and "the part about the pragmatic benefits of IFS seems to be getting into a lot of detail about what I think is going on with IFS, as well as new details about how it connects with e.g. the UtEB model; those new details might be interesting to share more widely as well".
  • "and since I just included my summary of what I think to be his core points, and he mostly endorsed them, this would be an easy opportunity to create a distillation of our conversation."

So I made this into its own article (which, as the others noted, I did feel a little uncertain about frontpaging), since that seemed like a nice opportunity to 1) continue our discussion 2) signal-boost and indicate agreement with the points of yours that I thought were important and correct 3) communicate some of my updated thoughts on IFS to a broader audience 4) generally indicate respect for you, in that I'd found your responses important enough to distill and promote.

But I realize now that you found this uncomfortable, and I'm again sorry for not having asked you about it first.

I also didn't realize that you had been trying to explicitly avoid ascribing positions. You had previously discussed your own methods on several occasions, and I assumed that you were contrasting the IFS approach with whatever system you thought was best - and that this would be your own system, since why would you use a system which you didn't think was the best. :-) I'm sorry for misreading and mischaracterizing you; I've now edited the post to remove terms such as "pjebyan". If you want me to edit something else in the post, or even take it down entirely, please just let me know.

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T18:21:50.654Z · score: 3 (1 votes) · LW · GW

Eh, I think that dissolving agency is actually pretty darn important, in a variety of ways, both practical and theoretical.

I certainly agree! Have you looked at some of the later posts that I've been referencing in my comments, say the one on neural Turing machines? (I know you read the UtEB one.) Dissolving agency has been one of my reasons for writing them, too.

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T18:15:18.586Z · score: 3 (1 votes) · LW · GW

I'd like to go a bit meta before I respond to the object-level content in this comment, because I feel like we're talking past each other somehow.

Could you say what exactly is the position that you are arguing for, in this conversation?

For my own part, I'm not trying to say that IFS is necessarily the best system, a totally unique system, or even the only system that one should use. I do think that, of the mind hacking systems that I have encountered, it is a pretty good one; and I count it among the most life-changing ones that I have found.

But this is certainly not a claim that it would be impossible to do even better. In fact I'm quite certain that there are some issues for which IFS does not work very well, and for which there must be something better.

So when you say things like, for example,

But none of these arguments actually favor IFS over other systems, or even distinguish it at all! [...] But that "positive intention" frame isn't new, and definitely isn't unique to IFS. Other therapy modalities had it long before IFS was developed, and you can use a reductionist modality without losing the benefits.

then I'm left a little confused as to why you are saying them. I don't think I've said that the "positive intention" frame would be new, or unique to IFS. Certainly there are other therapy modalities that have it too. And these arguments might not favor IFS over other systems, but then I never said that IFS would be the best possible system. I just said that it seems to be a pretty good system that has worked pretty well for me, and for several other people I know.

In terms of conversational frames, I feel like I came to this conversation mostly in a gears-oriented frame, where I don't have any very strong agenda. You seemed to have criticisms about IFS which looked to me like they were based on misunderstandings of IFS, so I tried to correct them; and you seemed confused about why people found IFS valuable, so I tried to share my perspective on why people do. Where we disagree, I'm mostly interested in fleshing out the details of why and how, so as to better combine our models.

And maybe I'm just totally misreading you, but I get the vibe that you are (or at least perceive me to be) in some kind of a dominance frame, and want to establish that IFS isn't a very good system? Or shoot down my claim that IFS is a uniquely good system? Or something?

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T10:40:43.077Z · score: 3 (1 votes) · LW · GW

In other words, on at least the "IFS as a reductionist model" side, it looks like you have used an awful lot of words to basically concede my point. ;-)

Well... yes? That's what I said in the opening, that I think we mostly agree on the reductionist thing but are just choosing to emphasize things somewhat differently and have mild disagreements on what's a useful framing. :-)

Comment by kaj_sotala on bgaesop's Shortform · 2019-10-29T17:55:40.262Z · score: 15 (3 votes) · LW · GW

Yeah I don't know what exactly was said, but given that this was the CFAR alumni reunion, I would be willing to give the speaker the benefit of the doubt and assume a non-crazy presentation until I hear more details. Especially since a lot of things which have sounded crazy and mystical have turned out to have reasonable explanations.

Comment by kaj_sotala on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-29T17:47:25.636Z · score: 3 (3 votes) · LW · GW

If someone is better off thinking of things as parts that have wants, then they should get the IFS-derived version; if not, then not.

Yeah, this is basically my position. I don't have any particularly strong opinion on when exactly one is better off thinking that, though. Feels like it depends a lot on the details of their personality, as well as how interested they are in the details in the first place - e.g. if someone was only interested in getting a practical system that they could use as soon as possible, I'd probably point them to IFS. If they wanted to have a thorough understanding of what exactly they were doing and the neural basis of it, I'd give them UtEB. Or something.

IFS feels like it's easier to just get started with, since the process is self-guiding to an extent. So I would probably suggest anyone try out at least a couple of sessions with a facilitator, just in case they could grab low-hanging fruit that way. I do think that if people got into this via IFS and it seems to work, there's no reason to abandon it, though complementing it with other techniques when IFS seems to fail is probably a good idea.

Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-29T15:01:19.778Z · score: 5 (2 votes) · LW · GW

Here's my reply! Got article-length, so I posted it separately.

Comment by kaj_sotala on bgaesop's Shortform · 2019-10-29T13:54:49.035Z · score: 9 (4 votes) · LW · GW

After 100 years of parapsychology research, it's pretty obvious to anyone with a halfway functioning outside view that any quick experiment will either be flawed or say chakras are not real

I don't know why the person didn't want to do an experiment, and I'd be willing to extend them the benefit of the doubt, but is there some particular research disproving chakras? So far I'd been going with the non-mystical chakra model that

In general, if you translate all mystical statements to be talking about the internal experiences, they’ll make a lot more sense. Let’s take a few common mystical concepts and see how we can translate them.
Energy -- there are sensations that form a feeling of something liquid (or gaseous) that moves within or outside of your body. When unpacked, it’s likely to be made up of sensations of light (visual), warmth, tingling, and tension (physical). “Channeling energy” is adjusting your state of mind so as to cause these sensations to “move” in a certain way, to appear or disappear.
Chakras -- points in your body where it’s particularly easy to feel certain types of energies. It’s also particularly easy to visualize / feel the energy moving into or out of those places. “Aligning chakras” is about adjusting your state of mind so as to cause the energy to flow evenly through all chakras.
(Chakras are a great example of a model that pays rent. You can read about chakras, see what predictions people are making about your internal experience when you explore chakras, and then you can go and explore them within yourself to see if the predictions are accurate.)

... and after meditation caused me to have these kinds of persistent sensations on my forehead, I assumed that "oh, I guess that's what the forehead chakra thing is referring to". Another post suggested that experiences of "energy" correspond to conscious representations of autonomic nervous system activity, and the chakras to physiological hubs of that activity.

That has seemed sensible enough to me, but the topic hasn't seemed important enough to explore in detail; should I assume that this model is actually wrong?

Comment by kaj_sotala on bgaesop's Shortform · 2019-10-25T19:36:37.319Z · score: 6 (3 votes) · LW · GW

Definitely. But note that according to the paper, the stress thing “was observed in a total of three participants”; he says that he then “went on to conduct other experiments” and found results along similar lines, and then gives the yoga and racism examples. So it’s not clear to me exactly how many individuals had that kind of a disconnect between their experience of stress and their objective level of stress; 3/50 at least sounds like a pretty small minority.

I'm intending to flesh out my model further in a future post, but the short version is that I don't believe the loss of awareness to be an inevitable consequence of all meditation systems - though it is probably a real risk with some. Metaphorically, there are several paths that lead to enlightenment and some of them run the risk of reducing your awareness, but it seems to me entirely possible to take safer paths.

Comment by kaj_sotala on bgaesop's Shortform · 2019-10-25T06:07:27.822Z · score: 9 (4 votes) · LW · GW

Note that I already discussed this paper a bit at the end of my earlier post on meditation; (this kind of) enlightenment removing your subjective suffering over your incompetence and otherwise leaving most of your behavior intact is as predicted and would still be considered valuable by many people. Also, enlightenment is only one of the things you can develop via meditation, and if you want practical benefits there are other axes that you can focus on.

Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-22T08:44:32.443Z · score: 3 (1 votes) · LW · GW

Thanks for the clarifications! I'll get back to you with my responses soon-ish.

Comment by kaj_sotala on What's your big idea? · 2019-10-20T19:27:27.232Z · score: 5 (2 votes) · LW · GW

Hmm. This interpretation was the impression that I recall getting from reading Jensen's The g Factor, though it's possible that I misremember. Then again, he may have been arguing that IQ tests should aim to measure g even if they don't necessarily always do, while holding up the most g-loaded ones as the gold standard.

Comment by kaj_sotala on What's your big idea? · 2019-10-20T19:16:48.478Z · score: 5 (2 votes) · LW · GW

Here the situation is different in that it's not just that we don't know how to measure X, but rather that the way in which we have derived X means that directly measuring it is impossible even in principle.

That's distinct from something like (say) self-esteem, where it might be the case that we could figure out what self-esteem really means, or at least come up with a satisfactory instrumental definition for it. There's nothing in the normal definition of self-esteem that would make it impossible to measure on an individual level. Not so with g.

Of course, one could come up with a definition for something like "intelligence", and then try to measure that directly - which is what people often do, when they say that "intelligence is what intelligence tests measure". But that's not the same as measuring g.

This matters because it's part of what makes e.g. the Flynn effect so hard to interpret - yes, raw test scores on IQ tests have gone up, but have people actually gotten smarter? We can't directly measure g, so a rise alone doesn't yet tell us anything. On the other hand, if people's scores on a test of self-esteem went up over time, then it would be much more straightforward to assume that people's self-esteem has probably actually gone up.

Comment by kaj_sotala on What's your big idea? · 2019-10-20T16:30:19.988Z · score: 7 (3 votes) · LW · GW

The g-factor, or g for short, is the thing that IQ tries to measure.

The name "g factor" comes from the fact that it is a common, general factor which all kinds of intelligence draw upon. For instance, Deary (2001) analyzed an American standardization sample of the WAIS-III intelligence test, and built a model where performance on the 13 subtests was primarily influenced by four group factors, or components of intelligence: verbal comprehension, perceptual organization, working memory, and processing speed. In addition, there was a common g factor that strongly influenced all four.

The model indicated that the variance in g was responsible for 74% of the variance in verbal comprehension, 88% of the variance in perceptual organization, 83% of the variance in working memory, and 61% of the variance in processing speed.

Technically, g is computed from the correlations between various test scores in a given sample, and there's no such thing as the g of any specific individual. The technique doesn't even guarantee that g corresponds to any physical quantity, as opposed to being something that the method just happened to produce by accident.

So when you want to measure someone's intelligence, you have a large sample of people take tests that are known to be strongly g-loaded, meaning that performance on them is strongly correlated with g. Then you take their raw scores and standardize them to produce an IQ score, so that if e.g. only 10% of the test-takers got a raw score of X, then anyone getting a raw score of X is assigned an IQ indicating that they're in the top 10% of the population. And although IQ still doesn't tell us what an individual's g score is, it gives us a score that's closely correlated with g.
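
To make the mechanics concrete, here is a minimal Python sketch of both steps - extracting a common factor from correlated subtest scores, and standardizing a composite into IQ. Everything in it (the synthetic data, the use of PCA as a stand-in for proper factor analysis, the loadings) is invented for illustration and is not Deary's actual analysis:

    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(0)
    n_people = 10_000

    # Simulate a latent common factor and four subtests drawing on it.
    # The loadings are chosen so that their squares (~0.74, 0.88, 0.83,
    # 0.61) mirror the variance shares quoted above.
    g = rng.normal(size=n_people)
    loadings = np.array([0.86, 0.94, 0.91, 0.78])
    noise = rng.normal(size=(n_people, 4))
    scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

    # "Extract g": first eigenvector of the correlation matrix.
    corr = np.corrcoef(scores, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
    estimated = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
    print(np.round(estimated, 2))                    # close to the true loadings

    # Standardize a g-loaded composite into IQ: rank-normalize so that
    # the population has mean 100 and standard deviation 15.
    composite = scores.sum(axis=1)
    percentile = (rankdata(composite) - 0.5) / n_people
    iq = 100 + 15 * norm.ppf(percentile)
    print(round(float(np.percentile(iq, 90))))       # top-10% cutoff, ~119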

Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-20T12:13:23.046Z · score: 10 (4 votes) · LW · GW

The content of this and the other comment thread seems to be overlapping, so I'll consolidate (pun intended) my responses to this one. Before we go on, let me check that I've correctly understood what I take to be your points.

Does the following seem like a fair summary of what you are saying?

Re: IFS as a reductionist model:

  • Good reductionism involves breaking down complex things into simpler parts. IFS "breaks down" behavior into mini-people inside our heads, each mini-person as complex as a full psyche. This isn't simplifying anything.
  • Talking about subagents/parts or using intentional language causes people to assign things properties that they actually don't have. If you say that a thermostat "wants" the temperature to be something in particular, or that a part "wants" to keep you safe, then you will predict its behavior to be more flexible and strategic than it really is.
  • The real mechanisms behind emotional issues aren't really doing anything agentic, such as strategically planning ahead for the purpose of achieving a goal. Rather they are relatively simple rules which are used to trigger built-in subsystems that have evolved to run particular kinds of action patterns (punishing, protesting, idealistic virtue signalling, etc.). The various rules in question are built up / selected for using different reinforcement learning mechanisms, and define when the subsystems should be activated (in response to what kind of a cue) and how (e.g. who should be the target of the punishing).
  • Reinforcement learning does not need to have global coherence. Seemingly contradictory behaviors can be explained by e.g. a particular action being externally reinforced or becoming self-reinforcing, all the while it causes globally negative consequences despite being locally positive.
  • On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
  • The assumption of dedicated hardware for each instance of an action pattern is multiplying entities beyond necessity. The kinds of reinforcement learning systems that have been described can generate the same kinds of behaviors with much less dedicated hardware. You just need the learning systems, which then learn rules for when and how to trigger a much smaller number of dedicated subsystems.
  • The assumption of dedicated hardware for each instance of an action pattern also contradicts the reconsolidation model, because if each part was a piece of built-in hardware, then you couldn't just entirely change its behavior through changing earlier learning.
  • Everything in IFS could be described more simply in terms of if-then rules, reinforcement learning etc.; if you do this, you don't need the metaphor of "parts", and you also have a more correct model which does actual reduction to simpler components.

Re: the practical usefulness of IFS as a therapeutic approach:

  • Approaching things from an IFS framework can be useful when working with clients with severe trauma, or other cases when the client is not ready/willing to directly deal with some of their material. However, outside that context (and even within it), IFS has a number of issues which make it much less effective than a non-parts-based approach.
  • Thinking about experiences like "being in distress" or "inner criticism" as parts that can be changed suggests that one could somehow completely eliminate those. But while triggers to pre-existing brain systems can be eliminated or changed, those brain systems themselves cannot. This means that it's useless to try to get rid of such experiences entirely. One should rather focus on the memories which shape the rules that activate such systems.
  • Knowing this also makes it easier to unblend, because you understand that what is activated is a more general subsystem, rather than a very specific part.
  • If you experience your actions and behaviors being caused by subagents with their own desires, you will feel less in control of your life and more at the mercy of your subagents. This is a nice crutch for people with denial issues who want to disclaim their own desires, but not a framework that would enable you to actually have more control over your life.
  • "Negotiating with parts" buys into the above denial, and has you do playacting inside your head without really getting into the memories which created the schemas in the first place. If you knew about reconsolidation, you could just target the memories directly, and bypass all of the extra hassle.
  • "Developing self-leadership" involves practicing a desired behavior so that it could override an old one; this is what Unlocking the Emotional Brain calls a counteractive strategy, and is fragile in all the ways that UtEB describes. It would be much more effective to just use a reconsolidation-based approach.
  • IFS makes it hard to surface the assumptions behind behavior, because one is stuck in the frame of negotiating with mini-people inside one's head, rather than looking at the underlying memories and assumptions. Possibly an experienced IFS therapist can help look for those assumptions, but then one might as well use a non-parts-based framework.
  • Even when the therapist does know what to look for, the fact that IFS does not have a direct model of evidence and counterevidence makes it hard to find the interventions which will actually trigger reconsolidation. Rather one just acts out various behaviors which may trigger reconsolidation if they happen to hit the right pattern.
  • Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.

Comment by kaj_sotala on The Cognitive Science of Rationality · 2019-10-19T12:09:49.366Z · score: 4 (2 votes) · LW · GW

Overcoming Bias; most of Eliezer's LW writing was originally posted on OB, until LW was created as a community where it would be easier for other people to write about these topics as well, and Eliezer's writing got moved here.

Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-17T12:54:31.368Z · score: 9 (2 votes) · LW · GW

Voila! The same three things (Exile, Firefighter, Manager), described in less text and without the need for a concept of "parts".

If it was just that brief description, then sure, the parts metaphor would be unnecessary. But the IFS model contains all kinds of additional predictions and applications which make further use of those concepts.

For example, firefighters are called that because "they are willing to let the house burn down to contain the fire"; that is, when they are triggered, they typically act to make the pain stop, without any regard for consequences (such as loss of social standing). At the same time, managers tend to be terrified of exactly the kind of lack of control that's involved with a typical firefighter response. This makes firefighters and managers typically polarized - mutually opposed - with each other.

Now, it's true that you don't need to use the "part" expression for explaining this. But if we only talked about various behaviors getting reinforced, we wouldn't predict that the system simultaneously considers a loss of social standing to be a bad thing, and also keeps reinforcing behaviors which cause exactly that. Now, obviously it can still be explained in a more sophisticated reinforcement model, in which you talk about e.g. differing prioritizations, and some behavioral routines kicking in under different circumstances...

...but if at the end, this comes down to there being two distinct kinds of responses depending on whether you are trying to avoid a situation or are already in it, then you need names for those two categories anyway. So why not go with "manager" and "firefighter" while you're at it?
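
For illustration, the two categories can even be written down as bare trigger-action rules - a deflationary sketch with invented thresholds:

    # Toy sketch of the two protective response categories; numbers invented.

    def protective_response(pain: float, anticipated_threat: float) -> str:
        if pain > 5.0:
            # "Firefighter": reactive and consequence-blind - fires once
            # the pain is already live, even at the cost of standing.
            return "numb the pain now, whatever it costs"
        if anticipated_threat > 2.0:
            # "Manager": proactive - fires to keep the feared situation
            # from arising in the first place.
            return "preemptively control the situation"
        return "no protective response"

The polarization then falls out of the dynamics: a firefighter response causes exactly the loss of control that the manager-type rules are trying to prevent, so each category's activity tends to escalate the other's.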

And sure, you could call it, say, "a response pattern" instead of "part" - but the response pattern is still physically instantiated in some collection of neurons, so it's not like "part" would be any less correct, or worse at reductionism. Either way, you still get a useful model of how those patterns interact to cause different kinds of behavior.

From this discussion and the one on reconsolidation, I would hazard a guess that to the extent IFS is more useful than some non-parts-based (non-partisan?) approach, it is because one's treatment of the "parts" (e.g. with compassion) can potentially trigger a contradiction and therefore reconsolidation. [...] But "therapeutic metaphor" and "reductionist model" are not the same thing. IFS has a useful metaphor -- in some contexts -- but AFAICT it is not a very good model of behavior, in the reductionist sense of modeling.

I agree that the practical usefulness of IFS is distinct from the question of whether it's a good model of behavior.

That said, if we are also discussing the benefits of IFS as a therapeutic method, then what you said is one aspect of what I think makes it powerful. Another is its conception of Self and unblending from parts.

I have had situations where, for instance, several conflicting thoughts were going around my head, and identifying with all of them at the same time felt like I was being torn in several different directions. However, I have then been able to unblend from each part, go into Self, and experience myself as listening to the concerns of the parts while personally remaining in Self; in some situations, I have been able to facilitate a dialogue between the parts and then feel fine.

IFS also has the general thing of "fostering Self-Leadership", where parts are gradually convinced to remain slightly on the side as advisors, while keeping Self in control of things at all times. The narrative is something like this: it can only happen if the Self is willing to take the concerns of _all_ the parts into account. The system learns to increasingly give the Self leadership, not because the parts agree that the Self's values are better than theirs, but because they come to trust the Self as a leader which does its best to fulfill the values of all the parts. And this trust is only possible because the Self is the only part of the system which doesn't have its own agenda, except for making sure that every part gets what it wants.

This is further facilitated by there being distinctive qualities of being in Self, and IFS users developing a "parts detector" which lets them notice when parts have been triggered, helping them unblend and return back to Self.

I'm not saying that you couldn't express unblending in a non-partisan way. But I'm not sure how you would use it if you didn't take the frame of parts and unblending from them. To be more explicit, by "use it" here I mean "be able to notice when you have been emotionally triggered, and then get some distance from that emotional reaction in the very moment when you are triggered, being able to see the belief in the underlying schema but neither needing to buy into it nor needing to reject it".

(But of course, as you said, this is a digression to whether IFS is a useful mindhacking tool, which is distinct from the question of whether it's good reductionism.)

If I try to steelman this argument, I have to taboo "agent", since otherwise the definition of subagent is recursive and non-reductionistic. I can taboo it to "thing", in which case I get "things which just try to prevent/achieve something", and now I have to figure out how to reduce "try"...

I said a few words about my initial definition of agent in the sequence introduction:

One particular family of models that I will be discussing, will be that of multi-agent theories of mind. Here the claim is not that we would literally have multiple personalities. Rather, my approach will be similar in spirit to the one in Subagents Are Not A Metaphor:
Here’s are the parts composing my technical definition of an agent:
1. Values
This could be anything from literally a utility function to highly framing-dependent. Degenerate case: embedded in lookup table from world model to actions.
2. World-Model
Degenerate case: stateless world model consisting of just sense inputs.
3. Search Process
Causal decision theory is a search process. “From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: lookup table from world model to actions.
Note: this says a thermostat is an agent. Not figuratively an agent. Literally technically an agent. Feature not bug.
This is a model that can be applied naturally to a wide range of entities, as seen from the fact that thermostats qualify. And the reason why we tend to automatically think of people - or thermostats - as agents, is that our brains have evolved to naturally model things in terms of this kind of an intentional stance; it’s a way of thought that comes natively to us.
Given that we want to learn to think about humans in a new way, we should look for ways to map the new way of thinking into a native mode of thought. One of my tactics will be to look for parts of the mind that look like they could literally be agents (as in the above technical definition of an agent), so that we can replace our intuitive one-agent model with intuitive multi-agent models without needing to make trade-offs between intuitiveness and truth. This will still be a leaky simplification, but hopefully it will be a more fine-grained leaky simplification, so that overall we’ll be more accurate.

I don't think that the distinction between "agent" and "rule-based process" really cuts reality at the joints; an agent is just any set of rules that we can meaningfully model by taking an intentional stance. A thermostat can be called a set of rules which adjusts the heating up when the temperature is below a certain value, and adjusts it down when the temperature is above that value; or it can be called an agent which tries to maintain a target temperature by adjusting the heating. Both descriptions make the same predictions; they're just different ways of describing the same thing.
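
To make the equivalence concrete, here's a minimal sketch of the same thermostat written both ways; the numbers are arbitrary:

    # The same thermostat described two ways; both produce identical
    # behavior, and the difference is only in the stance we take.

    TARGET = 21.0

    def thermostat_as_rules(temp: float) -> str:
        # Pure if-then rules; no mention of goals anywhere.
        if temp < TARGET:
            return "heat up"
        if temp > TARGET:
            return "heat down"
        return "idle"

    class ThermostatAsAgent:
        """Intentional stance: values, a world-model, and a search process."""

        def __init__(self, target: float):
            self.target = target  # values: the preferred temperature

        def act(self, sensed_temp: float) -> str:
            world_model = sensed_temp  # degenerate world-model: raw sense input
            effects = {"heat up": +1.0, "heat down": -1.0, "idle": 0.0}
            # Search process: pick the action whose predicted outcome
            # best satisfies the values.
            return min(effects,
                       key=lambda a: abs(world_model + effects[a] - self.target))

    for t in [18.0, 21.0, 24.0]:
        assert thermostat_as_rules(t) == ThermostatAsAgent(TARGET).act(t)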

Or as I discussed in "Integrating disagreeing subagents":

The frame that I’ve had so far is that of the brain being composed of different subagents with conflicting beliefs. On the other hand, one could argue that the subagent interpretation isn’t strictly necessary for many of the examples that I bring up in this post. One could just as well view my examples as talking about a single agent with conflicting beliefs.
The distinction between these two frames isn’t always entirely clear. In “Complex Behavior from Simple (Sub)Agents”, mordinamael presents a toy model where an agent has different goals. Moving to different locations will satisfy the different goals to a varying extent. The agent will generate a list of possible moves and picks the move which will bring some goal the closest to being satisfied.
Is this a unified agent, or one made up of several subagents?
One could argue for either interpretation. On the one hand, mordinamael's post frames the goals as subagents, and they are in a sense competing with each other. On the other hand, the subagents arguably don't make the final decision themselves: they just report expected outcomes, and then a central mechanism picks a move based on their reports.
This resembles the neuroscience model I discussed in my last post, where different subsystems in the brain submit various action “bids” to the basal ganglia. Various mechanisms then pick a winning bid based on various criteria - such as how relevant the subsystem’s concerns are for the current situation, and how accurate the different subsystems have historically been in their predictions.
Likewise, in extending the model from Consciousness and the Brain for my toy version of the Internal Family Systems model, I postulated a system where various subagents vote for different objects to become the content of consciousness. In that model, the winner was determined by a system which adjusted the vote weights of the different subagents based on various factors.
So, subagents, or just an agent with different goals?
Here I would draw an analogy to parliamentary decision-making. In a sense, a parliament as a whole is an agent. Various members of parliament cast their votes, with “the voting system” then “making the final choice” based on the votes that have been cast. That reflects the overall judgment of the parliament as a whole. On the other hand, for understanding and predicting how the parliament will actually vote in different situations, it is important to model how the individual MPs influence and broker deals with each other.
Likewise, the subagent frame seems most useful when a person’s goals interact in such a way that applying the intentional stance - thinking in terms of the beliefs and goals of the individual subagents - is useful for modeling the overall interactions of the subagents.
For example, in my toy Internal Family Systems model, I noted that reinforcement learning subagents might end up forming something like alliances. Suppose that a robot has a choice between making cookies, poking its finger at a hot stove, or daydreaming. It has three subagents: “cook” wants the robot to make cookies, “masochist” wants to poke the robot’s finger at the stove, and “safety” wants the robot to not poke its finger at the stove.
By default, “safety” is indifferent between “make cookies” and “daydream”, and might cast its votes at random. But when it votes for “make cookies”, then that tends to avert “poke at stove” more reliably than voting for “daydream” does, as “make cookies” is also being voted for by “cook”. Thus its tendency to vote for “make cookies” in this situation gets reinforced.
We can now apply the intentional stance to this situation, and say that “safety” has "formed an alliance" with “cook”, as it correctly “believes” that this will avert masochistic actions. If the subagents are also aware of each other and can predict each other's actions, then the intentional stance gets even more useful.
Of course, we could just as well apply the purely mechanistic explanation and end up with the same predictions. But the intentional explanation often seems easier for humans to reason with, and helps highlight salient considerations.
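
To make the quoted alliance dynamic concrete, here is a minimal sketch of that voting setup; the vote weights, reinforcement size, and tie-breaking rule are all invented:

    import random

    ACTIONS = ["make cookies", "poke stove", "daydream"]

    # "cook" and "masochist" have fixed preferences; "safety" starts out
    # indifferent between the two actions that avert poking the stove.
    fixed_votes = ["make cookies", "poke stove"]
    safety_weights = {"make cookies": 1.0, "daydream": 1.0}

    def run_round():
        tally = {a: 0.0 for a in ACTIONS}
        for vote in fixed_votes:
            tally[vote] += 1.0
        safety_vote = random.choices(
            list(safety_weights), weights=list(safety_weights.values()))[0]
        tally[safety_vote] += 1.0
        best = max(tally.values())
        winner = random.choice([a for a, v in tally.items() if v == best])
        return winner, safety_vote

    for _ in range(5000):
        winner, safety_vote = run_round()
        if winner != "poke stove":
            # Safety's vote is reinforced whenever pain was averted.
            # "make cookies" averts the stove every time (cook votes for
            # it too); "daydream" only wins the resulting tie sometimes.
            safety_weights[safety_vote] += 0.1

    print(safety_weights)  # "make cookies" ends up far ahead

Taking the intentional stance toward the result, "safety" has learned to "ally" with "cook"; the mechanistic and intentional descriptions pick out the same behavior.
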
Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-17T11:05:18.408Z · score: 3 (1 votes) · LW · GW
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is "count(subject matter) times count(strategies)" instead of "count(subject matter) plus count(strategies)". By simple math, a formulation which uses brain modules for strategies plus rules they operate on, is fewer entities than one entity for every rule+strategy combo.

It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about are basically the same as what IFS calls parts. You didn't seem to object to the description of schemas; does your objection also apply to them?

IFS in general is very vague about how exactly the parts are implemented on a neural level. It's not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.

Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they're reinforced. So you get "firefighting" for free as a side-effect of the brain being able to learn from reinforcement

I agree that reinforcement learning definitely plays a role in which parts/behaviors get activated, and discussed that in some of my later posts [1 2]; but there need to be some innate hardwired behaviors which trigger when the organism is in sufficient pain. An infant which needs help cries; it doesn't just try out different behaviors until it hits upon one which gets it help and which then gets reinforced.

And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit your description: e.g. a desire to keep playing a game can get "stuck on" way past the time when it has stopped being beneficial - such as when I've slept in between and feel a need to continue the game first thing in the morning, even though there isn't any pain to distract myself from anymore, and the compulsion itself produces pain. This is not consistent with a simple "behaviors get reinforced" model, but it is more consistent with a "parts can get stuck on after they have been activated" model.

And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior.

Not sure what you mean by agency?


Comment by kaj_sotala on Misconceptions about continuous takeoff · 2019-10-15T09:44:49.239Z · score: 3 (1 votes) · LW · GW

Miles Brundage argues that "it’s an impressive achievement, but considering it in this larger context should cause us to at least slightly decrease our assessment of its size/suddenness/significance in isolation".

In the wake of AlphaGo’s victory against Fan Hui, much was made of the purported suddenness of this victory relative to expected computer Go progress. In particular, people at DeepMind and elsewhere have made comments to the effect that experts didn’t think this would happen for another decade or more. One person who said such a thing is Remi Coulom, designer of CrazyStone, in a piece in Wired magazine. However, I’m aware of no rigorous effort to elicit expert opinion on the future of computer Go, and it was hardly unanimous that this milestone was that long off. I and others, well before AlphaGo’s victory was announced, said on Twitter and elsewhere that Coulom’s pessimism wasn’t justified. Alex Champandard noted that at a gathering of game AI experts a year or so ago, it was generally agreed that Go AI progress could be accelerated by a concerted effort by Google or others. At AAAI last year [2015], I also asked Michael Bowling, who knows a thing or two about game AI milestones (having developed the AI that essentially solved limit heads-up Texas Hold Em), how long it would take before superhuman Go AI existed, and he gave it a maximum of five years. So, again, this victory being sudden was not unanimously agreed upon, and claims that it was long off are arguably based on cherry-picked and unscientific expert polls. [...]
Hiroshi Yamashita extrapolated the trend of computer Go progress as of 2011 into the future and predicted a crossover point to superhuman Go in 4 years, which was one year off. In recent years, there was a slowdown in the trend (based on highest KGS rank achieved) that probably would have led Yamashita or others to adjust their calculations if they had redone them, say, a year ago, but in the weeks leading up to AlphaGo’s victory, again, there was another burst of rapid computer Go progress. I haven’t done a close look at what such forecasts would have looked like at various points in time, but I doubt they would have suggested 10 years or more to a crossover point, especially taking into account developments in the last year. Perhaps AlphaGo’s victory was a few years ahead of schedule based on reported performance, but it should always have been possible to anticipate some improvement beyond the (small team/data/hardware-based) trend based on significant new effort, data, and hardware being thrown at the problem. Whether AlphaGo deviated from the appropriately-adjusted trend isn’t obvious, especially since there isn’t really much effort going into rigorously modeling such trends today. Until that changes and there are regular forecasts made of possible ranges of future progress in different domains given different effort/data/hardware levels, “breakthroughs” may seem more surprising than they really should be.
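
(The style of forecast described here - fit the progress trend, solve for the crossover with human level - is simple enough to sketch. The numbers below are invented for illustration, not Yamashita's actual data:)

    import numpy as np

    # Fit a linear trend to (year, strength) data and solve for the year
    # it crosses a fixed human-champion level. Purely illustrative numbers.
    years = np.array([2007, 2008, 2009, 2010, 2011])
    bot_strength = np.array([1800.0, 2000.0, 2150.0, 2350.0, 2500.0])
    human_champion = 3500.0

    slope, intercept = np.polyfit(years, bot_strength, 1)
    crossover_year = (human_champion - intercept) / slope
    print(f"projected crossover: {crossover_year:.1f}")  # ~2016.7 here
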
Comment by kaj_sotala on Maybe Lying Doesn't Exist · 2019-10-15T08:54:39.630Z · score: 4 (2 votes) · LW · GW

I didn't mean "coordination" just in the sense of "coordination problems" (in the technical economic sense), but as language existing to enable any coordination at all. In the sense where, if I ask you to bring me a glass of water, we have coordinated on an action to bring me a glass of water. I don't think that this is just an effect which needs to be taken into account, but rather one of the primary reasons why language exists in the first place. Its usefulness for making improved (non-coordination-related) predictions could very well be a later addition that just happened to get tacked onto the existing mechanism.

Comment by kaj_sotala on Maybe Lying Doesn't Exist · 2019-10-15T08:49:21.629Z · score: 13 (7 votes) · LW · GW

"Your honor, I know I told the customer that the chemical I sold to them would cure their disease, and it didn't, and I had enough information to know that, but you see, I wasn't conscious that it wouldn't cure their disease, as I was selling it to them, so it isn't really fraud" would not fly in any court that is even seriously pretending to be executing justice.

(just to provide the keyword: the relevant legal doctrine here is that the seller "knew or should have known" that the drug wouldn't cure the disease)

Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-14T20:41:43.003Z · score: 6 (2 votes) · LW · GW

(adding to my other comment)

dividing people into lots of mini-people isn't a reduction.

And like, the post you're responding to just spent several thousand words building up a version of IFS which explicitly doesn't have "mini-people" and where the subagents are much closer to something like reinforcement learning agents which just try to prevent/achieve something by sending different objects to consciousness, and learn based on their success in doing so...

Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-14T19:40:57.703Z · score: 3 (1 votes) · LW · GW

If IFS said, "brains have modules for these types of mental behavior", (e.g. hiding, firefighting, etc.), then that would also be a reduction.

I'm not sure why IFS's exile-manager-firefighter model doesn't fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something -> exile being triggered and getting anxious -> gaming firefighter seeking to suppress the anxiety with a game -> inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as "little people". They're basically just simple trigger-action rules too, like "if there is something that Kaj should be doing and he isn't getting around to doing it, start ramping up an increasing level of reminders".
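
The loop really is just a handful of such rules chained together; as a toy sketch, with all the escalation numbers invented:

    # The gaming loop as bare trigger-action rules; numbers are invented.
    criticism_level = 1
    anxiety = 0
    task_pending = True

    for step in range(4):
        if task_pending:
            anxiety += criticism_level  # the critic's reminders trigger the exile
            criticism_level += 1        # and the critic escalates each round
        if anxiety >= 2:
            # Firefighter: suppress the anxiety with a game. The pain stops,
            # but the task stays undone, so the loop repeats.
            print(f"step {step}: anxiety={anxiety} -> launching game")
            anxiety = 0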

There's also Janina Fisher's model of IFS parts being linked to various specific defense systems. The way I read the first quote in the linked comment, she does conceptualize IFS parts as something like state-dependent memory; for exiles, this seems like a particularly obvious interpretation even when looking at the standard IFS descriptions of them, which talk about them being stuck at particular ages and events.

but compassion towards a "part" is not really necessary for that, just that one suppress commentary.

Certainly one can get the effect without compassion too, but compassion seems like a particularly effective and easy way of doing it. Especially given that in IFS you just need to ask parts to step aside until you get to Self, and then the compassion is generated automatically.

Comment by kaj_sotala on Maybe Lying Doesn't Exist · 2019-10-14T16:53:08.221Z · score: 6 (5 votes) · LW · GW

For demonstrating anything which involves a matter of degree, the point is communicated most effectively by highlighting examples which are at an extreme end of the spectrum. It is true that something being a "crime" is arguably 100% socially determined and 0% "objectively" determined, but that doesn't make it a bad example. It just demonstrates the extreme end of the spectrum, in the same way that a concept from say physics demonstrates the opposite end of the spectrum, where it's arguably close to 100% objective whether something really has a mass of 23 kilograms or not.

The relevant question is where "lying" falls on that spectrum. To me it feels like it's somewhere in between - neither entirely socially determined, nor entirely a fact of the matter.

Comment by kaj_sotala on Building up to an Internal Family Systems model · 2019-10-14T16:34:46.355Z · score: 3 (1 votes) · LW · GW
I'm actually kind of surprised that IFS seems so popular in rationalist-space, as I would've thought rationalists more likely to bite the bullet and accept the existence of their unendorsed desires as a simple matter of fact.

Some reasons for the popularity of IFS which seem true to me, and independent of whether you accept your desires:

  • It's the main modality that rationalists happen to know which lets you do this kind of thing at all. The other popular one is Focusing, which isn't always framed in terms of subagents, but in terms of the memory reconsolidation model it basically only does accessing; de- and reconsolidation will only happen to the extent that the accessing happens to trigger the brain's spontaneous mismatch detection systems. (Also the Bio-Emotive Framework has gotten somewhat popular of late, but that's a very recent development.)
  • Rationalists tend to really like reductionism, in the sense of breaking complex systems down into simpler parts that you can reason about. IFS is good at giving you various gears about how minds operate, and e.g. turning previously incomprehensible emotional reactions into a completely sensible chain of parts triggering each other. (And this doesn't feel substantially different than thinking in terms of e.g. schemas the way Coherence Therapy does; one is subagent-framed and the other isn't, but one's predictions seem to be essentially the same regardless of whether you think of schemas setting off each other or IFS-parts doing it.)
  • Many people have natural experiences of multiplicity, e.g. the experience of an internal critic which communicates in internal speech; if your mind tends to natively represent things as subagents already, then it's natural to be drawn to an approach which lets you use the existing interface. And even someone who doesn't experience natural multiplicity is likely to have experienced something like part-switching in others, especially if they've dealt with severely traumatized people.
  • IFS seems to offer some advantages that non-subagent approaches don't; as an example, I noticed a bug earlier today and used Coherence Therapy's "what does my brain predict would happen if I acted differently" technique to access a schema's prediction... but then I noticed that I was getting too impatient to disprove that belief before I had established sufficient access to it, so I switched to treating the schema as a subagent that I could experience compassion and curiosity towards, and that helped deal with the sense of urgency. In general, the "internal compassion" frame seems to help with a lot of things, such as wanting to rush into solutions, or deciding that some particular bug isn't so important to fix; and knowing about the qualities of Self, and having a procedure for getting there, is often helpful for putting those kinds of meta-problems to the side.

That said, I do agree that sometimes simulating subagents seems to get in the way; I've had some IFS sessions where I did make progress, but it felt like the process wasn't quite cutting reality at the joints, and I suspect that something like Coherence Therapy might have produced results quicker... and I also agree that

and that the kind of people drawn to rationalism might be extra-likely to want to disavow all their "irrational"-seeming desires!

is a thing. In my IFS training, it was said that "Self-like parts" (parts which pretend to be Self, and which mostly care about making the mind-system stable and bringing it under control) tend to be really strongly attracted towards any ideology or system which claims to offer a sense of control. I suspect that many of the people who are drawn to rationalism are indeed driven by a part/schema which strongly dislikes uncertainty, and likes the promise of e.g. objectively correct methods of thinking and reasoning that you can just adopt. This would go hand in hand with wanting to reject some of your desires entirely.

Comment by kaj_sotala on Maybe Lying Doesn't Exist · 2019-10-14T14:44:20.385Z · score: 19 (13 votes) · LW · GW

But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).

This sounds like you are saying that the purpose of language is only to describe reality, so we should not appeal to consequences when discussing word boundaries. If so, that seems wrong to me - language serves several different purposes, of which prediction is only one.

As an example, consider the word "crime", and more specifically the question of defining which things should be crimes. When discussing whether something might be a crime, people often bring in considerations like "lying is bad, but it shouldn't be a crime, because that would have worse consequences than it being legal"; and it would seem clearly wrong to me not to.

One might object that legal terms are a special case, since they are part of a formal system with wide-ranging impact. But is that so different from other words, other than quantitatively? A legal term is primarily a tool for coordination, but so are rubes and bleggs: on average, bleggs contain vanadium and rubes contain palladium, and the reason the factory draws those boundaries is to be able to instruct their workers on how to sort the things. If it turned out that their standard definitions were too confusing to their workers and made it harder to extract vanadium and palladium efficiently, then the factory would want to redefine the terms so as to make the sorting more efficient.

Or if I am a speaker of American English and want to ask my friend to bring me what are called chips in US English, but I know him to be a Brit, I might ask him to bring me crisps... because that word choice will have better consequences.

This is still compatible with all the standard words-as-disguised-queries stuff, because the language-as-prediction and language-as-coordination can be viewed as special cases of each other:

  • From the language-as-prediction model, the ultimate disguised query is "what are the consequences of defining the word in this way and do those consequences align with my goals"; that is still capturing statistical regularities - those regularities just happen to also be defined in terms of one's values.
  • From the language-as-coordination model, sometimes we want to coordinate around a purpose such as describing reality in a relatively value-neutral way, in which case it's good to also have terms whose queries make less reference to our values (even if the meta-algorithm producing them still uses our values as the criteria for choosing the object-level query; e.g. different occupations develop specialized vocabulary that allows them to do their jobs better, even though the queries implicit in their vocabulary don't directly reference this).

More succinctly: both "Language is about coordination, and sometimes we want to coordinate the best way to make predictions" and "Language is about prediction, and sometimes we want to predict the best ways to coordinate" seem equally valid, and compatible with the standard Sequences.

Comment by kaj_sotala on Book summary: Unlocking the Emotional Brain · 2019-10-14T11:23:34.749Z · score: 14 (3 votes) · LW · GW

The incoming information gets processed through an existing filter that deletes any information that doesn't fit the paradigm, or mangles the information until it does fit.

By the way, I believe this to be related to the kind of thing that Valentine was trying to point at in the Kensho post:

Imagine you’re in a world where people have literally forgotten how to look up from their cell phones. They use maps and camera functions to navigate, and they use chat programs to communicate with one another. They’re so focused on their phones that they don’t notice most stimuli coming in by other means.

Somehow, by a miracle we’ll just inject mysteriously into this thought experiment, you look up, and suddenly you remember that you can actually just see the world directly. You realize you had forgotten you were holding a cell phone.

In your excitement, you try texting your friend Alex:

YOU: Hey! Look up!
ALEX: Hi! Look up what?
YOU: No, I mean, you’re holding a cell phone. Look up from it!
ALEX: Yeah, I know I have a cell phone.
ALEX: <alex_cell_phone.jpg>
ALEX: If I look up from my phone, I just see our conversation.
YOU: No, that’s a picture of your cell phone. You’re still looking at the phone.
YOU: Seriously, try looking up!
ALEX: Okay…
ALEX: looks up
YOU: No, you just typed the characters “looks up”. Use your eyes!
ALEX: Um… I AM using my eyes. How else could I read this?
YOU: Exactly! Look above the text!
ALEX: Above the text is just the menu for the chat program.
YOU: Look above that!
ALEX: There isn’t anything above that. That’s the top.
ALEX: Are you okay?

You now realize you have a perplexing challenge made of two apparent facts.

First, Alex doesn’t have a place in their mind where the idea of “look up” can land in the way you intend. They are going to keep misunderstanding you.

Second, your only familiar way of interacting with Alex is through text, which seems to require somehow explaining what you mean.

But it’s so obvious! How can it be this hard to convey? And clearly some part of Alex already knows it and they just forgot like you had; otherwise they wouldn’t be able to walk around and use their phone. Maybe you can find some way of describing it to Alex that will help them notice that they already know…?

Or… maybe if you rendezvous with them, you can somehow figure out how to reach forward and just pull their head up? But you’re not sure you can do that; you’ve never used your hands that way before. And you might hurt them. And it seems kind of violating to try.

And later in this comment:

... my personal impression had been that it’s actually quite easy to see what Looking is and how one might translate it into reductionist third-person perspectives. But my personal experience had been that whenever I tried to share that translation, I’d bounce off of weird walls of misunderstanding. After a while I noticed that the nature of the bounce had a structure to it, and that that structure has self-reference. (Once again, analogies along the lines of “get out of the car” and “look up from your phone” come to mind.) After watching this over several months and running some informal tests, and comparing it to things CFAR has been doing (successfully, strugglingly, or failing to do) over the six years I’ve been there, it became obvious to me that there are some mental structures people run by default that actively block the process of Looking. And for many people, those structures have a pretty strong hold on what they say and consciously think. I’ve learned to expect that explaining Looking to those structures simply will never work. [...]

If I try to argue with a paperclip maximizer about how maximizing paperclips isn’t all there is to life, it will care to listen only to the extent that listening will help it maximize paperclips. I claim that by default, human mind design generates something analogous to a bunch of paperclip maximizers. If I’m stuck talking to one of someone’s paperclip maximizers, then even if I see that there are other parts of their mind that would like to engage with what I’m saying, I’m stuck talking to a chunk of their mind that will never understand what I’m saying.

... with the particular difference that he was pointing to a case where the meta-issue involves one's basic ontology, and where (if I interpret him correctly) he thinks that most people have that meta-issue by default.

Comment by kaj_sotala on Book summary: Unlocking the Emotional Brain · 2019-10-11T13:55:50.421Z · score: 3 (1 votes) · LW · GW

Being depressed about how many beings are suffering in this moment?

That could be one example, though I was thinking more of examples such as Tomas in the text - you can correctly believe that there is nobody in your life who really understands your problems, in which case it's also valid to want such a person.

Whether depression is a useful reaction to that situation is another matter, and I'm under the impression that CT can sometimes alter one's schemas about how to respond to a bad circumstance. E.g. in Ted's case, his behavior changed after he made the update that his strategy wasn't ever going to work. But that doesn't make the original evaluation of the badness of the circumstances any less correct. (And at least some interpretations of CBT are premised on that evaluation being irrational.)

Comment by kaj_sotala on Book summary: Unlocking the Emotional Brain · 2019-10-10T10:57:46.894Z · score: 6 (3 votes) · LW · GW

Good luck with the therapist! Curious to hear how that goes, if there's anything about your experience that you feel like sharing.

Comment by kaj_sotala on Book summary: Unlocking the Emotional Brain · 2019-10-10T10:57:03.029Z · score: 8 (4 votes) · LW · GW

Schizophrenia is not listed in the book's example list of conditions that Coherence Therapy might work for; there is a case study of a woman who hears hallucinatory voices, though the report states that "She did not fit the typical pattern of schizophrenia, which was the diagnosis she had been given". The general impression I get is that the writer treats the voices as a psychotic symptom related to her depression, rather than as a sign of her being schizophrenic in general.

I don't feel like I know enough about schizophrenia to put it in a subagent context.

Comment by kaj_sotala on Who lacks the qualia of consciousness? · 2019-10-08T20:43:18.980Z · score: 3 (1 votes) · LW · GW

People commonly report that the strength of their self-sensation varies depending on what they are doing. In particular, flow states are frequently described as ones where the sense of self vanishes: the person's entire focus is on the sensations of the doing, leaving no room for the sensation of a self.

Does this match anything in your experience?

Comment by kaj_sotala on Hazard's Shortform Feed · 2019-10-07T10:25:03.351Z · score: 3 (1 votes) · LW · GW
Keeping calm despite feeling strong emotions can be misinterpreted by others as not caring.

To some extent, the interpretation is arguably correct; if you personally suffer from something not working out, then you have a much greater incentive to actually ensure that it does work out. If a situation going bad would cause you so much pain that you can't just walk out from it, then there's a sense in which it's correct to say that you do care more than if you could just choose to give up whenever.

Comment by kaj_sotala on Who lacks the qualia of consciousness? · 2019-10-07T09:02:07.844Z · score: 5 (2 votes) · LW · GW
I don't know if it's possible to link to a Facebook post

The timestamp of a Facebook post is usually a hyperlink to the post's own URL, letting you link to the post if it's a public one. (This is a common UI convention: in general, if you want to link to an individual post-like-thing, check whether its timestamp carries the link.) E.g. this is the link to the post I assume you were thinking of.

Comment by kaj_sotala on Who lacks the qualia of consciousness? · 2019-10-06T17:59:27.800Z · score: 17 (6 votes) · LW · GW

Do you mean the state of having conscious experiences at all, or the qualia of having/being a self? Those seem different to me, but you seem to talk about both.

Comment by kaj_sotala on Jacob's Twit, errr, Shortform · 2019-10-04T18:14:58.915Z · score: 3 (1 votes) · LW · GW

I was a little unhappy with the implied intuition vs. cognition conflict in the "My gut feelings and intuitions are..." question. I would have preferred to say something like "I try to make my intuitions and cognition into a unified whole, using them to balance each other's strengths and weaknesses, and I don't have an experience of privileging either over the other".

Comment by kaj_sotala on Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More · 2019-10-04T10:21:53.653Z · score: 3 (1 votes) · LW · GW
It is only when you expect a system to radically gain capability without needing any safeguards that it makes sense to expect a dangerous AI to be created by a team with no experience of safeguards or how to embed them.

That sounds right to me. Also worth noting that much of what parents do for the first few years of a child's life is just trying to stop the child from killing/injuring themselves, when the child's own understanding of the world isn't sufficiently developed yet.

Comment by kaj_sotala on Subagents, trauma and rationality · 2019-10-04T09:41:18.584Z · score: 12 (2 votes) · LW · GW
Using bio-emotive to examine the relationship between an emotional reaction I'm having now and a related memory has given the phrase "being present" a meaning it didn't have for me before; often when we aren't present it's because we're in a real sense in the past, possibly way back in the past depending on what memories are being activated.

This reminded me of this bit (not sure if I agree with all of it, but it's an interesting perspective):

The purpose of memory is not to maintain veridical records of the past so much as to guide future behaviour on the basis of past experience. The purpose of learning is, in a word, to shape predictions, predictive models of reality, predictive models of how we can meet our needs in the world.

That is why memory functions implicitly for the most part; it serves no useful purpose to be consciously aware of the past basis of your present actions, so long as the actions in question bring about the predicted (desired) outcomes. In fact, conscious reflection upon an automatised motor programme undermines the intended behaviour because it destabilises the underlying programme. It becomes necessary to bring past experience to consciousness only when predicted outcomes fail to materialise, when prediction error occurs. Friston (2010) calls this “surprise.” Prediction error renders the basis of present actions salient again – and deserving of attention (of consciousness) once more – precisely because the prediction that was generated by the past learning episode is now in need of revision. [...]

Biologically successful memories are reliable predictive algorithms – what Helmholtz (1866) called “unconscious inferences.” There is no need for them to be conscious. In fact, as soon as they become conscious they no longer deserve to be called memories, because at that point they become labile again. This seems to be what Freud had in mind when he famously declared that “consciousness arises instead of a memory-trace” (Freud 1920, p. 25). The two states – consciousness and memory – are mutually incompatible with each other. They cannot arise from the same neural assemblage at the same time. [...]

...the affective core of consciousness attributes meaning to experience, within a biological scale of values: “Is this new experience (this surprise), good or bad for my survival and reproductive success, and therefore, how do I respond to it?” The affective basis of consciousness explains why it (consciousness) is required to solve the biobehavioural problem of meeting our needs in unpredicted (or unpredictable) situations, and why it is superfluous in relation to successful predictive algorithms. [...]

Unconscious cognitive processes do not consist only in viable predictive algorithms. Although it is true that the ultimate aim of learning is the generation of perfect predictive models – a state of affairs in which there is no need for consciousness (Nirvana) – the complexity of life is such that this ideal is unattainable. Real life teems with uncertainty and surprise, and therefore with consciousness. That is to say, it teems with unsolved problems. As a result, we frequently have to automatise less-than-perfect predictive algorithms so that we can get on with the job of living, considering the limited capacity of consciousness (Bargh 2014). Many behavioural programmes therefore have to be automatised – rendered unconscious – before they adequately predict how to meet our needs in the world. This applies especially to predictions generated in childhood, when it is impossible for us to achieve the things we want – when there is so much about reality that we cannot master.

The consequently rampant necessity for premature automatisation is, I believe, the basis of what Freud called “repression.” I hope this makes clear why repressed memories are always threatening to return to consciousness. They do not square with reality. They give rise to constant “surprise,” for example, in the transference. I hope this also clarifies why the repressed part of the unconscious is the part of the mind that most urgently demands reconsolidation, and therefore most richly rewards psychotherapeutic attention.

-- Mark Solms. Reconsolidation: Turning consciousness into memory. Commentary on Lane et al. (2015).
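
Solms's core mechanism here - a memory stays implicit while its predictions hold, and becomes conscious and labile again when prediction error is large - can be caricatured in code. A toy sketch, with the threshold and learning rate made up:

    SURPRISE_THRESHOLD = 2.0  # invented
    LEARNING_RATE = 0.5       # invented

    class PredictiveMemory:
        """One automatized prediction that opens for revision on surprise."""

        def __init__(self, prediction: float):
            self.prediction = prediction
            self.labile = False  # conscious / open to reconsolidation?

        def observe(self, outcome: float):
            error = abs(outcome - self.prediction)
            if error > SURPRISE_THRESHOLD:
                self.labile = True   # surprise: becomes conscious...
                # ...and, being labile, is updated toward the new outcome.
                self.prediction += LEARNING_RATE * (outcome - self.prediction)
            else:
                self.labile = False  # prediction held: stays implicit

    memory = PredictiveMemory(prediction=10.0)
    for outcome in [10.5, 9.8, 15.0, 14.0]:
        memory.observe(outcome)
        print(f"outcome={outcome}: labile={memory.labile}, "
              f"prediction={memory.prediction:.2f}")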

Comment by kaj_sotala on Against "System 1" and "System 2" (subagent sequence) · 2019-10-04T08:52:25.784Z · score: 11 (4 votes) · LW · GW

Thanks! That does indeed sound valuable. Updated towards wanting to read that book.