Posts

Comments

Comment by korz on Where do (did?) stable, cooperative institutions come from? · 2020-11-07T20:47:09.632Z · LW · GW

I really liked this question and the breadth of interesting answers. 

I want to add a mechanism that might contribute to a weakening of institutions, related to the 'stronger memes' described by ete (I have not thought this out properly, but I am quite confident that I am pointing at something real, even if I might well be mistaken in many of the details):

While considering my life/career options, I noticed in myself (and I think this is quite common) an internal drive (think of the elephant from The Elephant in the Brain) that made me focus on the highest-prestige group that seemed like a viable option. A natural choice is an institution at the highest available power level/size.

I think that modern communication technologies are strong enough to capture that drive by giving (felt) access to the most prestigious groups from around the globe. 

As a consequence, I expect that the emotionally impactful access to global culture/'tribes' decreases the felt importance of, and thus the effort put into, local institutions, culture and tribes. (Related topics that come to mind are the loss of spoken languages and of local newspapers.)

Comment by korz on Message Length · 2020-10-30T11:29:54.327Z · LW · GW

I am not sure whether my take on this is correct, so I'd be thankful if someone corrects me if I am wrong:

I think that if the goal were only 'predicting' this bit-sequence after knowing the sequence itself, one could just state probability 1 for the known sequence.

In the OP, by contrast, we regard the bit-sequence as stemming from some sequence-generator, of which only this part of the output is known. Here we have only limited data, so the cost of singling out a highly complex model from model-space has to be weighed against how well the models fit the bit-sequence.
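To make this trade-off concrete to myself, here is a minimal sketch in minimum-description-length terms (the sequence and the bit costs are my own made-up numbers, not the OP's formalism):

```python
# Toy illustration of the trade-off: total description length =
# bits needed to single out the model in model-space
# + bits needed to encode the observed sequence under that model.

sequence = "0110100110010110"   # hypothetical observed output of the generator
n = len(sequence)

# Model A: "fair coin" - cheap to specify, but costs 1 bit per observed symbol.
cost_a = 2.0 + n * 1.0          # assumed model cost + data cost

# Model B: a lookup table reproducing exactly this sequence - the data is free
# once the model is known, but specifying the table costs about n bits plus
# some overhead for picking it out among all equally complex tables.
cost_b = (n + 8.0) + 0.0

print(f"fair coin   : {cost_a:.1f} bits")
print(f"lookup table: {cost_b:.1f} bits")
```

With such a short known prefix the complex model's specification cost is never paid back; only with much more (structured) data would it win.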

Comment by korz on Tools for keeping focused · 2020-08-05T07:07:43.715Z · LW · GW

Thanks for sharing!

There seems to be a typo ('k4rss' compared to 'krss') in the link to your blog-post introducing kindle4rss.

Comment by korz on Uncalibrated quantum experiments act clasically · 2020-07-22T21:11:39.999Z · LW · GW

I'm glad if this was helpful.

I was also surprised to learn about this formalism at my university, as it wasn't mentioned in either the introductory or the advanced lecture on QM, but it turns out to be very helpful for understanding how/when classical mechanics can be a good approximation in a QM universe.

Comment by korz on Uncalibrated quantum experiments act clasically · 2020-07-21T17:51:15.443Z · LW · GW

I would need to think about this more to be sure, but from my first read it seems as if your idea can be mapped to decoherence.

The maths you are using looks a bit different from what I am used to, but I am somewhat confident that your uncalibrated experiment is equivalent to a suitably defined decohering quantum channel. The amplitudes that you are calculating would be transition amplitudes from the prepared initial state to the measured final state (denoting the initial state as |i>, the final state as |f> and the time evolution operator as U, your amplitudes would be <f|U|i> in the notation of the linked Wikipedia article). The go-to method for describing statistical mixtures over quantum states or transition amplitudes is to switch from wave functions and operators to density matrices and quantum channels (physics lectures on open quantum systems or quantum computing introduce these concepts). These should be equivalent to (more accurately: a super-set of) your averaging over s and t for the uncalibrated experiment: one can define a time evolution operator for fixed values of s and t and then obtain the corresponding channel by taking the probability-weighted integral (compare the operator-sum representation in the Wikipedia article).
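To illustrate what I mean, here is a minimal sketch of my own (not the OP's exact setup): averaging the evolved density matrix over an unknown calibration phase gives a channel whose off-diagonal terms wash out, i.e. the state decoheres in the measurement basis.

```python
import numpy as np

def U(s):
    """Phase rotation by angle s around the z-axis (stand-in for an uncalibrated parameter)."""
    return np.diag([1.0, np.exp(1j * s)])

psi = np.array([1.0, 1.0]) / np.sqrt(2)    # prepared initial state |i>
rho = np.outer(psi, psi.conj())            # density matrix of the pure state

# Channel: probability-weighted average of U(s) rho U(s)^dagger over the
# unknown calibration parameter s (here: uniform over [0, 2*pi)).
samples = np.random.default_rng(0).uniform(0, 2 * np.pi, 10_000)
rho_avg = np.mean([U(s) @ rho @ U(s).conj().T for s in samples], axis=0)

print(np.round(rho, 3))      # coherent state: off-diagonals are 0.5
print(np.round(rho_avg, 3))  # after the channel: off-diagonals ~ 0 (decohered)
```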

Regarding all the interesting aspects of the Born rule, I cannot contribute at the moment.

Comment by korz on Sick of struggling · 2020-07-01T20:14:01.292Z · LW · GW
I’m just sick of struggling through life. The inefficiencies all around me are staggering and overwhelming.

Your mileage will vary, but a train of thought that helped me change my perspective on this (and I fully endorse this shift) was to realize that my emotions were ill-calibrated:

When I considered the state of the world, my emotional reaction was mostly negative, but when I compared our world to one in which earth is replaced by a lifeless rock, I realized that the rock would clearly not be an improvement. After contemplating this, I decided that my emotions were missing a huge chunk: the immense value of life on earth, which makes it reasonable to be pained by all the inadequacies in the first place. Since then, my emotional estimation of our world's value has climbed a lot, which makes seeing all the problems much more bearable. (This change in perspective was largely influenced by the Sequences and HPMOR, but I am not sure whether this train of thought was mentioned explicitly.)

Comment by korz on Optimized Propaganda with Bayesian Networks: Comment on "Articulating Lay Theories Through Graphical Models" · 2020-06-30T17:32:32.797Z · LW · GW

Up-voted for thoroughly putting the idea into Less Wrong context - I enjoyed being reminded of all the related ideas.


A thought: I am a bit surprised that one can distil a single belief network explaining a whole lot of the variance of beliefs across many people. This makes me take more seriously the idea that a large number of people regularly do have very similar beliefs (down to the argumentative structure). Remembering You Have About Five Words, this surprises me, as I would have expected a less reliable transmission of beliefs. (It might well be that I am just misunderstanding something.)

Comment by korz on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2020-06-24T23:01:19.921Z · LW · GW

Now reading the post for the second time, I again find it fascinating – and I think I can pinpoint my confusion more clearly now:


One aspect that sparks confusion when matched against my (mostly introspection- and lesswrong-reading-generated) model is the directedness of annealing:
On the one hand, I do not see how the mechanism of free energy creates such a strong directedness as the OP describes with 'aesthetics'.
On the other hand, if in my mind I replace the term "high-energy-state" with "currently-active-goal-function(s)", this becomes a shockingly strong model describing my introspective experiences (matching large parts of what I would usually think of roughly as 'System 1 thinking'). Also, 'dissonance' and 'consonance' directly being unpleasant and pleasant feels more natural to me if I treat them as (possibly contradicting) goal functions that also synchronize the perception-, memorizing-, modelling- and execution-parts of the mind. A highly consonant goal function will allow for vibrant and detailed states of mind.

Is there some mechanism that would allow evolution to somewhat define the 'landscape' of harmonics? Is reframing the harmonics as goals compatible with the model? Something like this seems to be pointed at in the quote:


Panksepp’s seven core drives (play, panic/grief, fear, rage, seeking, lust, care) might be a decent first-pass approximation for the attractors in this system.



---

Another aspect where my current model differs is that I do not identify consciousness (at least the part that creates the feeling of pleasure/suffering and the explicit feeling of 'self') as part of this goal-setting mechanism. In my model, the part of the mind that generates the feeling of pleasure or suffering is more of a local system (plus complications*) that takes the global state as model- and goal-input and tries to derive strategies from this. In my model, this part of the mind is what usually identifies as 'self' and it is this that is most relevant for depression or schizophrenia. But as what I describe as 'model- and goal-input' really defines the world and goals that the 'self' sees and pursues at each moment (sudden changes can be very disconcerting experiences), the implications of annealing for health would stay similar.

---

After writing all of this I can finally address the question of the parent comment:


Are your previous models single or multi-agent?


I very much like the multiagent-models sequence, although I am not sure how well my "Another aspect [...]" description matches: On the one hand, my model does have a privileged 'self'-system that is much less fragmented than the goal-function-landscape. On the other hand, the goal-function-landscape seems best described by "shards of desire" (a formulation used in the Sequences, if I remember correctly), and these can direct and override the self easily. This part fits well with the multi-agent model.


---

*) A complication is that the 'self' can also endorse/reject goals and redirect 'active goal-energy' (it feels like a kind of delegable voting power that the self as strategy-expert can use if it gained the trust and thus voting-power of goal-setting parts) onto the goal-setting parts themselves in order to shape them.

Comment by korz on From self to craving (three characteristics series) · 2020-05-27T22:33:21.309Z · LW · GW

I am very much impressed by the exchange in the parent-comments and cannot upvote sufficiently.


With regards to the 'mental motion':

In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from [...]

As I see it, the framing of this (sometimes) being an active process makes sense from the global workspace theory perspective: there is a part of one's mind that actually decides on activating craving or not. Especially if trained through meditation, it is possible to connect this part to the global workspace and thus to consciousness, which allows noticing and influencing the decision. If this connection is strong enough and can be activated consciously, it can make sense to call this process a mental motion.

Comment by korz on Identity Isn't In Specific Atoms · 2020-05-26T17:48:03.541Z · LW · GW

I think the meaning behind 'identical particles' is very hard to pin down without directly using mathematical definitions*. The analogy with (secretly numbered) billiard balls gives a strong intuition for non-identical particles. There are also intuitive examples that behave more like identical particles:

For example, the intuition for symbols nicely matches identical symbol/particle behaviour:

If I represent a Helium atom with the symbol "H" and no atom with "_", the balloon's interior might be described by

"H__H_H____H__H_____H_______H_H__HH____H".

Here, it would still make sense to think 'the Helium atom at this position', but asking 'what if I wrote "the fifth H" at the position of "the third H" and vice versa?' is not meaningful, in the same way that the word "identical" remains "identical" even if I claim to have exchanged its two "i"s.


Can't we distinguish between particles through their relationships with other objects or "themselves", including causal relationships? For example, the electrons in my body now have different (and stronger) causal effects on electrons in my body later than on electrons in your body, and by this we can distinguish them.

I think this way of distinguishing particles makes sense, but does not rely on 'identity' in the sense of identical particles – your example could be realized both with identical and non-identical particles, as 'identifying' a particle by its state remains valid in both cases.


And can't we trace paths in spacetime for identity? Not particle-like paths, but by just relying on causality and the continuity of the wavefunction over spacetime?
The atom swap experiment would then destroy both atoms and create two atoms (possibly the same, possibly different, possibly swapped). What we could say about their identities would depend on the precise details of the view. Maybe there's no coherent way to make this work.

A different, but consistent, definition of individual particle-identity might be possible. But, as the experimental predictions** from identical particles are well-confirmed, it would still have to treat the distinction between two electrons differently from the distinction between an electron and, say, a photon. I do not see how one could get the QM predictions without also using the identical-particle maths.


*) One (simplified) way to write it for 2 particles would be:

  • For two non-identical particles the wave function is defined over the space of ordered tuples of positions, e.g. (r_1, r_2). Here it makes sense to think 'what happens if I exchanged the two particles?', as (r_2, r_1) is generally not (r_1, r_2) and 'exchange particles' is a meaningful term.
  • For identical particles instead, the wave function is defined on the space of unordered tuples, e.g. {r_1,r_2}. Here, 'exchange particles' is not meaningful as {r_1, r_2} and {r_2, r_1} per definition describe the same thing.

**) There are significant consequences: as the space that the wave function moves in is changed drastically, its behaviour also changes. E.g. the fact that anything is solid at all builds on the Pauli principle, which is a consequence of identical particles.
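To make the footnote's distinction tangible, here is a minimal numerical sketch (my own toy construction on a discrete grid, not meant as rigorous QM):

```python
import numpy as np

# Two-particle wave function on a discrete grid:
# psi[x1, x2] = amplitude for "particle 1 at x1, particle 2 at x2".
rng = np.random.default_rng(0)
psi = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
psi /= np.linalg.norm(psi)

# Non-identical particles: the ordered tuple matters, so 'exchange the
# particles' (transpose of the array) generally gives a genuinely new state.
print(np.allclose(psi, psi.T))          # False in general

# Identical bosons: only the symmetrized combination is physical, and
# exchanging the labels maps it to itself - the 'swap' is not a real operation.
psi_sym = psi + psi.T
psi_sym /= np.linalg.norm(psi_sym)
print(np.allclose(psi_sym, psi_sym.T))  # True

# (For fermions one would antisymmetrize instead; the state then only picks up
# a global sign under exchange, again with no observable consequence.)
```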

Comment by korz on Reflective Complaints · 2020-05-24T18:50:16.367Z · LW · GW
That was the first thing I did when I created an account here.

Oops - I didn't notice the 'load more' option for the posts on your profile earlier; I have upvoted your post now.

I have not yet written any posts myself and have only skimmed the detailed rules about karma some time ago, but I can easily imagine that the measures against spam can sometimes lead good posts from new accounts to be overlooked.

Comment by korz on Reflective Complaints · 2020-05-24T13:00:17.031Z · LW · GW

a) I liked reading your guide: You managed to include many important LW-related concepts while still keeping a hands-on feeling. This makes it a nice reference for people who do not enjoy a more technical/analytical approach. Have you considered creating a link-post on lesswrong?

b) You write:

The good news is that the virtuous cycle here also works: I've found that if one person is consistently unusually virtuous in their conversations and arguments, a little bubble of sanity spreads around that person to everyone in the vicinity over time.

This seems like a more deliberate version of what Scott Alexander describes in Different Worlds? (a term that is used is 'niceness fields')

I would be very interested in approaches to actively create 'bubbles of sanity' or 'niceness fields'.

The points 'aim for success, not victory' and 'assume good faith' of your guide seem important for this. A big part is probably to clearly communicate that the other's status is in no way being questioned and thus need not be defended. In my experience, this part of communication is usually not deliberate (or even conscious) and hard to change. Of course, even small improvements can be valuable.

Comment by korz on Perpetual Motion Beliefs · 2020-05-18T19:28:46.158Z · LW · GW
One could say that there is still a difference between probabilities so high/low that you can use ~1/~0 writings and probable but not THAT probable situations such as 98:2

I don't think that Eliezer would disagree with this.

As I understand it, he generally argues for following the numbers, and in this post he tries to bind the reader's emotions to reality: He gives examples that make it emotionally clear that it already is in our interest to follow the numbers ('hot water need not *necessarily* burn you, but you correctly do not count on this. Getting burned is bad') and forces one to contrast this realisation with examples where common intuition/behaviour doesn't follow the numbers ('you do not *necessarily* lose money in a lottery, but you are mistaken to count on this. Losing money is bad').

Comment by korz on A non-mystical explanation of "no-self" (three characteristics series) · 2020-05-08T17:57:33.892Z · LW · GW

Thanks for writing this post! Your writing helps me a lot in tying together other's claims and my own experiences into a more coherent model.

As Richard_Kennaway points out in their comment, the goal of insight meditation and 'enlightenment' is not necessarily the same as the goal of rationality (e.g. instrumental rationality/shaping the world's future towards a desired goal seems a part of rationality but not of 'enlightenment' as far as I can tell). I would be very interested in your opinion of how instrumental rationality relates to insight meditation and enlightenment.


My knowledge around this topic is admittedly weak, but the points where my introspection differs from your description might still be interesting:

  • When I introspect on my sense of self, my results are that it does stem from a quite localised part of my mind instead of being generated by different parts at different times*
  • Exploring the self-generating part of my mind led me to think that it can be somewhat described as a consciousness-level goal-setting system. The system's decisions (of endorsement or rejection) can fuel mental processes which gives them the felt property of identity. I think that the goal-setting property fits nicely with the finding that it is not part of one's problem-solving mind, but still a central aspect of the conscious experience. [EDIT: I just noticed that the 'player' description of Player vs. Character: A Two-Level Model of Ethics seems to point at the same experience]

__

*In the sense of: The source is always in the same localised part of my mind – the feeling of self does extend to different parts of my mind in different situations.

Comment by korz on Einstein's Arrogance · 2020-04-28T22:59:51.921Z · LW · GW

My original reading was 'there was less arrogance in Einstein's answer than you might think'. After rereading Eliezer's text and the other comments again today, I cannot tell how much arrogance (regarding rationality) we should assume. I think it is worthwhile to compare Einstein not only to a strong Bayesian:

On the one hand, I agree that an impressive-but-still-human Bayesian would probably have accumulated sufficient evidence at the point of having the worked-out theory that a single experimental result against the theory is not enough to outweigh the evidence. In this case there is little arrogance (if I assume the absolute confidence in "Then I would feel sorry for the good Lord. The theory is correct." to be rhetoric and not meant literally).

On the other hand, a random person saying 'here is my theory that fundamentally alters the way we have to think of our world' and dismissing a contradicting experiment would be a prime example of arrogance.


Assuming these two cases to be the endpoints of a spectrum, the question becomes where Einstein was located. With special relativity and other significant contributions to physics already behind him at that point in time, I think it is safe to put Einstein into the top tier of physicists. I assume that he did find a strong theory corresponding to his search criteria. But as biases are hard to handle, especially if they concern one's own assumptions about fundamental principles of our world, there remains the possibility that in finding general relativity Einstein was not optimizing for correspondence-to-reality but for a heuristic that diverged from it along the way.

As Einstein had already come up with special relativity (which is related and turned out correct), I tend towards assuming that his assumptions about fundamental principles were on an impressive level, too.

With all this I think it is warranted to take his theory of general relativity very seriously even before the experiment. But Einstein's confidence is much stronger than that: it seems that he neglects the possibility that some of his fundamental assumptions might be wrong (his confidence in deriving general relativity from these assumptions seems warranted). This means that either he was (to a degree) mistaken in his confidence or that he was on a hard-to-believe level of rationality regarding the question of general relativity. Einstein actually was right, so it is problematic to claim that he was mistaken in his confidence.


After writing this, my conclusion is that i) the way humans gather evidence might imply that when we detect a signal (finding the theory of general relativity), we have likely already accumulated a large pile of evidence; ii) Einstein does seem surprisingly confident, but (i) implies that this could be warranted, and it is problematic to criticise correct predictions.
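As a toy illustration of point i) in log-odds form (my own made-up numbers, just to show the orders of magnitude involved):

```python
import math

# If theoretical considerations have already accumulated, say, 40 bits of
# evidence for a hypothesis with a one-in-a-billion prior, then a single
# contrary experiment worth ~5 bits barely moves the posterior.

prior_log_odds = math.log2(1e-9)    # ~ -30 bits: a priori very unlikely theory
theory_evidence = 40                # bits accumulated while deriving the theory
contrary_experiment = -5            # bits from one failed observation

posterior_log_odds = prior_log_odds + theory_evidence + contrary_experiment
posterior = 1 / (1 + 2 ** (-posterior_log_odds))
print(f"posterior log-odds: {posterior_log_odds:.1f} bits, P ~ {posterior:.2f}")
```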

Comment by korz on How effective are tulpas? · 2020-03-10T00:10:19.033Z · LW · GW

I do not have any experience with tulpas, but my impression of giving one's models the feel of agency is that one should be very careful:

There are many people who perceive the world as being full of ghosts, spirits, demons, ..., while others (and science) do not encounter such entities. I think that perceiving one's mental models themselves as agentic is a large part of this difference (as such models can self-reinforce by triggering strong emotions)


If I model tulpas as a supercharged version of modelling other people (where the tulpa may be experienced as anything from 'part of self' to 'discomfortingly other') - then I would expect that creating a tulpa does not directly increase one's abilities but might be helpful by circumventing motivational hurdles or diversifying one's approach to problems. Also, Dark Arts of Rationality seems related.

Comment by korz on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T21:52:30.215Z · LW · GW

Regarding "intuitive moral sense", I would add that one's intuitions can be somewhat shaped by consciously thinking about their implications, noticing inconsistencies and settling on solutions/improvements.

For example, the realisation that I usually care about people more the better I know them made me realize that the only reason I do not care about strangers at all is the fact that I do not know them. As this collided with another intuition that rejects such a reason as arbitrary (I could have easily ended up knowing, and thus caring for, different people, which is evidence that this behaviour of my intuition does not reflect my 'actual' preferences), my intuitions updated towards valuing strangers.

I am not sure how strongly other EAs have reshaped their intuitions, but I think that using and accepting quantitative arguments for moral questions needs quite a bit of intuition-reshaping for most people.

Comment by korz on Should We Still Fly? · 2019-12-21T22:22:26.446Z · LW · GW

This argument does make sense and makes me wonder what other reasons there are for me to avoid flying if I accept that the impact of CO2 is solvable without excessive additional costs. What comes up is:

  • Not trusting the bought compensation. [This does not hold up on reflection: Given some research, I am confident that I would find trustworthy organisations such that I could be confident that the social costs are being addressed]
  • The feeling that 'just paying for the costs' is only an excuse and that actually I would be defecting. [This seems to just be caused by my emotions not following the inferential steps needed to realize that 'the harm I inflict' is actually taken care of]
  • Signalling to others the willingness of accepting non-trivial inconveniences when it comes to my behaviour affecting climate. [This aspect seems to be the most important. Even though not flying might not actually be a good way of having a positive influence regarding climate change, it *is* a simple and clear signal that I care about my influence on climate change.]

To conclude, I will update towards 'flying can easily be worth the CO2' and keep an eye out for alternative ways of signaling 'this topic is important to me' ('I do not fly' has the convenient properties of being i) easy to understand, ii) fast to transmit and iii) neither trivial nor too radical).

Comment by korz on What Are Meetups Actually Trying to Accomplish? · 2019-12-16T10:12:50.395Z · LW · GW

Thanks, this makes sense

Comment by korz on What Are Meetups Actually Trying to Accomplish? · 2019-12-15T22:45:28.693Z · LW · GW

Thank you for this post! I am currently playing with the thought of organizing a meetup in my local area (there is no active pre-existing one) and am still undecided whether it would be a good idea (I am neither very experienced with LW/EA nor a natural at leading discussions). This post is very helpful in pointing me towards things to consider.

I did not understand the sentence:

Meetups send a lot of counterfactual people to CFAR workshops [...].

Could someone explain what is meant by "counterfactual people"?

Comment by korz on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-15T13:24:25.436Z · LW · GW

Any collapse (if it does happen) occurs so 'late' that current experiments are unable to differentiate between many worlds and collapse -- so it seems quite possible that both theories will continue to give identical predictions for all realisable situations, with the only difference being 'one branch becomes realised' versus 'all branches become realised'.

General:

More Human related:

  • One relevant aspect is how natural utility maximisation feels using one of the two theories as world model. Thinking in many worlds terms makes expected utility maximisation a lot more vivid compared to the different future outcomes being 'mere probabilities' -- on the other hand, this vividness makes rationalisation of pre-existing intuitions easier.
  • Another point is that most people strongly value existence/non-existence additionally to the quality and 'probability' of existence (e.g. people might play Quantum Russian Roulette but not normal Russian Roulette as many worlds makes sure that they will survive [in some branches]). This makes many worlds feel more comforting when facing high probabilities of grim futures.
  • A third aspect is the consequences for the concept of identity. Adopting many worlds as world model also means that naive models of self and identity are up for a major revision. As argued above, valuing all future branch selves equally (=weighted by the 'probabilities') should make many worlds and collapse equivalent (up to the 'certain survival [in some branches]' aspect). A different choice in accounting for many worlds might not be translatable into the collapse world model.

Disclaimer:

I am still very much confused by decision theories that involve coordination without a causal link between agents such as Multiverse-wide Cooperation. For such theories, other considerations might also be important.

----

¹: To be more exact, I would argue that the case for Quantum Russian Roulette becomes identical to the case for normal Russian Roulette if the many-worlds branches are weighted with their 'probabilities' and one also takes into account the 'certain survival [in some branches]' bonus that many worlds gives.
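A toy numerical sketch of this weighting argument (the utilities and the bonus are my own made-up numbers):

```python
# Outcomes of one round of Russian Roulette: survive (p = 5/6) or die (p = 1/6).
p_survive, p_die = 5 / 6, 1 / 6
u_survive, u_die = 1.0, -100.0          # assumed utilities

# Collapse view: ordinary expected utility over the single realised outcome.
eu_collapse = p_survive * u_survive + p_die * u_die

# Many-worlds view with branches weighted by their 'probabilities': the same
# sum, so the decision comes out identically...
eu_many_worlds = p_survive * u_survive + p_die * u_die

# ...unless one additionally values 'certain survival in some branch',
# e.g. as a flat bonus that the collapse picture cannot offer.
survival_bonus = 5.0                    # assumed bonus
eu_many_worlds_with_bonus = eu_many_worlds + survival_bonus

print(eu_collapse, eu_many_worlds, eu_many_worlds_with_bonus)
```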

Comment by korz on Conscious Proprioception. · 2019-12-11T23:10:57.900Z · LW · GW

Some anecdotal evidence: When first reading this sequence, and focusing on the 'Base-Line', this did seem to increase my bodily perception more strongly than I expected (my reference were results from focusing on my breath), which made me decide to apply the idea of the Base-Line for a few days:

For years, when listening to talks that need a lot of concentration, I had the problem of becoming sleepy and I have found no working solution to this problem. Today, while in such a situation, I remembered this sequence and shifted some of my focus to my Base-Line, which actually was helpful in getting rid of my sleepiness.

If I assume this to be repeatable in the future, this makes me suspect that at least some level of conscious bodily perception is needed for staying alert/awake and that me focusing strongly makes me sink below this level.

This makes me look forward to exploring the idea of 'conscious proprioception is very valuable' further

Comment by korz on Bayesian examination · 2019-12-11T22:50:49.657Z · LW · GW

Tangentially relevant: I think that adopting Bayesian examination widely in society would decrease the number of people with aversion to maths/science/lawful thinking:

In my personal experience, thinking in probabilities feels much more natural* than 'hard' true-false thinking. I think that this hard true-false aspect of lawful thinking plays an important role in many people deciding that "maths/science/... is not for me" and creating an Ugh field around them, and I think that Bayesian examinations as a default for examinations would be likely to shift the general opinion towards feeling comfortable with lawful thinking.

____

*: in the sense of "I can apply this kind of thinking also without using 'my logic-module'"; "Universal law" of the sequences has as a main point that most human thinking is based on leaky abstractions, which are very compatible with probabilistic reasoning

Comment by korz on Bayesian examination · 2019-12-11T21:46:55.978Z · LW · GW
It turns out that this maximization leads to the following answers.
For Alice:
1. Credence p1=33% in Geneva, but answers q1=100%.
2. Credence p2=33% in Lausanne, but answers q2=0%.
3. Credence p3=33% in Zurich, but answers q3=0%.
4. Credence p4=33% in Lugano, but answers q4=0%.

I am surprised by these numbers:

i) I assume that p4=33% and not p4=1% is a typo?

ii) Also, when reading that q1=100% while q2, q3 = 0%, I was surprised. As p1, p2 and p3 are the same, (if I am not mistaken) Alice should be free to arbitrarily divide her probability mass between these three? Given that, I expected her to choose q1=q2=q3. In case others were confused by this detail too, it might be worth slightly complicating the example (along the lines of 'Alice remembers an ambitious athlete friend being invited to Geneva once' and using this as a tie breaker for the honest probabilities).
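To check my reading of ii), a minimal sketch (assuming the expected score is simply the credence reported on the correct answer, which is what seems to drive the all-on-one-answer result in the quoted example):

```python
import numpy as np

p = np.array([0.33, 0.33, 0.33, 0.01])   # Alice's honest credences (Lugano ~1%)

def expected_score(q, p):
    """Expected score of reported answer q under true credences p, assuming a
    score that is linear in the credence placed on the correct answer."""
    return float(np.dot(p, q))

all_on_geneva = np.array([1.0, 0.0, 0.0, 0.0])
split_three_ways = np.array([1/3, 1/3, 1/3, 0.0])

print(expected_score(all_on_geneva, p))     # 0.33
print(expected_score(split_three_ways, p))  # 0.33 - the same, so with tied
                                            # credences any split among the top
                                            # options maximizes the expectation
```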


Comment by korz on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2019-11-30T17:38:38.741Z · LW · GW

Thank you for this explanation.

While reading the OP and trying to match the ideas with my previous models/introspection, I was somewhat confused: on the one hand, the ideas seemed to usefully describe familiar processes with a gears-level model; on the other hand, I was unable to fit them with my previous models (I finally settled on something along the lines of 'this seems like an intriguing model of top/high-level coordination (=~conscious processes?) in the mind/brain, although it does not seem to address the structure that minds have?').

[...] the purpose of CSHW is not to replace the massive information processing solved by neural networks.

Your comment really helped me put this into perspective

Comment by korz on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T23:24:42.359Z · LW · GW

Disclaimer: I only know about IFS from this sequence, so I might confuse parts of it with my own models.

I think there is a value (or at least could be) in speaking of IFS parts as being person-like to a somewhat larger degree than a fully reductionist model would imply them to be:

When focusing on a part, a big chunk of one's mind is involved, which I expect to lend one's experience of the part actual person-like properties even if they were not there initially. I would also expect this effect to be easily amplified by having expectations of agency while focusing on the part (if I expect the part to be person-like, I will model it as a person and then just use my person-model of the part in order to interact with it). It seems plausible that using one's 'dealing with a person' abilities (where everyone has a lot of experience) to interact with parts is easier than using a more abstract model (I do not know much about other models, so it might well be that there are other easy-for-humans-to-apply methods which don't use 'mini-people').

With this I think that the IFS framing of parts as 'mini-people' can be seen as a feature, not a bug - although one should keep in mind that perceiving a part as strongly person-like is not necessarily a property of the part. I would expect: For improving one's understanding of the mind, one should not overestimate the degree of person-hood of parts; for dealing with one's parts, treating them as mini-people might be useful even if they aren't.

My impression was that parts in IFS range from if-then-rules to dissociated personalities (I was surprised to learn that their existence is debated; non-debated complex examples are described in Subagents, trauma and rationality). Because of this I thought that the descriptions as 'mini-people' are mostly meant to be easy to grasp and remember and do not claim to be accurate over the whole range.

Comment by korz on Tales From the American Medical System · 2019-05-10T11:55:02.208Z · LW · GW

To me too, a mindset of "I am the authority on this topic" from the doctor sounds likely.

I would not be surprised if the doctor adopted a rule of "always discuss treatment in person" as health issues often are very emotional and patients may be ill-informed: Meeting in person is a plus for establishing trust between doctor and patient, which will be essential for handling such situations. This reason doesn't really apply to the case presented by Zvi, but it seems reasonable that at least some motivation for the doctor's behaviour comes from a sloppy application of this rule. It seems to me that the doctor (and nurse) dismissed the possibility that someone could actually have a reason for not visiting right now and then got stuck in their positions.

If the doctor also doesn't reflect on their role as doctor in a consequentialist way, in some situations they might value shown respect ("If your doctor says you should meet them now, you should meet them now") more than the actual improvement in their patients' lives.


I wonder how the doctor would react if Zvi's friend pointed out his motivation for keeping his schedule while actively endorsing the importance of his doctor's opinion. This should happen in person, as phone communication is (even) less good at correcting misinterpretations.
If I am right, this could allow the doctor to be assured that their value of shown respect is safe. And possibly this lets the doctor be open to the point of Zvi's friend.

- - -
Apart from this, I am quite distraught by the almost active distrust in their patients' decisions on the side of this doctor and nurse. If this really is typical of the American medical system, there will be massive associated problems...

Comment by korz on Bayes for Schizophrenics: Reasoning in Delusional Disorders · 2019-05-01T20:27:08.939Z · LW · GW

[I am unsure whether it makes sense to write a comment on this post after such a long time, but I think my experience could be helpful regarding the open questions. I am not trained in this subject, so my use of terms is probably off and confounded with personal interpretations]

My personal experience with arriving at and holding abstruse beliefs can actually be well described by the ideas described in this post, if complemented by something like the Multiagent models of Minds:

For describing my experience, I will regard the mind as consisting loosely of sub-agents, which are inter-connected and coordinating with each other (as in Global Workspace Theory). In healthy equilibrium, the agents are largely aligned and contribute to a single global agent. Properties of agents include 'trust in their inputs' and 'alertness/willingness to update'.

Now to my description: For me, it felt as if part of my mind lost some of its input-connections from other parts, increasing its alertness (something fundamentally changed, thus predictions must be updated) and also crippling feedback from the 'global opinion'. This caused drifting behaviour of the affected sub-agent, as it updated on messy/incomplete input, while not being successfully realigned by other sub-agents. After some time, the impaired sub-agent would either settle on a new, misinformed model (allowing its alertness to settle) or keep grasping for explanations (alertness staying high, maybe because more alert-type input from other agents remained).

The rest of my mind experienced a sub-agent panicking and then broadcasting eccentric opinions in good faith, while either not being impressed by contradictions or erratically updating to warped opinions loosely connected to input from the other agents. As the impaired agent felt as if it would update to contradictions (but didn't), the source of the felt alertness ("something is very wrong") was elusive and it became natural to just globally adjust to the sub-agent to restore coherence. Thus, internal coherence was partially restored at the cost of deviating from common sense (creating an Ugh Field in confrontations with contradicting experiences).

Should my experience be representative, the decision for accepting a delusional idea is not solely based on it being optimal for describing global sensory input. Instead one of the sub-agents does not properly update to global decisions, but still dominates them whenever active as all other agents do keep updating*. In this view the delusion is actually the best sensory input explanation, conditioned on the impaired sub-agent being right.

*) There should be some additional responses like generally decreasing the 'trust in input' or possibly recognizing the actual source of the problem. The latter would need confronting the Ugh Field, which should take a lot of effort

Comment by korz on Many maps, Lightly held · 2019-04-25T22:36:55.210Z · LW · GW

It seems the text of point 6 got lost somehow, so I will cite it from the original post:

6.
The fable of the rational vampire.  (I wish I had a link to credit the author).  The rational vampire casually goes through life rationalising away the symptoms – “I’m allergic to garlic”, “I just don’t like the sun”.  “It’s impolite to go into someone’s home uninvited, I’d be mortified if I did that”. “I don’t take selfies” and on it goes. Constant rationalisation.

I really like the summarized addressing of the reasons. While reading, it felt as if the point of Many Maps, Lightly held gained momentum in some way. I think this helped me with aligning my 'gut-feeling' with my understanding.

Comment by korz on What are questions? · 2019-01-11T23:36:58.548Z · LW · GW

I will try to focus on the "compose a satisfying, useful, compact, and true model of what questions are" aspect. To reduce the problem to something more manageable, I will regard the thought process while questioning and exclude social and linguistic aspects.

In short:


My model proposal:
- While thinking, we use 'frameworks' (expectations/models/concepts/..)
- When thinking inside of a framework, we are able to notice gaps and inconsistencies, which feels unnerving to confusing
- This causes us to search for a solution (filling the gap, fixing the inconsistency, replacing the framework), which is the act of asking a question

(- The nested, interacting, fuzzy and changing 'frameworks' make everything complicated.)


In long:
Aiyen answered "It's a noticed gap in your knowledge", which I would like to build on:
It seems to me that questions are only possible when there is some expectation/model/concept in my mind to find the gap in.

As no better term comes to my mind I will use *framework* as the term for the expectation/model/concept that the question is stemming from. One can imagine 'framework' to refer to a mental picture of some part of reality.

Now it seems to me that while thinking inside of a framework one can notice gaps or inconsistencies in the framework (this strongly reminds me of 'Noticing Confusion' from the Sequences), which feels unnerving (if clear) or confusing (if vague).
The search for a fix to the gap of the framework would then be what we call asking a question.

When doing this in a social setting, asking a question will tell others that help (in some sense) is being asked for and reveal something about the framework in use (which has many implications for social interaction).

Example

- I think that the term 'stupid question' is usually used when one thinks that the asking person is using an unsuitable framework altogether. It doesn't refer to the question itself but to the fact that 'basic understanding' (the 'proper framework') seems to be missing and thus answering the question would be pointless.

Usefulness and Summary

Although this model of questions seems quite compact and true to me, at this point it doesn't help with moving from the "Unknown Unknown" to the "Known Unknown".
Pointing out that confusion plays a big role is already part of the Sequences.
Apart from hiding everything complicated behind the term 'framework', the main aspect of my model is the claim that questions always, by definition, originate from 'inside their box' and are a quest for looking outside of it.


Our quest consists of the simplest operations, each one worthy of examination. We cannot build towers of thought without a solid foundation. We cannot build better tools if we don't know how our current tools operate, and it's often good to bootstrap by using our tools on themselves.


To improve our tools of thinking, a better understanding of questions and their behaviour surely is useful.
In my usual way of thinking, the frameworks I am using in my mind are fuzzy and ever changing, which makes it hard to pin down and realize confusion.
This problem can be approached by thoroughly and consciously choosing one's framework of interest. One would expect this to take a lot of mental work/time, but in exchange to be a more robust way to improve frameworks.
(This does sound a lot like the "System 2" way of thinking from Kahneman's "Thinking, Fast and Slow".)

If it is true that finding gaps in a defined box (framework) is a natural ability of our mind (and the existence of a box a condition for this ability), this could open an approach for improving our tools.

___
Final note: Until now I have only read about rationality and certainly do not feel confident in my ability to contribute without erring often. Please point out mistakes that I make or basic ideas that I am unaware of.