Comments

Comment by korz on Should We Still Fly? · 2019-12-21T22:22:26.446Z · score: 4 (3 votes) · LW · GW

This argument makes sense, and it makes me wonder what other reasons I have for avoiding flying, if I accept that the CO2 impact can be addressed without excessive additional costs. What comes up is:

  • Not trusting the purchased offsets. [This does not hold up on reflection: given some research, I am confident that I would find trustworthy organisations, such that I could be confident that the social costs are being addressed.]
  • The feeling that 'just paying for the costs' is only an excuse and that I would actually be defecting. [This seems to be caused by my emotions not following the inferential steps needed to realize that 'the harm I inflict' is actually taken care of.]
  • Signalling to others my willingness to accept non-trivial inconveniences when my behaviour affects the climate. [This aspect seems to be the most important. Even though not flying might not actually be a good way of having a positive influence on climate change, it *is* a simple and clear signal that I care about my influence on it.]

To conclude, I will update towards 'flying can easily be worth the CO2' and keep an eye out for alternative ways of signalling 'this topic is important to me' ('I do not fly' has the convenient properties of being i) easy to understand, ii) fast to transmit and iii) neither trivial nor too radical).

Comment by korz on What Are Meetups Actually Trying to Accomplish? · 2019-12-16T10:12:50.395Z · score: 1 (1 votes) · LW · GW

Thanks, this makes sense.

Comment by korz on What Are Meetups Actually Trying to Accomplish? · 2019-12-15T22:45:28.693Z · score: 4 (3 votes) · LW · GW

Thank you for this post! I am currently toying with the thought of organizing a meetup in my local area (there is no active pre-existing one) and am still undecided on whether it would be a good idea (I am neither very experienced with LW/EA nor a natural at leading discussions). This post is very helpful in pointing me towards things to consider.

I did not understand the sentence:

Meetups send a lot of counterfactual people to CFAR workshops [...].

Could someone explain what is meant by "counterfactual people"?

Comment by korz on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-15T13:24:25.436Z · score: 7 (3 votes) · LW · GW

General:

Since any collapse (if it does happen) occurs so 'late' that current experiments are unable to differentiate between many worlds and collapse, it seems quite possible that both theories will continue to give identical predictions for all realisable situations, with the only difference being 'one branch becomes realised' versus 'all branches become realised'.

More human-related:

  • One relevant aspect is how natural utility maximisation feels when using one of the two theories as one's world model. Thinking in many-worlds terms makes expected utility maximisation a lot more vivid compared to the different future outcomes being 'mere probabilities'; on the other hand, this vividness makes rationalisation of pre-existing intuitions easier.
  • Another point is that most people strongly value existence/non-existence in addition to the quality and 'probability' of existence (e.g. people might play Quantum Russian Roulette but not normal Russian Roulette, as many worlds makes sure that they will survive [in some branches]¹). This makes many worlds feel more comforting when facing high probabilities of grim futures.
  • A third aspect is the consequences for the concept of identity. Adopting many worlds as one's world model also means that naive models of self and identity are up for a major revision. As argued above, valuing all future branch selves weighted by the 'probabilities' should make many worlds and collapse equivalent (up to the 'certain survival [in some branches]' aspect); a short sketch of this follows after the list. A different choice in accounting for many worlds might not be translatable into the collapse world model.
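
To spell out the equivalence claimed in the last point (a sketch; here the $o_i$ are the possible outcomes, the $p_i$ their probabilities under collapse, and the $w_i$ the corresponding branch weights under many worlds, identified with the Born probabilities):

$$\mathbb{E}[U] \;=\; \sum_i p_i \, U(o_i) \;=\; \sum_i w_i \, U(o_i) \qquad \text{(since } w_i = p_i\text{)}$$

An agent who values its branch selves in proportion to their weights therefore ranks actions exactly as an expected-utility maximiser under collapse does, modulo the 'certain survival [in some branches]' aspect.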

Disclaimer:

I am still very much confused by decision theories that involve coordination without a causal link between agents, such as Multiverse-wide Cooperation. For such theories, other considerations might also be important.

----

¹: To be more exact, I would argue that the case for Quantum Russian Roulette becomes identical to the case for normal Russian Roulette if the many-worlds branches are weighted by their 'probabilities' and the 'certain survival [in some branches]' bonus that many worlds gives is also taken into account.

Comment by korz on Conscious Proprioception -Awareness of the Body's Position, Motion, Alignment & Balance. · 2019-12-11T23:10:57.900Z · score: 4 (2 votes) · LW · GW

Some anecdotal evidence: When first reading this sequence and focusing on the 'Base-Line', this did seem to increase my bodily perception more strongly than I expected (my reference was the result of focusing on my breath), which made me decide to apply the idea of the Base-Line for a few days:

For years, when listening to talks that require a lot of concentration, I have had the problem of becoming sleepy, and I had found no working solution. Today, in such a situation, I remembered this sequence and shifted some of my focus to my Base-Line, which actually was helpful in getting rid of my sleepiness.

If I assume this to be repeatable in the future, it makes me suspect that at least some level of conscious bodily perception is needed for staying alert/awake, and that focusing strongly makes me sink below this level.

This makes me look forward to exploring the idea that 'conscious proprioception is very valuable' further.

Comment by korz on Bayesian examination · 2019-12-11T22:50:49.657Z · score: 8 (3 votes) · LW · GW

Tangentially relevant: I think that adopting Bayesian examination widely in society would decrease the number of people with an aversion to maths/science/lawful thinking:

In my personal experience, thinking in probabilities feels much more natural* than 'hard' true-false thinking. I think that this true-false aspect of lawful thinking plays an important role in many people deciding that "maths/science/... is not for me" and creating an Ugh field around them, and I think that Bayesian examination as the default for examinations would be likely to shift general opinion towards feeling comfortable with lawful thinking.

____

*: in the sense of "I can apply this kind of thinking even without using 'my logic module'"; a main point of "Universal Law" from the Sequences is that most human thinking is based on leaky abstractions, which are very compatible with probabilistic reasoning.

Comment by korz on Bayesian examination · 2019-12-11T21:46:55.978Z · score: 5 (3 votes) · LW · GW

It turns out that this maximization leads to the following answers.
For Alice:
1. Credence p1=33% in Geneva, but answers q1=100%.
2. Credence p2=33% in Lausanne, but answers q2=0%.
3. Credence p3=33% in Zurich, but answers q3=0%.
4. Credence p4=33% in Lugano, but answers q4=0%.

I am surprised by these numbers:

i) I assume that p4=33% is a typo for p4=1% (so that the credences sum to 100%)?

ii) Also, when reading that q1=100% while q2, q3 = 0%, I was surprised. As p1, p2 and p3 are the same, (if I am not mistaken) Alice should be free to divide her answer mass arbitrarily between these three options? Given that, I expected her to choose q1=q2=q3. In case others were confused by this detail too, it might be worth slightly complicating the example (along the lines of 'Alice remembers an ambitious athlete friend being invited to Geneva once' and using this as a tie-breaker for the honest probabilities). A toy check of this indifference is sketched below.
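
To make the indifference in ii) concrete, here is a minimal sketch in Python. It assumes the maximization in the quoted passage is over the expected classical grade, i.e. the answer mass placed on the true option, weighted by one's credences (the credences below use the corrected p4=1%):

```python
# Expected classical grade: sum over options of
# (credence that option i is true) * (answer mass placed on option i).
def expected_grade(credences, answers):
    return sum(p * q for p, q in zip(credences, answers))

# Alice's credences for Geneva, Lausanne, Zurich, Lugano.
p = [0.33, 0.33, 0.33, 0.01]

print(expected_grade(p, [1.0, 0.0, 0.0, 0.0]))   # all-in on Geneva: ~0.33
print(expected_grade(p, [1/3, 1/3, 1/3, 0.0]))   # even split over the tie: ~0.33
print(expected_grade(p, [0.0, 0.5, 0.5, 0.0]))   # any other split over the tie: ~0.33
```

All splits that keep the answer mass on the three tied options give the same expected grade, so q1=100% is just one maximiser among many.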


Comment by korz on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2019-11-30T17:38:38.741Z · score: 3 (3 votes) · LW · GW

Thank you for this explanation.

While reading the OP and trying to match the ideas with my previous models/introspection, I was somewhat confused: on the one hand, the ideas seemed to usefully describe processes that feel familiar, using a gears-level model; on the other hand, I was unable to fit them into my previous models (I finally settled on something along the lines of 'this seems like an intriguing model of top/high-level coordination (=~conscious processes?) in the mind/brain, although it does not seem to address the structure that minds have?').

[...] the purpose of CSHW is not to replace the massive information processing solved by neural networks.

Your comment really helped me put this into perspective.

Comment by korz on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T23:24:42.359Z · score: 1 (1 votes) · LW · GW

Disclaimer: I only know about IFS from this sequence, so I might confuse parts of it with my own models.

I think there is value (or at least could be) in speaking of IFS parts as being person-like to a somewhat larger degree than a fully reductionist model would imply:

When focusing on a part, a big chunk of one's mind is involved, which I expect to lend one's experience of the part actual person-like properties even if they were not there initially. I would also expect this effect to be easily amplified by expectations of agency while focusing on the part (if I expect the part to be person-like, I will model it as a person and then simply use my person-model of the part to interact with it). It seems plausible that using one's 'dealing with a person' abilities (where everyone has a lot of experience) to interact with parts is easier than using a more abstract model (I do not know much about other models, so it might well be that there are other easy-for-humans-to-apply methods which don't use 'mini-people').

With this in mind, I think the IFS framing of parts as 'mini-people' can be seen as a feature, not a bug, although one should keep in mind that perceiving a part as strongly person-like is not necessarily a property of the part itself. I would expect: for improving one's understanding of the mind, one should not overestimate the degree of personhood of parts; for dealing with one's parts, treating them as mini-people might be useful even if they aren't.

My impression was that parts in IFS range from if-then rules to dissociated personalities (I was surprised to learn that their existence is debated; non-debated complex examples are described in Subagents, trauma and rationality). Because of this, I thought that the description of parts as 'mini-people' is mostly meant to be easy to grasp and remember, and does not claim to be accurate over the whole range.

Comment by korz on Tales From the American Medical System · 2019-05-10T11:55:02.208Z · score: 9 (3 votes) · LW · GW

To me, too, a mindset of "I am the authority on this topic" on the doctor's part sounds likely.

I would not be surprised if the doctor adopted a rule of "always discuss treatment in person", as health issues are often very emotional and patients may be ill-informed: meeting in person helps establish trust between doctor and patient, which is essential for handling such situations. This reason doesn't really apply to the case presented by Zvi, but it seems plausible that at least some of the motivation for the doctor's behaviour comes from a sloppy application of this rule. It seems to me that the doctor (and nurse) dismissed the possibility that someone could actually have a reason for not visiting right now, and then got stuck in their positions.

If the doctor also doesn't reflect on their role as a doctor in a consequentialist way, in some situations they might value shown respect ("If your doctor says you should meet them now, you should meet them now") more than the actual improvement in their patients' lives.


I wonder how the doctor would react if Zvi's friend pointed out his motivation for keeping his schedule while actively endorsing the importance of his doctor's opinion. This should happen in person, as phone communication is (even) worse at correcting misinterpretations.
If I am right, this could reassure the doctor that the respect they value is intact, and possibly make them open to the point Zvi's friend is making.

- - -
Apart from this, I am quite distraught by the almost active distrust of their patients' decisions shown by this doctor and nurse. If this really is typical of the American medical system, there must be massive associated problems...

Comment by korz on Bayes for Schizophrenics: Reasoning in Delusional Disorders · 2019-05-01T20:27:08.939Z · score: 3 (2 votes) · LW · GW

[I am unsure whether it makes sense to comment on this post after such a long time, but I think my experience could be helpful regarding the open questions. I am not trained in this subject, so my use of terms is probably off and confounded with personal interpretations.]

My personal experience with arriving at and holding abstruse beliefs can actually be described well by the ideas in this post, if complemented by something like the Multiagent Models of Mind:

To describe my experience, I will regard the mind as loosely consisting of sub-agents, which are interconnected and coordinate with each other (as in Global Workspace Theory). In a healthy equilibrium, the agents are largely aligned and contribute to a single global agent. Properties of agents include 'trust in their inputs' and 'alertness/willingness to update'.

Now to my description: for me, it felt as if a part of my mind lost some of its input connections from other parts, increasing its alertness (something fundamentally changed, thus predictions must be updated) and also crippling feedback from the 'global opinion'. This caused drifting behaviour in the affected sub-agent, as it updated on messy/incomplete input while not being successfully realigned by the other sub-agents. After some time, the impaired sub-agent would either settle on a new, misinformed model (allowing its alertness to settle) or keep grasping for explanations (alertness staying high, maybe because more alert-type input from other agents remained).

The rest of my mind experienced a sub-agent panicking and then broadcasting eccentric opinions in good faith, while either not being impressed by contradictions or erratically updating to warped opinions loosely connected to the input from the other agents. As the impaired agent felt as if it were updating on contradictions (but wasn't), the source of the felt alertness ("something is very wrong") was elusive, and it became natural to just adjust globally to the sub-agent to restore coherence. Thus, internal coherence was partially restored at the cost of deviating from common sense (creating an Ugh Field around confrontations with contradicting experiences).

Should my experience be representative, the decision to accept a delusional idea is not solely based on it being optimal for describing global sensory input. Instead, one of the sub-agents does not properly update to global decisions, but still dominates them whenever it is active, as all the other agents do keep updating*. In this view, the delusion actually is the best explanation of the sensory input, conditional on the impaired sub-agent being right.

*) There should be some additional responses, such as generally decreasing the 'trust in input' or possibly recognizing the actual source of the problem. The latter would require confronting the Ugh Field, which should take a lot of effort.

Comment by korz on Many maps, Lightly held · 2019-04-25T22:36:55.210Z · score: 1 (1 votes) · LW · GW

It seems the text of point 6 got lost somehow, so I will quote it from the original post:

6.
The fable of the rational vampire.  (I wish I had a link to credit the author).  The rational vampire casually goes through life rationalising away the symptoms – “I’m allergic to garlic”, “I just don’t like the sun”.  “It’s impolite to go into someone’s home uninvited, I’d be mortified if I did that”. “I don’t take selfies” and on it goes. Constant rationalisation.

I really like the summarized treatment of the reasons. While reading, it felt as if the point of Many Maps, Lightly Held gained momentum in some way. I think this helped me align my 'gut feeling' with my understanding.

Comment by korz on What are questions? · 2019-01-11T23:36:58.548Z · score: 1 (1 votes) · LW · GW

I will try to focus on the "compose a satisfying, useful, compact, and true model of what questions are" aspect. To reduce the problem to something more manageable, I will consider the thought process during questioning and exclude social and linguistic aspects.

In short:

My model proposal:
- While thinking, we use 'frameworks' (expectations/models/concepts/...)
- When thinking inside a framework, we are able to notice gaps and inconsistencies, which feels anywhere from unnerving to confusing
- This causes us to search for a solution (filling the gap, fixing the inconsistency, replacing the framework), which is the act of asking a question

(- The nested, interacting, fuzzy and changing 'frameworks' make everything complicated.)


In long:
Aiyen answered "It's a noticed gap in your knowledge", which I would like to build on:
It seems to me that questions are only possible when there is some expectation/model/concept in my mind to find the gap in.

As no better term comes to mind, I will use *framework* as the term for the expectation/model/concept that the question stems from. One can imagine 'framework' as referring to a mental picture of some part of reality.

Now it seems to me that, while thinking inside a framework, one can notice gaps or inconsistencies in the framework (this strongly reminds me of 'Noticing Confusion' from the Sequences), which feels unnerving (if clear) or confusing (if vague).
The search for a fix to the gap in the framework would then be what we call asking a question.

When doing this in a social setting, asking a question will tell others that help (in some sense) is being asked for and reveal something about the framework in use (which has many implications for social interaction).

Example

- I think that the term 'stupid question' is usually used when one thinks that the person asking is using an unsuitable framework altogether. It doesn't refer to the question itself but to the fact that 'basic understanding' (the 'proper framework') seems to be missing, and thus answering the question would be pointless.

Usefulness and Summary

Although this model of questions seems quite compact and true to me, at this point it doesn't help with moving from "Unknown Unknown" to "Known Unknown".
Pointing out that confusion plays a big role is already part of the Sequences.
Apart from hiding everything complicated behind the term 'framework', the main aspect of my model is the claim that questions always, by definition, originate from 'inside their box' and are a quest for looking outside of it.


Our quest consists of the simplest operations, each one worthy of examination. We cannot build towers of thought without a solid foundation. We cannot build better tools if we don't know how our current tools operate, and it's often good to bootstrap by using our tools on themselves.


To improve our tools of thinking, a better understanding of questions and their behaviour surely is useful.
In my usual way of thinking, the frameworks I use in my mind are fuzzy and ever-changing, which makes it hard to pin down and realize confusion.
This problem can be approached by thoroughly and consciously choosing one's framework of interest. One would expect this to take a lot of mental work/time, but in exchange to be a more robust way to improve frameworks.
(This sounds a lot like the "System 2" way of thinking from Kahneman's "Thinking, Fast and Slow".)

If it is true that finding gaps in a defined box (framework) is a natural ability of our mind (and the existence of a box a condition for this ability), this could open an approach for improving our tools.

___
Final note: Until now I have only read about rationality and certainly do not feel confident in my ability to contribute without erring often. Please point out mistakes that I make or basic ideas that I am unaware of.