How feeling more secure feels different than I expected 2021-09-17T09:20:05.294Z
What does knowing the heritability of a trait tell me in practice? 2021-07-26T16:29:52.552Z
Experimentation with AI-generated images (VQGAN+CLIP) | Solarpunk airships fleeing a dragon 2021-07-15T11:00:05.099Z
Imaginary reenactment to heal trauma – how and when does it work? 2021-07-13T22:10:03.721Z
[link] If something seems unusually hard for you, see if you're missing a minor insight 2021-05-05T10:23:26.046Z
Beliefs as emotional strategies 2021-04-09T14:28:16.590Z
Open loops in fiction 2021-03-14T08:50:03.948Z
The three existing ways of explaining the three characteristics of existence 2021-03-07T18:20:24.298Z
Multimodal Neurons in Artificial Neural Networks 2021-03-05T09:01:53.996Z
Different kinds of language proficiency 2021-02-26T18:20:04.342Z
[Fiction] Lena (MMAcevedo) 2021-02-23T19:46:34.637Z
What's your best alternate history utopia? 2021-02-22T08:17:23.774Z
Internet Encyclopedia of Philosophy on Ethics of Artificial Intelligence 2021-02-20T13:54:05.162Z
Bedtime reminiscences 2021-02-19T11:50:05.271Z
Unwitting cult leaders 2021-02-11T11:10:04.504Z
[link] The AI Girlfriend Seducing China’s Lonely Men 2020-12-14T20:18:15.115Z
Are index funds still a good investment? 2020-12-02T21:31:40.413Z
Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare 2020-11-24T10:36:40.843Z
Retrospective: November 10-day virtual meditation retreat 2020-11-23T15:00:07.011Z
Memory reconsolidation for self-affection 2020-10-27T10:10:04.884Z
Group debugging guidelines & thoughts 2020-10-19T11:02:32.883Z
Things are allowed to be good and bad at the same time 2020-10-17T08:00:06.742Z
The Felt Sense: What, Why and How 2020-10-05T15:57:50.545Z
Public transmit metta 2020-10-04T11:40:03.879Z
Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems 2020-10-02T11:50:05.327Z
AI Advantages [Gems from the Wiki] 2020-09-22T22:44:36.671Z
The Haters Gonna Hate Fallacy 2020-09-22T12:20:06.050Z
(Humor) AI Alignment Critical Failure Table 2020-08-31T19:51:18.266Z
nostalgebraist: Recursive Goodhart's Law 2020-08-26T11:07:46.690Z
Collection of GPT-3 results 2020-07-18T20:04:50.027Z
Are there good ways to find expert reviews of popular science books? 2020-06-09T14:54:23.102Z
Three characteristics: impermanence 2020-06-05T07:48:02.098Z
On the construction of the self 2020-05-29T13:04:30.071Z
From self to craving (three characteristics series) 2020-05-22T12:16:42.697Z
Craving, suffering, and predictive processing (three characteristics series) 2020-05-15T13:21:50.666Z
A non-mystical explanation of "no-self" (three characteristics series) 2020-05-08T10:37:06.591Z
A non-mystical explanation of insight meditation and the three characteristics of existence: introduction and preamble 2020-05-05T19:09:44.484Z
Stanford Encyclopedia of Philosophy on AI ethics and superintelligence 2020-05-02T07:35:36.997Z
Healing vs. exercise analogies for emotional work 2020-01-27T19:10:01.477Z
The two-layer model of human values, and problems with synthesizing preferences 2020-01-24T15:17:33.638Z
Under what circumstances is "don't look at existing research" good advice? 2019-12-13T13:59:52.889Z
A mechanistic model of meditation 2019-11-06T21:37:03.819Z
On Internal Family Systems and multi-agent minds: a reply to PJ Eby 2019-10-29T14:56:19.590Z
Book summary: Unlocking the Emotional Brain 2019-10-08T19:11:23.578Z
System 2 as working-memory augmented System 1 reasoning 2019-09-25T08:39:08.011Z
Subagents, trauma and rationality 2019-08-14T13:14:46.838Z
Subagents, neural Turing machines, thought selection, and blindspots 2019-08-06T21:15:24.400Z
On pointless waiting 2019-06-10T08:58:56.018Z
Integrating disagreeing subagents 2019-05-14T14:06:55.632Z
Subagents, akrasia, and coherence in humans 2019-03-25T14:24:18.095Z


Comment by Kaj_Sotala on How feeling more secure feels different than I expected · 2021-09-17T13:03:13.745Z · LW · GW

Thank you. <3

Comment by Kaj_Sotala on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-16T13:12:54.919Z · LW · GW

It was trained on the Internet (among other sources); I would be unsurprised to find out that it has read most of the Sequences.

Comment by Kaj_Sotala on GPT-Augmented Blogging · 2021-09-14T18:19:24.795Z · LW · GW

The list of titles reminds me that Tom Scott also had two fun videos on using GPT-3 to generate titles/scripts for his and other people's videos.

Comment by Kaj_Sotala on GPT-Augmented Blogging · 2021-09-14T18:17:39.504Z · LW · GW
  • I deliberately overplanned my life and everything is going wrong

This one sounds hilarious. :D

Comment by Kaj_Sotala on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T16:00:10.277Z · LW · GW

Are claims like "you have been socialised into racism" all that different from claims such as "you are running on corrupted hardware", though?

It's true that such claims can be used in insidious ways, but at the same time some such claims are also going to be true. If you automatically assume that all such claims are just there to get the readers to signal obeisance, and discard them just because of that, then you are also going to discard quite a few claims that you shouldn't have.

Comment by Kaj_Sotala on Three Principles to Writing Original Nonfiction · 2021-09-07T15:12:40.457Z · LW · GW

I don't read Aaron Diaz's webcomic Dresden Codak to learn about transhumanism. I read it because it's a masterwork of visual art with a riveting story. Nonfiction writing is about the ideas, not the experience. Get to the point.

That's a little strong. Nonfiction is about ideas, but we generally care about ideas because they are connected to experiences that matter to us (positively or negatively), and it's hard to convey ideas without conveying any experiences. In fact, this very post occasionally stops to do things that I would call conveying experiences, such as when you relay Etirabys's experience of you at different times. Arguably even you mentioning Dresden Codak in the previous sentences is evoking an experience. :)

Nonfiction conveys information. 

Fiction evokes emotion. [...]

Though the ostensible purpose of nonfiction is the conveyance of information, if that information is in a raw state, the writing seems pedestrian, black-and-white facts in a colorful world. The reader, soon bored, yearns for the images, anecdotes, characterization, and writerly precision that make informational writing come alive on the page. That is where the techniques of fiction can be so helpful to the nonfiction writer. [...]

TRADITIONAL NONFICTION: New York City has more than 1,400 homeless people.

BETTER NONFICTION: The man who has laid claim to the bench on the corner of 88th Street and Park Avenue is one of New York City’s 1,400 homeless people.

(Sol Stein, Stein on Writing)

Comment by Kaj_Sotala on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T14:50:59.576Z · LW · GW

It seems to me this book is largely a manual for obedience to a political faction; a long list of the details of how one ought to act in different scenarios in order to signal obeisance. 

I read the sentences just before the one you quoted as explicitly de-emphasizing signaling obeisance:

You should not feel guilty for having been socialized into racism_S. That’s just the way it is for all of us. Leave the sackcloth and ashes aside. When you find out you’ve been doing something that perpetuates racism_S, the best response is to say “of course I was; I’m glad I finally found out about it so I can change.” 

If I was writing something that was trying to get its readers to signal obeisance, I'm not sure what exactly I would say to get that outcome, but I think that my message would be closer to "you are bad and should feel bad" than "this is the way it is for all of us, so don't feel guilty or make too big of a deal out of it".

Comment by Kaj_Sotala on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T14:43:36.438Z · LW · GW

To make your point more stark, if one were to modify the quote to say

When you find out you’ve been doing something that is neither epistemically nor instrumentally rational, the best response is to say “of course I was; I’m glad I finally found out about it so I can change.” 

then it would presumably be better received on LW, even though both are expressing a similar point: if you realize you've been making a mistake, the most effective course of action is not to spend time beating yourself up, but to say "oops", update, and be happy that you noticed in the first place.

Comment by Kaj_Sotala on Training My Friend to Cook · 2021-09-03T09:09:45.056Z · LW · GW

Glad that we're coming closer to agreement. :)

I think this is quite different from just teaching a friend to the best of your ability

Could you elaborate on this? I agree that if we were talking about teaching a skill in the abstract, it would be different, but I'm not sure where the difference is if we're teaching a habit, since to me learning a habit is reshaping your motivational system.

Comment by Kaj_Sotala on Training My Friend to Cook · 2021-09-02T06:28:48.301Z · LW · GW

I think we're talking past each other. 

We seem to be, yes. :)

I guess a difference here is that where you see "manipulation", I see "good pedagogy".

A friend was once trying to teach / encourage me to cook. One of the things she told me to do was to put on some music that I liked as I was doing it, so as to make it feel more enjoyable. I don't know whether she thought about it in those terms, but in making that suggestion she was trying to use conditioning on me - associating the act of cooking with pleasant music. I never thought that this suggestion was manipulation or something that she should have asked my consent for, I just felt that it was her being thoughtful and trying to make my experience as pleasant as possible. 

The article strongly gives the impression that the thing Brittany was excited about acquiring was not just the abstract skill of knowing how to cook, but also the habit of actually cooking regularly ("Brittany wanted tasty food and not to be sick all the time"). And if you want to acquire a habit, then the way that we acquire habits is through conditioning; there isn't any other way. The only question is whether you know enough to help someone (or yourself) to acquire it in a way that's fast and pleasant or slow and less pleasant.

If we strip away cold-sounding terms like "operant conditioning" and look at what lsusr actually did, we get things like "I never, at any point, implied that Brittany might be deficient because she didn't know how to cook". The opposite of this would have been... making her feel bad for no reason. I assume that lsusr wouldn't have wanted to make his friend feel bad for no reason anyway, so asking for her consent on this particular point would feel like it amounts to something like "are you okay with me being nice to you while we do this, just as I'd try to be nice to you anyway".

Similar to "I didn't start by bringing Brittany to the store, then teach her to cook and have her eat at the end. I started by feeding her, then I taught her to cook and only at the end did I bring her to the grocery store". If someone wants to learn to cook, then you have to choose some point in the chain to start them out from. If you happen to know that starting out from the end results is both the fastest way to teach and the choice that will produce the most pleasant experience for them, do you need to ask for their consent to choose this point rather than a point that produces a worse experience? I guess it wouldn't hurt to ask for consent, but certainly if I've already told someone that I want them to help me get into a particular habit, then I actively hope that they do everything they can to make the process as pleasant and effective for me as possible!

I guess the intuition that I'm trying to express here is that there's no difference, in this case, between "applying conditioning" and "trying to make sure that your friend has a pleasant time and comes to enjoy the activity that they've expressed wanting to do". The things that we find enjoyable and pleasant to do become habitual to us, and behaviorism is (in part) just the science of figuring out what it is that causes things to become enjoyable and pleasant to us. If you hadn't read anything about behaviorism, but were just motivated by a desire to be nice to your friend and tried to figure out how to give them an enjoyable time as they tried out cooking, you could still arrive at exactly the same behaviors. So asking for consent on that ends up being basically the same as "do you consent to me trying to be nice to you, rather than me not being nice to you".

The way a similar point was expressed in Don't Shoot the Dog, IIRC, was that we have no choice of whether our actions condition other people or not. Everything that we do conditions other people to like various things either more or less; the question is only whether we let our effects be random and outside our control, or whether we learn enough to try to make our effects beneficial. Once you've learned how to effectively teach someone a habit, and they ask you to teach it to them, you don't really have a choice of "not using conditioning" anymore; you know that all of your choices cause some degree of conditioning, and you only have the choice of whether to do it well or poorly.

All of this feels different to me than going on a date and using operant conditioning to make someone fall in love with you faster. (Setting aside the way in which things like dressing up nicely or being good in conversation also involve an element of conditioning the other.) If someone consents to go on a date with you, they haven't consented to you teaching them a habit, they have just consented to spending an evening trying out whether they happen to like you or not. There are certainly manipulative tricks that one could employ there, that would have an effect of clouding their ability to form an accurate judgment, and that would be wrong. But I don't see anything like that happening here: Brittany had already made the judgment that there was an end result she wanted, and lsusr was just doing his best to help her reach that end result.

Comment by Kaj_Sotala on How To Write Quickly While Maintaining Epistemic Rigor · 2021-09-01T19:12:12.953Z · LW · GW

Liked this essay and upvoted, but there's one part that feels a little too strong:

There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it. [...]

It’s been pointed out before that most high-schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.

Suppose that I have studied a particular field X, and this has given me a particular set of intuitions about how things work. They're not based on any specific claim that I could cite directly, but rather a more vague feeling of "based on how I understand things to generally work, this seems to make the most sense to me".

I now have an experience E. The combination of E and my intuitions gathered from studying X cause me to form a particular belief. However, if I had not studied X, I would have interpreted the experience differently, and would not have formed the belief.

If I now want to communicate the reasons behind my belief to LW readers, and expect many readers to be unfamiliar with X, I cannot simply explain that E happened to me and therefore I believe this. That would be an accurate account of the causal history, but it would fail to communicate many of the actual reasons. I could also say that "based on studying X, I have formed the following intuition", but that wouldn't really communicate the actual generators of my belief either.

But what I can do is to try to query my intuition and try to translate it into the kind of a framework that I expect LW readers to be more familiar with. E.g. if I have intuitions from psychology, I can find analogous concepts from machine learning, and express my idea in terms of those. Now this isn't quite the same as just writing the bottom line first, because sometimes when I try to do this, I realize that there's some problem with my belief and then I actually change my mind about what I believe. But from the inside it still feels a lot like "persuasion", because I am explicitly looking for ways of framing and expressing my belief that I expect my target audience to find persuasive.

Comment by Kaj_Sotala on Training My Friend to Cook · 2021-09-01T18:05:29.897Z · LW · GW

Are you suggesting that she wouldn't have been excited without manipulation? 

Given that she was apparently enthusiastic "every step in the way", I read the post as saying that just the first step of having a picnic together in the park and lsusr telling her that home-cooked meals are cheaper than TV meals was enough to get her to want to cook. If something as minor as that would have been enough to make her excited about it, then that doesn't feel like it qualifies as manipulation to me. 

I can't cook either and I've also had people offer me tasty food and mention that I could live more cheaply and tastily if I learned to cook, and I wouldn't call that an act of manipulation! Especially since that alone hasn't been enough to get me particularly enthusiastic about the idea; if it was enough to persuade her, then she must have been very close to being excited about it already.

Comment by Kaj_Sotala on Training My Friend to Cook · 2021-09-01T12:35:03.960Z · LW · GW

I'm guessing that a significant part of the negative reaction that people are getting to this article could be changed by replacing the sentence "My goal for covid lockdown was to train Brittany to cook" from the first paragraph with something like "As a result, Brittany wanted to learn to cook, and my goal for covid lockdown was to help her succeed with that".

Comment by Kaj_Sotala on Training My Friend to Cook · 2021-09-01T12:29:10.322Z · LW · GW

Did she consent to any of this

Didn't lsusr already answer this in the positive [1, 2]?

Comment by Kaj_Sotala on Training My Friend to Cook · 2021-08-29T09:06:29.994Z · LW · GW

This post felt really wholesome to read and made me feel happy. I'm glad for Brittany for having had such a wonderful experience and for having you as her friend.

Comment by Kaj_Sotala on The Death of Behavioral Economics · 2021-08-23T16:46:26.068Z · LW · GW

Gain seeking (the opposite of loss aversion) in the stock market

Moreover, whereas real-world phenomena exist that appear consistent with loss aversion, as pointed out by Ert and Erev (2013), other phenomena occur that appear consistent with the opposite, namely gain seeking. For example, Barber and Odean (1999) identified the phenomenon of overtrading in the stock market, whereby investors trade more than would be justified by rationality assumptions. To the extent that maintaining the status quo is thought to represent loss aversion, this excess trading (i.e., changing of the status quo) could be interpreted to support gain-seeking behavior. Further, individual investors exhibit insufficient diversification among assets (Barber & Odean, 2000). To the extent that diversification reduces risk, this behavior can also be interpreted as gain seeking.

Comment by Kaj_Sotala on The Death of Behavioral Economics · 2021-08-23T16:43:22.411Z · LW · GW

Self-rated losses vs. gains

Arguably, perhaps the most straightforward test of loss aversion is to simply ask people to evaluate the impact of losing versus gaining the same object. However, when researchers have examined how people rate the impact of losing versus gaining the same amount of money, little support for loss aversion has emerged (Harinck, Van Dijk, Van Beest, & Mersmann, 2007; Liberman, Idson, & Higgins, 2005; Mellers, Schwartz, & Ritov, 1999; Mukherjee, Sahay, Pammi, & Srinivasan, 2017; Rozin & Royzman, 2001). For example, Rozin and Royzman (2001) write: “In its boldest form, losing $10 is worse than winning $10 is good. Although we are convinced of the general validity of loss aversion, and the prospect function that describes and predicts it, we confess that the phenomenon is only realizable in some frameworks. In particular, strict loss and gain of money does not reliably demonstrate loss aversion (unpublished data by the authors)” (Rozin & Royzman, 2001, p. 306). In fact, with low stakes, gains actually appear to loom larger than losses when using this paradigm (e.g., Harinck et al., 2007).

Whereas past work has focused on a comparison between losing versus gaining monetary amounts, we have recently examined how people react to losing nonmonetary objects (Gal & Rucker, 2017b). For example, how do people rate the impact of losing versus gaining a mug? For most everyday objects we examined (mugs, flashlights, notebooks), the positive impact anticipated from gaining the object was rated to be greater than the negative impact anticipated from losing the object. For example, using a scale ranging from −5 (“extremely negative”) to +5 (“extremely positive”) to describe their feelings, participants who rated their feelings about losing a mug said their feelings would be less affected (M = 1.38) than did participants who rated their feelings if they were to gain a mug (M = 2.71). Notably, for some objects, we found no statistical difference between the impact of gains versus losses (a watch, a mountain view, lakefront access), and for no object did we find losses were rated to be more impactful than gains.

McGraw, Larsen, Kahneman, and Schkade (2010) attempted to reconcile the inconsistency of such findings with loss aversion. Specifically, the authors proposed that losses and gains are evaluated on different subjective scales. Consequently, the comparison of the impact of a loss evaluated independently with the impact of a gain evaluated independently does not provide a fair relative comparison of the impact of losses versus gains. Instead, they argue for a fair comparison, the loss and gain of an object need to be evaluated jointly with respect to each other. To this end, McGraw et al. (2010) asked participants to evaluate the relative impact of losing versus gaining the same amount of money; for example, they asked participants which of losing or gaining $50 they thought would be more impactful. With this approach, McGraw et al. (2010) identified a pattern of results consistent with loss aversion: the majority of participants stated that the loss of money would be more impactful than its gain. [...]

... an important caveat is in order. Namely, the studies of McGraw et al. (2010) involved potentially significant amounts of money for the participants involved (i.e., $50 and $200 for undergraduates). As noted previously, when large amounts of money are involved, loss aversion is indistinguishable from risk aversion for changes in wealth, which is fully consistent with rational choice theory (cf. Rabin & Thaler, 2001). To put this in context, if losing $50 is more likely to impact one’s lifestyle and wellbeing than gaining $50 is likely to impact it, then it is perfectly rational that individuals would be more psychologically impacted by losing $50 than by gaining $50. However, it is assumed that the loss versus gain of small amounts of money do not differentially impact one’s objective wellbeing, and hence, it is considered irrational for losses to loom larger than gains when small amounts of money are involved (Rabin & Thaler, 2001).

Indeed, in a recent paper by Mukherjee et al. (2017), the authors replicated the procedure of McGraw et al. (2010) with low stakes. They observed that when stakes were low, gains were rated as having more psychological impact than losses. Conversely, when stakes were high, Mukherjee et al. (2017) found that participants tended to rate losses as more impactful than gains. Thus, consistent with the possibility of contextual factors affecting the relative impact of losses and gains, the findings of Mukherjee et al. suggest a moderator of when losses loom larger than gains. On the other hand, the definitiveness of this moderator must be tempered by potential concerns about the validity of the particular methodology used for testing the impact of losses versus gains and the fact that for high stakes it is difficult to distinguish risk aversion from differences in the psychological impact of losses and gains. Finally, in recent work (Gal & Rucker, 2017b), we also asked participants to rate the impact of gaining and losing various goods using McGraw et al.’s procedure. Although our results varied based on the nature of the good, we found no evidence for a predominance for losses to loom larger than gains. 

Comment by Kaj_Sotala on The Death of Behavioral Economics · 2021-08-23T16:38:45.630Z · LW · GW

Risky choice

Kahneman and Tversky (1979) propose that individuals will tend to demand a substantial premium over an expected value of zero to accept a bet with even odds of winning and losing the bet. In the words of Kahneman and Tversky (1979), “most people find symmetrical bets of the form (x, 0.50; −x, 0.50) distinctly unattractive.” In a typical demonstration, which we refer to as the risky bet premium paradigm, if individuals are offered a bet with a 50% chance of losing $5 and a 50% chance of winning X, on average, they demand that X be $10 or more in order to accept the bet. This finding is assumed to reflect the greater perceived psychological impact of a loss compared with a gain (Tversky & Kahneman, 1992).
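As a quick numerical sketch of the risky bet premium described in the quoted paragraph (illustrative only; the piecewise-linear value function and the loss-aversion coefficient λ = 2 are my simplifying assumptions, not figures from the paper):

```python
def value(x, lam=2.0):
    """Piecewise-linear prospect-theory value function:
    gains count at face value, losses are scaled up by lam."""
    return x if x >= 0 else lam * x

def bet_value(win, lose, lam=2.0):
    """Subjective value of an even-odds bet: 50% win `win`, 50% lose `lose`."""
    return 0.5 * value(win, lam) + 0.5 * value(-lose, lam)

# With lam = 2, a 50/50 bet to win $10 against a $5 loss is exactly
# neutral, matching the "demand that X be $10 or more" pattern above.
print(bet_value(10, 5))  # 0.0
print(bet_value(8, 5))   # negative: bet rejected despite positive EV
print(bet_value(12, 5))  # positive: bet accepted
```

The point of the sketch is that rejecting positive-expected-value small bets only follows if losses are weighted more heavily than gains; with λ = 1 the agent would accept any X above $5.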

Gal (2006) points out that the risky bet premium can be conceived as a special case of the status quo bias paradigm where not accepting the bet is the status quo (or inaction) option and accepting the bet is the change (or action) option. As a result, similar explanations to those that can explain the status quo bias and endowment effect can explain the risky bet premium. Therefore, it is unclear whether the risky bet premium reflects a general tendency of losses to loom larger than gains or reflects processes associated with a propensity to favor inaction over action.

In order to decouple losses and gains from inaction and action in the context of risky choice, Gal (2006) presented participants with a risky bet, where no difference in action or inaction existed with respect to accepting the bet and not accepting the bet. Gal found no evidence that losses loomed larger than gains. Specifically, in a hypothetical decision to allocate funds ($100) between a safe alternative that returned 3% for sure and a mixed even bet with an expected return of zero, nearly 80% of individuals allocated at least some funds to the even bet, that is, to a risky option with lower expected value than the safe option, and approximately 20% of individuals allocated all the funds to the even bet, an amount which matched the percentage of individuals allocating all their funds to the safe option.
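To make the expected values in Gal's allocation task concrete (a minimal sketch; the $100 stake and 3% safe return come from the quoted passage, everything else is my framing):

```python
FUNDS = 100.0
SAFE_RETURN = 0.03  # the sure 3% alternative in Gal (2006)

def expected_final(alloc_to_bet):
    """Expected final wealth when `alloc_to_bet` dollars go to a mixed
    even bet (expected return zero) and the remainder earns 3% for sure."""
    safe_part = (FUNDS - alloc_to_bet) * (1 + SAFE_RETURN)
    bet_part = alloc_to_bet  # even bet: expected value equals the stake
    return safe_part + bet_part

# Every dollar moved into the even bet forgoes 3 cents of expected
# return, so a risk-neutral (let alone loss-averse) agent should
# allocate nothing to it.
print(expected_final(0))    # 103.0
print(expected_final(100))  # 100.0
```

This is why the ~80% of participants who put some funds into the even bet is hard to square with loss aversion: they accepted both risk and a lower expected value.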

Rather than evidence for loss aversion, if anything, the behavior documented by Gal (2006) appears, on net, to reflect gain seeking. Other researchers have similarly found that when given multiple investment options, individuals tend to choose risky investment options over safer investment options with higher expected value (Ben-Zion, Erev, Haruvy, & Shavit, 2010). Such findings appear difficult to reconcile with a general principle of loss aversion (see also Erev, Ert, & Roth, 2010; Sonsino, Erev, & Gilat, 2002 for results with similar implications) and provide evidence against both the strong and weak versions of loss aversion considered here.

Other researchers have found that when accepting a risky bet is not framed as the sole action option, but as one option in a choice between two action options, no evidence for loss aversion emerges (Erev, Ert, & Yechiam, 2008; Ert & Erev, 2013; Ert & Yechiam, 2010; Hochman & Yechiam, 2011; Koritzky & Yechiam, 2010; Yechiam & Ert, 2007). For example, Erev et al. (2008) offered participants a choice between either (a) receiving 0 points for sure or (b) receiving a bet that offered a 50% chance to win 1000 points and a 50% chance to lose 1000 points (points were to be converted to money at a known ratio). Erev et al. found that 48% of participants chose the safe option (i.e., receiving 0 points for sure) and 52% of participants chose the risky option. Consistent with this finding, a review of over 30 papers finds little evidence that losses loom larger than gains in the context of risky choice when a bet with even odds of gaining and losing is not framed as the action option (Yechiam & Hochman, 2013). We recently found additional support for this conclusion in two separate runs of an experiment conducted with participants from MTurk. In particular, we asked participants to imagine they faced a choice between either (a) receiving $0 with 100% chance or (b) receiving $15 with 50% chance or losing $15 with 50% chance. In both runs, participants exhibited a trend toward the choice of the risky option (Figure 2). Thus, we did not find evidence for participants to avoid loss any more than they pursued gain in risky choice.

The stakes of the outcomes in risky choice experiments that do not show evidence for loss aversion tend to be low to moderate (from less than $1 to as high as $100). Conversely, some experiments that involve higher stakes (e.g., several hundred dollars) have shown a tendency among individuals to choose the safer alternative. However, loss aversion is assumed to be independent of the level of the stakes involved (Kahneman & Tversky, 1979). In fact, that the effects attributed to loss aversion have been found with small stakes is cited as particularly strong evidence for loss aversion (Rabin, 2003; Rabin & Thaler, 2001). The reason scholars have focused on small stakes is because avoidance of large magnitude losses can be explained by ordinary risk aversion for changes in wealth/circumstances, which is entirely consistent with rational choice theory, whereas the same is not true of avoidance of low stakes losses that do not materially impact wealth/circumstances. For example, it is rational to perceive a greater impact from losing $1000 that is needed to pay the rent than from gaining $1000 when basic needs are already covered. Conversely, if neither losing nor gaining $5 materially changes one’s circumstances, it can be viewed as irrational to view its loss as more impactful than its gain. Thus, the finding that people often exhibit risk neutrality in choices among low-stakes mixed gambles is evidence against loss aversion.

Comment by Kaj_Sotala on The Death of Behavioral Economics · 2021-08-23T16:31:34.255Z · LW · GW

The endowment effect and loss aversion

The endowment effect is the phenomenon perhaps most often cited as evidence for loss aversion in the context of riskless choice (Kahneman et al., 1990; Thaler, 1980; Tversky & Kahneman, 1991). The endowment effect refers to the finding that owners of an object demand more to part with the object than nonowners are willing to pay to obtain it (Thaler, 1980). For example, in a classic study, Kahneman et al. (1990) found that individuals endowed with a mug demanded, on average, about $7 to part with it. In contrast, individuals not endowed with a mug were, on average, willing to pay only about $3 to obtain the same mug. The finding that individuals’ willingness to accept (WTA) is greater than their willingness to pay (WTP) appears robust across many different instantiations of the endowment paradigm (Kahneman et al., 1991). It is this central finding that is viewed as evidence for the general principle that losses exert a greater impact than gains.

Although taken as evidence for loss aversion, the endowment effect can be understood as a case of the status quo bias where maintaining the endowed option is the inaction (or status quo) alternative. As such, the endowment effect is subject to the same alternative explanations to loss aversion (e.g., inertia) as those described for the status quo bias. For example, the inertia explanation suggests that when individuals are indifferent between the endowed option and the nonendowed option, they will opt to maintain the endowed option due to lack of incentive to trade, not because the loss of the endowed option looms larger than the gain of the nonendowed option.

Another explanation of the endowment effect, which similarly does not require loss aversion, comes from Weaver and Frederick (2012) and Isoni (2011) (see also Simonson & Drolet, 2004; Yechiam, Ashby, & Pachur, 2017). These authors provide a differential reference price account. They argue that buyers and sellers face fundamentally different decisions that lead them to focus on different reference prices when setting WTP and WTA amounts, respectively. For buyers, their own personal utility from the acquisition of the object is the most salient reference. In contrast, for sellers, the market value of the object is the most salient reference. As a consequence, if market prices tend to exceed personal valuations, owners will ask more for a product than a prospective buyer is willing to pay. For example, if both owners and nonowners value an object at $3, but the market price is $7, owners will demand $7 to part with it, whereas nonowners will only be willing to pay $3 to acquire it. This account, as with inertia, requires no differential sensitivity to losses to explain the endowment effect.

Other potential confounds exist in the endowment effect paradigm. For example, WTP and WTA are assessed on different scales. WTP is bounded by one’s ability to pay (i.e., budget constraints), whereas WTA is not.

In the possible alternative explanations to loss aversion for the endowment effect discussed so far, the valuation of an option when it is the endowed versus the nonendowed option does not differ. However, research also shows that individuals confronted with the decision of whether to give up an endowed option tend to focus more on positive features and less on negative features of the option than those faced with the decision of whether to acquire the option (Carmon & Ariely, 2000; Nayakankuppam & Mishra, 2005; see also Johnson, Haubl, & Keinan, 2007). This process could result in greater valuation for an option when it is endowed than when it is not endowed and, therefore, could be interpreted as a process that leads losses to loom larger than gains in the context of the endowment effect. However, two caveats are in order. First, because loss and gain are confounded with inaction and action in the endowment paradigm, rather than reflect a tendency to elevate the option that might be lost, this process could just as well reflect a tendency to elevate the inaction alternative. Second, even if one accepts the idea that a tendency exists to elevate options that might be lost in the context of the endowment effect, it would not imply acceptance of loss aversion itself; that is, the acceptance of a general principle whereby losses loom larger than gains. In particular, to accept a general principle of loss aversion would, at the least, require evidence that losses loom larger than gains across different contexts, including in contexts where losses and gains are not confounded with inaction and action.

Comment by Kaj_Sotala on The Death of Behavioral Economics · 2021-08-23T16:28:10.306Z · LW · GW

Status quo bias and loss aversion

The status quo bias, the name given for individuals’ propensity to prefer the status quo to an alternative option, has been attributed to loss aversion (Kahneman, Knetsch, & Thaler, 1991) and thus taken as evidence supportive of loss aversion. In particular, the loss aversion account suggests that the loss of the status quo option looms larger than the gain of an alternative (change) option. However, Ritov and Baron (1992) provided evidence that the status quo bias was not a propensity to remain at the status quo per se, but a propensity to favor inaction over action (i.e., omission over commission).

In particular, Ritov and Baron showed that when presented with a choice that involved the option to do nothing or to do something, people tended to choose to do nothing; this decision resulted in a tendency toward the choice of the status quo option when doing nothing maintained the status quo, but a tendency toward the choice of the change option when doing nothing resulted in a change from the status quo. Others have found that a propensity toward the status quo sometimes persists even when action is required to maintain the status quo (Schweitzer, 1994), though Ritov and Baron (1992) did not find this to be the case.

Regardless, acceptance of the idea that individuals tend to favor inaction over action (rather than to favor the status quo over change per se) does not preclude the loss aversion explanation for the status quo bias. Instead, this observation merely qualifies the loss aversion explanation: if loss aversion explains the status quo bias, then the reference point must be inaction (i.e., the default situation of doing nothing) rather than the status quo. In other words, it is not the loss of the status quo that looms larger than the gain of the alternative; rather what is to be lost by action looms larger than what is to be gained by action.

At the same time, a propensity toward inaction does not, by any means, require loss aversion. Gal's (2006) inertia account states that when people are indifferent between options, they should favor inaction over action because doing something requires a psychological motive. Alternatively, a preference for inaction might occur because individuals will tend to favor options that reduce processing and transaction costs. Other explanations for a propensity toward inaction are that errors of commission tend to involve greater regret than errors of omission (Ritov & Baron, 1995) and that individuals might rely on an “if it ain’t broke, don’t fix it” heuristic (alluded to by Baron & Ritov, 1994).

To illustrate that loss aversion is not required to explain the status quo bias, Gal (2006) asked participants if they would trade one good (a quarter minted in Denver) for an essentially identical good (a quarter minted in Philadelphia). Kahneman (2011) has noted that loss aversion does not come into play when individuals exchange essentially identical goods (e.g., when trading one $5 bill for five $1 bills) because people do not code such exchanges in terms of losses and gains. Nonetheless, Gal (2006) found that more than 85% chose to retain their original quarter. We recently replicated this result by asking 149 MTurk participants whether they would prefer to trade a $20 bill they were slated to receive for another $20 bill (i.e., the change option) or to stick with the original $20 bill they were slated to receive (the status quo option). In one version, participants were only able to choose between these two options, whereas in another version, participants were able to indicate that they were indifferent between the options. Although, according to Kahneman (2011), loss aversion should not come into play in this context because the exchange would not be coded in terms of losses and gains, we observed a clear tendency of participants to indicate a preference for the status quo option (see Figure 1). Thus, again, the presence of a status quo bias should not be viewed as evidence of loss aversion.

In sum, the mere presence of a status quo bias (or inaction bias) does not provide insight into whether losses loom larger than gains. The status quo bias might be caused by the loss of the status quo looming larger than the gain of an alternative, but it might equally be caused by any of a number of other factors that lead to a propensity toward inaction (and/or a propensity toward the status quo). As such, the presence of a status quo bias, in and of itself, cannot be taken as tantamount to evidence for loss aversion.

Comment by Kaj_Sotala on The Death of Behavioral Economics · 2021-08-23T16:26:32.520Z · LW · GW

There's also a second paper linked from that article which is quite interesting (some excerpts in child comments).

Here, we offer a review and discussion of the literature on loss aversion. Our main conclusion is that the weight of the evidence does not support a general tendency for losses to be more psychologically impactful than gains (i.e., loss aversion). Rather, our review suggests the need for a more contextualized perspective whereby losses sometimes loom larger than gains, sometimes losses and gains have similar psychological impact, and sometimes gains loom larger than losses.

Comment by Kaj_Sotala on Outline of Galef's "Scout Mindset" · 2021-08-20T20:27:13.659Z · LW · GW

I take it reading the book was worth it, then? :)

Comment by Kaj_Sotala on Outline of Galef's "Scout Mindset" · 2021-08-11T13:18:49.803Z · LW · GW

For effective altruists, I think (based on the topic and execution) it's straightforwardly the #1 book you should use when you want to recruit new people to EA. It doesn't actually talk much about EA, but I think starting people on this book will result in an EA that's thriving more and doing more good five years from now, compared to the future EA that would exist if the top go-to resource were more obvious choices like The Precipice, Doing Good Better, the EA Handbook, etc.

I passed this review to people in a local EA group and some of them felt unclear on why you think this way, since (as you say) it doesn't seem to talk about EA much. Could you elaborate on that part?

Comment by Kaj_Sotala on Incorrect hypotheses point to correct observations · 2021-08-01T13:43:15.282Z · LW · GW

No worries. :) Getting it into the curation e-mail was probably good.

Comment by Kaj_Sotala on Incorrect hypotheses point to correct observations · 2021-07-31T12:31:51.124Z · LW · GW

(I now edited out the curation notice, since the e-mail was already sent a while back.)

Comment by Kaj_Sotala on What does knowing the heritability of a trait tell me in practice? · 2021-07-30T17:22:37.920Z · LW · GW

Attempts to rethink education have failed

Or at least the particular set of reforms discussed in that article has failed? Even within the context of the US, there do seem to be occasional educational interventions that work, e.g.:

In a state once notorious for its low reading scores, the Mississippi state legislature passed new literacy standards in 2013. Since then Mississippi has seen remarkable gains. Its fourth graders have moved from 49th (out of 50 states) to 29th on the National Assessment of Educational Progress, a nationwide exam. In 2019 it was the only state to improve its scores. For the first time since measurement began, Mississippi’s pupils are now average readers, a remarkable achievement in such a poor state.

Ms Burk attributes Mississippi’s success to implementing reading methods supported by a body of research known as the science of reading. In 1997 Congress requested the National Institute of Child Health and Human Development and the Department of Education to convene a National Reading Panel to end the “reading wars” and synthesise the evidence. The panel found that phonics, along with explicit instruction in phonemic awareness, fluency and comprehension, worked best.

Comment by Kaj_Sotala on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-27T18:48:29.238Z · LW · GW

Didn't they train a separate MuZero agent for each game? E.g. the page you link only talks about being able to learn without pre-existing knowledge.

Comment by Kaj_Sotala on Working With Monsters · 2021-07-25T20:47:46.001Z · LW · GW

Interestingly, if one looks at this story in terms of "what message is this story sending", then it feels like the explicit and the implicit message are opposites of each other.

The explicit message seems to be something like "cooperation with the other side is good, it can be the only way to survive".

But then if we think of this representing a "pro-cooperation side", we might notice that the story doesn't really give any real voice to the "anti-cooperation side" - the one which would point out that actually, there are quite a few situations when you absolutely shouldn't cooperate with monsters. The setup of the story is such that it can present a view from which the pro-cooperation side is simply correct, as opposed to looking at a situation where it's more questionable.

In the context of a fictional story making a point about the real world, I would interpret "cooperating with the other side" to mean something like "making an honest attempt to fairly present the case for the opposite position". Since this story doesn't do that, it reads to me like it's saying that we should cooperate with those who disagree with us... while at the same time not cooperating with the side that it disagrees with. 

Comment by Kaj_Sotala on Book review: The Explanation of Ideology · 2021-07-21T03:48:05.370Z · LW · GW

Looking up government statistics for Finnish household size over the years, they give an average size of 2.46 persons for 1981 and 2.41 for 1986, which sounds pretty nuclear already. (There's a steady downwards trend from 3.35 in 1966 to 2.02 in 2016).

Comment by Kaj_Sotala on Book review: The Explanation of Ideology · 2021-07-20T17:47:01.794Z · LW · GW

Todd tells us that when there's a change in what family structure dominates a region, it's mostly due to a subpopulation becoming more dominant.

FWIW, while I can't speak for 1983 (three years before my birth), at least going by the definitions of this post, I would put Finland at equal, exogamous, nuclear (Finnish people move away from their parents at 21.8 years of age on average, one of the lowest in Europe, with the EU average being 25.9 years), and low parental authority. This seems to differ from the way Finland was classified in the book (I assume that "universalist" corresponds to "community"?), and if it has shifted from the original classification, I'd assume it to have happened by cultural drift rather than changes in subpopulations.

Comment by Kaj_Sotala on Why Subagents? · 2021-07-17T18:17:28.517Z · LW · GW

The way I'd think of it, it's not that you literally need unanimous agreement, but that in some situations there may be subagents that are strong enough to block a given decision. And then if you only look at the subagents that are strong enough to exert a major influence on that particular decision (and ignore the ones either who don't care about it or who aren't strong enough to make a difference), it kind of looks like a committee requiring unanimous agreement.

It gets a little handwavy and metaphorical but so does the concept of a subagent. :)

Comment by Kaj_Sotala on Relentlessness · 2021-07-07T11:13:45.163Z · LW · GW

Why is immersion the best way to learn a language?

I submit that it is because you do not get to stop.

I'm not sure of this. I got quite good at English basically by immersion - reading English books, watching English TV and movies, hanging out on English forums, playing video games in English - so that by the time I was in my mid-to-late teens, people online were already mistaking me for a native speaker (or writer, rather). But it's not that I was forced to do those things. I could have read only books in Finnish, just read the Finnish subtitles in English TV shows / movies, hung out exclusively on Finnish sites / with Finnish people, and done things other than play video games. In fact most kids my age and in that area did not spend as much time learning English as I did, nor did they get equally good at it.

I'd rather say that learning English felt valuable in that it gave me new options for what to do. If I wanted to pursue some of the things that felt the most interesting to me (e.g. read Star Wars novels that hadn't yet been translated into Finnish, obviously a supremely important task), then I needed to learn English. That seems related to the "you do not get to stop" criterion in the sense that learning the language is high-value to you - if you are in an environment where you don't get to stop learning a language, it means that you need to learn the language in order to be able to do anything. But it seems like the key is simply in it being of high value, and "you can't do anything without it" is just a particular special case that makes it maximally high value.

On the other side, there are all the parents who aren't actually very good and neglect or abuse their children. Even though they are forced to be around their kids too, they don't place as high a value on their child's well-being as a good parent does, so the relentlessness doesn't translate into good parenting.

Comment by Kaj_Sotala on Musings on general systems alignment · 2021-07-01T20:03:32.362Z · LW · GW

Upvoted for specificity, but I would characterize this as "we have some degree of influence and reputation" rather than "the world is turning to us for leadership". (I guess from the "somewhat influential" in your other comment that you agree.)

Comment by Kaj_Sotala on Musings on general systems alignment · 2021-07-01T14:37:26.879Z · LW · GW

But I'm sorry, the world simply is turning to this community for leadership. That is a thing that is happening in the world. There is a lot of very clear evidence.

Name three pieces of evidence?

Comment by Kaj_Sotala on Internal Information Cascades · 2021-06-26T18:42:08.020Z · LW · GW

In 1974 Braver confirmed that the perceived intentions of partners in game theoretic contexts were frequently more important for predicting behavior than a subject's personal payoff matrix, and in 1978 Goldman tried pre-sorting participants into predicted cooperators and defectors and found (basically as predicted) that defectors tended not to notice opportunities to cooperate even when those opportunities actually existed.

Consider the tragedy here: People can update on the evidence all they want, but initial social hypotheses can still channel them into social dynamics where they generate the evidence necessary to confirm their prior beliefs, even when those beliefs lead to suboptimal results.

Seems related to different worlds:

A few years ago I had lunch with another psychiatrist-in-training and realized we had totally different experiences with psychotherapy.

We both got the same types of cases. We were both practicing the same kinds of therapy. We were both in the same training program, studying under the same teachers. But our experiences were totally different. In particular, all her patients had dramatic emotional meltdowns, and all my patients gave calm and considered analyses of their problems, as if they were lecturing on a particularly boring episode from 19th-century Norwegian history.

I’m not bragging here. I wish I could get my patients to have dramatic emotional meltdowns. As per the textbooks, there should be a climactic moment where the patient identifies me with their father, then screams at me that I ruined their childhood, then breaks down crying and realizes that she loved her father all along, then ???, and then their depression is cured. I never got that. I tried, I even dropped some hints, like “Maybe this reminds you of your father?” or “Maybe you feel like screaming at me right now?”, but they never took the bait. So I figured the textbooks were misleading, or that this was some kind of super-advanced technique, or that this was among the approximately 100% of things that Freud just pulled out of his ass.

And then I had lunch with my friend, and she was like “It’s so stressful when all of your patients identify you with their parents and break down crying, isn’t it? Don’t you wish you could just go one day without that happening?”

And later, my supervisor was reviewing one of my therapy sessions, and I was surprised to hear him comment that I “seemed uncomfortable with dramatic expressions of emotion”. I mean, I am uncomfortable with dramatic expressions of emotion. I was just surprised he noticed it. As a therapist, I’m supposed to be quiet and encouraging and not show discomfort at anything, and I was trying to do that, and I’d thought I was succeeding. But apparently I was unconsciously projecting some kind of “I don’t like strong emotions, you’d better avoid those” field, and my patients were unconsciously complying.

Comment by Kaj_Sotala on Taboo "Outside View" · 2021-06-19T17:43:44.363Z · LW · GW

Eliezer also wrote an interesting comment on the EA Forum crosspost of this article, copying it here for convenience:

I worriedly predict that anyone who followed your advice here would just switch to describing whatever they're doing as "reference class forecasting" since this captures the key dynamic that makes describing what they're doing as "outside viewing" appealing: namely, they get to pick a choice of "reference class" whose samples yield the answer they want, claim that their point is in the reference class, and then claiming that what they're doing is what superforecasters do and what Philip Tetlock told them to do and super epistemically virtuous and anyone who argues with them gets all the burden of proof and is probably a bad person but we get to virtuously listen to them and then reject them for having used the "inside view".

My own take:  Rule One of invoking "the outside view" or "reference class forecasting" is that if a point is more dissimilar to examples in your choice of "reference class" than the examples in the "reference class" are dissimilar to each other, what you're doing is "analogy", not "outside viewing".

All those experimental results on people doing well by using the outside view are results on people drawing a new sample from the same bag as previous samples.  Not "arguably the same bag" or "well it's the same bag if you look at this way", really actually the same bag: how late you'll be getting Christmas presents this year, based on how late you were in previous years.  Superforecasters doing well by extrapolating are extrapolating a time-series over 20 years, which was a straight line over those 20 years, to another 5 years out along the same line with the same error bars, and then using that as the baseline for further adjustments with due epistemic humility about how sometimes straight lines just get interrupted some year.  Not by them picking a class of 5 "relevant" historical events that all had the same outcome, and arguing that some 6th historical event goes in the same class and will have that same outcome.

Comment by Kaj_Sotala on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-14T13:13:12.725Z · LW · GW

One Pfizer with no side-effects.

Comment by Kaj_Sotala on Against intelligence · 2021-06-09T16:30:29.664Z · LW · GW

But power seems to be very unrelated to intelligence.

On the level of individuals, perhaps. But one argument is that the more relevant question is that of species-level comparisons; if you need to understand people, to know them, befriend them, network with them, get them to like you, etc., then a human brain may be able to do it, but while a mouse or dog brain might manage to do some of it, it's not going to get to a position of real power that way.

Eliezer making an argument on why one should explicitly not think of "intelligence" as corresponding to conceptual intelligence, but rather to "the thing that makes humans different from other animals":

General intelligence is a between-species difference, a complex adaptation, and a human universal found in all known cultures. There may as yet be no academic consensus on intelligence, but there is no doubt about the existence, or the power, of the thing-to-be-explained. There is something about humans that let us set our footprints on the Moon.

But the word “intelligence” commonly evokes pictures of the starving professor with an IQ of 160 and the billionaire CEO with an IQ of merely 120. Indeed there are differences of individual ability apart from “book smarts” which contribute to relative success in the human world: enthusiasm, social skills, education, musical talent, rationality. Note that each factor I listed is cognitive. Social skills reside in the brain, not the liver. And jokes aside, you will not find many CEOs, nor yet professors of academia, who are chimpanzees. You will not find many acclaimed rationalists, nor artists, nor poets, nor leaders, nor engineers, nor skilled networkers, nor martial artists, nor musical composers who are mice. Intelligence is the foundation of human power, the strength that fuels our other arts.

The danger of confusing general intelligence with g-factor is that it leads to tremendously underestimating the potential impact of Artificial Intelligence. (This applies to underestimating potential good impacts, as well as potential bad impacts.) Even the phrase “transhuman AI” or “artificial superintelligence” may still evoke images of booksmarts-in-a-box: an AI that’s really good at cognitive tasks stereotypically associated with “intelligence,” like chess or abstract mathematics. But not superhumanly persuasive; or far better than humans at predicting and manipulating human social situations; or inhumanly clever in formulating long-term strategies. So instead of Einstein, should we think of, say, the 19th-century political and diplomatic genius Otto von Bismarck? But that’s only the mirror version of the error. The entire range from village idiot to Einstein, or from village idiot to Bismarck, fits into a small dot on the range from amoeba to human.

If the word “intelligence” evokes Einstein instead of humans, then it may sound sensible to say that intelligence is no match for a gun, as if guns had grown on trees. It may sound sensible to say that intelligence is no match for money, as if mice used money. Human beings didn’t start out with major assets in claws, teeth, armor, or any of the other advantages that were the daily currency of other species. If you had looked at humans from the perspective of the rest of the ecosphere, there was no hint that the squishy things would eventually clothe themselves in armored tanks. We invented the battleground on which we defeated lions and wolves. We did not match them claw for claw, tooth for tooth; we had our own ideas about what mattered. Such is the power of creativity.

Comment by Kaj_Sotala on Rationalists should meet Integral Theory · 2021-06-05T14:17:58.484Z · LW · GW

I'm not familiar with Integral Theory, but I read an earlier book by Wilber that arguably also qualifies as "philosophy of life". I found it to contain some stuff that felt very valuable and some stuff that felt like obvious nonsense.

It strikes me that he was approaching the topics in a way that might be considered somewhat analogous to a study of cognitive biases - in that even if you do actually have a good theoretical understanding of biases that other people can learn from, it doesn't necessarily mean that you're any good at being less biased yourself. Or possibly you have managed to debias yourself with regard to some biases, but you keep getting blindsided by some trickier ones, even if you understand them in theory.

This seems to me like a general issue with all these kinds of things, whether it's about cognitive bias or therapy or philosophy of life. You only ever see your mind from the inside, and simply knowing about how it works will (usually) not change how it actually works, so you can have a fantastic understanding of minds in general and manage to fix most of your issues and still fail to apply your skills to one gaping blindspot that's obvious to everyone around you. Or conversely, you can be a massive failure as a human being and have a million things you haven't addressed at all, but still be able to produce some valuable insights.

That said, I do agree that the more red flags there are around a person, the more cautious one should be - both in terms of epistemics (more risk of absorbing bad ideas) and due to general consequentialist reasons (if someone supports known abusers, then endorsing them may indirectly lend support to abuse). 

Comment by Kaj_Sotala on A non-mystical explanation of insight meditation and the three characteristics of existence: introduction and preamble · 2021-06-04T15:22:29.454Z · LW · GW

As someone who was very young at the time, I liked the idea of becoming "enlightened" and "letting go of my ego." I believed I could learn to use my time and energy for the benefit of other people and put away my 'selfish' desires to help myself, and even thought this was desirable. This backfired as I became a people-pleaser, and still find it hard to put my needs ahead of other people's to this day.

I can't put this fully at the feet of my lone and ill-advised forays into meditation, but it's only much later I learned the idea that in order to let go of something, you have to have it first. I don't think I had fully developed my ego at the point I started learning to "let it go" and healthy formation of identity is a crucial step to a happy life I think.

Great observation. I've experienced something similar (using meditative practices in an attempt to suppress my own needs and desires in a way that was ultimately detrimental). I also don't think it was really caused by meditation; rather it was an emotional wound (or a form of craving) masquerading as a noble intention. 

I don't recall hearing the "in order to let go of something, you have to have it first" line before, but I love it. You could say that I've been working to develop my ego recently, for a similar reason - wanting to get to a point where my needs are actually met rather than actively denying them.

Comment by Kaj_Sotala on Rationalists should meet Integral Theory · 2021-06-04T12:44:13.533Z · LW · GW

Agreed, I think this post would be much strengthened if it would include some kind of a summary of Integral Theory's main claims and some brief discussion of why Elo thinks they're correct.

Comment by Kaj_Sotala on TEAM: a dramatically improved form of therapy · 2021-06-03T17:05:27.568Z · LW · GW

If the therapist has to put down a probability on the patient having found the therapist empathic, the therapist will be faster at learning when they're perceived as empathic by their patients than if the therapist just sees the numbers.

I wonder how accurate these kinds of answers are going to be. At one point my self-improvement group was doing peer coaching sessions that involved giving your coach feedback at the end. I don't remember our exact questions, but questions about the coach's perceived empathy definitely sound like the kind of thing that could have been on the list.

I remember that when I'd been coached, I felt significantly averse to giving the person-who'd-just-done-their-best-to-help-me any critical feedback, especially on a trait such as empathy that people often interpret as reflecting on them as a person. I'd imagine that the status differential between a client and a therapist could easily make this worse, particularly in the case of clients who are specifically looking for help on something like poor self-esteem or excess people-pleasing. (Might not be a problem with patients who are there for being too disagreeable, though!)

Comment by Kaj_Sotala on TEAM: a dramatically improved form of therapy · 2021-06-03T16:50:15.315Z · LW · GW

Thanks, that's useful. I'd heard that some other reconsolidation fans had read Burns's new book and also highlighted the "what's good about this" aspect of it as CBT "also coming around" to the "positive purpose" idea. So when I thought I saw it in this post as well, I assumed that to be correct. Especially since that would have helped explain why TEAM is so effective.

Though interestingly this makes me somewhat more interested in TEAM, since it's obviously doing something different from what I already know, rather than just confirming my previous prejudices without adding new information. :-)

Comment by Kaj_Sotala on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-02T17:00:50.835Z · LW · GW

Oops, yeah. Edited.

Comment by Kaj_Sotala on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-02T15:56:21.142Z · LW · GW

In this post, I wish to share an opposing concern: that the EA and rationality communities have become systematically biased to ignore multi/multi dynamics, and power dynamics more generally.  

The EA and rationality communities tend to lean very strongly towards mistake rather than conflict theory. A topic that I've had in my mind for a while, but haven't gotten around to writing a full post about, is that both of them look like emotional defense strategies. 

It looks to me like Scott's post is pointing towards, not actually different theories, but one's underlying cognitive-emotional disposition or behavioral strategy towards outgroups. Do you go with the disposition towards empathizing and assuming that others are basically good people that you can reason with, or with the disposition towards banding together with your allies and defending against a potential threat?

And at least in the extremes, both of them look like they have elements of an emotional defense. Mistake doesn't want to deal with the issue that some people you just can't reason with no matter how good your intentions, so it ignores that and attempts to solve all problems by dialogue and reasoning. (Also, many Mistake Theorists are just bad at dealing with conflict in general.) Conflict doesn't want to deal with the issue that often people who hurt you have understandable reasons for doing so and that they are often hurting too, so it ignores that and attempts to solve all problems by conflict.

If this model is true, then it suggests that Mistake Theorists should also be systematically biased against the possibility of things like power dynamics being genuinely significant. If power dynamics are genuinely significant, then you might have to resolve things by conflict no matter how much you invest in dialogue and understanding, which is the exact scenario that Mistake is desperately trying to avoid.

Comment by Kaj_Sotala on TEAM: a dramatically improved form of therapy · 2021-06-01T17:01:06.651Z · LW · GW

Hmm, so do you mean that TEAM does not actually assume issues to necessarily have a positive function, and that the idea that they might is just one way of overcoming resistance?

Comment by Kaj_Sotala on Why don't long running conversations happen on LessWrong? · 2021-05-31T18:08:03.317Z · LW · GW

There are a few things that are tagged as "the great conversations on LessWrong" in my mind, and those are specifically ones that took the form of posts-as-responses. Two specific examples that I'm thinking of would be

  • Wei Dai's The Nature of Offense, which was a response to three earlier posts by Alicorn, orthonormal and Eliezer (posts which had in turn been responding to each other), and showed how each of them was a special case of a so-far unrecognized general principle of what offense is.
  • Morendil's and my Red Paperclip Theory of Status. This was a post that Morendil and I co-authored after I had proposed a definition of status in the comments of a post of Morendil's, which was in turn responding to a number of other posts (mine among them) about "what is status" on LW. I'm no doubt a little biased in considering this one of the great successes, but it felt pretty significant: to me it did for the concept of status what Wei Dai's post did for the concept of offense. It pulled together all the threads of the conflicting theories that had been proposed so far into an overall synthesis and definition that the main participants in the discussion (in this case me and Morendil; not everyone seems to have found the post's model equally useful) agreed had resolved it.

I would also want to be able to nominate Eliezer and Robin's FOOM debate, but while that one is certainly long and has them engaging each other, ultimately it didn't seem to bring their views substantially closer together - quite unlike the two other examples I mentioned.

Comment by Kaj_Sotala on TEAM: a dramatically improved form of therapy · 2021-05-31T14:43:48.919Z · LW · GW

This sounds similar to memory-reconsolidation-based therapies, which assume that any emotional issue you have exists because it actually serves some purpose - in fact, your "problem" represents a solution to some other problem that you have had before. By acknowledging the positive purpose behind the issue, you can find a way to keep the purpose while changing the strategy.

I haven't listened to any of the episodes, though, so I'd be curious to hear whether you think that's talking about the same thing or something subtly different?

Comment by Kaj_Sotala on A Review and Summary of the Landmark Forum · 2021-05-28T15:43:39.962Z · LW · GW

I'm not familiar with Landmark, but the description of how they deal with narratives reminds me of therapy and memory reconsolidation; much of this sounds a lot like making unconscious beliefs and interpretations explicit so that they can then be disproven.

According to Landmark, the answer is simple, you just do (“all this time you thought you were trapped inside, but the door wasn’t even locked”). They illustrate this with the story of monkeys being trapped by putting a banana in a cage just big enough for them to put their hands through. As it goes, when the monkey tries to grab the banana, it finds its hand trapped as the hole isn’t big enough to pull it out. The monkey could escape, but it’s unwilling to let go of the banana. However, we could also interpret them as operating under the theory that if the understanding and realisation is strong enough and lands deep enough then it creates a shift automatically.

Unlocking the Emotional Brain notes that while making unconscious narratives explicit and conscious isn't always enough to disprove them, there are many cases where it is, because once they are explicit it is easier for the brain to notice how they contradict other things that it also believes. That would be in line with this kind of a theory.

Comment by Kaj_Sotala on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2021-05-28T14:33:35.179Z · LW · GW

However, I don't think this is the whole explanation. The technological advantage of the conquistadors was not overwhelming.

With regard to the Americas at least, I just happened to read this article by a professional military historian, who characterizes the Native American military technology as being "thousands of years behind their Old World agrarian counterparts", which sounds like the advantage was actually rather overwhelming.

There is a massive amount of literature to explain what is sometimes called ‘the Great Divergence‘ (a term I am going to use here as valuable shorthand) between Europe and the rest of the world between 1500 and 1800. Of all of this, most readers are likely only to be familiar with one work, J. Diamond’s Guns, Germs and Steel (1997), which is unfortunate because Diamond’s model of geographic determinism is actually not terribly well regarded in the debate (although, to be fair, it is still better than some of the truly trash nationalistic nonsense that gets produced on this topic). Diamond asks the Great Divergence question with perhaps the least interesting framing: “Why Europe and not the New World?” and so we might as well get that question out of the way first.

I am well aware that when EU4 was released, this particular question – and generally the relative power of New World societies as compared to Old World societies – was a point of ferocious debate among fans (particularly on Paradox’s own forums). What makes this actually a less central question (though still an important one) is that the answer is wildly overdetermined. That is to say, any of these causes – the germs, the steel (though less the guns; Diamond’s attention is on the wrong developments there), but also horses, ocean-going ships, and dense, cohesive, disciplined military formations would have been enough in isolation to give almost any complex agrarian Old-World society military advantages which were likely to prove overwhelming in the event. The ‘killer technologies’ that made the conquest of the New World possible were (apart from the ships) old technologies in much of Afroeurasia; a Roman legion or a Han Chinese army of some fifteen centuries earlier would have had many of the same advantages had they been able to surmount the logistical problem of actually getting there. In the face of the vast shear in military technology (though often not in other technologies) which put Native American armies thousands of years behind their Old World agrarian counterparts, it is hard not to conclude that whatever Afroeurasian society was the first to resolve the logistical barriers to putting an army in the New World was also very likely to conquer it.

(On these points, see J.F. Guilmartin, “The Cutting Edge: An Analysis of the Spanish Invasion and Overthrow of the Inca Empire, 1532-1539,” in Transatlantic Encounters: European and Andeans in the Sixteenth Century, eds. K. J. Andrien and R. Adorno (1991) and W.E. Lee, “The Military Revolution of Native North America: Firearms, Forts and Politics” in Empires and Indigenes: Intercultural Alliance, Imperial Expansion and Warfare in the Early Modern World, eds. W.E. Lee (2011). Both provide a good sense of the scale of the ‘technological shear’ between old world and new world armies and in particular that the technologies which were transformative were often not new things like guns, but very old things, like pikes, horses and metal axes.)

With regard to the Indian Ocean, he writes:

the Portuguese cartaz-system (c. 1500-c. 1700) [was] the main way that the Portuguese and later European powers wrested control over trade in the Indian Ocean; it only worked because Portuguese warships were functionally unbeatable by anything else afloat in the region due to differences in local styles of shipbuilding).