Posts

Are index funds still a good investment? 2020-12-02T21:31:40.413Z
Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare 2020-11-24T10:36:40.843Z
Retrospective: November 10-day virtual meditation retreat 2020-11-23T15:00:07.011Z
Memory reconsolidation for self-affection 2020-10-27T10:10:04.884Z
Group debugging guidelines & thoughts 2020-10-19T11:02:32.883Z
Things are allowed to be good and bad at the same time 2020-10-17T08:00:06.742Z
The Felt Sense: What, Why and How 2020-10-05T15:57:50.545Z
Public transmit metta 2020-10-04T11:40:03.879Z
Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems 2020-10-02T11:50:05.327Z
AI Advantages [Gems from the Wiki] 2020-09-22T22:44:36.671Z
The Haters Gonna Hate Fallacy 2020-09-22T12:20:06.050Z
(Humor) AI Alignment Critical Failure Table 2020-08-31T19:51:18.266Z
nostalgebraist: Recursive Goodhart's Law 2020-08-26T11:07:46.690Z
Collection of GPT-3 results 2020-07-18T20:04:50.027Z
Are there good ways to find expert reviews of popular science books? 2020-06-09T14:54:23.102Z
Three characteristics: impermanence 2020-06-05T07:48:02.098Z
On the construction of the self 2020-05-29T13:04:30.071Z
From self to craving (three characteristics series) 2020-05-22T12:16:42.697Z
Craving, suffering, and predictive processing (three characteristics series) 2020-05-15T13:21:50.666Z
A non-mystical explanation of "no-self" (three characteristics series) 2020-05-08T10:37:06.591Z
A non-mystical explanation of insight meditation and the three characteristics of existence: introduction and preamble 2020-05-05T19:09:44.484Z
Stanford Encyclopedia of Philosophy on AI ethics and superintelligence 2020-05-02T07:35:36.997Z
Healing vs. exercise analogies for emotional work 2020-01-27T19:10:01.477Z
The two-layer model of human values, and problems with synthesizing preferences 2020-01-24T15:17:33.638Z
Under what circumstances is "don't look at existing research" good advice? 2019-12-13T13:59:52.889Z
A mechanistic model of meditation 2019-11-06T21:37:03.819Z
On Internal Family Systems and multi-agent minds: a reply to PJ Eby 2019-10-29T14:56:19.590Z
Book summary: Unlocking the Emotional Brain 2019-10-08T19:11:23.578Z
System 2 as working-memory augmented System 1 reasoning 2019-09-25T08:39:08.011Z
Subagents, trauma and rationality 2019-08-14T13:14:46.838Z
Subagents, neural Turing machines, thought selection, and blindspots 2019-08-06T21:15:24.400Z
On pointless waiting 2019-06-10T08:58:56.018Z
Integrating disagreeing subagents 2019-05-14T14:06:55.632Z
Subagents, akrasia, and coherence in humans 2019-03-25T14:24:18.095Z
Subagents, introspective awareness, and blending 2019-03-02T12:53:47.282Z
Building up to an Internal Family Systems model 2019-01-26T12:25:11.162Z
Book Summary: Consciousness and the Brain 2019-01-16T14:43:59.202Z
Sequence introduction: non-agent and multiagent models of mind 2019-01-07T14:12:30.297Z
18-month follow-up on my self-concept work 2018-12-18T17:40:03.941Z
Tentatively considering emotional stories (IFS and “getting into Self”) 2018-11-30T07:40:02.710Z
Incorrect hypotheses point to correct observations 2018-11-20T21:10:02.867Z
Mark Eichenlaub: How to develop scientific intuition 2018-10-23T13:30:03.252Z
On insecurity as a friend 2018-10-09T18:30:03.782Z
Tradition is Smarter Than You Are 2018-09-19T17:54:32.519Z
nostalgebraist - bayes: a kinda-sorta masterpost 2018-09-04T11:08:44.170Z
New paper: Long-Term Trajectories of Human Civilization 2018-08-12T09:10:01.962Z
Finland Museum Tour 1/??: Tampere Art Museum 2018-08-03T15:00:05.749Z
What are your plans for the evening of the apocalypse? 2018-08-02T08:30:05.174Z
Anti-tribalism and positive mental health as high-value cause areas 2018-08-02T08:30:04.961Z
Fixing science via a basic income 2018-08-02T08:30:04.380Z

Comments

Comment by kaj_sotala on How can I bet on short timelines? · 2020-12-03T11:30:32.161Z · LW · GW

Don't know of a venture capital fund like that, but there's apparently a "Rise of the Robots" ETF.

Comment by kaj_sotala on The LessWrong 2019 Review · 2020-12-02T18:28:45.387Z · LW · GW

This page has the results.

Comment by kaj_sotala on The Schelling Choice is "Rabbit", not "Stag" · 2020-12-02T13:36:31.960Z · LW · GW

This helped make sense of situations which had frustrated me in the past.

Comment by kaj_sotala on The LessWrong 2019 Review · 2020-12-02T13:27:23.998Z · LW · GW

Nominated the eight 2019 posts that have felt the most long-term valuable to me.

(Posting this on the principle of "mentioning that you've nominated posts encourages others to do it as well".)

Comment by kaj_sotala on mAIry's room: AI reasoning to solve philosophical problems · 2020-12-02T13:24:24.718Z · LW · GW

Nominated for the reasons given in my curation notice.

Comment by kaj_sotala on Propagating Facts into Aesthetics · 2020-12-02T13:23:51.574Z · LW · GW

Aesthetics being partially fact-based seems like an important part of the cluster of ideas about how minds and values work that things like Focusing, IFS and Coherence Therapy are all about.

Comment by kaj_sotala on You Have About Five Words · 2020-12-02T13:08:57.172Z · LW · GW

I've found this valuable to keep in mind.

Comment by kaj_sotala on Why Subagents? · 2020-12-02T13:03:41.421Z · LW · GW

The "many decisions can be thought of as a committee requiring unanimous agreement" model felt intuitively right to me, and afterwards I've observed myself behaving in ways which seem compatible with it, and thought of this post.

Comment by kaj_sotala on AlphaStar: Impressive for RL progress, not for AGI progress · 2020-12-02T12:57:49.560Z · LW · GW

I had originally been quite impressed with AlphaStar, but this post showed me its actual limitations. It also gave me a good concrete example of what a lack of causal reasoning means in practice, and afterwards, when I've seen posts about impressive-looking AI systems, I've asked myself "does this system fail to exhibit causal reasoning in the same way that AlphaStar did?".

Comment by kaj_sotala on Rest Days vs Recovery Days · 2020-12-02T12:54:16.814Z · LW · GW

I've had the recovery vs. rest day distinction in my head ever since reading this post. Sometimes when I take a day off and then end up feeling not-very-rested at the end of it, rather than being frustrated I remember this post and think "well, it's not that I failed to rest, it's that this was a recovery day and the next one will be more restful", and this has often been true.

It has also helped me consciously focus more on rest rather than recovery activities, on days when this has been feasible.

Comment by kaj_sotala on The Curse Of The Counterfactual · 2020-12-02T12:49:49.628Z · LW · GW

Nominated for similar reasons as the ones in my curation notice. I think this was the most long-term useful LW post that I read in 2019.

Comment by kaj_sotala on The LessWrong 2018 Book is Available for Pre-order · 2020-12-02T12:29:09.477Z · LW · GW

Fixed, thanks.

Comment by kaj_sotala on Book review: WEIRDest People · 2020-12-01T13:24:25.149Z · LW · GW

That is, the tributary system was a lot like "everyone aware of China swore fealty to China."

It sounds like in practice they didn't?

The "tribute" entailed a foreign court sending envoys and exotic products to the Chinese emperor. The emperor then gave the envoys gifts in return and permitted them to trade in China. Presenting tribute involved theatrical subordination but usually not political subordination. The political sacrifice of participating actors was simply "symbolic obeisance".[8] Actors within the "tribute system" were virtually autonomous and carried out their own agendas despite sending tribute; as was the case with Japan, Korea, Ryukyu, and Vietnam.[9] Chinese influence on tributary states was almost always non-interventionist in nature and tributary states "normally could expect no military assistance from Chinese armies should they be invaded". [...]

The gifts doled out by the Ming emperor and the trade permits granted were of greater value than the tribute itself, so tribute states sent as many tribute missions as they could. In 1372, the Hongwu Emperor restricted tribute missions from Joseon and six other countries to just one every three years. The Ryukyu Kingdom was not included in this list, and sent 57 tribute missions from 1372 to 1398, an average of two tribute missions per year. Since geographical density and proximity was not an issue, regions with multiple kings such as the Sultanate of Sulu benefited immensely from this exchange.[7] This also caused odd situations such as the Turpan Khanate simultaneously raiding Ming territory and offering tribute at the same time because they were eager to obtain the emperor's gifts, which were given in the hope that it might stop the raiding.

Comment by kaj_sotala on Measure's Shortform · 2020-11-27T19:13:13.724Z · LW · GW

Could it be a form of hyperfocus?

Comment by kaj_sotala on Retrospective: November 10-day virtual meditation retreat · 2020-11-25T11:47:44.855Z · LW · GW

Welcome!

At the beginning of the retreat, the teachers mentioned that people ask them what to expect during a retreat, and that their answer is something along the lines of "well, basically anything can happen; every retreat is different". So maybe one result of extended practice is accepting that there isn't even such a thing as things going as expected, because you learn not to have any expectations in the first place - and that that's part of the fun, something new each time. :)

Comment by kaj_sotala on Manifesto of the Silent Minority · 2020-11-24T16:43:40.603Z · LW · GW

Well, Eliezer did suggest that a world created by an aligned AGI might be really weird.

Comment by kaj_sotala on Retrospective: November 10-day virtual meditation retreat · 2020-11-24T09:19:53.988Z · LW · GW

A little hard to compare, since the equivalent amount of sitting practice spread over a longer duration would also include much more off-the-cushion time, so more opportunities for things to process or come up via environmental triggers. But I do feel like retreats have caused insights that wouldn't have come up otherwise. Especially since if something caused a strong aversion to practice in my daily life, I would just practice less, whereas on retreat there isn't really any other option than to try to work with it.

Comment by kaj_sotala on Retrospective: November 10-day virtual meditation retreat · 2020-11-24T09:13:50.645Z · LW · GW

Based on what the teachers said in the dharma talks, they expected most people to experience such a shift, and apparently most did. I didn't get that, though - my amount of surrender kept going up and down. 

I think I was most settled on the first half of the sixth or seventh day, where I had that experience of not making any decisions at all (and having a hard time remembering what it would even mean to make a decision) and just allowing all intentions to relax themselves. That lasted until the successive relaxation brought up glimpses of some fear, which then turned into powerful energy sensations in the forehead and strong and agential-feeling intentions to do something about those, and then I never had the same level of surrender again.

Comment by kaj_sotala on Public transmit metta · 2020-11-22T21:01:58.355Z · LW · GW

TWIM has been a pretty popular metta variant recently. I like it as well, even though I keep forgetting what some of the Rs are.

The linked Ron Crouch article also has good tips.

Comment by kaj_sotala on The ethics of AI for the Routledge Encyclopedia of Philosophy · 2020-11-22T17:12:59.205Z · LW · GW

You probably know of these already, but just in case: lukeprog wrote a couple of articles on the history of AI risk thought [1, 2] going back to 1863. There's also the recent AI ethics article in the Stanford Encyclopedia of Philosophy.

I'd also like to imagine that my paper on superintelligence and astronomical suffering says something that someone might consider important, but that is of course a subjective question. :-)

Comment by kaj_sotala on AGI safety from first principles: Introduction · 2020-11-10T14:41:55.698Z · LW · GW

because who's talking about medium-size risks from AGI?

Well, I have talked about them... :-)

The capability claim is often formulated as the possibility of an AI achieving a decisive strategic advantage (DSA). While the notion of a DSA has been implicit in many previous works, the concept was first explicitly defined by Bostrom (2014, p. 78) as “a level of technological and other advantages sufficient to enable [an AI] to achieve complete world domination.”

However, assuming that an AI will achieve a DSA seems like an unnecessarily strong form of the capability claim, as an AI could cause a catastrophe regardless. For instance, consider a scenario where an AI launches an attack calculated to destroy human civilization. If the AI was successful in destroying humanity or large parts of it, but the AI itself was also destroyed in the process, this would not count as a DSA as originally defined. Yet, it seems hard to deny that this outcome should nonetheless count as a catastrophe.

Because of this, this chapter focuses on situations where an AI achieves (at least) a major strategic advantage (MSA), which we will define as “a level of technological and other advantages sufficient to pose a catastrophic risk to human society.” A catastrophic risk is one that might inflict serious damage to human well-being on a global scale and cause 10 million or more fatalities (Bostrom & Ćirković 2008).

Comment by kaj_sotala on Stupid Questions October 2020 · 2020-11-08T17:51:44.906Z · LW · GW

One would have to compete with the existing parties, which in e.g. the US is basically hopeless. There are countries where smaller parties have more of a chance, but even there it's a huge amount of work, and LW types are generally not the kind who would enjoy or excel at politics. Also LW users have a variety of political positions rather than having any unified ideology.

If one does want to have political influence, supporting or joining an existing party lets you leverage their existing resources while allowing you to specialize in the aspects of politics that you are good at, without needing to recreate everything from scratch.

Comment by kaj_sotala on Nuclear war is unlikely to cause human extinction · 2020-11-08T08:47:12.779Z · LW · GW

Re: neutral countries not getting targeted; I've heard it claimed that some nuclear targeting plans involved hitting even neutral countries, on the assumption that anyone who survived a nuclear war unscathed would become the next major power, so better to ensure that everyone goes down. I have no idea whether this claim has a credible source, though; do we know anything about whether this might be true?

Comment by kaj_sotala on Open & Welcome Thread – November 2020 · 2020-11-03T21:46:01.473Z · LW · GW

The Pope asks people to pray for AI safety this month:

Each year, the Holy Father asks for our prayers for a specific intention each month. You are invited to answer the Holy Father's request and to join with many people worldwide in praying for this intention each month. [...]

November

Artificial Intelligence

We pray that the progress of robotics and artificial intelligence may always serve humankind.

Comment by kaj_sotala on Memory reconsolidation for self-affection · 2020-10-28T20:25:33.356Z · LW · GW

I would say that "being abrasive" may be something wrong that you did, but it's not something fundamentally wrong with you. This is a little tricky, but I'll try.

The distinction is one of actions being wrong versus people being wrong. In either case, you may feel bad because you were abrasive, but the "object that the badness is associated with" is different.

If something that you do is wrong, then it's possible for you to change that in the future. You were abrasive, but you recognize that it was wrong to be abrasive, and as a result you may take steps to be less abrasive in the future. Once you change your behavior, you can stop feeling bad, since the feeling-bad has achieved its purpose: causing you to act differently.

But if what you are is wrong, then the feeling of badness is associated with what feels something like "your fundamental essence". It's not just that the abrasiveness was bad, it's that the abrasiveness was a signal of something that you are, which remains unchanged even if you manage to eliminate the abrasive behavior entirely. So even if you do succeed in changing your behavior and the people that you hurt forgive you, you may continue to feel bad over once having behaved that way. In which case the feeling of badness isn't serving a useful purpose anymore; you are just feeling generally bad for no reason.

Also, "realizing that there's nothing fundamentally wrong with me" doesn't directly eliminate guilt. As I understand it, guilt is a feeling that you've wronged someone and need to make reparations. That's about actions rather than your character. What's eliminated is something like shame, which I understand to be the feeling of there being something wrong with you. Interestingly, I feel that eliminating shame may make the guilt easier to deal with productively: since there's often a concrete approach for dealing with guilt (apologize and make reparations until the other person forgives you), you can focus on just making that happen. But because shame is a feeling of fundamental badness that can't really be dealt with, the only possible reaction is to try to suppress it or avoid it. Which means that if something that you did causes you both guilt and shame, the shame may cause you to flinch away from thinking the whole thing, and then you can't do anything that would help with the guilt.

On a functional level, both my intuition and my cursory look at relevant emotion research suggest that one of the functions of shame is related to a fear of moral condemnation. Suppose that you say something abrasive, and you also live in a society where abrasive people are generally looked down upon, and where it's hard to be forgiven for abrasiveness. Shame, then, is something like rolled-up metacognition: it acts as a judgment of "if other people found out that I have been abrasive, they would judge me harshly" and motivates you to do things like hide or deny your past abrasiveness, or at least punish yourself for it before others do.

But subjectively, shame usually doesn't just feel like "I need to hide this so that people won't judge me", it feels like there's a fact of the matter saying "I am bad for having done this thing". "I am bad" is the brain's social-punishment machinery acting on the person themselves. Even if it is genuinely the case that you have done something that you would be better off hiding from others, it's better to do that without your social punishment machinery kicking in. Because as the linked article covers, your punishment machinery doesn't actually care about finding solutions to problems, it just cares about punishing you:

You can want to end death, disease, and suffering, without rejecting the reality of death, disease and suffering.

Moral judgment and preferences are two entirely different and separate things. And when moral judgment is involved, trade-offs become taboo.

When Ingvar was procrastinating, and felt he should do his work faster, his brain spent absolutely zero time considering how he might get it done at all, let alone how he might do it faster.

Why? Because to the moral mind, the reasons he is not getting it done do not matter. Only punishing the evildoer matters, so even if someone suggested ways he could make things easier, his moral brain rejects them as irrelevant to the real problem, which is clearly his moral failing. Talking or thinking about problems or solutions isn’t really “working”, therefore it’s further evidence of his failing. And making the work easier would be lessening his rightful punishment!

So when moral judgment is involved, actually reasoning about things feels wrong. Because reasoning might lead to a compromise with the Great Evil: a lessening of punishment or a toleration of non-punishers.

This is only an illusion, albeit a very persistent one.

The truth is that, when you switch off moral judgment, preference remains. Most of us, given a choice, actually prefer that good things happen, that we actually act in ways that are good and kind and righteous, that are not about fighting Evil, but simply making more of whatever we actually want to see in the world.

And ironically, we are more motivated to actually produce these results, when we do so from preference than from outrage. We can be creative, we can plan, or we can even compromise and adjust our plans to work with reality as it is, rather than as we would prefer it to be.

After all, when we think that something is how the world should be, it gives us no real motivation to change it. We are motivated instead to protest and punish the state of the world, or to “speak out” against those we believe responsible... and then feel like we just accomplished something by doing so!

And so we end up just like Ingvar, surfing the net and punishing himself, but never actually working... nor even choosing not to work and to do something more rewarding instead.

Comment by kaj_sotala on Memory reconsolidation for self-affection · 2020-10-27T21:17:22.447Z · LW · GW

It can definitely be very difficult, yeah.

Comment by kaj_sotala on Memory reconsolidation for self-affection · 2020-10-27T16:07:24.059Z · LW · GW

Yeah, I noticed the resemblance too. For some reason I never found PTR intuitive to work with, though, despite trying it several times.

I think one difference is that most versions of timeline reimprinting that I've heard of imply that you're supposed to go through your entire life in chronological order, but I can't just tell my brain to "retrieve all of my memories from age 8", so I mostly end up with some memories that feel most prototypically associated with specific ages but aren't necessarily very relevant for that core state. Whereas here I don't have any particular expectation of getting it all in one go or in a linear order; I just sit down and work with whatever memories come up on that particular sit.

Also I think I never really properly got how the whole "imagine what your parent would have been like with this core state" bit was supposed to make things feel different.

Comment by kaj_sotala on Memory reconsolidation for self-affection · 2020-10-27T12:51:10.153Z · LW · GW

Hmm, interestingly I don't feel like any physically painful experiences have given me significant trauma, even though I've had broken bones a few times etc.

I think this is because I've generally felt socially supported during those times, and confident that the experiences will eventually pass: my impression is that a feeling of helplessness plays a big role in whether a physically painful experience gets interpreted as traumatic or not. So in principle giving your past self affection and a feeling of being safe and supported could also help with that. At least that would be my guess based on my limited experience.

Comment by kaj_sotala on Learning is (Asymptotically) Computationally Inefficient, Choose Your Exponents Wisely · 2020-10-22T09:50:12.572Z · LW · GW

My actual claim is that learning becomes not just harder linearly, or quadratically (e.g. when you have to spend, say, an extra hour on learning the same amount of new material compared to what you used to need), but exponentially (e.g. when you have to spend, say, twice as much time/effort as before on learning the same amount of new material).

This is an interesting claim, but I'm not sure if it matches my own subjective experience. Though I also haven't dug deeply into math, so maybe it's more true there - it seems to me like this could vary by field, where some fields are more "broad" while others are "deep".

And looking around, e.g. this page suggests that at least four different kinds of formulas are used for modeling learning speed, apparently depending on the domain. The first one is the "diminishing returns" curve, which sounds similar to your model:

From the source: "This describes a situation where the task may be easy to learn and progression of learning is initially fast and rapid."

But it also includes graphs such as the s-curve (where initial progress is slow but then you have a breakthrough that lets you pick up more faster, until you reach a plateau) and the complex curve (with several plateaus and breakthroughs).

From the source: "This model is the most commonly cited learning curve and is known as the “S-curve” model.  It measures an individual who is new to a task. The bottom of the curve indicates slow learning as the learner works to master the skills required and takes more time to do so. The latter half of the curve indicates that the learner now takes less time to complete the task as they have become proficient in the skills required. Often the end of the curve begins to level off, indicating a plateau or new challenges."

 

From the source: "This model represents a more complex pattern of learning and reflects more extensive tracking."

Why do I see this apparent phenomenon of inefficient learning as important? One example I see somewhat often is someone saying "I believe in the AI alignment research and I want to contribute directly, and while I am not that great at math, I can put in the effort and get good." Sadly, that is not the case. Because learning is asymptotically inefficient, you will run out of time, money and patience long before you get to the level where you can understand, let alone do, the relevant research: it's not a matter of learning 10 times harder, it's a matter of having to take longer than the age of the universe, because your personal exponent eventually gets that much steeper than that of someone with a natural aptitude for math.

This claim seems to run counter to the occasionally-encountered claim that people who are too talented at math may actually be outperformed by less talented students at one point, as the people who were too talented hit their wall later and haven't picked up the patience and skills for dealing with it, whereas the less talented ones will be used to slow but steady progress by then. E.g. Terry Tao mentions it:

Of course, even if one dismisses the notion of genius, it is still the case that at any given point in time, some mathematicians are faster, more experienced, more knowledgeable, more efficient, more careful, or more creative than others. This does not imply, though, that only the “best” mathematicians should do mathematics; this is the common error of mistaking absolute advantage for comparative advantage. The number of interesting mathematical research areas and problems to work on is vast – far more than can be covered in detail just by the “best” mathematicians, and sometimes the set of tools or ideas that you have will find something that other good mathematicians have overlooked, especially given that even the greatest mathematicians still have weaknesses in some aspects of mathematical research. As long as you have education, interest, and a reasonable amount of talent, there will be some part of mathematics where you can make a solid and useful contribution. It might not be the most glamorous part of mathematics, but actually this tends to be a healthy thing; in many cases the mundane nuts-and-bolts of a subject turn out to actually be more important than any fancy applications. Also, it is necessary to “cut one’s teeth” on the non-glamorous parts of a field before one really has any chance at all to tackle the famous problems in the area; take a look at the early publications of any of today’s great mathematicians to see what I mean by this.

In some cases, an abundance of raw talent may end up (somewhat perversely) to actually be harmful for one’s long-term mathematical development; if solutions to problems come too easily, for instance, one may not put as much energy into working hard, asking dumb questions, or increasing one’s range, and thus may eventually cause one’s skills to stagnate. Also, if one is accustomed to easy success, one may not develop the patience necessary to deal with truly difficult problems (see also this talk by Peter Norvig for an analogous phenomenon in software engineering). Talent is important, of course; but how one develops and nurtures it is even more so.

Comment by kaj_sotala on Group debugging guidelines & thoughts · 2020-10-20T08:07:01.729Z · LW · GW

Thank you!

Comment by kaj_sotala on What are some beautiful, rationalist artworks? · 2020-10-19T13:11:27.271Z · LW · GW

Neat! This page has more works by the same artist.

Comment by kaj_sotala on What posts do you want written? · 2020-10-19T11:03:41.279Z · LW · GW

In response to this request, I wrote something here.

Comment by kaj_sotala on What are some beautiful, rationalist artworks? · 2020-10-18T18:32:55.212Z · LW · GW

They certainly apply, but the formulation of the instrumental convergence thesis is very general, e.g. as stated in Bostrom's paper:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by many intelligent agents.

That only states that those instrumental values are likely to be pursued by many agents to some extent, depending on how useful they are for fulfilling the ultimate values of the agents. But there's nothing to say that it would be particularly useful for the goals of most humans to pursue them to the point of e.g. advancing space colonization.

Comment by kaj_sotala on Things are allowed to be good and bad at the same time · 2020-10-18T18:17:45.503Z · LW · GW

I was pointed to this great article that makes the same point with several additional examples, e.g.:

If you take suffering seriously (farmed chickens, children in poor countries, etc), you're in a lot of trouble—because there's a lot of suffering, and suffering is very important. So you should drop whatever you're doing, and start doing something about suffering. Or at the very least, you should donate money.

Same if you take politics seriously. Same if you take many other things seriously.

The easy solution is to say: "Those aren't that important". I've been doing that for years. "Actually, I don't care about chickens, or any other animals".

With synthesis, I have arrived at a much better solution:

Things are important
and I won't work on them
and it doesn't make me a bad person

This means that I can care about chickens now—because caring about chickens, or poor children, or anything, no longer compels me to start doing something to help them. This has amazing long-term implications:

  • I am more likely to help chickens in the future, because this is easier when I care;
  • I am more likely to spend time helping whoever I want, become good at it, level up at various skills like "execution", and if I decide to help chickens in the future, I will be more efficient at that.

Comment by kaj_sotala on Things are allowed to be good and bad at the same time · 2020-10-18T12:54:01.576Z · LW · GW

If you now had to make a decision on whether to take the job, how would you use this electrifying zap to help you make the decision?

My current feeling is that I'd probably take it. (The job example was fictional, as the actual cases where I've used this have been more personal in nature, but if I translate your question to those contexts then "I'd take it" is what I would say if I translated the answer back.)

Comment by kaj_sotala on What are some beautiful, rationalist artworks? · 2020-10-18T12:41:07.855Z · LW · GW

While it's technically possible to have a preference that doesn't value things that can be made out of galaxies, it would be shocking if there is a statistically significant number of humans whose correct idealization has that property.

I have pretty broad uncertainty on whether "people's correct idealization" is a useful concept in this kind of a context, and assuming that it is, what those idealizations would value - seems to me like they might incorporate a fair amount of path dependence, with different equally correct idealizations arriving at completely different ultimate outcomes.

which makes habryka's appeal to values relevant, where it would be a much weaker argument if we were only discussing aesthetic preference.

I tend to think that (like identities) aesthetics are something like cached judgements which combine values and strategies for achieving those values.

Comment by kaj_sotala on What are some beautiful, rationalist artworks? · 2020-10-18T09:51:38.110Z · LW · GW

I hesitated a little on whether to post this, given that it has been pointed out that the curvature of a ringworld wouldn't actually be that obvious from the inside, so posting a picture that depicts something physically impossible is in tension with the spirit of rationality.

Still, after almost posting this, then deleting it, then feeling like I wanted to post it anyway, I decided to just do it, as I feel that it captures a combination of joy and love of life co-existing with, and made possible by, science and rationality.

Comment by kaj_sotala on What are some beautiful, rationalist artworks? · 2020-10-18T09:45:58.249Z · LW · GW
"Two girls sightseeing on a ringworld", by /u/Von_Grechii
Comment by kaj_sotala on Philosophy of Therapy · 2020-10-18T09:39:16.856Z · LW · GW

Curated.

I have generally been of the view that therapy is important for rationality [1, 2], and this articulation of the goal of therapy as being about "getting people unstuck" feels like a framing that is particularly compatible with the rationalist project: after all, if "rationality is about winning", then getting unstuck and solving your previously-unsolvable problems is quite necessary for winning!

I generally liked getting a historical look at how therapy has evolved and what might then be called the "approaches for getting unstuck", and particularly appreciated getting it from someone who's actually a trained and working therapist - a profession I'd like to see represented more on LW. Though the post could have gone into more detail on it, the four philosophies suggest four different ways of trying to solve your own problems, which I expect to be helpful for e.g. people looking for a good therapist or for ideas on how to solve their issues.

Comment by kaj_sotala on What are some beautiful, rationalist artworks? · 2020-10-18T09:36:45.885Z · LW · GW

I guess the key word here might be "the Art of Rationality, as practiced on LessWrong". I do somewhat resonate with what you describe, but that feels more associated with a specific set of values that's predominant on LW due to a founder effect, rather than an integral part of rationality. So someone could still be rational in the sense that LW conceives rationality, without sharing the values implied by the concept of a cosmic endowment.

(That said, I'm cool with this thread being about that particular aesthetic, rather than rigorously just the art of rationality.)

Comment by kaj_sotala on What are some beautiful, rationalist artworks? · 2020-10-17T18:12:33.030Z · LW · GW
  • Pictures should somehow relate to the Art of Rationality, as practiced on LessWrong.

Allowed: a breathtaking shot of a SpaceX launch

Not that I would have anything against nice space-exploration-themed imagery, but what makes that particularly connected to the art of rationality?

(I really like this post in general though, strong-upvoted.)

Comment by kaj_sotala on Things are allowed to be good and bad at the same time · 2020-10-17T16:59:16.289Z · LW · GW

Yes.

Comment by kaj_sotala on Things are allowed to be good and bad at the same time · 2020-10-17T16:02:41.139Z · LW · GW

That would make sense as an alternative hypothesis, but I'm not sure how I'd test it. 

Comment by kaj_sotala on Have the lockdowns been worth it? · 2020-10-17T12:31:31.456Z · LW · GW

This seems to be only counting the direct QALYs lost from deaths, but not the grief and general disruption that the dead people's loved ones suffer? Nor the lowered quality of life for people who survive but suffer long-term damage.

Comment by kaj_sotala on How do I get rid of the ungrounded assumption that evidence exists? · 2020-10-15T08:50:08.388Z · LW · GW

I don't think you can, at least not if you want to have a worldview in the first place. Any system of reasoning needs to have some axioms.

See also Where Recursive Justification Hits Bottom for an argument for why this isn't a problem.

Comment by kaj_sotala on Has Eliezer ever retracted his statements about weight loss? · 2020-10-15T08:16:41.266Z · LW · GW

I'm similar. There have been one or two occasions in my life when I did feel like I was starting to put on weight in a way that felt uncomfortable. But then I just stopped doing the thing that was causing it, and apart from that I've never needed to think about losing weight.

Comment by kaj_sotala on The Felt Sense: What, Why and How · 2020-10-13T20:56:26.585Z · LW · GW

<3 wow. Happy to hear that you got it, I hope the felt sense of your inbox gets better eventually. :)

Comment by kaj_sotala on Philosophy of Therapy · 2020-10-13T16:05:39.038Z · LW · GW

"Chess Therapy" on your image made me go "what" so I looked it up

Chess therapy is a form of psychotherapy that attempts to use chess games between the therapist and client or clients to form stronger connections between them towards a goal of confirmatory or alternate diagnosis and consequently, better healing. [...] 

In psychoanalysis, chess games are wish fulfillment, and an important part of this wish fulfillment is the result of repressed desires—desires that can scare a person so much that their games may turn into a series of defeats. Chess games can be divided into wishful games, anxiety games, and punitive games.

I was about to say something like "okay, this is starting to sound silly even to me", but then I remembered that a linkpost to an article about self-sabotage in Magic: the Gathering has been called one of the most valuable things on LW, so I guess it's not that silly after all.

Comment by kaj_sotala on Philosophy of Therapy · 2020-10-13T15:47:18.517Z · LW · GW

3. Given 1 and 2, the fact that therapy works for some reason, and the fact that different types of therapeutic theories contradict each other, therapy must work not only because it improves the patient's map of the territory, but also by another mechanism.

It felt to me like the overview of the therapeutic philosophies suggested a partial answer to this one: part of why different therapies contradict each other is that they describe different parts of the territory / have different mechanisms of action. E.g. if behaviorist therapy changes a person's conditioning and systemic therapy looks at the social system they are in, then there doesn't need to be a conflict: a person has their own individual conditioning, and that conditioning is also affected by the signals that they get from their social system. (Both are describing the same territory but emphasizing different aspects / levels of it, kind of analogous to physics and chemistry.)

Comment by kaj_sotala on Philosophy of Therapy · 2020-10-13T15:44:31.216Z · LW · GW

What would you say would be one-sentence (or one paragraph) descriptions of the "main principle" behind each philosophy that you've covered?

Extracting what I got from your descriptions, it might be something like:

  • Psychoanalytic: Discuss your issues in relation to your past.
  • Behaviorist: Apply reinforcement to increase positive behaviors and punishment to decrease negative behaviors.
  • Existential: Come to better understand your current experience and needs.
  • Systemic: Understand and change how you are affected by the patterns of the system that you are in.