Posts

Rationalist Lent is over 2018-03-30T05:57:03.117Z
The Math Learning Experiment 2018-03-21T21:59:04.682Z
Deciphering China's AI Dream 2018-03-18T03:26:13.471Z
Are you the rider or the elephant? 2018-02-21T07:25:04.371Z
Rationalist Lent 2018-02-13T23:55:29.713Z
Paper Trauma 2018-01-31T22:05:43.859Z
CFAR workshop with new instructors in Seattle, 6/7-6/11 2017-05-20T00:18:22.109Z
In memory of Thomas Schelling 2016-12-13T22:17:51.257Z
Against utility functions 2014-06-19T05:56:29.877Z
What resources have increasing marginal utility? 2014-06-14T03:43:14.195Z
The January 2013 CFAR workshop: one-year retrospective 2014-02-18T18:41:13.935Z
Useful Questions Repository 2013-07-25T02:58:35.717Z
Evidential Decision Theory, Selection Bias, and Reference Classes 2013-07-08T05:16:48.460Z
[LINK] Cantor's theorem, the prisoner's dilemma, and the halting problem 2013-06-30T20:26:03.002Z
[LINK] The Selected Papers Network 2013-06-14T20:20:21.542Z
Useful Concepts Repository 2013-06-10T06:12:49.639Z
[LINK] Sign up for DAGGRE to improve science and technology forecasting 2013-05-26T00:08:55.793Z
[LINK] Soylent crowdfunding 2013-05-21T19:09:31.034Z
Privileging the Question 2013-04-29T18:30:35.545Z
[LINK] Causal Entropic Forces 2013-04-20T23:57:34.160Z
Post Request Thread 2013-04-11T01:28:46.351Z
Solved Problems Repository 2013-03-27T04:51:54.419Z
Boring Advice Repository 2013-03-07T04:33:41.739Z
Think Like a Supervillain 2013-02-20T08:34:55.618Z
Rationalist Lent 2013-02-14T06:32:40.415Z
Thoughts on the January CFAR workshop 2013-01-31T10:16:08.725Z

Comments

Comment by Qiaochu_Yuan on April Coronavirus Open Thread · 2020-04-15T22:03:22.591Z · LW · GW

I'm personally quite worried about disruptions to the food supply chain severe enough to cause food shortages in e.g. the Bay Area in the next few months but not sure what to do with that worry other than to stock up more on non-perishables. Would very much appreciate seeing more people thinking and researching about this.

Comment by Qiaochu_Yuan on We run the Center for Applied Rationality, AMA · 2019-12-30T08:56:47.488Z · LW · GW

FWIW, I don't feel this way about timelines anymore. I'm a lot more pessimistic now; I think the estimates are mostly just noise.

Comment by Qiaochu_Yuan on Book summary: Unlocking the Emotional Brain · 2019-10-13T21:20:26.654Z · LW · GW
The only part of these processes that actually requires real-time interaction is getting people over what I call their "meta-issue" -- the schema they have that gets in the way of being able to reflect on their issues.
For example, I've had clients who had what you might call a "be a good student" schema that keeps them from accurately reporting their emotions, responses, or progress in applying a reconsolidation technique. Others who would deflect and deny ever having any negative experiences or even any problems, despite having just asked me for help with same. These kinds of meta-issues are the hardest and most time-consuming part of getting someone ready to change.

Oof, yeah, this resonates a lot with experiences I've been having with myself and others the last few months, since coming out of a workshop on the bio-emotive framework. There are towers of meta-issues, meta-issues that prevent themselves from being looked at... what a mess.

In retrospect this illuminates something for me about the CFAR workshop and its techniques - a pattern I ran into for years was that I gradually became averse to every single CFAR technique I tried, so I never used them on my own, and I don't think I'm alone in that. I think - and this is deeply ironic - that the CFAR techniques as a whole never went meta enough to catch "meta-issues," not in any really systematic way.

Comment by Qiaochu_Yuan on Building up to an Internal Family Systems model · 2019-10-13T21:04:29.916Z · LW · GW

Wow, thank you for writing this. This really clarified something for me that I'm in the process of digesting.

Comment by Qiaochu_Yuan on Subagents, trauma and rationality · 2019-10-04T06:30:20.351Z · LW · GW

I just got around to reading this; thank you for writing it!

I hadn't thought much about the role of memory in trauma and emotional stuff until pretty recently, possibly based on some kind of present-moment-experience-focused thing I inherited from circling culture. But my experiences using the bio-emotive framework were memory-based in a really important way, and reading this helped something click into place for me about integration being literally integration of memory networks, parts as memory networks, etc.

Using bio-emotive to examine the relationship between an emotional reaction I'm having now and a related memory has given the phrase "being present" a meaning it didn't have for me before; often when we aren't present it's because we're in a real sense in the past, possibly way back in the past depending on what memories are being activated.

Comment by Qiaochu_Yuan on How to find a lost phone with dead battery, using Google Location History Takeout · 2019-05-30T22:11:05.551Z · LW · GW

Google Activity History is sort of terrifying but also great. I used it when someone stole my laptop to learn that the thief had googled pawn shops in the area; I contacted one of the pawn shops she'd looked up, and a bit later the shop called to tell me someone had brought in a laptop matching my description. They lied to her, saying they needed a few hours to process the laptop and she'd have to come back; in that time the police were called, she was arrested, and I got my laptop back the same day it was stolen.

Comment by Qiaochu_Yuan on The Relationship Between the Village and the Mission · 2019-05-22T05:58:15.250Z · LW · GW

I've been getting a fair number of requests on Facebook for the doc (esp. from community organizers, which I appreciate), and response has been pretty positive. That plus a few other things have me more inclined to write a public draft, but still a little wary of making promises yet.

Comment by Qiaochu_Yuan on The Relationship Between the Village and the Mission · 2019-05-13T06:50:23.108Z · LW · GW

Here is my brain dump: I have mostly given up on the Berkeley rationality community as a possible village. I think the people who showed up here were mostly selected for being bad at villaging, and that the awful shit that's been happening around here lately is downstream of that. I think there is something toxic / dysfunctional woven deep into the community fabric (which has something to do with the ways in which the Mission interacts poorly with people's psychologies) and I don't feel very hopeful about even being able to show it clearly to half of the community, let alone doing anything about it.

In February I wrote a 20-page Google Doc describing what I think is wrong with Berkeley in more detail, which I've shared with some of you but don't plan to make public. (Message me on Facebook if you'd like to request access to a PDF of it; I might not say yes, though.) I'd like to get around to writing a second public draft but again, I've been feeling less hopeful, so... we'll see.

Comment by Qiaochu_Yuan on The Hard Work of Translation (Buddhism) · 2019-04-14T23:22:35.928Z · LW · GW

I upvoted this because it gave me some concepts to use to look at some experiences I've had. The speculations at the level of physical mechanism aren't really cruxes for me so I mostly don't care about them, and same with facts of the matter about what any particular Pali text actually says. What's interesting to me is what Romeo gets out of a combination of reading them and reflecting on his own experience, that might be relevant to me reflecting on my own experience.

Why should I believe any of this?

My gut reaction to this question is that it's the wrong question. I don't view this post as telling you anything you're supposed to believe on Romeo's word.

Comment by Qiaochu_Yuan on How do people become ambitious? · 2019-04-08T06:54:58.247Z · LW · GW

It's goodharting from the point of view of natural selection's values but it doesn't have to be goodharting from the point of view of your values. We can enjoy art even if art is in some sense goodharting on e.g. being in beautiful places or whatever.

Comment by Qiaochu_Yuan on The Hard Work of Translation (Buddhism) · 2019-04-08T06:52:53.406Z · LW · GW

This is fantastic and absolutely the conversation I want to be having. Resonates quite a lot with my experience, especially as a description of what it is exactly that I got out of circling.

In your language circling naturally stirs up sankharas because relational shit is happening (e.g. people are paying attention to you or ignoring you, liking or disliking what you say, etc) and then hopefully, if the circle is being well-facilitated, you sometimes get coached into a state where you can notice and work with your "causal links in the perceptual system between physical sensations, feelings, and mental reactions," e.g. by the facilitator remaining very calm and holding space, then gently pointing out their observations about your causal links, that kind of thing. Unfortunately with less skilled facilitation this doesn't happen and shit just gets stirred up and not resolved; worst case people get retraumatized.

Very excited to talk to you more about this.

Comment by Qiaochu_Yuan on How do people become ambitious? · 2019-04-08T06:43:49.083Z · LW · GW

Yeah, I agree that at the earlier stages it's not clear that ambition is a thing to aim for, and I would also advise people to prioritize health broadly.

I agree that encouragement and guidance is good, and more generally think that mentorship is really, really deeply important. I am not about this "individual rationality" life anymore. It's group rationality or nothing.

Comment by Qiaochu_Yuan on How do people become ambitious? · 2019-04-07T08:22:33.866Z · LW · GW

Right, this is the kind of thing I had in mind with the phrase "pathological need to do something." Cf. people who are obsessed with making way more money than they could ever possibly spend.

Comment by Qiaochu_Yuan on Has "politics is the mind-killer" been a mind-killer? · 2019-04-06T10:28:02.950Z · LW · GW
He says that "Politics is an extension of war by other means. Arguments are soldiers". I'd update this to say that "ONE WAY OF THINKING ABOUT politics is AS IF IT IS an extension of war by other means. Arguments CAN BE THOUGHT OF AS soldiers". 

This is a good shift for you to have made and I'm glad you can make it. Now you can just do this mentally to everyone's writing (and speaking, for that matter) all the time.

But asking writers to do it themselves is crippling. The new sentence you propose is obviously technically more accurate but it's also clunky, anemic prose.

Someone ages ago (maybe Viliam_Bur?) said that people thought they were drawn to LW by the good ideas but they were equally if not more drawn to LW by the quality of Eliezer's writing, and a key quality of Eliezer's writing is that he knew how to be punchy when he needed to be.

"Arguments are soldiers" is, and was always, poetry, not a mathematical identity. The point was to shock you into a frame shift in how you look at arguments (away from e.g. an implicit frame of "arguments are neutral tools we use to search for truth"), not tell you a Platonic Truth that you Believe Forever. And punchy writing is key to provoking these kinds of frame shifts; clunky writing just doesn't actually get through to any part of you that matters.

Comment by Qiaochu_Yuan on Open Thread April 2019 · 2019-04-06T10:15:27.730Z · LW · GW

Terence Tao is great; I haven't read that book but I like his writing a lot in general. I am a big fan of the Princeton Lectures in Analysis by Stein and Shakarchi; clear writing, good exercises, great diagrams, focus on examples and applications.

(Edit: also, fun fact, Stein was Tao's advisor.)

Epistemic status: ex-math grad student

Comment by Qiaochu_Yuan on How do people become ambitious? · 2019-04-06T10:11:56.689Z · LW · GW

I think you're equivocating between two possible meanings of "choose" here. There's "choose" as in you start telling people "I want to write a book" and then there's "choose" as in you actually decide to actually write the book, which is quite different. I think Ray is asking about something like how to cultivate the capacity to do the latter. It is not at all trivially easy. Most goals are fake; making them real is a genuine skill.

Comment by Qiaochu_Yuan on How do people become ambitious? · 2019-04-05T21:18:53.814Z · LW · GW

Tongue-in-cheek: "when their pathological need to do something outweighs their pathological need to do nothing."

In more detail: there are several different kinds of deep-rooted psychological needs that ambition might be powered by, and I think the resulting different kinds of ambition are different enough to discuss as distinct entities (in particular, they vary in how prosocial they are). Some possibilities off the top of my head, not mutually exclusive, inspired by Enneagram types:

1. Reinforce a particular identity / self-narrative (e.g. "I'm special" -> strive to become a celebrity or w/e, see Instagram influencers); Enneagram 4

2. Get people to like you (again see Instagram influencers); Enneagram 3

3. Have power over people (some politicians, maybe); Enneagram 8

4. Have fun to avoid feeling bad; Enneagram 7

(One way to probe this in a given ambitious person is to look at what coping mechanisms they turn to when they fail. E.g. if it's about reinforcing a given identity through some ambitious project, when that project fails do they start reinforcing that identity in other ways?)

Then there's genuine compassion, which is the cleanest power source for ambition I've found so far, and arguably the most prosocial (there might be others, e.g. childlike joy and wonder). I am quite concerned that most of the ambition in the rationality / EA space is not being powered by genuine compassion; personally, most of the time I've been here I've been powered by a combination of #1 and #2.

There are also several different kinds of deep-rooted psychological needs that lack of ambition might be powered by. Again, some possibilities off the top of my head, inspired by Enneagram types:

1. Not drawing criticism / pissing people off; Enneagram 2, Enneagram 3, or Enneagram 9

2. Avoiding the feeling of not knowing what to do; Enneagram 5, Enneagram 6

3. Sense that ambition is morally wrong / corrupting; Enneagram 1

4. Sense that ambition is not your place / not the sort of thing people like you are allowed to do; Enneagram 2, Enneagram 3, Enneagram 4

Historically I think a lot of my lack of ambition was powered by a combination of #1, #2, and #4, although it's hard to disentangle. There were also less-psychological obstacles, e.g. I was tired all the time because I was eating, sleeping, and exercising poorly, and had an awful social life; it's real hard to be ambitious or agentic in that state.

To summarize, I mostly relate to ambition as a relatively surface-level psychological phenomenon that's being powered by deeper dynamics, and I think at least as much in terms of obstacles to ambition as in terms of ways to cultivate ambition.

Epistemic status: based on lots of personal development work and looking at other people's psychology and personal development, e.g. via circling; especially, noticing my own level of ambition increase drastically the more work I do on myself, and looking at what seem to be the gears of that.

Comment by Qiaochu_Yuan on Dependability · 2019-03-27T02:47:12.000Z · LW · GW

I also don't have much of this skill and made it through life without needing to have it; I was able to coast on raw intelligence for quite a long time, up through my 2nd year of grad school or so. Welp.

Except in romantic relationships; I've historically consistently found it easy to have commitment, follow-through, reliability, focused attention, etc. in that context (although it was kinda being fueled by neediness so there were other things going on there).

It feels like I have not yet found e.g. a job that I deeply value enough to commit to in the same way that I valued my relationships enough to commit to them, and that I can take my lack of commitment to various things I've attempted to do so far as evidence that at least some part of me didn't find them worth committing to. I think I'm okay with having high standards for what I commit to in this way, although I might be out of practice committing as a result.

Comment by Qiaochu_Yuan on What Vibing Feels Like · 2019-03-11T23:52:17.674Z · LW · GW

Yes, I strongly agree that this is missing and it sucks. I have a lot to say about why I think this is happening, which hopefully will be converted from a Google Doc into a series of blog posts soonish.

Comment by Qiaochu_Yuan on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:27:34.381Z · LW · GW

There's an interesting thing authentic relating people do at workshops that they call "setting intentions," and I think it works in a different way than either of these. The difference seems to me to be that the intention is being "held by the group." I'm not sure how to explain what I mean by this. There are at least two visible signs of it:

1) people remind each other about the intention, and

2) people reward each other for following through on the intention.

If everyone knows the intention is being held by the group in this way, it both comes to mind more easily and feels more socially acceptable to follow through on (the latter might be causing the former). In my experience group intentions also require almost no willpower, but they also don't feel quite like policies to me (that would be "agreements") - they're more like especially salient affordances.

The key ritual is that at some point someone asks "so, do we all agree to hold this intention?" and we raise our hands if so - and we look around the room so we can see each other's hands. That way the collective holding of the intention can enter common knowledge.

Said another way, it's something like trying to define the values of the group-mind we're attempting to come together to form.

---

I relate to your resistance to willpower-based intentions. It's something like, a lot of people have an "inner authoritarian" or "inner tyrant" that is the internalized avatar of other people making them do stuff when they were younger (parents, teachers, etc.), whose job it is to suppress the parts of them that are unacceptable according to outer tyrants. You can live under your inner tyrant's reign of terror, which works as long as submitting to the inner tyrant keeps your life running smoothly, e.g. it placates your outer tyrants and they feed you and stuff.

At some point this strategy can stop working, and then other parts of you might engage in an internal rebellion against your inner tyrant; I think I was in a state like this for most of 2017 and 2018, and probably still am now to some extent. At this stage using willpower can feel like giving in to the inner tyrant.

Then I think there's some further stage of development that involves developing "internal leadership," whatever that is.

There's a bit in the Guru Papers about this. One quote, I think in the context of submitting to renunciate morality (e.g. Christian morality):

Maintaining denial actually requires constant surveillance of the thing you are pretending isn’t there. This deepens the internal splits that renunciation promises to heal. It requires the construction of a covert inner authoritarian to keep control over the “bad” stuff you reject. This inner tyrant is probably not strong enough to do the job on its own, so you submit to an external authority whose job is to strengthen the internal tyrant.

Comment by Qiaochu_Yuan on Informal Post on Motivation · 2019-02-27T09:30:46.803Z · LW · GW

Glad to see you're writing about this! I think motivation is a really central topic and there's lots more to be said about it than has been said so far around here.

When we're struggling with motivation to do X, it's because only S2 predicts/believes that X will lead to reward. S1 isn't convinced. Your S2 models say X is good, but S1 models don't see it. This isn't necessarily a bug. S2 reasoning can be really shitty, people can come up with all kinds of dumb plans for things that won't help, and it's not so bad that their S1 models don't go along with them.

I think these days S1 and S2 have become semantic stopsigns, and in general I recommend that people stop using these terms both in their thinking and verbally, and instead try to get more specific about what parts of their mind actually disagree and why. I can report, for example, that CFAR doesn't use these terms internally.

Anna Salamon used to say, in the context of teaching internal double crux at CFAR workshops, that there's no such thing as an S1 vs. S2 conflict. All conflicts are "S1 vs. S1." "Your S2," whatever that means, may be capable of engaging in logical reasoning and having explicit verbal models about things, but the part of you that cares about the output of all of that reasoning is a part of your S1 (in my internal dialect, just "a part of you"), and you'll make more progress once you start identifying what part that is.

---

Here's an example of what getting more specific might look like. Suppose I'm a high school student and "my S1" says play video games and "my S2" says do my homework. What is actually going on here?

One version could be that I know I get social reinforcement from my parents and my teachers to do homework, or more sinisterly that I get socially punished for not doing it. So in this case "my S2" is a stopsign blocking an inquiry into the power structure of school, and generally the lack of power children have in modern western society, which is both harder and less comfortable to think about than "akrasia."

Another version is someone told me to do my homework so I'll go to a good college so I'll get a good job. In this case "my S2" is a stopsign blocking an inquiry into why I care about any of the nodes in this causal diagram - maybe I want to go to a good college because it'll make me feel better about myself, maybe I want to get a good job to avoid disappointing my parents, etc.

That's on the S2 side, but "my S1" is also blocking inquiry. Why do I want to play video games? Not "my S1," just me; I can own that desire. There are obvious stories to tell about video games being more immediately pleasurable and addictive than most other things I could do, and those stories have some weight, but they're also a distraction; much easier to think about than why I wouldn't rather do anything else. In my actual experience, the times in my life I have played video games the most, the reasons were mostly emotional: I was really depressed and lonely and felt like a failure, and video games (and lots of other stuff) distracted me from feeling those things. Those feelings were very painful to think about, and that pain prevented me from even looking at the structure of this problem, let alone debugging it, for a long time.

(One sign I was doing this is that the video games I chose were not optimized for pleasure. I deliberately avoided video games that could be fun in a challenging way, because I didn't want to feel bad about doing poorly at them. Another sign is that everything else I did was also chosen for its ability to distract: for example, watching anime (never live-action TV, too uncomfortably close to real life), reading fiction (never nonfiction, again too uncomfortably close to real life), etc.)

An inference I take from this model is that you are best able to focus on long-term S2 goals (those which have the least inherent ability to influence you) if you have taken care of the rest of things which motivate you. Eat enough, sleep enough, spend time with friends. When you're trying to fight the desire to address those things, you're using willpower, and willpower is a stopgap measure.

Strongly agree, except that I wouldn't use the term "S2 goals." That's a stopsign. Again I suggest getting more specific: what part of you has those goals and why? Where did they come from?

So part of what I need to do now is really figure out how to do green-brain, growth-orientation motivation over red-brain, deficit-reduction motivation.

If I understand correctly what you mean by this, I have a lot of thoughts about how to do this. The short, unsatisfying version, which will probably surprise no one, is "find out what you actually want by learning how to have feelings."

The long version can be explained in terms of Internal Family Systems. The deal is that procrastinative behaviors like playing a lot of video games are evidence that you're trying to avoid feeling a bad feeling, and that that bad feeling is being generated by a part of you that IFS calls an "exile." Exiles are producing bad feelings in order to get you to avoid a catastrophic situation that resembles a catastrophic situation earlier in your life that you weren't prepared for; for example, if you were overwhelmed by criticism from your parents as a child, you might have an exile that floods you with pain whenever people criticize you, especially people you really respect in a way that might cause you to project your parents onto them.

Exiles are paired with parts called protectors, whose job it is to protect exiles from being triggered. In the criticism example, that might look like avoiding people who criticize you, avoiding doing things you might get criticized for, or feeling sleepy or confused when someone manages to criticize you anyway.

Behavior that's driven by exile / protector dynamics (approximately "red-brain, deficit-reduction," if I understand you correctly) can become very dysfunctional, as the goal of avoiding psychological pain becomes a worse and worse proxy for avoiding bad situations. In extreme cases it can almost completely block your access to what you want, as that becomes less of a priority than avoiding pain. In the criticism example, you might be so paralyzed by the possibility that someone could criticize you for doing things that you stop doing anything.

There are lots of different ways to get exiles and protectors to chill the fuck out, and once they do you get to find out what you actually want when you aren't just trying to avoid pain. It's good times. See also my comment on Kaj's IFS post.

Comment by Qiaochu_Yuan on Epistemic Tenure · 2019-02-27T01:32:57.477Z · LW · GW

This seems like a bad idea to me; I think people who are trying to have good ideas should develop courage instead. If you don't have courage your ideas are being pushed around by fear in general, and asking for a particular source of that fear to be ameliorated will not solve the general problem.

Comment by Qiaochu_Yuan on Two Small Experiments on GPT-2 · 2019-02-21T04:33:52.864Z · LW · GW

Thanks for writing this up! I'm excited to see more people running experiments like this.

When you say "if I take X as a prompt, I get Y," how many trials did you run? In my own experimentation I've found lil' GPT-2's performance to be really variable across trials, and I've needed to run 5 trials in some cases to get results I even sort of liked.
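
(Aside: if you want to make the variability concrete, here's a minimal sketch of drawing several samples per prompt. It assumes the HuggingFace transformers library and the small released model; I don't know what setup your experiments actually used, so treat the details as hypothetical.)

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "lil'" GPT-2, i.e. the small released checkpoint
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Question: what is your name? Answer:", return_tensors="pt")

# Draw 5 independent samples; judging a prompt by a single sample is unreliable.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=40,
    max_new_tokens=80,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
for i, sample in enumerate(outputs, 1):
    print(f"--- trial {i} ---")
    print(tokenizer.decode(sample, skip_special_tokens=True))
```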

My overall sense of how lil' GPT-2 functions, after playing with it for a while on several different kinds of prompts, is that it has a strong sense of genre: it seems to have learned a bunch of different genre conventions for the different types of texts in the training set. If the prompt strongly resembles a genre familiar from training it will run with that genre, although sometimes it'll wander off into another one. It does quite poorly on prompts that I suspect don't strongly match any genre in the training set.

For example, I tried to run a Turing test (mostly as a joke) by prompting with "Question: what is your name? Answer:" and I got this on my ~2nd to 4th trial (don't remember), with my speculations as to genre in [square brackets]:

Foulplay
I'm a Guardian Huntress, a deviation of the <Here> Monks keyword. I'm like a little "mechanic-creature hire", but is Disassembler which is general clique-of-guy? I usually live in forms of Priests or Whispers of the Old Gods, but if it's Asian Brotherhood of Magi..... [RP forum?]
deepmen
Reply ~29000 0 ~13 min 0 By TrueBeliever For MA teenage adventurer: Ooops It looks stupid but I'm given the wrong answer, and this logic is corrupt. Other first hand studies: sometimes your famed mechanical fiends attack other nvrock's - I had such too - but you're supposed to have flying paths to defend them would not it? This alien spawn is being unnecessary and a waste of time Man - you stole the Saalis... [gaming forum?]
Shadow Netherlands
Reply ~120200 0 ~30 min 0 By john : I know there's no better examples of
a landing back off 2lands ( ou ever?) Low challenge difficulty
a lot of new cards in my deck,
which doesn't draw cards: stacks high levels Some need life if w/o rest of deck already Defense Emperor's | Inferno/Neck Scorer Necronomicon Mysticpetal Mana Enigma Mortalswords Mysticmist Scout Zarthai Sniper [MtG or Hearthstone forum?]
It's a multiplier and it increases the yield on a lv301 fighter fighter, next to nothing it's probably never in my deck or some random deck Dofrone's | Boltforge Fastboat Sling dmt3 Efreet Flattestalker Infernal Eater Toxic Water Hurricane For another Holy orb suggested... [gaming forum? LoL?]

Was planning on posting a longer (mostly humorous) post with my own results but that post is low priority so I don't know when it's going to happen.

Comment by Qiaochu_Yuan on Building up to an Internal Family Systems model · 2019-01-29T01:29:11.129Z · LW · GW

Thanks for writing this! I am very excited that this post exists. I think what this model suggests about procrastination and addiction alone (namely, that they're things that managers and firefighters are doing to protect exiles) is already huge, and resonates strongly with my experience.

In the beginning of 2018 I experienced a dramatic shift that I still don't quite understand; my sense of it at the time was that there was this crippling fear / shame that had been preventing me from doing almost anything, that suddenly lifted (for several reasons, it's a long story). That had many dramatic effects, and one of the most noticeable ones was that I almost completely stopped wanting to watch TV, read manga, play video games, or any of my other addiction / procrastination behaviors. It became very clear that the purpose of all of those behaviors was numbing and distraction ("general purpose feeling obliterators" used by firefighters, as waveman says in another comment) from how shitty I felt all the time, and after the shift I basically felt so good that I didn't want or need to do that anymore.

(This lasted for awhile but not forever; I crashed hard in September (long story again) before experiencing a very similar shift again a few weeks ago.)

Another closely related effect is that many things that had been too scary for me to think about became thinkable (e.g. regrettable dynamics in my romantic relationships), and I think this is a crucial observation for the rationality project. When you have exile-manager-firefighter dynamics going on and you don't know how to unblend from them, you cannot think clearly about anything that triggers the exile, and trying to make yourself do it anyway will generate tremendous internal resistance in one form or another (getting angry, getting bored, getting sleepy, getting confused, all sorts of crap), first from managers trying to block the thoughts and then from firefighters trying to distract you from the thoughts. Top priority is noticing that this is happening and then attending to the underlying emotional dynamics.

Comment by Qiaochu_Yuan on Transhumanism as Simplified Humanism · 2018-12-12T03:13:29.687Z · LW · GW

I like this reading and don't have much of an objection to it.

Comment by Qiaochu_Yuan on Transhumanism as Simplified Humanism · 2018-12-07T07:12:11.186Z · LW · GW

This is a bad argument for transhumanism; it proves way too much. I'm a little surprised that this needs to be said.

Consider: "having food is good. Having more and tastier food is better. This is common sense. Transfoodism is the philosophy that we should take this common sense seriously, and have as much food as possible, as tasty as we can make it, even if doing so involves strange new technology." But we tried that, and what happened was obesity, addiction, terrible things happening to our gut flora, etc. It is just blatantly false in general that having more of a good thing is better.

As for "common sense": in many human societies it was "common sense" to own slaves, to beat your children, again etc. Today it's "common sense" to circumcise male babies, to eat meat, to send people who commit petty crimes to jail, etc., to pick some examples of things that might be considered morally repugnant by future human societies. Common sense is mostly moral fashion, or if you prefer it's mostly the memes that were most virulent when you were growing up, and it's clearly unreliable as a guide to moral behavior in general.

Figuring out the right thing to do is hard, and it's hard for comprehensible reasons. Value is complex and fragile; you were the one who told us that!

---

In the direction of what I actually believe: I think that there's a huge difference between preventing a bad thing happening and making a good thing happen, e.g. I don't consider preventing an IQ drop equivalent to raising IQ. The boy has had an IQ of 120 his entire life and we want to preserve that, but the girl has had an IQ of 110 her entire life and we want to change that. Preserving and changing are different, and preserving vs. changing people in particular is morally complicated. Again the argument Eliezer uses here is bad and proves too much:

Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.

Consider: "either it's better to be male than female, in which case we should transition all women to men. Or it's better to be female than male, in which case we should transition all men to women."

---

What I can appreciate about this post is that it's an attempt to puncture bad arguments against transhumanism, and if it had been written more explicitly to do that as opposed to presenting an argument for transhumanism, I wouldn't have a problem with it.

Comment by Qiaochu_Yuan on Preliminary thoughts on moral weight · 2018-08-14T18:52:27.014Z · LW · GW

This whole conversation makes me deeply uncomfortable. I expect to strongly disagree at pretty low levels with almost anyone else trying to have this conversation, I don't know how to resolve those disagreements, and meanwhile I worry about people seriously advocating for positions that seem deeply confused to me and those positions spreading memetically.

For example: why do people think consciousness has anything to do with moral weight?

Comment by Qiaochu_Yuan on [deleted post] 2018-08-14T05:16:00.774Z

Relevant reading: gwern's The Narrowing Circle. He makes the important point that moral circles have actually narrowed in various ways, and also that it never feels that way because the things outside the circle don't seem to matter anymore. Two straightforward examples are gods and our dead ancestors.

Comment by Qiaochu_Yuan on Open Thread August 2018 · 2018-08-02T23:49:38.583Z · LW · GW

Does anyone else get the sense that it feels vaguely low-status to post in open threads? If so I don't really know what to do about this.

Comment by Qiaochu_Yuan on Strategies for Personal Growth · 2018-07-30T18:36:08.130Z · LW · GW

This makes sense, but I also want to register that I viscerally dislike "controlling the elephant" as a frame, in roughly the same way as I viscerally dislike "controlling children" as a frame.

Comment by Qiaochu_Yuan on Strategies for Personal Growth · 2018-07-28T19:26:48.385Z · LW · GW

Huh. Can you go into more detail about what you've done and how it's helped you? Real curious.

Comment by Qiaochu_Yuan on Strategies for Personal Growth · 2018-07-28T18:52:50.308Z · LW · GW
I think the original mythology of the rationality community is based around cheat codes

A lot of the original mythology, in the sense of the things Eliezer wrote about in the sequences, is about avoiding self-deception. I continue to think this is very important but think the writing in the Sequences doesn't do a good job of teaching it.

The main issue I see with the cheat code / munchkin philosophy as it actually played out on LW is that it involved a lot of stuff I would describe as tricking yourself, or the rider fighting against / overriding the elephant, e.g. strategies like attempting to reward yourself for the behavior you "want" in order to fix your "akrasia." None of the things along these lines (Beeminder, for example) worked for me when I experimented with them, and the whole time my actual bottleneck was that I was very sad and very lonely and was distracting myself from and numbing that (which accounted for a huge portion of my "akrasia"; the rest was poor health, sleep and nutrition in particular).

Comment by Qiaochu_Yuan on ISO: Name of Problem · 2018-07-24T18:26:37.375Z · LW · GW

This question feels confused to me but I'm having some difficulty precisely describing the nature of the confusion. When a human programmer sets up an IRL problem they get to choose what the domain of the reward function is. If the reward function is, for example, a function of the pixels of a video frame, IRL (hopefully) learns which video frames human drivers appear to prefer and which they don't, based on which such preferences best reproduce driving data.

You might imagine that with unrealistic amounts of computational power IRL might attempt to understand what's going on by modeling the underlying physics at the level of atoms, but that would be an astonishingly inefficient way to reproduce driving data even if it did work. IRL algorithms tend to have things like complexity penalties to make it possible to select e.g. a "simplest" reward function out of the many reward functions that could reproduce the data (this is a prior but a pretty reasonable and justifiable one as far as I can tell) and even with large amounts of computational power I expect it would still not be worth using a substantially more complicated reward function than necessary.
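
(To make the shape of this concrete, here's a minimal sketch of the "match the demonstrations, prefer simpler rewards" structure. All names and numbers are hypothetical, and it uses a linear reward with an L2 penalty as one concrete stand-in for the complexity penalties real IRL algorithms use.)

```python
import numpy as np

def irl_gradient(w, expert_features, policy_features, lam=0.1):
    """Gradient for max-entropy-style IRL with a linear reward R(s) = w . phi(s).

    expert_features: expected feature counts under the human demonstrations.
    policy_features: expected feature counts under the current learned policy.
    lam: strength of the complexity (L2) penalty on the reward function.
    """
    # Push the reward toward explaining the demonstrations...
    grad = expert_features - policy_features
    # ...while preferring simpler reward functions (the prior mentioned above).
    grad -= 2 * lam * w
    return grad

# One gradient-ascent step on hypothetical 4-dimensional features:
w = np.zeros(4)
w += 0.01 * irl_gradient(w,
                         np.array([1.0, 0.2, 0.0, 0.5]),   # from driving data
                         np.array([0.4, 0.1, 0.3, 0.2]))   # from current policy
```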

Comment by Qiaochu_Yuan on ISO: Name of Problem · 2018-07-24T17:26:19.620Z · LW · GW

IRL does not need to answer this question along the way to solving the problem it's designed to solve. Consider, for example, using IRL for autonomous driving. The input is a bunch of human-generated driving data, for example video from inside a car as a human drives it or more abstract (time, position, etc.) data tracking the car over time, and IRL attempts to learn a reward function which produces a policy which produces driving data that mimics its input data. At no point in this process does IRL need to do anything like reason about the distinction between, say, the car and the human; the point is that all of the interesting variation in the data is in fact (from our point of view) being driven by the human's choices, so to the extent that IRL succeeds it is hopefully capturing the human's reward structure wrt driving at the intuitively obvious level.

In particular a large part of what is selecting the level at which to work is the human programmer's choice of how to set up the IRL problem, in the selection of the format of the input data, the selection of the format of the reward function, and in the selection of the format of the IRL algorithm's actions.

In any case, in MIRI terminology this is related to multi-level world models.

Comment by Qiaochu_Yuan on Replace yourself before you stop organizing your community. · 2018-07-23T18:49:54.698Z · LW · GW

Thanks for the mirror! My recommendation is more complicated than this, and I'm not sure how to describe it succinctly. I think there is a skill you can learn through practices like circling which is something like getting in direct emotional contact with a group, as distinct from (but related to) getting in direct emotional contact with the individual humans in that group. From there you have a basis for asking yourself questions like, how healthy is this group? How will the health of the group change if you remove this member from it? Etc.

It also sounds like there's an implicit thing in your mirror that is something like "...instead of doing explicit verbal reasoning," and I don't mean to imply that either.

Comment by Qiaochu_Yuan on Replace yourself before you stop organizing your community. · 2018-07-23T01:24:45.279Z · LW · GW

I appreciate the thought. I don't feel like I've laid out my position in very much detail so I'm not at all convinced that you've accurately understood it. Can you mirror back to me what you think my position is? (Edit: I guess I really want you to pass my ITT which is a somewhat bigger ask.)

In particular, when I say "real, living, breathing entity" I did not mean to imply a human entity; groups are their own sorts of entities and need to be understood on their own terms, but I think it does not even occur to many people to try in the sense that I have in mind.

Comment by Qiaochu_Yuan on Replace yourself before you stop organizing your community. · 2018-07-22T22:00:33.210Z · LW · GW

(For additional context on this comment you can read this FB status of mine about tribes.)

There's something strange about the way in which many of us were trained to accept as normal that two of the biggest transitions in our lives - high school to college, college to a job - get packaged in with abandoning a community. In both of those cases it's not as bad as it could be because everyone is sort of abandoning the community at the same time, but it still normalizes the thing in a way that bugs me.

There's a similar normalization of abandonment, I think, in the way people treat break-ups by default. Yes, there are such things as toxic relationships, and yes, I want people to be able to just leave those without feeling like they owe their ex-partner anything if that's what they need to do, but there are two distinct moves that are being bucketed here. I've been lucky enough to get to see two examples recently of what it looks like for a couple to break up without abandonment: they mutually decide that the relationship isn't working, but they don't stop loving each other at all throughout the process of getting out of the relationship, and they stay in touch with the emotional impact the other is experiencing throughout. It's very beautiful and I feel a lot of hope that things can be better seeing it.

What I think I'm trying to say is that there's something I want to encourage that's upstream of all of your suggestions, which is something like seeing a community as a real, living, breathing entity built out of the connections between a bunch of people, and being in touch emotionally with the impact of tearing your connections away from that entity. I imagine this might be more difficult in local communities where people might end up in logistically important roles without... I'm not sure how to say this succinctly without using some Val language, but like, having the corresponding emotional connections to other community members that ought to naturally accompany those roles? Something like a woman who ends up effectively being a maid in a household without being properly connected to and respected as a mother and wife.

Comment by Qiaochu_Yuan on [deleted post] 2018-07-18T18:59:42.547Z

Yes, absolutely. This is what graduate school and CFAR workshops are for. I used to say both of the following things back in 2013-2014:

  • that nearly all of the value of CFAR workshops came from absorbing habits of thought from the instructors (I think this less now, the curriculum's gotten a lot stronger), and
  • that the most powerful rationality technique was moving to Berkeley (I sort of still think this but now I expect Zvi to get mad at me for saying it).

I have personally benefited a ton over the last year and a half through osmosing things from different groups of relationalists - strong circling facilitators and the like - and I think most rationalists have a lot to learn in that direction. I've been growing increasingly excited about meeting people who are both strong relationalists and strong rationalists and think that both skillsets are necessary for anything really good to happen.

There is this unfortunate dynamic where it's really quite hard to compete for the attention of the strongest local rationalists, who are extremely deliberate about how they spend their time and generally too busy saving the world to do much mentorship, which is part of why it's important to be osmosing from other people too (also for the sake of diversity, bringing new stuff into the community, etc.).

Comment by Qiaochu_Yuan on A framework for thinking about wireheading · 2018-07-16T23:54:58.631Z · LW · GW

I think your description of the human relationship to heroin is just wrong. First of all, lots of people in fact do heroin. Second, heroin generates reward but not necessarily long-term reward; kids are taught in school about addiction, tolerance, and other sorts of bad things that might happen to you in the long run (including social disapproval, which I bet is a much more important reason than you're modeling) if you do too much heroin.

Video games are to my mind a much clearer example of wireheading in humans, especially the ones furthest in the fake achievement direction, and people indulge in those constantly. Also television and similar.

Comment by Qiaochu_Yuan on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-15T20:43:29.559Z · LW · GW
In particular, you shouldn't force yourself to believe that you're attractive.

And I never said this.

But there's a thing that can happen when someone else gaslights you into believing that you're unattractive, which makes it true, and you might be interested in undoing that damage, for example.

Comment by Qiaochu_Yuan on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-15T20:41:43.874Z · LW · GW

Yes, this.

There's a thing MIRI people talk about, about the distinction between "cartesian" and "naturalized" agents: a cartesian agent is something like AIXI that has a "cartesian boundary" separating itself from the environment, so it can try to have accurate beliefs about the environment, then try to take the best actions on the environment given those beliefs. But a naturalized agent, which is what we actually are and what any AI we build actually is, is part of the environment; there is no cartesian boundary. Among other things this means that the environment is too big to fully model, and it's much less clear what it even means for the agent to contemplate taking different actions. Scott Garrabrant has said that he does not understand what naturalized agency means; among other things this means we don't have a toy model that deserves to be called "naturalized AIXI."

There's a way in which I think the LW zeitgeist treats humans as cartesian agents, and I think fully internalizing that you're a naturalized agent looks very different, although my concepts and words around this are still relatively nebulous.

Comment by Qiaochu_Yuan on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-12T21:26:48.137Z · LW · GW
The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

I want to point out that this is not an esoteric abstract problem but a concrete issue that actual humans face all the time. There's a large class of propositions whose truth value is heavily affected by how much you believe (and by "believe" I mean "alieve") them - e.g. propositions about yourself like "I am confident" or even "I am attractive" - and I think the LW zeitgeist doesn't really engage with this. Your beliefs about yourself express themselves in muscle tension which has real effects on your body, and from there leak out in your body language to affect how other people treat you; you are almost always in the state Harry describes in HPMoR of having your cognition constrained by the direct effects of believing things on the world as opposed to just by the effects of actions you take on the basis of your beliefs.

There's an amusing tie-in here to one of the standard ways to break the prediction market game we used to play at CFAR workshops. At the beginning we claim "the best strategy is to always write down your true probability at any time," but the argument that's supposed to establish this has a hidden assumption that the act of doing so doesn't affect the situation the prediction market is about, and it's easy to write down prediction markets violating this assumption, e.g. "the last bet on this prediction market will be under 50%."
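
(A toy version of why that market breaks, as hypothetical code rather than the actual workshop game: whatever the last bettor truthfully believes, writing it down flips the outcome, so no report is self-consistent.)

```python
def resolves_true(last_bet: float) -> bool:
    # The market's proposition: "the last bet on this market will be under 50%."
    return last_bet < 0.5

for report in (0.3, 0.7):
    outcome = resolves_true(report)
    # A report is self-consistent if betting "likely" (>= 0.5) goes with a TRUE
    # outcome and betting "unlikely" goes with a FALSE one.
    consistent = (report >= 0.5) == outcome
    print(f"report {report}: resolves {outcome}, self-consistent: {consistent}")
    # Both reports print self-consistent: False.
```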

Comment by Qiaochu_Yuan on An Exercise in Applied Rationality: A New Apartment · 2018-07-09T22:04:34.878Z · LW · GW

I do not. Fortunately, you can just test it empirically for yourself!

Comment by Qiaochu_Yuan on An Exercise in Applied Rationality: A New Apartment · 2018-07-09T08:04:11.882Z · LW · GW

General advice that I think basically applies to everybody: try to solidly lock down sleep, diet, and exercise (not sure what order these go in exactly).

Random sleep tips:

  • Try to sleep in as much darkness as possible. Blackout curtains + a sleep mask is as dark as I know how to easily make things, although you might find the sleep mask takes some getting used to. Just a sleep mask is already pretty good.
  • Blue light from screens at night disrupts your sleep; use f.lux or equivalent to straightforwardly deal with this.
  • Lower body temperature makes it easier to sleep, so take hot showers at night, which cause your body to cool down in response.
  • If you're having trouble falling asleep at a consistent time, consider supplementing small (on the order of 0.1 mg) amounts of melatonin. (Edit: see SSC post on melatonin for more, which recommends 0.3 mg.) A lot of the melatonin you'll find commercially is 3-5 mg and that's too much. I deal with this by biting off small pieces, not sure if that's a good idea. (Melatonin stopped working for me in February anyway, not sure what's up with that.)

I have thoughts about diet and exercise but fewer general recommendations; the main thing you want here is something that feels good and is sustainable to you.

Other than that, something feels off to me about the framing of the question. I feel like I'd have to know a lot more about what kind of person you are and what kind of things you want out of your life to give reasonable answers. Everything is just very contextual.

Comment by Qiaochu_Yuan on Stories of Summer Solstice · 2018-07-09T07:51:26.854Z · LW · GW

The drum circle leading up to sunset was beautiful, but the drum (+ dance + singing) circle after sunset was really fun. I drummed and it was fun! Then I danced and it was fun! Then I sang and it was fun! Nat started improvising a melody and I tried that and it was fun, and then Nat started improvising lyrics and I tried that and it was even better

and then we played one of my favorite games, sing-as-many-songs-with-the-same-chord-progression-at-the-same-time-as-possible, with the most people I've ever gotten to do it with –

anyway, all of that made me really happy, and I feel really grateful to everyone who helped make it all possible.

Comment by Qiaochu_Yuan on [deleted post] 2018-06-28T13:11:12.539Z

Whoops, hang on, I definitely did not intend for all of these posts to be crossposted to LW. I thought Ben Pace had set things up so that only things tagged #lw would be crossposted, but that doesn't seem to have been what happened. My bad. I can't seem to delete the posts or make them invisible.

Comment by Qiaochu_Yuan on Last Chance to Fund the Berkeley REACH · 2018-06-28T11:09:07.927Z · LW · GW

Pledged $50 / mo. I haven't been to an event at REACH yet but I'm happy about the events I've seen on Facebook being hosted there, and expect to attend and/or host something there in the nearish future if it keeps existing. Everything Ray et al. have been writing about community health and so forth resonates with me and I'm happy to put my money where my resonance is.

Comment by Qiaochu_Yuan on Why kids stop asking why · 2018-06-06T00:25:18.948Z · LW · GW

I think a pattern that makes sense to me is cycles of exploration and exploitation: learn about the world, act on that understanding, use the observations you acquired from acting to guide further learning, etc. The world is big and complicated enough that I think you don't hit anything close to diminishing marginal returns on asking "why?" (my experience, if anything, has been increasing marginal returns as I've gotten better at learning things), although I agree that it's important to get some acting going on in there too.

Comment by Qiaochu_Yuan on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2018-06-04T18:26:22.200Z · LW · GW

Benquo, this is really great to hear. This is a shift I went through gradually throughout 2017 and I think it's really important.

Comment by Qiaochu_Yuan on Teaching Methodologies & Techniques · 2018-06-04T15:20:43.853Z · LW · GW

Teaching is not about methodology; it's metis, not episteme. (I am also not a schoolteacher but I have taught at CFAR workshops.)

I love cousin_it's suggestion that you should start teaching a student regularly as soon as possible, but I have an additional suggestion about how to spend that time: namely, your goal should not be to teach anyone anything but to find out how students' minds work (and since anyone can be a student, this means your goal is to find out how people's minds work), and how those minds interface with the material you want to teach. E.g. if you attempt to teach your student X and they're not getting it, instead of being frustrated that they're not getting it, get curious about what's happening for the student in place of getting it. How are they interpreting the words you're saying? What models, if any, are they building in their head of the situation? Etc. etc.