Transcript: "You Should Read HPMOR"

post by TurnTrout · 2021-11-02T18:20:53.161Z · LW · GW · 11 comments

Contents

  On Caring
  On Foolishness

The following is the script of a talk I gave for some current computer science students at my alma mater, Grinnell College. This talk answers "What do I wish I had known while at Grinnell?".

Hi, I'm Alex Turner. I’m honored to be here under Sam’s invitation. I'm in the class of 2016. I miss Grinnell, but I miss my friends more—enjoy the time you have left together. 

I’m going to give you the advice I would have given Alex2012. For some of you, this advice won’t resonate, and I think that’s OK. People are complicated, and I don’t even know most of you. I don’t pretend to have a magic tip that will benefit everyone here. But if I can make a big difference for one or two of you, I’ll be happy. 

I’m going to state my advice now. It’s going to sound silly. 

You should read a Harry Potter fanfiction called Harry Potter and the Methods of Rationality (HPMOR). 

I’m serious. The intended benefits can be gained in other ways, but HPMOR is the best way I know of. Let me explain.

When I was younger, I operated under a kind of haze, a veil distancing me from what I would really care about.

I responded to social customs and pressure, instead of figuring out what is good and right by my own lights, how to make that happen, and then executing. Usually it’s fine to just follow social expectations. But there are key moments in life where it’s important to reason on your own. 

At Grinnell, I exemplified a lot of values I now look down on. I was extremely motivated to do foolish or irrelevant things. I fought bravely for worthless side pursuits. I don’t even like driving, but I thought I wanted a fancy car. I was trapped in my own delusions because I wasn’t thinking properly. 

Why did this happen, and what do I think has changed? 

On Caring

First, I was disconnected from what I would have really cared about upon honest, unflinching reflection. I thought I wanted Impressive Material Things. I thought I wanted a Respectable Life. I didn’t care about the bible, but I brought it with me to my dorm anyways so that I’d be more “wholesome” according to my cultural background. I was chasing things I had been convinced to want, but which I didn’t actually care about. 

I became motivated to unironically reflect on what is good, how I want the universe to look by the time I’m done with it—to reason about what matters without asking for permission. Not to show how caring I am on social media, but because some things are fucking important. Peace, learning, freedom, health, justice. Human flourishing. Happiness. 

When I inhabit my old ways of thinking about altruism, they evoke guilt and concern: “The world will burn. I have to do my part.” If, however, I’ve discharged my duties by donating and recycling and such, then I no longer feel guilty. But the cruel fact is that no matter what I do, millions of people will die of starvation this year. Due to a coincidence of space and time, none of these people happen to be my brother or sister, my mother or father. None are starving two feet away from me. But who cares if someone starves two feet away, or 42 million feet away—they’re still starving! 

What I’m saying here is subtler than “care a lot.” I’m gesturing at a particular kind of caring. The kind from the assigned essay [EA · GW]. Since you all read it, I probably don’t need to explain further, but I will anyways. Some extreme altruists give almost everything they have to charity. It’s natural to assume they have stronger “caring” feelings than you do, but that may not be true. 

The truth is that I am biologically incapable of caring 9 million times as much as I would care if my brother starved. My internal “caring system” doesn’t go up that many decibels; it just silently throws an emotion overflow error. Does that mean I can’t, or don’t want to, dedicate my life to altruism? No. It means I ignore my uncalibrated emotions, do some math and science to estimate how I can make the biggest difference, and then do that. 
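The "overflow" point can be made concrete with a toy model (the functions and numbers here are my illustration, not anything from the talk): felt concern saturates roughly logarithmically, like decibels, while the actual stakes scale linearly with the number of people affected.

```python
import math

def felt_concern(n_people):
    """Toy model of the internal 'caring system': feeling grows roughly
    logarithmically (like decibels) and saturates rather than scaling with n."""
    return math.log10(1 + n_people)

def actual_stakes(n_people):
    """What the math-and-science estimate should track: linear in n."""
    return n_people

one, nine_million = 1, 9_000_000
print(felt_concern(nine_million) / felt_concern(one))    # feeling: only ~23x stronger
print(actual_stakes(nine_million) / actual_stakes(one))  # reality: 9,000,000x larger
```

The gap between those two ratios is why "do the math, then act on it" beats consulting the raw feeling.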

What does this have to do with Harry Potter? HPMOR made me realize I should care in this way. HPMOR let me experience the point of view of someone intelligently optimizing the world to be a better, more moral place. HPMOR let me look through the eyes of someone who deeply cares about the world and who tries to do the most good that they can. The experience counts.

You’ll notice that CS-151 doesn’t start off with a category-theoretic motivation of functional programming in Scheme, or with armchair theorizing about loop invariants, parametric polymorphism, and time complexity. There are labs. You experience it yourself. That’s how the beauty of computer science sticks to you.

HPMOR is the closest thing I know to a lived experience of gut-level caring about hammering the world into better shape.

On Foolishness

Second, in 2016, I was enthusiastic, optimistic, and hard-working. I was willing to swim against social convention. I was also foolish.

By “foolish”, I don’t quite mean “I did pointless things.” I mean: “My cognitive algorithm was not very good, and so I did pointless things.” By analogy, suppose it’s 2026, and you’re doing research with the assistance of a machine learning model. Given a hypothesis, the model goes off and finds evidence. But suppose the model anchors on the first evidence it finds: some study supports the idea, and then the model selectively looks for more evidence for its existing belief! Wouldn’t this just be so annoying and stupid?
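The anchoring failure in that analogy is easy to see in a toy sketch (entirely hypothetical; the "studies" and function names are mine): one search anchors on the first finding and discards disagreement, the other weighs everything before concluding.

```python
# Toy illustration: evidence items are 1 (study supports the hypothesis)
# or 0 (study contradicts it).

def biased_search(evidence):
    """Anchor on the first item, then keep only evidence that agrees with it."""
    anchor = evidence[0]
    kept = [e for e in evidence if e == anchor]  # confirmation bias in action
    return anchor, len(kept) / len(evidence)

def balanced_search(evidence):
    """Weigh all of the evidence before concluding."""
    support = sum(evidence) / len(evidence)  # fraction of supporting studies
    return support > 0.5, support

studies = [1, 0, 0, 1, 0, 0, 0, 1]

print(biased_search(studies))    # concludes "supported" from the first study alone
print(balanced_search(studies))  # concludes "not supported": only 3/8 studies agree
```

Same evidence, opposite conclusions; the difference is purely in the cognitive algorithm.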

In some parts of your life, you are like this. Yes, you. Our brains regularly make embarrassing, biased mistakes. For example, I stayed in a relationship for a year too long because I was not honest with myself about how I felt. 

In 2014, I scrolled past a News Feed article in which Elon Musk worried about extinction from AI. I rolled my eyes—“Elon, AI is great, you have no idea what you’re talking about.” And so I kept scrolling. (If someone made a biopic about me, this is where the canned laugh track would play.) 

The mistake was that I had a strong, knee-jerk opinion about something I’d never even thought about. In 2018, I reconsidered the topic. I ignored the news articles and sought out the best arguments from each side of the debate. I concluded that my first impression was totally, confidently wrong. What an easy way to waste four years. I’m now finishing my dissertation on reducing extinction risk from AI, publishing papers in top AI conferences. 

My cognitive algorithm was not that great, and so I made many costly mistakes. Now I make fewer. 

What, pray tell, does this have to do with Harry Potter? HPMOR channels someone who tries to improve their thinking with the power and insight granted by behavioral economics and cognitive psychology, all in pursuit of worthy goals. The book gave me a sense that more is possible, in a way that seems hard to pick up from a textbook. (I took cog-psych classes at Grinnell. They were evidently insufficient for this purpose: I didn’t even realize that I should try to do better!)

HPMOR demonstrates altruistic fierceness: How can I make the future as bright as possible? How can I make the best out of my current situation? What kinds of thinking help me arrive at the truth as quickly as possible? What do I think I know, and why do I think I know it? What would reality look like if my most cherished beliefs were wrong?

In the real world, we may stand at the relative beginning of a bright and long human history. But to ensure that humanity has a future, to make things go right—that may require finding the truth as quickly as possible. That may require clever schemes for doing the most good we can, whatever we can. That may require altruistic fierceness. (See: the Effective Altruism movement.)

Taken together, caring deeply about maximizing human fulfillment and improving my cognitive algorithms changed my life. I don’t know if this particular book will have this particular effect on you. For example, you might not be primarily altruistically motivated on reflection. That’s fine. I think you may still selfishly benefit from this viewpoint and skillset. 

HPMOR isn’t the only way to win these benefits. But I think it’s quite good for some people, which should make it worth your time to try 5–10 chapters. I hope you benefit as much as I did. 


You can find the book at www.hpmor.com (I recommend the PDF version). You can find the unofficial, very good podcast reading on Spotify. You can find me at turneale@oregonstate.edu.

11 comments

Comments sorted by top scores.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-11-03T20:24:11.272Z · LW(p) · GW(p)

How did the audience react? Did you get any feedback? Do you think many of them went and read HPMOR? Did they like it?

Replies from: TurnTrout
comment by TurnTrout · 2021-11-04T19:38:01.252Z · LW(p) · GW(p)

Good reception. Got lots of questions about EA. No questions about HPMOR. There were 8 audience members. I'd imagine that half of them at least loaded www.hpmor.com. 

comment by james.lucassen · 2021-11-03T03:22:29.756Z · LW(p) · GW(p)

Dang, I wish I had read this before the EA Forum's creative writing contest closed. It makes a lot of sense that HPMOR could be valuable via this "first-person-optimizing-experience" mechanism - I had read it after reading the Sequences, so I was mostly looking for examples of rationality techniques and secret hidden Jedi knowledge. 

Since HPMOR!Harry isn't so much EA as transhumanist, I wonder if a first-person EA experience could be made interesting enough to be a useful story? I suppose the Comet King from Unsong is also kind of close to this niche, but not really described in first person or designed to be related to. This might be worth a stab...

Replies from: Raven
comment by Evenflair (Raven) · 2021-11-04T00:20:22.568Z · LW(p) · GW(p)

HPMOR got me to read the Sequences by presenting a teaser of what a rationalist could do and then offering the real me that power. This line from the OP resonated deeply:

Taken together, caring deeply about maximizing human fulfillment and improving my cognitive algorithms changed my life. I don’t know if this particular book will have this particular effect on you. For example, you might not be primarily altruistically motivated on reflection. That’s fine. I think you may still selfishly benefit from this viewpoint and skillset.

The sequences then expanded that vision into something concrete, and did in fact completely change my life for the better.

comment by Brian Slesinsky (brian-slesinsky) · 2021-11-03T01:54:40.847Z · LW(p) · GW(p)

Could you say anything more specific or concrete about how reading HPMOR changed your life?

Replies from: TurnTrout, TurnTrout
comment by TurnTrout · 2021-11-03T02:13:54.156Z · LW(p) · GW(p)

OK.

HPMOR introduced me to the Sequences (which benefited me as detailed in e.g. Swimming Upstream [LW · GW]), and by extension: to LessWrong (and therefore HPMOR must receive some credit for everything I've posted to this site and all of the ideas I've generated), AI risk (now my research area), effective altruism (I just attended EAG in London), CFAR's techniques (Internal Double Crux in particular transformed my internal emotional life [LW · GW]) and CFAR's social circle (I'm now polyamorous, which I have found vastly more fulfilling and appropriate than monogamy). 

How much of this is incidental to HPMOR? What would have happened if I had read HPMOR one year later or earlier? I don't know, which is why I focused on the immediate benefits I drew from the book, which are necessarily more vague.

comment by TurnTrout · 2021-11-03T17:48:20.307Z · LW(p) · GW(p)

Another example: HPMOR inspired me to be more scholarly -> Thinking of Harry, in 2017 I only ask for science books for Christmas -> I buy and read Superintelligence -> {I work on AI risk, I read (text)books regularly [? · GW]}.

comment by WalterL · 2021-11-05T06:15:53.996Z · LW(p) · GW(p)

My pick for 'you must experience', or, 'trust me on the sunscreen' in terms of media, is the old British comedy show 'Yes Minister'.  Watching it nowadays is an eerie experience, and, at least in my case, helped me shed illusions of safety and competence like nothing else.

The only evils that beset us are those that we create, but that does not make them imaginary.  To quote the antag from Bad Boys 2 "This is a stupid problem to have, but it is nonetheless a problem."

comment by Nisan · 2021-11-02T20:37:48.174Z · LW(p) · GW(p)

I don't think "viciousness" is the word you want to use here.

Replies from: TurnTrout
comment by TurnTrout · 2021-11-02T20:49:47.121Z · LW(p) · GW(p)

You are right, but for a slightly different reason. I had thought I meant a meaning analogous to the one quoted in Epistemic Viciousness [LW · GW]... Except when I actually reread the essay, this is not what I wanted to imply:

This essay is about epistemic viciousness in the martial arts, and this story illustrates just that. Though the word ‘viciousness’ normally suggests deliberate cruelty and violence, I will be using it here with the more old-fashioned meaning, possessing of vices.

(For some reason, I had remembered "epistemic viciousness" as "epistemic fierceness.") 

I've currently edited to "fierceness." I'll keep an eye out for a yet better word. 

comment by oge · 2021-11-02T19:07:03.070Z · LW(p) · GW(p)

Grinnellian, wut wut. [nnadioge] '06 on plans.