Sunset at Noon

post by Raemon · 2017-11-29T14:52:45.889Z · score: 179 (68 votes) · LW · GW · 19 comments


A meandering series of vignettes.

I have a sense that I've halfway finished a journey. I expect this essay to be most useful to people similarly-shaped-to-me, who are also undergoing that journey and could use some reassurance that there's an actual destination worth striving for.

  1. Gratitude
  2. Tortoise Skills
  3. Bayesian Wizardry
  4. Noticing Confusion
  5. The World is Literally on Fire...
  6. ...also Metaphorically on Fire
  7. Burning Out
  8. Sunset at Noon

Epistemic Status starts out "true story", and gets more (but not excessively) speculative with each section.

i. Gratitude

"Rationalists obviously don't *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. And no one I know has even seriously tried it. Do literally *none* of these people care about their own happiness?"
"Huh. Do *you* keep a gratitude journal?"
"Lol. No, obviously."
- Some Guy at the Effective Altruism Summit of 2012

Upon hearing the above, I decided to try gratitude journaling. It took me a couple years and a few approaches to get it working.

  1. First, I tried keeping a straightforward journal, but it felt effortful and dumb.
  2. I tried a thing where I wrote a poem about the things I was grateful for, but my mind kept going into "constructing a poem" mode instead of "experience nice things mindfully" mode.
  3. I tried just being mindful without writing anything down. But I'd just forget.
  4. I tried writing gratitude letters to people, but it only occasionally felt right to do so. (This came after someone actually wrote me a handwritten gratitude letter, which felt amazing, but it felt a bit forced when I tried it myself)
  5. I tried doing gratitude before I ate meals, but I ate "real" meals sort of inconsistently so it didn't take. (Upon reflection, maybe I should have fixed the "not eat real meals" thing?)

But then I stumbled upon something that worked. It's a social habit, which I worry is a bit fragile. I do it together with my girlfriend each night, and on nights when one of us is traveling, I often forget.

But this is the thing that worked. Each night, we share our Grumps and Grates. (We're in a relationship and have cutesey-poo ways of talking to each other).

Grumps and Grates goes like this:

  1. We share anything we're annoyed or upset about. (We call this The Grump. Our rule is to not go *searching* for the Grump, simply to let it out if it's festering so that when we get to the Gratefuls we actually appreciate them instead of feeling forced)
  2. Share three things that we're grateful for that day. On some bad days this is hard, but we should at least be able to return to old standbys ("I'm breathing", "I have you with me"), and you should always perform the action of at least *attempting* an effortful search.
  3. Afterwards, pause to actually feel the Grates. Viscerally remember the thing and why it was nice. If you're straining to feel grateful and had to sort of reach into the bottom of the barrel to find something, at least try to cultivate a mindset where you fully appreciate that thing.

Maybe the sun just glinted off your coffee cup nicely, and maybe that didn't stop the insurance company from screwing you over and your best friend from getting angry at you and your boss from firing you today.

But... in all seriousness... in a world whose laws of physics had no reason to make life even possible, a universe mostly full of empty darkness and no clear evidence of alien life out there, where the only intelligent life we know of sometimes likes to play chicken with nuclear arsenals...

...somehow some tiny proteins locked together ever so long ago [LW · GW] and life evolved and consciousness evolved and somehow beauty evolved and... and here you are, a meatsack cobbled together by a blind watchmaker, and the sunlight is glinting off that coffee cup, and it's beautiful.

Over the years, I've gained an important related skill: noticing the opportunity to feel gratitude, and mindfully appreciating it.

I started writing this article because of a specific moment: I was sitting in my living room around noon. The sun suddenly filtered in through the window, and on this particular day it somehow seemed achingly beautiful to me. I stared at it for 5 minutes, happy.

It seemed almost golden, in the Robert Frost sense. Weirdly golden.

It was like a sunset at noon.

(My coffee cup at 12:35pm. Photo does not capture the magic, you had to be there.)

And that might have been the entire essay here - a reminder to maybe cultivate gratitude (because it's, like, peer reviewed and hopefully hasn't failed to replicate), and to keep trying even if it doesn't seem to stick.

But I have a few more things on my mind, and I hope you'll indulge me.

ii. Tortoise Skills

Recently I read an article about a man living in India, near a desert sandbar. When he was 14 he decided that, every day, he would go there to plant a tree. Over time, those trees started producing seeds of their own. By taking root, they helped change the soil so that other kinds of plants and animals could live there.

Fifteen years later, the desert sandbar had become a forest as large as Central Park.

It's a cute story. It's a reminder that small, consistent efforts can add up to something meaningful. It also asks an interesting question:

Is whatever you're going to do for the next 15 years going to produce something at least as cool as a Central Park sized forest?

(This is not actually the forest in question; it's an image I could easily find that looked similar and was filed under Creative Commons. Credited to Your Mildura.)

A Three Percent Incline

A couple months ago, suddenly I noticed that... I had my shit together.

This was in marked contrast to 5 years ago when I decidedly didn't have my shit together.

I absorbed the CFAR mantra of "try things" and "problems can in principle be factored into pieces, understood, and solved." So I dutifully looked over my problems, attempted to factor and understand and fix them.

I tried things. Lots of things.

My life did not especially change. Insofar as it did, it was because I undertook specific projects that I was excited about [LW · GW], and that forced me to gain skills.

Years passed.

Somewhere in the middle of this, in 2014, Brienne Yudkowsky wrote an essay about Tortoise Skills.

She divided skills into four quadrants, based on whether a skill was *fast* to learn, and how *hard* it was to learn.

LessWrong has (mostly) focused on epiphanies - concepts that might be difficult to grasp, but that, once you grasp them, you have more or less immediately.

CFAR ends up focusing on epiphanies and skills that can be taught in a single weekend, because, well, they only have a single weekend to teach them. Fully gaining these skills takes a lot of practice, but in principle you can learn them in an hour.

There's some discussion about something you might call Bayesian Wizardry - a combination of deep understanding of probability, decision theory and 5-second reflexes. This seems very hard and takes a long time to see much benefit from.

But there seemed to be an underrepresented "easy-but-time-consuming" cluster of skills, where the main obstacle was being slow but steady. Brienne went on to chronicle an exploration of deliberate habit acquisition, inspired by a similar project by Malcolm Ocean.

I read Brienne and Malcolm's works, as well as the book Superhuman by Habit, of which this passage was most helpful to me:

Habits can only be thought of rationally when looked at from a perspective of years or decades. The benefit of a habit isn't the magnitude of each individual action you take, but the cumulative impact it will have on your life in the long term. It's through that lens that you must evaluate which habits to pick up, which to drop, and which are worth fighting for when the going gets tough.
Just as it would be better to make 5% interest per year on your financial investments for the rest of your life than 50% interest for one year.... it's better to maintain a modest life-long habit than to start an extreme habit that can't be sustained for a single year.
The practical implications of this are twofold.
First, be conservative when sizing your new habits. Rather than say you will run every single day, agree to jog home from the train station every day instead of walk, and do one long run every week.
Second, you should be very scared to fail to execute a habit, even once.
By failing to execute, potentially you're not just losing a minor bit of progress, but rather threatening the cumulative benefits you've accrued by establishing a habit. This is a huge deal and should not be treated lightly. So make your habits relatively easy, but never miss doing them.
Absolutely never skip twice.
I was talking to a friend about a daily habit that I had. He asked me what I did when I missed a day. I told him about some of my strategies and how I tried to avoid missing a day. "What do you do when you miss two days?" he asked.
"I don't miss two days," I replied.
Missing two days of a habit is habit suicide. If missing one day reduces your chances of long-term success by a small amount like five percent, missing two days reduces it by forty percent or so.
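
The compounding framing in that passage can be sketched numerically. This is my own hedged illustration, with made-up figures (the book gives no concrete numbers):

```python
# Illustrative only: compare a modest return sustained for decades
# against a large return sustained for a single year.

def grow(principal, rate, years):
    """Compound `principal` at `rate` per year for `years` years."""
    return principal * (1 + rate) ** years

modest = grow(1000, 0.05, 40)   # 5% per year, sustained for 40 years
extreme = grow(1000, 0.50, 1)   # 50% for one year, then nothing

print(round(modest))   # 7040
print(round(extreme))  # 1500
```

The small sustained rate wins by a wide margin, which is the sense in which a modest life-long habit beats an extreme one that can't be sustained.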

"Never miss 2 days" was inspirational in a way that most other habit-advice hadn't been (though this may be specific to me). It had the "tough but fair coach is yelling at you" thing that some people find valuable, but in a way that clearly had my long-term interests at heart.

So I started investing in habit-centric thinking. And it still wasn't super clear at first that anything good was really happening as a result...

...until suddenly, I looked back at my 5-years-ago-self...

...and noticed that I had my shit together.

It was like I'd been walking for 2 years, and it felt like I'd been walking on a flat, straight line. But in fact, that line had a 3% incline. And after a few years of walking I looked back and noticed I'd climbed to the top of a hill.

(Also as part of the physical exercise thing sometimes I climb literal hills)

Some specific habits and skills I've acquired:

On the macro level, I'm more the sort of person who deliberately sets out to achieve things, and follows through on them. And I'm able to do it while being generally happy, which didn't use to be the case. (This largely involves being comfortable not pushing myself, and guarding my slack [LW · GW]).

So if you've been trying things sporadically, and don't feel like you're moving anywhere, I think it's worth keeping in mind:

  1. Are you aiming for consistency - making sure not to drop the ball on the habits you cultivate, however small?
  2. If you've been trying things for a while, and it doesn't feel like you're making progress, it's worth periodically looking back and checking how far you've come.

Maybe you haven't been making progress (which is indeed a warning sign that something isn't working). But maybe you've just been walking at a steady, slight incline.

Have you been climbing a hill? If you were to keep climbing, and you imagine decades of future-yous climbing further at the same rate as you, how far would they go?

iii. Bayesian Wizardry

"What do you most often do instead of thinking? What do you imagine you could do instead?"
- advice a friend of mine got on facebook, when asking for important things to reflect on during a contemplative retreat.

I could stop the essay here too. And it'd be a fairly coherent "hey guys maybe consider cultivating habits and sticking with them even when it seems hard? You too could be grateful for life and also productive isn't that cool?"

But there is more climbing to do. So here are some hills I'm currently working on, which I'm finally starting to grok the importance of. And because I've seen evidence of 3% inclines yielding real results, I'm more willing to lean into them, even if they seem like they'll take a while.

I've had a few specific mental buckets for "what useful stuff comes out of the rationalsphere," including:

Epistemic fixes that were practically useful in the shortish term (e.g. noticing when you are 'arguing for a side' instead of actually trying to find the truth).

Instrumental techniques, which mostly amount to 'the empirically valid parts of self-help' (e.g. Trigger Action Plans [LW · GW]).

Deep Research and Bayesian Wizardry (i.e. high quality, in depth thinking that pushed the boundary of human knowledge forward while paying strategic attention to what things matter most, working with limited time and evidence)

Orientation Around Important Things (e.g. once someone has identified something like X-Risk as a crucial research area, people who aren't interested in specializing their lives around it can still help out with practical aspects, like getting a job as an office manager)

Importantly, it seemed like Deep Research and Bayesian Wizardry was something other people did. I did not seem smart enough to contribute.

I'm still not sure how much it's possible for me to contribute - there's a power law of potential value, and I clearly wouldn't be in the top tiers even if I dedicated myself fully to it.

But in the past year there's been a zeitgeist, initiated by Anna Salamon, around the idea that being good at thinking is useful, and that if you could only carve out time to actually think (and to practice, improving at it over time), maybe you could actually generate something worthwhile.

So I tried.

Earlier this year I carved out 4 hours to actually think about X-Risk, and I output this blogpost on what to do about AI Safety if you seem like a moderately smart person with no special technical aptitudes.

It wasn't the most valuable thing in the world, but it's been cited a few times by people I respect, and I think it was probably the most valuable 4 hours I've spent to date.

Problems Worth Solving

I haven't actually carved out time to think in the same way since then - a giant block of time dedicated to a concrete problem. It may turn out that I used up the low-hanging fruit there, or that it requires a year's worth of conversations and shower-thoughts in order to build up to it.

But I look at people like Katja Grace - who just sit down and actually look at what's going on with computer hardware, or come up with questions to ask actual AI researchers about what progress they expect. And it seems like there are a lot of things worth doing that don't require you to have any weird magic. You just need to actually think about the problem, and then follow that thinking up with action.

I've also talked more with people who do seem to have something like weird magic, and I've gotten more of a sense that the magic has gears. It works for comprehensible reasons. I can see how the subskills build into larger skills. I can see the broad shape of how those skills combine into a cohesive source of cognitive power.

A few weeks ago, I was arguing with someone about the relative value of LessWrong (as a conversational locus of quality thinking) versus donating money. I can't remember their exact words, but a paraphrase:

It's approximately as hard to have an impact by donating as by thinking - especially now that the effective altruism ecosystem has become more crowded. There are billions of dollars available - the hard part is knowing what to do with them. And often, when the answer is "use them to hire researchers to think about things", you're still passing the recursive buck.
Someone has to think. And it's about as hard to get good at thinking as it is to get rich.

Meanwhile, some other conversations I've had with people in the EA, X-Risk and Rationality communities could be combined and summarized as:

We have a lot of people showing up, saying "I want to help." And the problem is, the thing we most need help with is figuring out what to do. We need people with breadth and depth of understanding, who can look at the big picture and figure out what needs doing.
This applies just as much to "office manager" type positions as "theoretical researcher" types.

iv. Noticing Confusion

Brienne has a series of posts on Noticing Things, which is among the most useful, practical writings on epistemic rationality that I've read.

It notes:

I suspect that the majority of good epistemic practice is best thought of as cognitive trigger-action plans.
[If I'm afraid of a proposition] → [then I'll visualize how the world would be and what I would actually do if the proposition were true.]
[If everything seems to hang on a particular word] → [then I'll taboo that word and its synonyms.]
[If I flinch away from a thought at the edge of peripheral awareness] → [then I'll focus my attention directly on that thought.]

She later remarks:

I was at first astonished by how often my pesky cognitive mistakes were solved by nothing but skillful use of attention. Now I sort of see what's going on, and it feels less odd.
What happens to your bad habit of motivated stopping when you train consistent reflective attention to "motivated stopping"? The motivation dissolves under scrutiny...
If you recognize something as a mistake, part of you probably has at least some idea of what to do instead. Indeed, anything besides ignoring the mistake is often a good thing to do instead. So merely noticing when you're going wrong can be over half the battle.

She goes on to chronicle her own practice at training the art of noticing.

This was helpful to me, and one particular thing I've been focusing on lately is noticing confusion.

In the Sequences and Methods of Rationality, Eliezer treats "noticing confusion" like a sacred phrase of power, whispered in hushed tones. But for the first 5 or so years of my participation in the rationality community, I didn't find it that useful.

Confusion Is Near-Invisible

First of all, confusion (at least as I understand Eliezer to use the term) is hard to notice. The phenomenon here is when bits of evidence don't add up, and you get a subtle sense of wrongness. But then instead of heeding that wrongness and making sense of it, you round the evidence to zero, or you round the situation to the nearest plausible cliché.

Some examples of confusion are simple: CFAR's epistemic habit checklist describes a person who thought they were supposed to get on a plane on Thursday. They got an email on Tuesday reminding them of their flight "tomorrow." This seemed odd, but their brain brushed it off as a weird anomaly that didn't matter.

In this case, noticing confusion is straightforwardly useful - miss fewer flights.

Some instances are harder. A person is murdered. Circumstantial evidence points to one particular murderer. But there's a tiny note of discord. The evidence doesn't quite fit. A jury that's tired and wants to go home is looking for excuses to get the sentencing over with.

Sometimes it's harder still: you tell yourself a story about how consciousness works. It feels satisfactory. You have a brief flicker of awareness that your story doesn't explain consciousness well enough that you could build it from scratch, or discern when a given clump of carbon or silicon atoms would start being able to listen in a way that matters.

In this case, it's not enough to notice confusion. You have to follow it up with the hard work of resolving it.

You may need to brainstorm ideas and validate hypotheses. To find the answer fastest and most accurately, you may need to not just "remember base rates", but to actually think about Bayesian probability as you explore those hypotheses with scant evidence to guide you.
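
As a concrete sketch of what that kind of thinking can look like, here is a minimal Bayesian update over competing explanations for one weak piece of evidence. The hypotheses and all the numbers are my own hypothetical illustration, not anything from the essay:

```python
# A minimal Bayesian update: P(H|E) is proportional to P(H) * P(E|H).
# Hypothetical numbers for an "I smell smoke" situation.

def update(priors, likelihoods):
    """Return posteriors over hypotheses given one piece of evidence."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

priors = {"barbecue": 0.95, "wildfire": 0.05}       # before the smell
likelihoods = {"barbecue": 0.30, "wildfire": 0.90}  # P(smoke smell | H)

posterior = update(priors, likelihoods)
# Wildfire rises from 5% to roughly 14% - still not the leading
# explanation, but enough of a shift to merit a second look.
```

The point of the sketch is that weak evidence rarely flips your leading hypothesis outright; it shifts the odds enough that the "huh, weird" feeling is worth investigating.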

Noticing confusion can be a tortoise skill, if you seek out opportunities to practice. But doing something with that confusion requires some wizardry.

(Incidentally: at at least one point earlier in this essay, you were given an opportunity to practice noticing confusion. Can you identify where?)

v. The World Is Literally On Fire

I've gotten pretty good at noticing when I should have been confused, after the fact.

A couple weeks ago, I was walking around my neighborhood. I smelled smoke.

I said to myself: "huh, weird." An explanation immediately came to mind - someone was having a barbecue.

I do think this was the most likely explanation given my knowledge at the time. Nonetheless, it is interesting that a day later, when I learned that many nearby towns in California were literally on fire, and the entire world had a haze of smoke drifting through it... I thought back to that "huh, weird."

Something had felt out of place, and I could have noticed. I'd been living in Suburbia for a month or two and hadn't noticed this smell before, and while it probably was a barbecue, something about this felt off.

(When the world's on fire, the sun pretty unsubtly declares that things are not okay)

Brienne actually took this a step further in a facebook thread, paraphrased:

"I notice that I'm confused about the California Wildfires. There are a lot of fires, all across the state. Far enough apart that they can't have spread organically. Are there often wildfires that spring up at the same time? Is this just coincidence? Do they have a common cause?"

Rather than stop at "notice confusion", she and people in the thread went on to discuss hypotheses. Strong winds were reported. Were they blowing the fires across the state? That still seemed wrong - the fires were skipping over large areas. Is it because California is in a drought? This explains why it's possible for lots of fires to abruptly start. But doesn't explain why they all started today.

The consensus eventually emerged that the fires had been caused by electrical sparks - the common cause was the strong winds, which downed power lines in multiple locations. And California, being a dry tinderbox of fuel, let the fires catch.

I don't know if this is the true answer, but my own response, upon learning about the wildfires and seeing the map of where they were, had simply been, "huh." My curiosity stopped, and I didn't even attempt to generate hypotheses that adequately explained anything.

There are very few opportunities to practice noticing confusion.

When you notice yourself going "huh, weird" in response to a strange phenomenon... maybe that particular moment isn't that important. I certainly didn't change my actions due to understanding what caused the fires. But you are being given a scarce resource - the chance, in the wild, to notice what noticing confusion feels like.

Generating/evaluating hypotheses can be done in response to artificial puzzles and abstract scenarios, but the initial "huh" is hard to replicate, and I think it's important to train not just to notice the "huh" but to follow it up with the harder thought processes.

vi. ...also, Metaphorically On Fire

It so happened that this was the week that Eliezer published There Is No Fire Alarm for Artificial General Intelligence [LW · GW].

In the classic experiment by Latane and Darley in 1968, eight groups of three students each were asked to fill out a questionnaire in a room that shortly after began filling up with smoke. Five out of the eight groups didn't react or report the smoke, even as it became dense enough to make them start coughing. Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time.
The fire alarm doesn't tell us with certainty that a fire is there. In fact, I can't recall one time in my life when, exiting a building on a fire alarm, there was an actual fire. Really, a fire alarm is weaker evidence of fire than smoke coming from under a door.
But the fire alarm tells us that it's socially okay to react to the fire. It promises us with certainty that we won't be embarrassed if we now proceed to exit in an orderly fashion.

In typical Eliezer fashion, this all turned out to be a metaphor for how there's never going to be a moment when it feels socially or professionally safe to be publicly worried about AGI.

Shortly afterwards, Alpha Go Zero [LW · GW] was announced to the public.

For the past 6 years, I've been reading the arguments about AGI, and they've sounded plausible. But most of those arguments have involved a lot of metaphor and it seemed likely that a clever arguer could spin something similarly-convincing but false.

I did a lot of hand wringing, listening to Pat Modesto-like [LW · GW] voices in my head. I eventually (about a year ago) decided the arguments were sound enough that I should move from the "think about the problem" to "actually take action" phase.

But it still didn't really seem like AGI was a real thing. I believed. I didn't alieve.

Alpha Go Zero changed that, for me. For the first time, the arguments were clear-cut. There was not just theory but concrete evidence that learning algorithms could improve quickly, that architecture could be simplified to yield improvement, that you could go from superhuman to super-super-human in a year.

Intellectually, I'd loosely believed, based on the vague authority of people who seemed smart, that maybe we might all be dead in 15 years.

And for the first time, seeing the gears laid bare, I felt the weight of alief that our civilization might be cut down in its prime.

...

(Incidentally, a few days later I was at a friend's house, and we smelled something vaguely like gasoline. Everyone said "huh, weird", and then turned back to their work. On this particular occasion I said "Guys! We JUST read about fire alarms and how people won't flee rooms with billowing smoke and CALIFORNIA IS LITERALLY ON FIRE RIGHT NOW. Can we look into this a bit and figure out what's going on?"

We then examined the room and brainstormed hypotheses and things. On this occasion we did not figure anything out, and eventually the smell went away and we shrugged and went back to work. This was not the most symbolically useful anecdote I could have hoped for, but it's what I got.)

vii. Burning Out

People vary in what they care about, and how they naturally handle that caring. I make no remark on what people should care about.

But if you're shaped something like me, it may seem like the world is on fire at multiple levels. AI seems around 15% likely to kill everyone in 15 years. Even if it weren't, people around the world would still be dying for stupid, preventable reasons, and people around the world would still be living but cut off from their potential.

Meanwhile, civilization seems disappointingly dysfunctional in ways that turn stupid, preventable reasons into confusing, intractable ones.

Those fires range in order-of-magnitude-of-awfulness, but each seems sufficiently alarming that it completely breaks my grim-o-meter and renders it useless.

For three years, the rationality and effective altruism movements made me less happy, more stressed out, in ways that were clearly unsustainable and pointless.

The world is burning, but burning out doesn't help.

I don't have a principled take on how to integrate all of that. Some people have techniques that work for them. Me, I've just developed crude coping mechanisms of "stop feeling things when they seem overwhelming."

I do recommend that you guard your slack [LW · GW].

And if personal happiness is a thing you care about, I do recommend cultivating gratitude. Even when it turns out the reason your coffee cup was delightfully golden was that the world was burning.

Do what you think needs doing, but there's no reason not to be cheerful about it.

viii. Sunset at Noon

Earlier, I noted my coffee cup was beautiful. Weirdly beautiful. Like a sunset at noon.

That is essentially, verbatim, the series of thoughts that passed through my head, giving you approximately as much opportunity to pay attention as I had.

If you noticed that sunsets are not supposed to happen at noon, bonus points to you. If you stopped to hypothesize why, have some more. (I did neither).

Sometimes, apparently, the world is just literally on fire and the sky is covered in ash and the sun is an apocalyptic scareball of death and your coffee cup is pretty.

Sometimes you are lucky enough for this not to matter much, because you live safely a few hours' drive away, and your friends and the news and weather.com all let you know.

Sometimes, maybe you don't have time for friends to let you know. You're living an hour away from a wildfire that's spreading fast. And the difference between escaping alive and asphyxiating is having trained to notice and act on the small note of discord as the thoughts flicker by:

"Huh, weird."

(To the right: what my coffee cup normally looks like at noon)

19 comments

Comments sorted by top scores.

comment by weft · 2017-11-30T01:14:56.667Z · score: 23 (8 votes) · LW · GW

This was a great post, and I know this is probably a particularly busy time for you, so thanks!

For some reason, reading this made me deeply sad. I think because I DON'T feel like I've experienced significant gains, or that those gains I have experienced are traded off against losses. For example, I made some long-distance moves knowing that I was trading things like strong relationships and cultural fit for financial stability (and I underestimated how much I was losing and how unable I would be to regain it). Other marginal improvements in my agency are mainly just a result of getting older.

I thought the "fhafrg ng abba" was poetic license. My "Huh, weird" was that the picture wasn't of the actual forest. My previous impression was that you could find practically anything under a Creative Commons license, but now I suppose it is only of generic things and not specific things.

I've tried various daily gratitude journaling, and it didn't seem to help. I feel bad when it is HARD to come up with specific things to be grateful for at the end of the day. But I do have success with noticing in the moment when I am experiencing a Nice Thing, and fully appreciating it at that time. I would not have expected that things like greenery, flowers, trees, and strong winds on sunny days would be particularly important to me, but after cultivating that habit it turns out that those are things that I am most likely to notice in the moment and savor.

comment by Raemon · 2017-11-30T21:19:35.465Z · score: 16 (5 votes) · LW · GW
This was a great post, and I know this is probably a particularly busy time for you, so thanks!

Heh. I tried not writing it and that didn't work. :P

I would not have expected that things like greenery, flowers, trees, and strong winds on sunny days would be particularly important to me, but after cultivating that habit it turns out that those are things that I am most likely to notice in the moment and savor.

Same.

It's an interesting question how much of my (and your) general-agency-improvements can be attributed to rationality-in-particular, vs just generally getting older. I would expect some kind of generic maturity improvement regardless. AFAICT the particular ways I improved at agency involved obvious gears (mostly, "trying things" and being somewhat strategic about which things to try) that would have been present to some degree in non-rationalist-me, just not as densely.

(Actually, I'm not sure what precisely you mean by agency - I think I tend to do more socially-agenty things, i.e. deliberately create subculture-like-things, which might look more visible, but isn't obviously less agenty than your various Ballooning exploits)

comment by moridinamael · 2017-11-30T20:16:29.722Z · score: 21 (9 votes) · LW · GW

(Meta: I'm not sure if the following comment is valuable. What you wrote really resonated, though.)

I've felt for some time that, despite being "part of the community" for ( ... checks ... ) over seven years, I'm at around a yellow belt at rationality. Maybe an honorary green belt. I know a lot about rationality. I can talk a good game, but somebody can watch a lot of UFC and learn to rattle off the names of the techniques without being able to do any of them.

I use the belts metaphor deliberately. I think you're right that "black belt rationality" is composed of a lot of gears. A white belt starts with simple movements and refines control, perception and understanding. They do this by following a prescribed path of development. The various subskills are nurtured along the way and culminate in a superior warrior. I'm not sure what that path looks like for rationality. I'm not sure what the paths are, what the gears are, or in what order they need to be developed. I could sketch something out, with all the right buzzwords. But that map would be as likely to be optimal as a plan for martial arts skill sketched out by an average UFC fan.

At the risk of being a bit cringey: I feel, when reading Eliezer's writing, the same way I feel watching Anderson Silva in a fight. I realize that I'm witnessing mastery of something. But Eliezer seems to be something like a "natural athlete" of rationality. Many highly successful people exhibit specific refined rationality-gears by age 16. Gears that most people grow old and die without ever discovering. Studying Eliezer doesn't really tell me what to do to improve. Even Eliezer's own advice might not be worth much. He may not realize that his students may completely lack gears that he had polished by puberty.

(I'm focusing on Eliezer here because he was the guy who made the first, boldest, and most successful stab at being Rationality Sensei. And I'm not criticizing what he accomplished, I'm reflecting on why I don't feel like I'm as far along as I could be.)

I've wished for a long time that there was a real belt system for this stuff. I wrote all this because I have a similar sense that I'm progressing down a path that I'm forging as I go. Yet there are clearly other people who are further along the path. Try as I might, I can't reach their level by aping their movements.

comment by cousin_it · 2017-12-01T17:13:13.222Z · score: 17 (5 votes) · LW · GW

If you want to repeat Eliezer's path to success, I'm pretty sure the limiting factor is creative writing, not rationality. But maybe creative writing isn't even fun for you. It's better to find your own path.

comment by Raemon · 2017-11-30T21:14:54.577Z · score: 16 (5 votes) · LW · GW

I think part of the thing is that, although there are some clearly defined skills and levels and whatnot, we're still very much in the process of learning a) what skills are most important, b) how to teach them (in some cases, how to teach them at all)

There are certain things Eliezer is good at, and some very different things that, say, Nate Soares is good at that Eliezer self-identifies as not-very-good-at, and different-still things that Luke Muehlhauser is good at, and a host of other people, each good at different things that seem clearly part of the rationality canon but which go off in different directions. The concept of "Belt" feels too linear to encapsulate them.

My sense is that CFAR represents the body-of-knowledge of "what we have some sense of how to teach" to arbitrary people, which (I don't think?) yet includes much of the higher-level stuff.

comment by gjm · 2017-12-04T16:23:43.350Z · score: 17 (6 votes) · LW · GW

Clearly Raemon means "short fragments in the style of noted science fiction author Vernor Vinge".

comment by Raemon · 2017-12-04T16:24:18.618Z · score: 5 (1 votes) · LW · GW

lol

comment by lifelonglearner · 2017-12-01T01:26:06.247Z · score: 16 (4 votes) · LW · GW

I got a bunch of positive feelings after reading this.

Also, thanks for continuing to have the underlying message of "actually doing things" (I've noticed that you've done this in other posts / things you write, where you explicitly point out how you do really do take the time to think these sorts of things out.) Deliberateness is an important virtue.

comment by Raemon · 2017-12-01T03:05:54.154Z · score: 5 (1 votes) · LW · GW

Thanks!

comment by Ben Pace (Benito) · 2017-12-02T03:36:01.190Z · score: 14 (4 votes) · LW · GW

The main thing I personally got from this post: I feel my habits as much more real and significant choices to make, and I predict this will change how I think about habits in the long term. I used to think of habits using the framework of TDT ('if I eat the snack when I especially feel like it, then I'm the agent who will eat a snack every time I especially feel like it, and will eat a lot of snacks') but the notion that these are the parts of me that determine my long-term trajectory way more than any individual project I work on, is new and visceral.

(Thing I liked: I internally predicted that the section title "The World is Literally on Fire" would be an exaggeration and it wasn't and I was surprised.)

There were a few other things I liked. In general, I probably won't personally Feature posts of this sort (things that feel like "looking back, reminiscing, drawing conclusions about what is truly important") if someone writes a post like this every other week or so, but if you just do a retrospective once every e.g. five years, I'm more inclined to think your search algorithm has output the real nuggets of value in the data that is your history. So thanks for this meandering post of things you've noticed, I really liked the habits things and will ponder more about the noticing confusion, burnout and gratitude. The stuff here feels really useful, and I appreciate knowing some of the main things in this private data store. For these reasons, I've Featured it.

comment by Raemon · 2017-12-02T05:54:42.855Z · score: 7 (2 votes) · LW · GW

Yeah, the habits thing is the one I'm by far most confident in. I do think the noticing confusion/burnout/gratitude things are important but they feature so strongly mostly by accident of anecdotes that all happened in the space of a week and felt connected.

I'm not 100% sure I grok the difference you intend between the TDT habits and Habits-As-Described here - is the main difference that you not only obtain the benefit of the TDT habit, but also the benefit of a change to your personality that changes additional types of choices you'll make?

comment by gjm · 2017-11-30T13:57:40.990Z · score: 13 (4 votes) · LW · GW

Trivia: If it is still possible for you to edit your "things to do about AI risk" post, I suggest replacing "Cambridge, Oxford or London" with "Oxford, Cambridge or London" or "London, Cambridge or Oxford" or something of the sort -- because there are two Cambridges that one could imagine putting there, and if you choose the ordering better you can remove the ambiguity.

comment by Raemon · 2017-11-30T14:11:47.791Z · score: 14 (3 votes) · LW · GW

Heh, done.

comment by SquirrelInHell · 2017-12-03T18:45:36.289Z · score: 12 (3 votes) · LW · GW

For you it will be a minor piece of evidence, but I hope it pushes you in the right direction on this delicate issue. This is verbatim from my personal notes, some time ago, formulated directly from my own experience and understanding of the world, without hearing anything about "Bayesian wizardry" beyond the general background of Eliezer's sequences:

Bayesian superpower
-> the ability to intuitively get Bayesian updates on evidence very precisely right
(huge returns across the board)
(learnable though elusive)

I am personally convinced that this is a real, specific, learnable, rare, and extremely valuable mental skill, and that I have greatly improved at it over the past 2 years. I have no way of proving to anyone that this is real, and I am super vague on how one would go about teaching it, but I nevertheless want to bless anyone who declares any level of interest in acquiring it.

Upon further reflection, you learning this would be extraordinarily cool because you might get a better shot at teaching it, even while being worse at the object level skill than some people.

comment by Zvi · 2017-12-03T19:08:52.874Z · score: 6 (2 votes) · LW · GW

I believe this skill indeed exists, is learnable and is quite valuable. I think I have some of it, but that it is possible to have far more of it than I currently have, and it's about gradual training and improvement, continuous deliberate practice and such, rather than a boolean or something that clicks at once. Extensive interaction with prediction markets (on various things) certainly helped a lot.

comment by lexande · 2017-12-14T20:41:10.651Z · score: 9 (3 votes) · LW · GW

I notice that I am confused why people are so extremely disinclined to keep gratitude journals (the effect of which does apparently replicate) even when they report doing it makes them feel better. (Of course I don't keep one either, the idea seems aversive and I don't know why.)

comment by Viljami Virolainen (viljami-virolainen) · 2019-09-16T19:39:10.749Z · score: 6 (3 votes) · LW · GW

Thank you for this blog post. It has been immensely useful to me lately.

I have started new habits and this time they actually seem to stick! Key: never skip twice. Very useful framing. It's like a super power really.

comment by Ikaxas · 2017-11-30T03:20:42.737Z · score: 6 (2 votes) · LW · GW

I didn't notice the opportunity to notice my confusion (I had a "huh, weird", but looking back, I think it was

fbzrguvat gb qb jvgu gur fnaqone. Gurer ner npghnyyl n pbhcyr bs guvatf gung, ybbxvat onpx, V'z trahvaryl hafher nobhg ertneqvat gur fnaqone fgbel: 1. Jung vf n "qrfreg fnaqone"? 2. Vs vg jrer na bprna fnaqone (nf V svefg ernq vg), vg jbhyq unir gb or cerggl ovt gb fhccbeg n sberfg gur fvmr bs prageny cnex (evtug? Arire orra, fb qba'g unir n tbbq frafr bs ubj ovt prageny cnex vf). In any case, thanks for writing this, it's given me a lot to think about and hopefully implement.

Edit: typo

comment by CronoDAS · 2017-11-30T00:44:43.862Z · score: 6 (2 votes) · LW · GW

::applauds::