Cultivating Valiance

post by Shoshannah Tekofsky (DarkSym) · 2022-08-13T18:47:08.628Z · LW · GW · 4 comments

Contents

  H1: Strategic Fatalism 
  H2: Value Embodiment
  H3: Heroic Narrative
  H4: Hopeful Pessimism
  Interlude
  H5: Protection Instinct

When I discovered existential risk from AGI, I was surprised to learn most people find the topic distressing or depressing. I realized that everyone dying with high probability should feel bad ... but it just didn't. Instead I felt the desire to punch the problem in the face (don't ask me how that would work). I felt energized and empowered, as my brain seemed to register this discovery as a "call to arms".

Since then I've been studying AI Safety and talking to a lot of people about how they experience X-risk. Initially I wanted to find a way to transfer my emotional response to X-risk to those who experience anxiety or depression from it. I quickly realized I had no clue where my feelings were coming from and no obvious pathway to transfer them to others. So... I started asking people who were undeterred by X-risk how they experience the threat, whether they ever felt anxiety or depression about it, and if not, why not?

The responses roughly sounded like:

  1. What else would I do? Feeling bad or doing nothing doesn't increase our chances of survival.
  2. I believe working on the problem is the right thing to do, no matter the odds.
  3. Oh my god, this is like being an anime protagonist! This is so exciting!
  4. All tough problems seem insurmountable till we have figured them out. We'll figure this one out too.

The following are my hypotheses on what these four mental strategies might boil down to, plus a short reflection on how reproducible each might be.

 

H1: Strategic Fatalism 

One group of people identified with some flavor of grim determination. They calculated the odds of survival and considered them too low, and thus there was only one logical response: increase the odds. I think this stance might be in line with Jeffrey Ladish's Playing to your Outs [LW · GW] strategy. It would be interesting to explore whether this framing is something people could actively adopt to lower negative emotionality around X-risk and increase action-oriented motivation, but my intuitions fall short on how one would achieve this.

H2: Value Embodiment

The second cluster of people were a mixed bag of high- and low-emotion people, but their stated internal experience of managing their feelings around X-risk revolved around value embodiment: they believed such-and-such values were best and thus were motivated to live accordingly. Though this is essentially non-consequentialist, you can of course hold consequentialist values you think are best to embody. Intuitively, I'd put Eliezer Yudkowsky's Dying with Dignity [LW · GW] post in this category, because dignity is a value. My understanding is that he tries to point other people to the Value Embodiment strategy while himself using the Strategic Fatalism strategy. Either way, this strategy might be somewhat tractable if your current values already point toward "fight X-risk". My understanding is that the second part of the equation is to detach your feelings from the probabilities of X-risk and then reattach them to "at least I did the right thing" (essentially a variant of Nate Soares' "Detach the Grim-O-Meter"). Intuitively this sounds like something people do often enough, but I'm not sure it's something people can tap into at will if it doesn't already come naturally.

H3: Heroic Narrative

Some people feel excited about the prospect of an epic challenge in their lifetime. The majority of us gobble up stories of heroes saving humanity against all odds, and some part of those people ... wish they got the chance to try? Working on X-risk can be exciting. It can be a call to action. It can mean that what you do matters. And even if what you do doesn't end up being the thing that matters... then at least you've lived up to your heroic potential (which is arguably more a matter of Value Embodiment). This framing seems quite tractable as a mental health strategy. My intuition is that it only works for people with high self-confidence, but self-confidence is a tractable trait as well! And maybe leaning into the fun, the excitement ... the sheer cosmic absurdity of humanity crawling through the eye of the existential needle can be a motivating force in itself.

H4: Hopeful Pessimism

This is probably the most promising emotional strategy around X-risk. It leans heavily into the rationalist lens of truth-seeking. Staring X-risk straight in the face is a form of pessimism: you don't distort your chances of survival, which protects your ability to do good science on the problem. Yet the key element is to inject hope into your framework. You might be of the opinion that we currently have a less than 1% chance of survival, but at the same time, you can focus your attention on the exploration of known unknowns and unknown unknowns. In other words, you focus on exploring the paths we can see but haven't gone down yet, and on discovering new paths we never knew existed. And from this focus on known unknowns and unknown unknowns we can draw hope. We can stare down the barrel of history and confirm that all major breakthroughs were preceded by periods of halted and directionless research. And yet we learned to fly and split atoms. Similarly, you can tap into the truth-seeking lens of pessimism that allows you to see the world the way it is, while also generating the epistemic humility to recognize that neither you, nor I, nor any researcher alive has explored or discovered all possible paths. The answer is out there - "We can make it if we try", she hums involuntarily.

 

Interlude

PS: Weirdly enough, throughout my quizzing of 30 random people working on X-risk, I didn't run into a single person who reported a similar experience to mine. I can only presume it's a low-frequency experience, or that the recruitment of AI Safety researchers has a sampling bias away from people whose brains work like mine in this regard. However, someone did suggest a plausible theory on what my brain is doing. So as a bonus hypothesis:

 

H5: Protection Instinct

Find something in the world you value more than yourself. In my case, it's my kids. And I'll punch anyone who tries to hurt them. Metaphorically, because I'm really not all that strong, so that's not a smart strategy in practice. Similarly, you might feel that way about your family, your friends, humanity, life on earth, or making sure the cosmos doesn't turn into paperclips. This strategy is only minimally tractable though, because you either have something to protect or you don't. If on reflection nothing comes up, then embodying your ideal anime protagonist who strategically stares into the abyss of human extinction with hopeful pessimism... is like ... an actual strategy you can try out.

Let me know how it goes.

4 comments

Comments sorted by top scores.

comment by plex (ete) · 2022-08-13T20:38:39.368Z · LW(p) · GW(p)

If only I had an enemy bigger than my apathy I could have won

I'm glad to have found an enemy considerably bigger than my apathy. I don't expect to win in most branches, but in some we survive and I hope to make my future selves proud of my efforts here in the present.

And damn if it isn't an interesting challenge, surrounded by so many great people working for the good of everyone.

Replies from: oge
comment by oge · 2022-08-13T21:00:16.214Z · LW(p) · GW(p)

Thanks for the quote. The song has haunting lyrics.

comment by Randomized, Controlled (BossSleepy) · 2022-08-14T02:30:59.179Z · LW(p) · GW(p)

Cf Something to Protect [LW · GW]?

Replies from: DarkSym
comment by Shoshannah Tekofsky (DarkSym) · 2022-08-14T03:33:03.880Z · LW(p) · GW(p)

Interesting!

I dug through the comments too and someone referred to this article by Holden Karnofsky, but I don't actually agree with that for adults (kids, sure).