If you are too stressed, walk away from the front lines

post by Neil (neil-warren) · 2023-06-12T14:26:44.030Z · LW · GW · 14 comments

tl;dr: If your reason for cramming AI knowledge into your brain is stress, then don't do it. You can still be useful, but walk away from the front lines where people are directly responsible. 

Disclaimer: 1) This is an obvious problem that has already been noticed and addressed by many LessWrong users. 2) This is not an original solution but rather a specific framing of the problem and some food for thought. 3) I could be gravely mistaken, and your best bet might be putting your all into research after all; if that is the case, you might just want to emerge from lurking and actually do something. 4) The rather romantic and optimistic tone employed here is deliberate and is not meant as an accurate description of the reality we are in right now.

 

The heroes who will save the world will be alignment researchers: when, dot by dot, the stars grow dark and a dark dawn rises, we will all have to buy them a beer. If you are not among this small band of heroes who will guarantee us the best possible future, you may feel an urgent need to promptly join them: I beg of you only to consider whether this is a good idea.

If you think that you should become an alignment researcher in a matter of months,[1] I will not try to stop you. But it's probably worth a few days' worth of cranial compute to establish whether you are exploiting yourself in the best way you can.

I'll set the parameters of the problem. "Becoming an alignment researcher" is a spectrum: the more you learn about alignment, the more capable you are at navigating the front lines of the alignment project. Certainly, understanding the tenets of alignment is a laudable goal; but at what point will you be faced with diminishing returns? If you are not planning on single-handedly solving the alignment problem, are there not better uses of your time?[2] 

There are many instrumental goals that serve the terminal goal of completing the alignment project, and they might be more worthy of your time:

  • Address the weaker existential risks,[3] insofar as they are relevant to the alignment problem.
  • Solve the "you are human" problem.[4] This one is slightly different from the last because it is an injunction to take care of your mental health. You are more useful to us when you are not stressed. I won't deny that you are personally responsible for the entire destiny of the universe, because I won't lie to you: but we have no use for a broken tool.
  • Be a good person:[5] take some of the distractions and burdens off the plates of alignment researchers.
  • If you know anybody in politics anywhere, it might be a good idea to try and convince them to pay attention to this AGI thing.

If you feel stressed right now and have thus decided to spend your time frantically[6] reading LessWrong posts about AGI projections and alignment solutions while breathing heavily... just don't. Don't become an alignment researcher today because you are stressed. Don't sacrifice doing what you love, because there's a good chance you can help us by doing what you love. Solve the "you are human" problem first and then perhaps solve one of the others, so that you are not on the front lines, where responsibility is direct. You are just as responsible for the universe as the rest of us: but you are responsible for results, not effort, and that could mean walking away from the front lines.

Ah, and if you're too stressed: breathe three times using the whole capacity of your lungs, smile at a mirror, then eat some chocolate. I bid you an excellent day. Spend some time looking at flowers or something. Then you can return to heroics. 

 

  1. ^

    It's not enough to become an alignment researcher. You must become a useful alignment researcher, which is of course an even harder target to attain. 

  2. ^

    I'm conflicted because there might be too many people writing and reading on the blog instead of spending hours in solitude attempting to actually find a solution to alignment. I deeply respect people who do the latter, and we need more of them. More on that later.

  3. ^

    "Weaker risks" does not mean they should not be addressed: if climate change starts hampering alignment research, like by slowing down development in poor countries, it should be proportionally paid attention to. How important minor risks are kind of depends on what your AGI forecast model is (there are a dozen on LessWrong [? · GW]). But the point is, almost all the existential risk is concentrated in AGI and so the importance of all other problems should be correlated with their relevance to the alignment problem. 

  4. ^

    The cool aspect of this problem is that you can't solve it by being stressed about it. The other problems are external in nature, but the whole point of this one is that you must be at peace for it to be solved, meaning that you can't rush your way through it, half-ass it, or have a breakdown while doing it. Take a walk outside or something.

  5. ^

    False hope is a dangerous thing, and I do not mean to supply it here. If we all recycle our pizza boxes, the world won't be saved. But taking away some of the distractions and burdens that alignment researchers may be plagued with seems like an excellent use of time. And being a good person is just generally a good thing: that is, don't drop everything, including your morals, to give your all to the alignment project. There's a lot to say about arrogance of that kind: think of Raskolnikov from Crime and Punishment. AGI is not an excuse to forget your basic duties.

  6. ^

    Rushing around LessWrong, with its abundance of footnotes and references, in the hope of learning something and clarifying the picture will accomplish nothing but fragment your mind and increase your stress levels. Give your knowledge time to digest.

14 comments


comment by Chris_Leong · 2023-06-13T12:39:42.614Z · LW(p) · GW(p)

Out of curiosity, what role do you see yourself playing?

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-06-13T13:09:14.743Z · LW(p) · GW(p)

Interesting question. The implied question might be "how much of this post was written for you?" and the answer would be "probably a lot". I don't think I have the mind or time or stamina to work on the front lines, so for now my most concrete plan is writing a few more LessWrong posts based on various helpful-seeming ideas. This post outlined a few of the options that I, and others in the same position as me, have. Do you have any more ideas?

Replies from: Chris_Leong
comment by Chris_Leong · 2023-06-13T13:57:13.381Z · LW(p) · GW(p)

This is a good post, so I’d definitely encourage you to write up a few more posts.

I know very little about you, so it’d be hard for me to make good suggestions, but here are two possibilities for your consideration:

  • Help other people figure out how they can contribute, particularly those looking to contribute in a non-technical way. If this is something you’d be interested in doing, I’d probably invest some more time in understanding the strategic landscape first (before someone starts advising, it’s important to have a robust model of what potential downside risks exist)
  • If you run out of post ideas, find others with things they’d like to write up if they had time and help them write it up
Replies from: neil-warren
comment by Neil (neil-warren) · 2023-06-22T20:40:27.301Z · LW(p) · GW(p)

Hello! I thought about what you suggested and have been doing my best to understand the technicalities of alignment and the general coordination landscape, but that's still ongoing. I'll write more posts myself, but did you have anyone in mind for that last part, finding others who'd like posts written up? 

Replies from: Chris_Leong
comment by Chris_Leong · 2023-06-23T03:58:09.954Z · LW(p) · GW(p)

I didn’t particularly have anyone in mind unfortunately.

comment by Neil (neil-warren) · 2023-06-12T14:29:38.006Z · LW(p) · GW(p)

All the obvious alternate routes to contributing to the alignment problem seem to have been mentioned here. Are there any more I should write down? I'm aware this is a flawed post and would like to make it more complete as time goes on.

comment by mesaoptimizer · 2023-06-13T08:37:37.693Z · LW(p) · GW(p)

I'm really glad you wrote this post, because Tsvi's post is different and touches on very different concepts! That post is mainly about fun and exploration being undervalued as a human being. Your post seems to have one goal: ensure that up-and-coming alignment researchers do not burn themselves out or hyperfocus on only one strategy for contributing to reducing AI extinction risk.

Note, this passage seems to be a bit... off to me.

This one is slightly different from the last because it is an injunction to take care of your mental health. You are more useful to us when you are not stressed. I won’t deny that you are personally responsible for the entire destiny of the universe, because I won’t lie to you: but we have no use for a broken tool.

People aren't broken tools. People have limited agency, and claiming they are "personally responsible for the entire destiny of the universe" is misleading. One must have an accurate sense of the agency and influence they have when it comes to reducing extinction risk if they want to be useful.

The notion that alignment researchers and people supporting them are "heroes" is a beautiful and intoxicating fantasy. One must be careful that it doesn't lead to corruption in our epistemics, just because we want to maintain our belief in this narrative.

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-06-13T10:17:20.342Z · LW(p) · GW(p)

The passage on "you are responsible for the entire destiny of the universe" was mostly addressing the way it seems many EAs feel about the nature of responsibility. We do have limited agency in the world, but people around here tend to feel they are personally responsible for literally saving the world alone. The idea was not to deny that outright or to argue against heroic responsibility, but rather to say that while the responsibility won't go away, there's no point in becoming consumed by it. You are a less effective tool if you are too heavily burdened by responsibility to function properly. I wrote it that way because I'm hoping the harsh and utilitarian tone will reach the target audience better than something more clichéd would. There's enough romanticization here as it is.

I definitely romanticized the part about alignment researchers being heroes. I'll add a disclaimer to mention that the choice of words was meant to paint the specific picture that up-and-coming alignment researchers might have in mind when they arrive here.

As for which narrative to follow, this one might be as good as any. As the mental health post I referenced here mentioned, the "dying with dignity" approach Eliezer is following might not sit well with a number of people, even when it is in line with his own predictions. I'm not sure to what degree what I described is a fantasy. In a universe where alignment is solved, would this picture be inaccurate?

Thanks for the feedback!

comment by Chris_Leong · 2023-06-13T12:29:12.313Z · LW(p) · GW(p)

Excellent post. One part I disagree with though:

“If you know anybody in politics anywhere, it might be a good idea to try and convince them to pay attention to this AGI thing” - It wouldn’t surprise me if this were net-negative and if the default outcome of informing actors about AGI were for them to attempt to accelerate it.

Another part I’d disagree with is lionising technical researchers over everyone else.

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-06-13T13:00:16.948Z · LW(p) · GW(p)

The point of the post was not to lionize them over everyone else. The target audience I had in mind (which may not even exist at this point) was people who wanted to become alignment researchers because that's where the front lines are. My point is that that may not be the best idea in some cases. At the end of the day, if we solve the alignment problem it will be directly thanks to those researchers; that's what I mean.

As for the politics thing, that's interesting; I hadn't thought of it backfiring that horribly. I mean, the goal would be to explain to them why alignment is necessary, which shouldn't be an impossible task. There's a lot of legal and economic power coming from the government, so just ignoring that actor seems like a mistake.

Thanks for the feedback!

comment by p.b. · 2023-06-14T09:58:48.786Z · LW(p) · GW(p)

The heroes ... heroes ... heroics. 

 

If you notice that alignment is a problem and you think you can do something about it and you start doing something about it - you are about as heroic as somebody who starts swimming after falling into the water. 

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-06-14T13:41:30.607Z · LW(p) · GW(p)

Orwell's original title for 1984 was The Last Man in Europe, by which he meant that Winston, the hero of the novel, was the last sane man left on the entire continent. I would argue that because literally everyone else around him was insane and essentially drowning in the water, he was a hero for swimming. The number of people working on alignment in the world is far below 1% of the general population. I know it's a romanticized qualification, which is kind of the point here, but this falls under my definition of "hero".

I mean, what even is your definition of a hero?

Replies from: p.b.
comment by p.b. · 2023-06-14T15:51:01.829Z · LW(p) · GW(p)

Sacrificing or taking a significant risk of sacrifice to do what is right. 

Someone who wins a sporting competition is not a hero, even if it was very difficult and painful to do. Somebody who is correct where most people are wrong is not a hero.

I know we all want our heroes to be competent and get it done, but to me that's not what's heroic. 

When it comes to alignment researchers: 

If you are at the beginning of your career and you decide to become an alignment researcher, you are not sacrificing much, if anything. AI is booming, alignment is booming: if you do actually relevant work, you will be at the front line of the most important technology for years to come.

If you are deeply embedded into the EA and rationalist community, you'll be high status where it matters to you. 

That doesn't mean your work is less important, but it does mean you are not being heroic. 

How about this as advice to be less stressed out: Don't think of your life as an epic drama. Just do what you think is necessary and don't fret about it.

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-06-14T16:15:27.336Z · LW(p) · GW(p)

The best example I can recall of what you're describing is the members of La Résistance in France during WW2. These people risked their lives and the lives of their families in order to blow up German rail lines, smuggle out Jews, and kill key Gestapo operatives. They did not consider themselves heroes, because for them this was simply the logical course of action, the only way to act for a person with a shred of common sense. Most of them are dead now, but throughout their lives they repeated that if France considered them heroes (which it did), that would defeat the point: that doing what they did should not be extraordinary, but common sense.

You're right about the epic drama thing. Poetic flair can be useful in certain situations, I imagine, although there is a fine line between using it as motivation and spoiling your rationality. (Poetry, as in beauty and fun, is a terminal goal of humanity, so I would also advise against ignoring it entirely.)