My advice on finding your own path

post by A Ray (alex-ray) · 2022-08-06T04:57:47.009Z · LW · GW · 3 comments

Contents

  tl;dr - 4 steps
  Intro
  Method: 4 Steps
    1. Give yourself permission to take the future seriously
    2. Be willing to imagine successful or ambitious outcomes
    3. Give yourself the gift of focused time and space
  4. Write, draw, sketch, diagram, or whatever you need to empty your mind
  Conclusion

tl;dr - 4 steps

  1. Give yourself permission to take the future seriously
  2. Be willing to imagine successful or ambitious outcomes
  3. Give yourself the gift of focused time and space
  4. Write, draw, sketch, diagram, or whatever you need to empty your mind

Intro

(Feel free to skip to the method.)

When I was transitioning into technical AI alignment work, I benefited immensely from the mentorship and guidance of experts in the field. They helped me to understand the relevant concepts, to develop my skills, and to find my place in the research community.

In my early years of giving career advice to other people, I tried to replicate this experience for them. I listened to their questions and concerns, and I tried to provide what I thought were the best possible answers. They asked questions like "What do you think are the most important problems to work on?" or "What should I do to get the highest impact job in AI alignment?", and while I tried to listen and give personalized advice, I think my answers always failed to capture what was the best fit for the individual.

Over the years, however, my advice has evolved. I have answered direct questions less and less directly, and instead asked more leading questions in response: "What do _you_ think the most important problems are?" "What kinds of work do _you_ think you'd be highest impact at?"

As I've done this, I've noticed four specific hang-ups (there are probably others, but I'm simplifying here to keep my points clear). My advice has mostly turned into just four things that I say to everyone, and I think they are more useful than specific answers to specific questions.

Method: 4 Steps

1. Give yourself permission to take the future seriously

I think for many people it can be difficult to imagine details about the future. There is a lot of uncertainty, and this can lead people to feel that certain kinds of thinking, planning, or prediction are inappropriate. Another way this shows up is deferring too much to "Serious Senior People" instead of trusting one's own internal senses and intuitions about the future. So I think it's important for people to give themselves permission to take the future seriously. To think about what could happen, what they would like to happen, and what they can do to make it happen. To trust their own judgments and not feel like they have to wait for someone else to tell them what to do.

2. Be willing to imagine successful or ambitious outcomes

Another common hang-up I've seen is a kind of self-limiting mindset. Some people seem to feel like they are not allowed to think about themselves being successful or achieving ambitious goals. They might worry that they will be seen as arrogant or unrealistic, or that they will be disappointed if things don't work out. One way this shows up is that they take a heavily outside-view stance on themselves, limiting themselves to plans that would seem appropriate from some kind of judgemental outside perspective. See also: Hero Licensing [LW · GW]. So I think it's important for people to be willing to imagine successful or ambitious outcomes for themselves. To think about what they could achieve if things went really well, and to not feel like they have to stay in some kind of modest box.

3. Give yourself the gift of focused time and space

This is about having time and a good thinking space to focus exclusively on figuring out your long-term plans and predictions. I think people new to the field of AI alignment make the mistake of spending too much time trying to read everything important that has been written, and too little time on their own original thinking. Part of this is sometimes because they don't value their own original thinking, or they think they'd just arrive at the same points as other people have. This is compounded by the feelings of "urgency" or "rushing" in the field of AI safety, where things sometimes seem so fast-moving that you can't take a couple of days away to think. My recommendation here is to do it anyway.

4. Write, draw, sketch, diagram, or whatever you need to empty your mind

Your head is far too small to come up with a plan. This is about getting your thoughts out of your head and onto some kind of external medium, whichever works best for you: mind maps, notepads, chalkboards, long lists. These don't have to be things you save; they're just used while you're processing. I think this tends to follow a pretty common arc.

Conclusion

It's up to you what you want to have at the end of it. Maybe it's a 5-year career plan. Maybe it's an AI Alignment Research Agenda. Maybe it's a startup idea. Maybe it's a strategic vision of possible futures. Maybe it's just a better understanding of yourself and your motivations.

I hope this is useful to you in doing your own original thinking and finding your own way.

3 comments


comment by green_leaf · 2022-08-07T21:20:09.618Z · LW(p) · GW(p)

Also, I found the post really hitting home with good ideas.

comment by green_leaf · 2022-08-06T05:38:41.448Z · LW(p) · GW(p)

Hero Licensing[link]

This shows just as plaintext.

Replies from: alex-ray
comment by A Ray (alex-ray) · 2022-08-06T05:42:40.100Z · LW(p) · GW(p)

Thanks, fixed the link in the article.  Should have pointed here: https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing