Which battles should a young person pick?

post by EmanuelJankvist (emanueljankvist) · 2023-12-29T20:28:25.579Z · LW · GW · 2 comments

This is a question post.


This post is based on the common premise of having to ‘pick your battles’. I’m at an impasse in my life and believe this community could offer insights for reflection. I’m particularly interested in perspectives on my own situation, though I hope to provide value for others with similar problems. In general, the question can crudely be phrased:

‘What’s a young person’s middle ground for contributing to AI safety?’

The answers should therefore preferably not ask for my life’s worth in devotion.


Which battles should a young person choose to fight in the face of AI risks? The rapid changes in the world of AI, and the seeming lack of corresponding policy, deeply concern me. I’m pursuing a Bachelor of Science in Insurance Mathematics (with ‘guaranteed’ entry to a Master’s programme in Statistics or Actuarial Science). While I’m satisfied with my field of study, I feel it doesn’t reflect my values and my need to contribute.

In Lex Fridman’s interview with Eliezer Yudkowsky, Eliezer presents no compelling path forward — and paints the future as almost non-existent.

I understand the discussion, but struggle to reconcile it with my desire to take action.

Here are some of my personal assumptions:

• The probability of doom given the development of AGI, + the probability of solving aging given AGI, nearly equals 1.

• A future where aging is solved provides me (and humanity in general) with vast ‘amounts’ of utility compared to all other alternatives.

• The probability of solving aging with AGI is large enough for that scenario to play a significant role in a ‘mean’ (expected) utility calculation of my future.

I’m aware these assumptions are somewhat incomplete and ill-defined, especially since utility isn’t typically modeled as a cardinal concept. However, they are meant only as context for understanding my value judgements; a rough formalization follows below.
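In made-up notation, treating doom and solved aging as the two dominant and mutually exclusive outcomes of AGI, the first assumption reads:

$$P(\text{doom} \mid \text{AGI}) + P(\text{aging solved} \mid \text{AGI}) \approx 1$$

The ‘mean’ utility calculation I keep referring to is then, roughly:

$$E[U \mid \text{AGI}] \approx P(\text{doom} \mid \text{AGI}) \cdot U_{\text{doom}} + P(\text{aging solved} \mid \text{AGI}) \cdot U_{\text{aging}}$$

where, per the second assumption, $U_{\text{aging}} \gg U_{\text{doom}}$.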


 

I live in Scandinavia and see no major political movements addressing these issues (except maybe EA dk?). I’m eager to make an impact but feel unsure about how to do so effectively without dedicating my entire life to AI risk.

Although the interview was some time ago, I’ve only recently delved into these thoughts. I’d appreciate any context or thoughts you might provide.

Disclaimer: I’m not in a state of distress. I’m simply seeking a middle ground for making a difference in these areas. Also, the tags might be a bit off, so I would appreciate some help with those.

Answers

answer by plex · 2023-12-30T00:36:26.254Z · LW(p) · GW(p)

AI Safety Info's answer to "I want to help out AI Safety without making major life changes. What should I do?" is currently:
 

It's great that you want to help! Here are some ways you can learn more about AI safety and start contributing:

Learn More:

Learning more about AI alignment will provide you with good foundations for helping. You could start by absorbing content and thinking about challenges or possible solutions.

Consider these options:

Join the Community:

Joining the community is a great way to find friends who are interested and will help you stay motivated.

Donate, Volunteer, and Reach Out:

Donating to organizations or individuals working on AI safety can be a great way to provide support.

 

If you don’t know where to start, consider signing up for a navigation call with AI Safety Quest to learn what resources are out there and to find social support.

If you’re overwhelmed, you could look at our other article that offers more bite-sized suggestions.

Not all EA groups focus on AI safety; contact your local group to find out if it’s a good match.


 

answer by Dave Orr · 2023-12-30T05:10:30.138Z · LW(p) · GW(p)

You should ignore the EY-style "no future" takes when thinking about your future. If the world is about to end, nothing you do will matter much. But if the world isn't about to end, what you do might matter quite a bit -- so you should focus on the latter.

One quick question to ask yourself is: are you more likely to have an impact on technology, or on policy? Either one is useful. (If neither seems great, then consider earning to give, or just find a way to add value in society in other ways.)

Once you figure that out, the next step is almost certainly building relevant skills, knowledge, and networks. Connect with senior folks in relevant roles, ask them questions, and otherwise try to figure out which skills are useful; try to get some experience by working or volunteering with great people or organizations.

Do that for a while and I bet some gaps and opportunities will become pretty clear. 😀

answer by Chris_Leong · 2023-12-31T09:13:05.707Z · LW(p) · GW(p)

I strongly recommend the AI Safety Fundamentals course (either the technical or the policy track). Having a better understanding of the problem will help you contribute whatever time or resources you choose to dedicate to it.

2 comments

Comments sorted by top scores.

comment by GeneSmith · 2023-12-30T07:12:15.281Z · LW(p) · GW(p)

In Lex Fridman’s interview with Eliezer Yudkowsky, Eliezer presents no compelling path forward — and paints the future as almost non-existent.

It's worth pointing out that Eliezer's views on the relative hopelessness of the situation do not reflect those of the rest of the field. Nearly everyone else outside of MIRI is more optimistic than he is (though that is of course no guarantee he is wrong).

As an interested observer who has followed the field from a distance for about 6 years at this point, I don't think there has ever been a more interesting time with more things going on than now. When I talk to some of my friends that work in the field, many of their agendas sound kind of obvious to me, which is IMO an indication that there's a lot of low-hanging fruit in the field. I don't think you have to be a supergenius to make progress (unless perhaps you're working on agent foundations).

• The probability of doom given the development of AGI, + the probability of solving aging given AGI, nearly equals 1.

I'm not sure I understand what this means. Do you mean "and" instead of "+"? Otherwise this statement is a little vague.

If you consider solving aging a high priority and are concerned that delays in AI might delay such a solution, here are a few things to consider:

  • Probably over a hundred billion people have died building the civilization we live in today. It would be pretty disrespectful to their legacy if we threw all that away at the last minute just because we couldn't wait 20 more years to build a machine god we could actually control. Not to mention all the people who will live in the future if we get this thing right. In the grand scheme of the cosmos, one or two generations is nothing.
  • If you care deeply about this, you might consider working on cryonics both to make it cheaper for everyone and to increase the odds of personality and memory recovery following the revival process.

I live in Scandinavia and see no major political movements addressing these issues (except maybe EA dk?). I’m eager to make an impact but feel unsure about how to do so effectively without dedicating my entire life to AI risk.

One potential answer here is "earn to give". If you have a chance to enter a lucrative career you can use your earnings from that career to help fund work done by others.

If that's not an option or doesn't sound like something you'd enjoy, perhaps you could move? There are programs like SERI MATS you could attempt to enroll in if you're a newcomer to the field of AI safety but have a relevant background in math or computer science (or are willing to teach yourself before the program begins). 

Replies from: emanueljankvist
comment by EmanuelJankvist (emanueljankvist) · 2023-12-31T10:56:02.557Z · LW(p) · GW(p)

Thanks for the advice @GeneSmith [LW · GW]!

Regarding the 'probability assertions' I made, the following (probably) sums it up best:

I understand the ethical qualms. The point I was trying to make was more along the lines of 'if I can affect the system in a positive direction, could this maximise my/humanity's mean-utility function'. Acknowledging that this is a weird way to put it (as I assume a utility function for myself/humanity), I'd hoped it would provide insight into my thought process.

Note: in the post I didn't specify the overlap term (the case where both doom and solved aging occur). I'd hoped it was implicit, as I don't care much for the scenario where aging is solved and AI enacts doom right afterwards. I'm aware this is still an incomplete model (and quite non-rigorous).
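Spelled out (still non-rigorously), the assumption I had in mind is something like:

$$P(\text{doom}) + P(\text{aging solved} \land \neg\,\text{doom}) \approx 1$$

so the 'aging solved' term is meant to exclude worlds where doom follows shortly after.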

Again, I appreciate the response and the advice ;)