How to choose a PhD with AI Safety in mind

post by Ariel Kwiatkowski (ariel-kwiatkowski) · 2020-05-15T22:19:13.800Z · LW · GW · No comments

This is a question post.


I'm about to begin doctoral studies in multi-agent RL applied to crowd simulation, but somewhere on the horizon I see myself working on AI Safety-related topics. (I find the Value Alignment problem to be of particular interest.)

Now I'm asking myself: if my PhD is in a roughly related area of AI, but not closely tied to AI Safety, does that make anything more difficult further down the line? Or is it still perfectly fine?

Answers

answer by evhub · 2020-05-15T23:46:00.918Z · LW(p) · GW(p)

Hi Ariel, I'm not sure I'm the best person to weigh in on this, since I opted to go straight to OpenAI after completing my undergrad rather than pursue a PhD (and am now at MIRI), but I'm happy to schedule a time to talk if you'd be interested. I've also written a couple of different posts on possible concrete ML experiments relevant to AI safety that I think might be exciting for somebody in your position to work on, if you'd like to chat about any of those.
