How Josiah became an AI safety researcher

post by Neil Crawford (neil-crawford) · 2022-09-06T17:17:44.791Z · LW · GW · 0 comments


Cross-posted from the EA Forum [EA · GW].

Audio recording of my interview with Josiah Lopez-Wild:

https://www.youtube.com/watch?v=KODX3pp28QM

0:00 - 6:53 — Josiah’s background

6:53 - 20:14 — Josiah’s transition into AI safety 

20:14 - 24:07 — Josiah’s current research 

LPS = Logic and Philosophy of Science (at UC Irvine)


What persuaded Josiah to work on AI safety research?

Essentially, it was conversations with fifth-year LPS student Daniel Herrmann that brought Josiah to AI safety research. Josiah says he wouldn’t have taken AI safety seriously as a research area if it weren’t for those conversations. Daniel spends a lot of his time thinking and talking about AI alignment research, not primarily to persuade others to work on it, but because he is genuinely interested in it. I think that if Daniel weren’t himself interested in AI alignment, it would have been much harder to convince Josiah to go into it.

Interestingly, ethical arguments weren’t what persuaded Josiah to work on AI safety research. This is likely in part because Daniel didn’t offer such arguments. Beyond ethics, even appeals to Josiah’s preference for safeguarding the future of humanity weren’t among the causes of his initial interest in AI alignment. That said, these considerations may have come second, reinforcing Josiah’s ambition to do alignment research. Moreover, Josiah thinks that when persuading others to work on AI safety research, the importance of the research (possibly in relation to safeguarding humanity) should be stressed, along with showing that AI alignment research is (a) interesting, (b) solvable, and (c) not too far-fetched (so we shouldn’t talk about implausible Star Trek scenarios).
