AMA on EA Forum: Ajeya Cotra, researcher at Open Phil
post by Ajeya Cotra (ajeya-cotra) · 2021-01-29T23:05:41.527Z · LW · GW
This is a link post for https://forum.effectivealtruism.org/posts/QAqghTmp7FSMcJ4ch/ama-ajeya-cotra-researcher-at-open-phil
Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me!
I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything.
About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA.
I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything!