Book review: Architects of Intelligence by Martin Ford (2018)
post by Ofer (ofer) · 2020-08-11
Cross-posted from the EA Forum.
The 2018 book Architects of Intelligence by Martin Ford is a collection of 23 interviews about progress in AI and its future impacts, including the prospect of developing AGI and existential risks. The interviewees include some of the most prominent and influential AI researchers, and they vary a lot in how seriously they take existential risk from AI. They include Andrew Ng and Yann LeCun, who, at least historically, have been among the most vocal skeptics of existential risk from AI. They also include Nick Bostrom and Stuart Russell, who both played a major role in establishing the field of AI safety.
I highly recommend this book to anyone who's interested in AI safety (both AI alignment and AI policy) and doesn't yet have a good model of how key figures in AI talk about AGI and existential risk from AI, what is socially acceptable to say in which circles, how people in different circles frame things, etc. AGI and existential risk from AI are sensitive topics, especially for people whose career involves making progress in AI, and if you want to work in related areas it's very useful to have a good model of all of the above. Just be aware that the book is from 2018 and at least somewhat outdated; there may have been important shifts in some of these aspects since its publication.
Applying the “80/20 rule” when reading this book
Every interview is a separate chapter titled with the interviewee's name, including in the Audible version of the book (so it's easy to jump to specific interviews), and each interview is self-contained. If you're interested in the book mainly from an AI safety perspective, I think you would get ~90% of the value from reading just the following 8 interviews (listed in the order they appear in the book):
- Yoshua Bengio, who won a Turing Award (with Geoffrey Hinton and Yann LeCun) for his work in deep learning.
- Note: In his interview he seems to argue that we shouldn't currently worry about existential risk from AI, but I think his thinking on this topic has changed since then. In 2019, he tried to recruit a postdoc to work on AI alignment, and he endorsed the 2019 book Human Compatible by Stuart Russell, writing: "This beautifully written book addresses a fundamental challenge for humanity: increasingly intelligent machines that do what we ask but not what we really intend. Essential reading if you care about our future."
- Stuart Russell, a prominent AI researcher who is one of the most influential figures in AI safety.
- Geoffrey Hinton, who won a Turing Award (with Yoshua Bengio and Yann LeCun) for his work in deep learning.
- Nick Bostrom, one of the most influential figures in AI safety and the founding director of the Future of Humanity Institute (FHI) at Oxford.
- Yann LeCun, the Chief AI Scientist at Facebook, who won a Turing Award (with Yoshua Bengio and Geoffrey Hinton) for his work in deep learning.
- Demis Hassabis, the CEO and co-founder of DeepMind (a top AI lab owned by Google's parent firm Alphabet).
- Andrew Ng, one of the most high-profile figures in the machine learning community, co-founder of the Google Brain project, and the former chief scientist of Baidu.
- Jeffrey Dean, the lead of Google AI.
If you're not familiar at all with machine learning, maybe also read the subsections "A Brief Introduction to the Vocabulary of AI" and "How AI Systems Learn" in the introduction.