Talk: Key Issues In Near-Term AI Safety Research
post by Aryeh Englander (alenglander) · 2020-07-10T18:36:12.462Z · LW · GW · 1 comment
I gave a talk for the Foresight Institute yesterday, followed by a talk from Dan Elton (NIH) on explainable AI and a panel discussion that included Robert Kirk and Richard Mallah.
While AI Safety researchers are usually well aware of related work in computer science and machine learning, I keep finding that many are unaware of closely related work taking place in an entirely different research community. That other community is variously referred to as Assured Autonomy, Testing, Evaluation, Verification, and Validation (TEV&V), or Safety Engineering (as applied to AI-enabled systems, of course). As I discuss in the talk, it is a much larger and more established research community than AI Safety, but until recently there was very little acknowledgement by the Assured Autonomy community of closely related work in the AI Safety community, and vice versa.
Recently, organizations such as CSER and FLI have been doing great work connecting these two communities through jointly sponsored workshops at major AI conferences - some of you may have attended those. Still, I think it would be useful if more people in each community were aware of the other's work. This talk is my attempt at a short introduction to that other body of work.
Video (my presentation is from 2:28 to 19:00)
Short version of slide deck (the one I used in the presentation)
1 comment
comment by Charlie Steiner · 2020-07-10T23:12:25.892Z · LW(p) · GW(p)
I enjoyed the whole video :) My only regret is that nobody brought up Bayesianism, or even regularization, in the context of double descent.