Is this what FAI outreach success looks like?

post by Charlie Steiner · 2018-03-09T13:12:10.667Z · LW · GW · 3 comments

This is a link post for https://www.youtube.com/watch?v=gb4SshJ5WOY


Highlights (spoilers):

Two hours of more or less reasonable discussion of AI safety.

Specifically, even though people disagree, they tend to keep arguments on the object level, rather than getting bogged down in how things sound or what people are allowed to talk about.

Plus, the Overton window for what gets included in the discussion is such that Max Tegmark sounds like a reasonable person.

Neil deGrasse Tyson mentions that, after listening to a Sam Harris interview with an anonymous AI researcher, he has been convinced that we would not keep control over a superintelligent AI simply by being able to unplug it.

The audience questions are, by and large, reasonable ones for people newly interested in the subject to be asking.


Admittedly, this is a stacked deck: not only is Max a known quantity, but the U of M professor, Mike Wellman, has also been doing research on safety and reliability topics. I'm reminded of Scott Alexander's (?) remark that in the course of AI safety "coming out of the closet," there's been a shift in what's acceptable to talk about without people actually changing their minds. The culture of what can be talked about has started to shift, but the panel still must have been selected, at least in part, for people who had already delved into the topic (which makes me wonder: who selected the panel?).

If this is what success looks like, then success looks like the linked video.

3 comments


comment by Qiaochu_Yuan · 2018-03-09T17:46:13.175Z · LW(p) · GW(p)
Shifting the culture of what's acceptable to talk about (through conferences, funding, talking to the media).

I don't know if this story is written up anywhere, but a few years ago you may have noticed that AI safety went from being awkward to talk about publicly at all to being the thing that Elon Musk, Stuart Russell, Stephen Hawking, etc. were writing op-eds about. As far as I know (based on various private conversations), the main causal factor behind this was the founding of FLI by a group of CFAR alumni, including Max Tegmark, together with a conference they ran in Puerto Rico. It is plausibly the largest obviously visible impact that CFAR has had to date.

comment by CronoDAS · 2018-03-09T15:56:47.395Z · LW(p) · GW(p)

Incidentally, Sam Harris has talked a lot about AI safety on his podcast and has had Eliezer Yudkowsky as a guest on it.

comment by Ben Pace (Benito) · 2018-03-09T16:48:56.970Z · LW(p) · GW(p)

Yeah, that was the podcast that caused Neil deGrasse Tyson to publicly change his position about the threat of superintelligence.