The recent NeurIPS call for papers requires authors to include a statement about the potential broader impact of their work

post by Ofer (ofer) · 2020-02-24T07:44:20.850Z · LW · GW · 5 comments

NeurIPS (formerly NIPS) is a top conference in machine learning and computational neuroscience. The recently published call for papers for NeurIPS 2020 includes the following (which did not appear in previous years):

In order to provide a balanced perspective, authors are required to include a statement of the potential broader impact of their work, including its ethical aspects and future societal consequences. Authors should take care to discuss both positive and negative outcomes.

[...]

Evaluation criteria: [...] Submissions will also be considered on ethical grounds. Regardless of scientific quality or contribution, a submission may be rejected for ethical considerations, including methods, applications, or data that create or reinforce unfair bias or that have a primary purpose of harm or injury.

5 comments

comment by David_Kristoffersson · 2020-02-24T13:16:53.628Z · LW(p) · GW(p)

Looks promising to me. Technological development isn't by default good [EA · GW].

Though I agree with the other commenters that this could fail in various ways. For one thing, if a policy like this is introduced without guidance on how to analyze the societal implications, people will interpret it in wildly different ways. ML researchers aren't by default going to have the training to analyze societal consequences. (Well, who does? We should develop better tools here.)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-02-24T21:12:17.255Z · LW(p) · GW(p)

Agreed. I think of this as sending a signal that at least a limited concern for safety is important. I'm sure we'll see a bunch of papers with sections addressing this that won't be great, but over time it stands some chance of normalizing consideration of the safety and ethics of ML work in the field, such that safety work becomes more accepted as valuable. So even without much guidance or strong evaluative criteria, this seems like a small win to me: at worst, some papers will just have extra fluff sections whose authors pretend to care about safety rather than ignoring it completely.

comment by shminux · 2020-02-24T09:07:11.579Z · LW(p) · GW(p)

Just like with renaming NIPS to NeurIPS, this is wokeness gone wild.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2020-02-24T11:50:03.336Z · LW(p) · GW(p)

So is this a good thing or a bad thing? Is wokeness a stepping stone towards some future enlightened morality, one that will help bring AI/ML along to that destination (in which case perhaps we should excuse its current excesses), or will it ultimately collapse while doing a lot of damage in the meantime (like communism)?