IEEE released the first draft of their AI ethics guide
post by c0rw1n · 2016-12-14T13:30:42.825Z · LW · GW · Legacy · 6 comments

This is a link post for http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf
6 comments
comment by Kaj_Sotala · 2016-12-14T19:29:35.131Z · LW(p) · GW(p)
Whoa, this draft has a section on AGI and superintelligence that directly quotes Bostrom, Yudkowsky, Omohundro etc., and also has an "appreciation" section saying "We also wish to express our appreciation for the following organizations regarding their seminal efforts regarding AI/AS Ethics, including (but not limited to) [...] the Machine Intelligence Research Institute".
The executive summary for the AGI/ASI section reads as follows:
Future highly capable AI systems (sometimes referred to as artificial general intelligence or AGI) may have a transformative effect on the world on the scale of the agricultural or industrial revolutions, which could bring about unprecedented levels of global prosperity. The Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) Committee has provided multiple issues and candidate recommendations to help ensure this transformation will be a positive one via the concerted effort by the AI community to shape it that way.
Issues:
• As AI systems become more capable—as measured by the ability to optimize more complex objective functions with greater autonomy across a wider variety of domains—unanticipated or unintended behavior becomes increasingly dangerous.
• Retrofitting safety into future, more generally capable, AI systems may be difficult.
• Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems.
• Future AI systems may have the capacity to impact the world on the scale of the agricultural or industrial revolutions.
Replies from: Viliam
↑ comment by Viliam · 2016-12-14T20:21:44.593Z · LW(p) · GW(p)
I am probably stating the obvious, but this is not a guide to developing a Friendly AI.
It's more like a list of things that people who consider themselves ethical should think about when developing a self-driving car, or a drone, or a data-mining program that works with personal data.
And I don't feel impressed by it, but I am not sure what else they could have written instead to impress me more. Considering it's a document produced by a committee, it could have been much worse. Maybe we should have higher standards for a technical committee, but it was not realistic to expect them to provide a technical implementation of robotic ethics.
Replies from: Vaniver, RomeoStevens

↑ comment by Vaniver · 2016-12-14T22:56:19.168Z · LW(p) · GW(p)
Considering it's a document produced by a committee, it could have been much worse.
I think people underestimate the degree to which 90% of everything is showing up; the section that Kaj was excited about (section 4) has its author list on page 120, and the names either are or should be familiar:
Malo Bourgon (Co-Chair) – COO, Machine Intelligence Research Institute
Richard Mallah (Co-Chair) – Director of Advanced Analytics, Cambridge Semantics; Director of AI Projects, Future of Life Institute
Paul Christiano – PhD Student, Theory of Computing Group, UC Berkeley
Bart Selman – Professor of Computer Science, Cornell University
Carrick Flynn – Research Assistant at Future of Humanity Institute, University of Oxford
Roman Yampolskiy, PhD – Associate Professor and Director, Cyber Security Laboratory; Computer Engineering and Computer Science, University of Louisville
Replies from: Sean_o_h
↑ comment by Sean_o_h · 2016-12-15T13:58:19.025Z · LW(p) · GW(p)
And several more of us were at the workshop that worked on and endorsed this section at the Hague meeting: Anders Sandberg (FHI), Huw Price, and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output; otherwise I'm confident it would have been terrible ;)
↑ comment by RomeoStevens · 2016-12-14T22:17:52.979Z · LW(p) · GW(p)
Considering it's a document produced by a committee, it could have been much worse.
This is what jumped out at me pretty strongly. In general I have been surprised by the non-terribleness of stuff like this and the White House report, considering the kind of bullshit that many academically ordained AI experts were spouting in the last few years when confronted with AI safety arguments.
Edit: some parts do look laughably bad, which undermines how seriously anyone serious takes this.
comment by Lumifer · 2016-12-14T16:48:27.900Z · LW(p) · GW(p)
Looks bad.
This is an SJ-flavoured wishy-washy text written to hit the right buzzwords (human rights! empowerment! multi-stakeholder ecosystems! disadvantaged sub-groups!) but to say nothing of substance. There is one overwhelming desire coming through, though: the desire to regulate and control.
I've only glanced at it, but my favourite part so far is the suggestion that cops give presentations in schools about AI safety 8-D. I'm not kidding:
Educating law enforcement surrounding these issues so citizens work collaboratively with them to avoid fear or confusion (e.g., in the same way police officers have given public safety lectures in schools for years, in the near future they could provide workshops on safe AI/AS).