Research Priorities for Artificial Intelligence: An Open Letter

post by jimrandomh · 2015-01-11T19:52:19.313Z · LW · GW · Legacy · 10 comments

The Future of Life Institute has published its document "Research priorities for robust and beneficial artificial intelligence" and written an open letter that people can sign to indicate their support.

Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.

10 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2015-01-11T22:03:49.047Z · LW(p) · GW(p)

A number of prestigious people from AI and other fields have signed the open letter, including Stuart Russell, Peter Norvig, Eric Horvitz, Ilya Sutskever, several DeepMind folks, Murray Shanahan, Erik Brynjolfsson, Margaret Boden, Martin Rees, Nick Bostrom, Elon Musk, Stephen Hawking, and others. I'd edit that into the OP.

Also, it's worth noting that the research priorities document cites a number of MIRI's papers.

comment by jimrandomh · 2015-01-11T19:52:37.417Z · LW(p) · GW(p)

There is also a discussion of this on Hacker News here.

comment by devi · 2015-01-12T16:01:01.240Z · LW(p) · GW(p)

Is the idea to get as many people as possible to sign this? Or do we want to avoid the image of a giant LW puppy jumping up and down while barking loudly, when the matter finally starts getting attention from serious people?

Replies from: lukeprog
comment by lukeprog · 2015-01-12T21:29:00.866Z · LW(p) · GW(p)

After the first few pages of signatories, I recognize very few of the names, so my guess is that LW signers will just get drowned out by the much larger population of people who support the basic content of the research priorities document. That means there's not much downside to lots of LWers signing the open letter.

comment by Darklight · 2015-01-16T15:11:24.312Z · LW(p) · GW(p)

I'm impressed they managed to get the Big Three of the Deep Learning movement (Geoffrey Hinton, Yann LeCun, and Yoshua Bengio). I remember that at the 27th Canadian Conference on Artificial Intelligence in 2014, I asked Professor Bengio what he thought of the ethics of machine learning, and he asked if I was a reporter. XD

comment by tegid · 2015-01-15T09:48:41.444Z · LW(p) · GW(p)

This has appeared on the popular science page "I Fucking Love Science", which is followed by almost 20 million people on Facebook. I think this is extremely good news. Despite the picture and its caption, the article seems to take the matter seriously.

comment by RedErin · 2015-01-12T16:39:43.359Z · LW(p) · GW(p)

So if an AI were created that had consciousness and sentience, like in the new Chappie movie, would they advocate killing it?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-01-14T14:53:55.265Z · LW(p) · GW(p)

If the AI were superintelligent and would otherwise kill everyone else on Earth - yes. Otherwise, no. The difficult question is when the uncertainties are high and difficult to quantify.

comment by RedErin · 2015-01-15T17:08:21.630Z · LW(p) · GW(p)

I didn't see Ray Kurzweil's name on there. I guess he wants AI as soon as possible and figures it's worth the risk.
