Huffington Post article on DeepMind-requested AI ethics board, links back to LW [link]
post by Dr_Manhattan · 2014-01-30T01:20:10.579Z · LW · GW · Legacy · 12 comments
http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html
Not going to summarize the article content, but I think this is the highest-level publication linking to LW so far.
Also, it appears that Shane Legg, Jaan Tallinn and others at DeepMind leveraged the acquisition and moved the friendly AI conversation to a higher level, quite possibly the highest level at Google. Interesting times, these are.
12 comments
Comments sorted by top scores.
comment by XiXiDu · 2014-01-30T09:20:38.839Z · LW(p) · GW(p)
By the way, I asked Shane Legg for a follow-up, but he replied that they were not currently doing any media, so he was unable to comment further.
Here are the questions I wanted to ask him (maybe he can reply in future):
Q1. Has your opinion about risks associated with artificial general intelligence changed since 2011?
Q2. Can you comment on the creation of an internal ethics board at Google?
Q3. To what extent do people within DeepMind and Google agree with the general position of the Machine Intelligence Research Institute?
Q4. Do you believe that Google will create an artificial general intelligence?
Q5. Do you have any general comments for the LessWrong community regarding Google and their recent acquisition of DeepMind?
↑ comment by oooo · 2014-01-30T15:51:00.670Z · LW(p) · GW(p)
Q6. How much influence will the ethics committee actually have? For example, are there commercial and IP clawback provisions if the committee is deemed to be ignored or sidelined?
↑ comment by Dr_Manhattan · 2014-01-30T19:56:12.457Z · LW(p) · GW(p)
With Google's army of lawyers I wouldn't count on this as the enforcement mechanism. BUT this has a chance to get Larry or Sergey involved, which has a chance of succeeding and making a huge difference.
↑ comment by Shmi (shminux) · 2014-01-30T20:49:37.338Z · LW(p) · GW(p)
From what I gather, it is chiefly Sergey Brin who is concerned with ethical issues, and his attention is on various Google X projects. Larry Page and Eric Schmidt don't seem to care as much, if at all. That's probably one reason Google has been getting visibly eviller in the last couple of years. Unless DeepMind is a part of Google X, I would not expect the ethics board to matter.
comment by Filipe · 2014-01-30T06:45:30.629Z · LW(p) · GW(p)
A blog connected to the NYT also linked to the interview.
Mr. Legg noted in a 2011 Q&A with the LessWrong blog that technology and artificial intelligence could have negative consequences for humanity.
comment by Stefan_Schubert · 2014-01-31T23:29:14.562Z · LW(p) · GW(p)
The Economist writes about this too:
comment by Shmi (shminux) · 2014-01-30T07:35:42.405Z · LW(p) · GW(p)
Downvoted for persistently refusing to summarize your links. Show some respect to your readers, man.
↑ comment by Tenoke · 2014-01-30T08:12:11.822Z · LW(p) · GW(p)
Eh? Given that he started this thread with the idea of discussing the link to LW and not so much the content of the article, it doesn't seem like much of an issue (especially when the acquisition has been discussed on LW already).
I would be more inclined to downvote because I think that this is more suited for the Open Thread but I think the same for a lot of posts.
↑ comment by Viliam_Bur · 2014-01-30T10:23:43.929Z · LW(p) · GW(p)
Then be consistent and downvote the other posts, too.
↑ comment by Tenoke · 2014-01-30T10:56:04.299Z · LW(p) · GW(p)
I am usually fairly consistent in this and downvote most such posts when I see them; however, I didn't downvote this post because it is almost borderline. You are right though, I should've downvoted this post to be consistent with my other downvotes, so I shall.
comment by Shmi (shminux) · 2014-01-30T07:42:08.831Z · LW(p) · GW(p)
Huffpost is a pretty shallow publication, and this article is no exception: the author assumes that one can get away with Asimov-style deontological rules:
who would we trust to develop a "10 commandments" for ethical AI?
↑ comment by Luke_A_Somers · 2014-01-30T15:18:43.683Z · LW(p) · GW(p)
It could be metaphorical.