SSC Discussion: No Time Like The Present For AI Safety Work
post by tog · 2015-06-05T02:34:28.645Z · LW · GW · Legacy · 13 comments
(Continuing the posting of select posts from Slate Star Codex for comment here, for the reasons discussed in this thread, and as Scott Alexander gave me - and anyone else - permission to do with some exceptions.)
Scott recently wrote a post called No Time Like The Present For AI Safety Work. It lays out the argument for the importance of organisations like MIRI as follows, and then explores the last two premises:
1. If humanity doesn’t blow itself up, eventually we will create human-level AI.
2. If humanity creates human-level AI, technological progress will continue and eventually reach far-above-human-level AI.
3. If far-above-human-level AI comes into existence, eventually it will so overpower humanity that our existence will depend on its goals being aligned with ours.
4. It is possible to do useful research now which will improve our chances of getting the AI goal alignment problem right.
5. Given that we can start research now, we probably should, since leaving it until there is a clear and present need for it is unwise.
Scott continues:
I placed very high confidence (>95%) on each of the first three statements – they’re just saying that if trends continue moving towards a certain direction without stopping, eventually they’ll get there. I had lower confidence (around 50%) on the last two statements.
Commenters tended to agree with this assessment; nobody wanted to seriously challenge any of 1-3, but a lot of people said they just didn’t think there was any point in worrying about AI now. We ended up in an extended analogy about illegal computer hacking. It’s a big problem that we’ve never been able to fully address – but if Alan Turing had gotten it into his head to try to solve it in 1945, his ideas might have been along the lines of “Place your punch cards in a locked box where German spies can’t read them.” Wouldn’t trying to solve AI risk in 2015 end in something equally cringeworthy?
As always, it's worth reading the whole thing, but I'd be interested in the thoughts of the LessWrong community specifically.
13 comments
comment by knb · 2015-06-07T06:52:36.396Z · LW(p) · GW(p)
I think Scott's argument is totally reasonable and well-stated, and I agree with his conclusion. So it was pretty dismaying to see how many of his commenters were dismissing the argument completely, making arguments which were demolished in Eliezer's OB sequences.
Some familiar arguments I saw in the comments:
- Intelligence, like, isn't even real, man.
- If a machine is smarter than humans, it has every right to destroy us.
- This is weird, obviously you are in a cult.
- Machines can't be sentient, therefore AI is impossible for some reason.
- AIs can't possibly get out of the box, we would just pull the plug.
- Who are we to impose our values on an AI? That's like something a mean dad would do.
↑ comment by TheAncientGeek · 2015-06-07T17:20:46.715Z · LW(p) · GW(p)
There are also better arguments, like:
"We wouldn't build a god AI and put it in charge of the world"
"We would make some sort of attempt at installing safety overrides"
" Tool AI is safer and easier, and easier to make safe, and wouldn't need goals to be aligned with ours"
"Well be making ourselves smarter in parallel"
comment by anon85 · 2015-06-05T04:09:28.287Z · LW(p) · GW(p)
I think point 1 is very misleading, because while most people agree with it, hypothetically a person might assign 99% chance of humanity blowing itself up before strong AI, and < 1% chance of strong AI before the year 3000. Surely even Scott Alexander will agree that this person may not want to worry about AI right now (unless we get into Pascal's mugging arguments).
I think most of the strong AI debate comes from people believing in different timelines for it. People who think strong AI is not a problem think we are very far from it (at least conceptually, but probably also in terms of time). People who worry about AI are usually pretty confident that strong AI will happen this century.
Replies from: Houshalter, IlyaShpitser
↑ comment by Houshalter · 2015-06-05T22:10:31.525Z · LW(p) · GW(p)
In my experience the timeline is not usually the source of disagreement. The people who disagree usually don't believe that AI would want to hurt humans, or that the paperclip maximizer scenario is likely or even possible. E.g. this popular reddit thread from yesterday.
I guess that would be premise number 3 or 4, that goal alignment is a problem that needs to be solved.
Replies from: anon85
↑ comment by IlyaShpitser · 2015-06-05T20:43:37.570Z · LW(p) · GW(p)
My reading of that article is:
"I am stumping for my friends."
Replies from: knb
↑ comment by knb · 2015-06-07T06:55:18.105Z · LW(p) · GW(p)
So are you claiming he doesn't really believe his argument?
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2015-06-07T11:12:17.282Z · LW(p) · GW(p)
I am saying he wrote that article because his friends asked him to. You are asking the wrong person about Scott's beliefs.
Replies from: knb
↑ comment by knb · 2015-06-07T21:41:05.219Z · LW(p) · GW(p)
I wasn't asking you about his beliefs; I was asking what implication you were making. We already know what Scott says he believes; unless you doubt he is being honest, there is no reason to assume he is stumping for his friends rather than advocating his own beliefs.
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2015-06-09T10:07:33.110Z · LW(p) · GW(p)
I am not sure what you are asking. I don't think Scott is an evil mutant; I don't think he would just cynically lie. AI risk is not one of his usual blog topics, however.
I think you are underestimating the degree to which personal truth is socially constructed, and in particular influenced by friends.
Replies from: Raemon
comment by Lalartu · 2015-06-05T08:14:03.327Z · LW(p) · GW(p)
I don't think such a high estimate for the first statement is reasonable.
Also, the link now leads to a bicameral reasoning article.
Replies from: tog
↑ comment by tog · 2015-06-05T17:00:50.820Z · LW(p) · GW(p)
Thanks, fixed; it now points to http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/