AI-ON is an open community dedicated to advancing Artificial Intelligence
post by morganism · 2016-10-18T22:17:28.675Z · LW · GW · Legacy · 17 comments
This is a link post for http://ai-on.org/
17 comments
Comments sorted by top scores.
comment by siIver · 2016-10-20T01:41:10.636Z · LW(p) · GW(p)
This may be a naive and over-simplified stance, so educate me if I'm being ignorant--
but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.
Feel free to just provide a link if the argument has been discussed before.
Replies from: Houshalter, scarcegreengrass, username2
↑ comment by Houshalter · 2016-10-20T20:31:48.419Z · LW(p) · GW(p)
I agree. But it's worthwhile to try to get AI researchers on our side, and get them researching things relevant to FAI. Perhaps lesswrong could have some influence on this group. If nothing else it's interesting to keep an eye on how AI is progressing.
↑ comment by scarcegreengrass · 2016-10-20T16:16:37.251Z · LW(p) · GW(p)
First of all, it's hard to imagine anyone slowing down AI research. Even a ban by n large governments would only encourage research in places beyond government control.
Second, there is quite a lot of uncertainty about the difficulty of these tasks. Both human-comparable software and AI value alignment probably involve multiple difficult subproblems that have barely been researched so far.
Replies from: Gurkenglas
↑ comment by Gurkenglas · 2016-10-22T03:29:08.772Z · LW(p) · GW(p)
One could slow it down by convincing people who would otherwise speed it up.
↑ comment by username2 · 2016-10-20T14:02:55.692Z · LW(p) · GW(p)
What you've expressed is the outlier, extremist view. Most AI researchers are of the opinion, if they have expressed a thought at all, that there is a long series of things that must happen in conjunction for a Skynet-like failure to occur. It is hardly obvious at all that AI should be a highly regulated research field, like, say, nuclear weapons research.
I highly suggest expanding your reading beyond the Yudkowsky, Bostrom et al. clique.
Replies from: Houshalter
↑ comment by Houshalter · 2016-10-20T20:48:36.419Z · LW(p) · GW(p)
Most AI researchers have not done any research into the topic of AI risk, so their opinions are irrelevant. That's like pointing to the opinions of sailors on global warming, because global warming is about oceans and sailors should be experts on that kind of thing.
I think AI researchers are slowly warming up to AI risk. A few years ago it was a niche thing that no one had ever heard of. Now it's gotten some media attention and there is a popular book about it. Slate Star Codex has compiled a list of notable AI researchers that take AI risk seriously.
Personally my favorite name on there is Schmidhuber, who is very well known and I think has been ahead of his time in many areas of AI, with a particular focus on general intelligence and more general methods like reinforcement learning and recurrent nets, instead of the standard machine learning stuff. His opinions on AI risk are nuanced, though; I think he expects AIs to leave Earth and go into space, but he does accept most of the premises of AI risk.
Bostrom did a survey back in 2014 that found AI researchers think there is at least a 30% probability that AI will be "bad" or "extremely bad" for humanity. I imagine that estimate has shifted since then as AI risk has become better known, and it will only increase with time.
Lastly, this is not an outlier or 'extremist' view on this website. This is the majority opinion here and has been discussed to death in the past, and I think it's as settled as can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all. There is literally no argument in your comment, just an appeal to authority.
EDIT: fixed "research into the topic of AI research"
Replies from: TheAncientGeek, TheAncientGeek, username2
↑ comment by TheAncientGeek · 2016-11-04T11:57:44.251Z · LW(p) · GW(p)
This is the majority opinion here and has been discussed to death in the past,
If a discussion involves dissenters being insulted and asked to leave, then it doesn't count.
↑ comment by TheAncientGeek · 2016-11-04T10:40:34.808Z · LW(p) · GW(p)
Most AI researchers are of the opinion, if they have expressed a thought at all, that there is a long series of things that must happen in conjunction for a skynet like failure to occur.
Most AI researchers have not done any research into the topic of AI risk, so their opinions are irrelevant.
Go back to the object level: is it true or false that a Skynet scenario requires the conjunction of a string of unlikely events?
Replies from: Houshalter
↑ comment by Houshalter · 2016-11-05T06:01:28.509Z · LW(p) · GW(p)
False. It requires only a few events, like smarter-than-human AI being invented, and the control problem not being solved. I don't think any of these things is very unlikely.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2016-11-06T20:26:51.945Z · LW(p) · GW(p)
Not solving the control problem isn't a sufficient condition for AI danger: the AI also needs inimical motivations. So that is a third premise. Also fast takeoff of a singleton AI is being assumed.
ETA: The last two assumptions are so frequently made in AI risk circles that they lack salience -- people seem to have ceased to regard them as assumptions at all.
Replies from: Houshalter
↑ comment by Houshalter · 2016-11-06T21:32:24.055Z · LW(p) · GW(p)
Well the control problem is all about making AIs without "inimical motivations", so that covers the same thing IMO. And fast takeoff is not at all necessary for AI risk. AI is just as dangerous if it takes its time to grow to superintelligence. I guess it gives us somewhat more time to react, at best.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2016-11-09T11:51:28.826Z · LW(p) · GW(p)
Well the control problem is all about making AIs without "inimical motivations",
Only if you use language very loosely. If you don't, the Value Alignment problem is about making an AI without inimical motivations, and the Control Problem is about making an AI you can steer irrespective of its motivations.
And fast takeoff is not at all necessary for AI risk. AI
This is about Skynet scenarios specifically. If you have multipolar, slow development of ASI, then you can fix the problems as you go along.
I guess it gives us somewhat more time to react, at best.
Which is to say that in order to definitely have a Skynet scenario, you definitely do need things to develop at more than a certain rate. So speed of takeoff is an assumption, however dismissively you phrase it.
↑ comment by username2 · 2016-10-20T23:41:19.374Z · LW(p) · GW(p)
Most AI researchers have not done any research into the topic of AI [safety], so their opinions are irrelevant.
(I assume my edit is correct?)
One could also say: most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?
Lastly, this is not an outlier or 'extremist' view on this website. This is the majority opinion here and has been discussed to death in the past, and I think it's as settled as can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all. There is literally no argument in your comment, just an appeal to authority.
Really? There's a lot of frequent posters here that don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Replies from: dxu, Houshalter
↑ comment by dxu · 2016-10-23T22:46:24.663Z · LW(p) · GW(p)
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Considering that you're using an anonymous account to post this comment, the above is a statement that carries much less weight than it normally would.
↑ comment by Houshalter · 2016-10-21T00:40:33.334Z · LW(p) · GW(p)
most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?
Because that statement is simply false. Researchers do deal with real-world problems and datasets. There is a huge overlap between research and practice. There is little or no overlap between AI risk/safety research and current machine learning research. The only connection I can think of is that people familiar with reinforcement learning might have a better understanding of AI motivation.
Really? There's a lot of frequent posters here that don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.
I didn't say there wasn't dissent. I said it wasn't an outlier view, and seems to be the majority opinion.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Look, I'm sorry if I came across as overly hostile. I certainly welcome any debate and discussion on this issue. If you have anything to say, feel free to say it. But your above comment didn't really add anything. There was no argument, just an appeal to authority, and calling GP "extremist" for something that's a common view on this site. At the very least, read some of the previous discussions first. You don't need to read everything, but there is a list of posts here.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2016-11-04T10:13:23.698Z · LW(p) · GW(p)
There was no argument, just an appeal to authority, and calling GP "extremist" for something that's a common view on this site.
A view can be extreme within the wider AI community and normal within Less Wrong. The disconnect between LW and everyone else is part of the problem.
comment by Gurkenglas · 2016-10-22T03:31:36.587Z · LW(p) · GW(p)
So. How do we contact these people and find out their stance on how important the alignment problem is?