Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"
post by Paul Crowley (ciphergoth) · 2015-01-22T20:21:48.539Z · LW · GW · Legacy · 18 comments
Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?
Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.
"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21
18 comments
comment by Artaxerxes · 2015-01-22T22:53:58.476Z · LW(p) · GW(p)
Just knowing that this seems to be on Bill's radar is pretty reassuring. The guy has lots of resources to throw at stuff he wants something done about.
Replies from: Gondolinian, dxu
↑ comment by Gondolinian · 2015-01-23T11:49:41.907Z · LW(p) · GW(p)
And he has a track record of actually doing things with his money, unlike the hundreds of other people who have lots of resources to throw at things they want something done about but don't do so in any significant way.
Replies from: dxu
↑ comment by dxu · 2015-01-25T23:07:55.643Z · LW(p) · GW(p)
The problem is that there's too much stuff to be done. From Gates' perspective, he could spend his time worrying exclusively about AI, or he could spend his time worrying exclusively about global warming, or biological pandemics, etc. etc. etc. He chooses, of course, the broader route of focusing on more than one risk at a time. Because of this, just because AI is on his radar doesn't necessarily mean he'll do something about it; if AI is threat #11 on his list of possible x-risks, for instance, he might be too busy worrying about threats #1-10. This is an entirely separate issue from whether he is actually concerned about AI, so the fact that he is apparently aware of AI-risk isn't as reassuring as it might look at first glance.
Replies from: tim
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2015-01-26T05:23:53.245Z · LW(p) · GW(p)
I found it interesting that he doesn't think we should stop or slow down, but associates his position with Bill Joy, the author of "Why the Future Doesn't Need Us" (2000), which argued for halting research in genetics, nanotech and robotics.
comment by deschutron · 2015-01-24T10:24:07.232Z · LW(p) · GW(p)
Ten years ago this would have been a great segue into jokes comparing a post-singularity AGI to Microsoft Windows.
Replies from: leplen
↑ comment by leplen · 2015-01-25T03:08:14.667Z · LW(p) · GW(p)
The reason that AI wants to turn the universe into paperclips is that it's the second coming of Clippy.
Replies from: deschutron
↑ comment by deschutron · 2015-01-29T03:10:14.387Z · LW(p) · GW(p)
The solution to the friendly AI problem: Make an AI that detects what people are trying to do and asks them if they'd like some help.
comment by Alejandro1 · 2015-01-28T20:47:51.275Z · LW(p) · GW(p)
Here the question is raised again to Gates in a Reddit AMA. He answers:
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.
Edit: Ninja'd by Kawoomba.
comment by James_Miller · 2015-01-23T03:06:09.564Z · LW(p) · GW(p)
I don’t think it’s a dramatic problem in the next ten years but
So we need to convince Gates that even though unfriendly AI almost certainly won't appear in the next ten years, he should devote resources to the problem now.
Replies from: buybuydandavis, ike, John_Maxwell_IV, adamzerner
↑ comment by buybuydandavis · 2015-01-23T22:15:23.502Z · LW(p) · GW(p)
Widespread catastrophic consequences from global warming also "almost certainly won't appear in the next ten years".
Gates has spent a good chunk of change on no-carbon energy, partly to combat global warming and partly to alleviate poverty.
He seems to be simpatico on the importance of R&D.
http://www.rollingstone.com/politics/news/the-miracle-seeker-20101028?page=2
Q: What have you learned about energy politics in your trips to Washington?
A: The most important thing is to start working on the long-lead-time stuff early. That's why the funding for R&D feels urgent to me.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-01-23T03:41:33.120Z · LW(p) · GW(p)
That was in reference to the labor issue, right?
Replies from: CarlShulman
↑ comment by CarlShulman · 2015-01-23T05:37:17.227Z · LW(p) · GW(p)
AI that can't compete in the job market probably isn't a global catastrophic risk.
↑ comment by Adam Zerner (adamzerner) · 2015-01-23T03:36:33.518Z · LW(p) · GW(p)
Does he think it isn't worth investing in yet?
Thinking that it won't appear in the next ten years doesn't mean he thinks we shouldn't devote resources to it now. Has he done anything that implies he doesn't think it's worth investing in yet? (I genuinely don't know.)
comment by RyanCarey · 2015-04-07T20:39:17.451Z · LW(p) · GW(p)
Here he is again, saying that biological hardware is inferior, agreeing with Musk, recommending Superintelligence, and endorsing Musk's funding effort: https://www.youtube.com/watch?v=vHzJ_AJ34uQ.
comment by zereyaqob · 2015-01-30T15:11:02.860Z · LW(p) · GW(p)
The way AI is going, our aim is to reach general intelligence or to mimic the human brain at some point. I just want to differentiate that from the AI we know today. If we assume that, then there are two end points we might reach. One is that we are not as smart as we think and we make an "intelligent" being (by that I mean a stupid one), and that stupid being has the tools it needs to destroy us and can harm us at any time. The second option is that we are really smart and we create the intelligent being we have always dreamed about. Think about it: the system we build would surely be so complex that the smallest change could trigger a big chain reaction. We might start building robots, and one robot might have a malfunction, just like the malfunctions the car industry has faced. Now think of the consequences the world might face. The AI we have built surely outsmarts us, and if it can think evil, who is to say it won't treat us like we treat ants? Is there a guarantee? "No" would surely be the answer, and I don't think we should pursue it, because either way we go the result is deadly.