Superintelligent AI mentioned as a possible risk by Bill Gates
post by Roko · 2010-11-28T11:51:50.475Z
"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."
- Bill Gates
From "Africa Needs Aid, Not Flawed Theories".
One wonders where Bill Gates read that superintelligent AI could be (but, in his estimation, in fact isn't) a global catastrophic risk (GCR). It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).
It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:
"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."
20 comments
comment by JoshuaZ · 2010-11-28T17:08:11.420Z
It seems that Bill is also something of a Bayesian with respect to global catastrophic risk
This isn't Bayesianism; this is something closer to caring about expected utility. Not the same thing.
comment by CarlShulman · 2010-11-28T13:46:29.045Z
Surely Bill Joy is another possibility, and Kurzweil does talk at least a bit about AI x-risk.
↑ comment by Roko · 2010-11-28T14:09:08.459Z
By the way, my memory fails me: what exactly does Joy say about AI risk? What is his angle? If I recall correctly, he cites the dangers of robots, not of superintelligence.
E.g. the word "superintelligence" (or "superintelligent") appears only once in Bill Joy's famous essay "Why the future doesn't need us", and that in a Moravec quote. "Robot" (or "robotics") appears 52 times.
↑ comment by timtyler · 2010-11-28T14:45:48.408Z
He says of the "robots":
"If they are smarter than us, stronger than us, evolve quicker than us, they are likely to out-evolve us - in the same way that we have taken over the planet and out-evolved most of the other creatures" (source).
↑ comment by Roko · 2010-11-28T15:01:24.055Z
Still, that doesn't tell me why Gates said "superintelligent computers" rather than "highly-evolved robots".
↑ comment by timtyler · 2010-11-28T15:40:42.417Z
Give a superintelligence some actuators and it becomes a robot. A superintelligence without actuators is not much use to anyone.
↑ comment by Paul Crowley (ciphergoth) · 2010-11-30T07:46:46.716Z
The point is that Gates's turn of phrase is informative about the provenance of his ideas.
comment by Emile · 2010-11-28T17:51:36.112Z
One wonders where Bill Gates read that superintelligent AI could be (but, in his estimation, in fact isn't) a global catastrophic risk (GCR). It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).
The idea is also quite common in science fiction.
Or reading Overcoming Bias (unlikely), or talking to someone who did (more likely). My impression is that more people may have come in contact with the "Scary Idea" through Eliezer's writing than through that of the other people you list (except probably Kurzweil). Back when Eliezer was posting daily on OB, I'd see mentions of the blog from quite varied sources (all quite geeky).
Of course, still more people have been exposed to a form of the Scary Idea through the Terminator movies and other works of fiction.
↑ comment by NancyLebovitz · 2010-11-29T13:51:21.800Z
Gates could have come up with the idea by himself, too.
comment by MichaelVassar · 2010-11-28T16:43:12.997Z
Kurzweil does say that AGI is a GCR.
↑ comment by Roko · 2010-11-28T16:44:47.651Z
Where?
↑ comment by CarlShulman · 2010-11-28T16:59:29.739Z
In The Singularity Is Near, go to the index and look for "risk," "pathogen," and so on to find the relevant chapter. He says that the best way to reduce AI risk is to be moral, so that our future selves and successors respond well.