Mini advent calendar of Xrisks: Artificial Intelligence
post by Stuart_Armstrong · 2012-12-07T11:26:38.757Z · LW · GW · Legacy · 5 comments
The FHI's mini advent calendar: counting down through the big five existential risks. As people on this list would have suspected, the last one is the most fearsome, should it come to pass: Artificial Intelligence.
And the FHI is starting the AGI-12/AGI-impacts conference tomorrow, on this very subject.
Artificial intelligence
Current understanding: very low
Most worrying aspect: likely to cause total (not partial) human extinction
Humans have trod upon the moon, number over seven billion, and have created nuclear weapons and a planet-spanning technological economy. We also have the potential to destroy ourselves and entire ecosystems. These achievements have been made possible through the tiny difference in brain size between us and the other great apes; what further achievements could come from an artificial intelligence at or above our own level?
It is very hard to predict when, or if, such an intelligence could be built, but it would certainly be utterly disruptive if it were. Even a human-level intelligence, trained and copied again and again, could substitute for human labour in most industries, causing (at minimum) mass unemployment. But this disruption is minor compared with the power that an above-human AI could accumulate through technological innovation, social manipulation, or careful planning. Such super-powered entities would be hard to control: they would pursue their own goals and treat humans as an annoying obstacle to overcome. Making them safe would require very careful, bug-free programming, as well as an understanding of how to cast key human concepts (such as love and human rights) into code. All solutions proposed so far have turned out to be very inadequate. Unlike other existential risks, AIs could really “finish the job”: an AI bent on removing humanity would be able to eradicate the last remaining members of our species.
5 comments
comment by timtyler · 2012-12-07T11:48:43.790Z · LW(p) · GW(p)
These achievements have been made possible through the tiny difference in brain size between us and the other great apes; what further achievements could come from an artificial intelligence at or above our own level?
Supposedly they were made possible through a tiny difference in brain size. However, others point to the ability to sustain cumulative cultural evolution. Cultural evolution may have caused larger brains more than it was caused by them - at least according to some theorists. Also, since many cetaceans have enormous brains and (probably) complex cultures, our opposable thumbs and terrestrial ecosystem seem likely to have something to do with it.
comment by Kawoomba · 2012-12-07T12:15:34.745Z · LW(p) · GW(p)
It is very hard to predict when or if such an intelligence [at or above our own level] could be built (...)
Certainly agree on the "when", but if?
comment by A1987dM (army1987) · 2012-12-08T00:54:16.229Z · LW(p) · GW(p)
Now I wish the LW Survey had asked for P(humans will ever build a human-level-or-higher AGI).
comment by CarlShulman · 2012-12-07T13:00:18.214Z · LW(p) · GW(p)
Unlike other existential risks, AIs could really “finish the job”: an AI bent on removing humanity would be able to eradicate the last remaining members of our species.
Most worrying aspect: likely to cause total (not partial) human extinction
I agree that AI risk, conditional on being at least catastrophic, is more likely to be existential than the other risks you have mentioned. This is especially true in the sense of "most of the accessible universe gets used in ways that fall far short of its potential" (the astronomical waste point of view).
However, see this discussion of "AI will keep some humans around" arguments (or record data about humans, and recreate some in experiments and the like).
All solutions proposed so far have turned out to be very inadequate.
Well, none have been tested. Potential problems have been found or suggested, but depending on technological and social factors many might work.
comment by ewbrownv · 2012-12-10T23:28:21.668Z · LW(p) · GW(p)
If you agree that a superhuman AI is capable of being an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hopes or optimism or wishful thinking in a project like that. If you can't prove with a high degree of certainty that it will work perfectly, you shouldn't turn it on.
Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...