How Feasible Is the Rapid Development of Artificial Superintelligence?
post by Kaj_Sotala · 2016-10-24T08:43:28.050Z · LW · GW · Legacy · 12 comments
This is a link post for https://foundational-research.org/how-feasible-is-rapid-development-artificial-superintelligence
12 comments
Comments sorted by top scores.
comment by gwern · 2016-10-24T20:02:26.288Z · LW(p) · GW(p)
To the speed section, you might want to add examples of parallel learning, such as parallelized learning of robot arm manipulation or parallel playing of Atari games. Both are (much) faster in terms of wallclock time and can also be more sample- and resource-efficient (A3C can actually be more sample-efficient than DQN with multiple independent agents, and it doesn't need to waste a great deal of RAM and computation on the experience replay).
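As a rough illustration of the shared-parameter, no-replay pattern: several worker threads each interact with their own copy of a toy environment and apply updates directly to shared parameters, rather than a single agent filling a replay buffer. This is only a sketch; the bandit environment and incremental-mean update below are stand-ins, not the actual A3C algorithm.

import random
import threading

N_ARMS = 5
TRUE_MEANS = [random.random() for _ in range(N_ARMS)]  # toy stand-in environment

shared_values = [0.0] * N_ARMS   # shared action-value estimates ("parameters")
shared_counts = [0] * N_ARMS
lock = threading.Lock()

def worker(steps, epsilon=0.1):
    for _ in range(steps):
        # epsilon-greedy action selection against the shared estimates
        if random.random() < epsilon:
            a = random.randrange(N_ARMS)
        else:
            a = max(range(N_ARMS), key=lambda i: shared_values[i])
        reward = random.gauss(TRUE_MEANS[a], 0.1)  # one "environment" step
        with lock:
            # update applied straight to the shared parameters, no replay buffer
            shared_counts[a] += 1
            shared_values[a] += (reward - shared_values[a]) / shared_counts[a]

# eight parallel actor-learners, each generating its own stream of experience
threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("best arm (true):   ", max(range(N_ARMS), key=lambda i: TRUE_MEANS[i]))
print("best arm (learned):", max(range(N_ARMS), key=lambda i: shared_values[i]))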
comment by Houshalter · 2016-10-25T07:20:41.517Z · LW(p) · GW(p)
The premise this article starts with is wrong. The argument goes that AIs can't take over the world, because they can't predict things much better than humans can. Or, conversely, that they will be able to take over because they can predict much better than humans.
Well, so what if they can predict the future better? That's certainly one possible advantage of AI, but it's far from the only one. My greatest fear/hope of AI is that it will be able to design technology much better than humans. Humans didn't evolve to be engineers or computer programmers. It's really just an accident that we are capable of it. Humans have such a hard time designing complex systems, keeping track of so many different things in our heads, etc. Already these jobs are restricted to unusually intelligent people.
I think there are many possible optimizations to the mind that would improve it at these kinds of tasks. There are rare humans who are very good at them, showing that human brains aren't anywhere near the peak. An AI that is optimized for such tasks will be able to design technologies we can't even dream of. We could theoretically make nanotechnology today, but there are so many interacting parts and complexities that humans are just unable to manage it. The internet runs on so much buggy software that it could probably be pwned in a weekend by a sufficiently powerful programming AI.
And the same is perhaps true of designing better AI algorithms: an AI optimized for AI research would be much better at it than humans.
Replies from: Kaj_Sotala, TheAncientGeek
↑ comment by Kaj_Sotala · 2016-10-25T08:29:44.759Z · LW(p) · GW(p)
Well so what if they can predict the future better? That's certainly one possible advantage of AI, but it's far from the only one. My greatest fear/hope of AI is that it will be able to design technology much better than humans.
The way I think of it, designing technology is a special case of prediction. E.g. to design a steam engine, you need to be able to predict how steam behaves in different conditions and whether, given some candidate design, the pressure from the steam will be transformed into useful work or not.
Replies from: gjm, TheAncientGeek
↑ comment by gjm · 2016-10-25T15:36:13.806Z · LW(p) · GW(p)
designing technology is a special case of prediction
It's possible to be very good at prediction but still rather bad at design. Suppose you have a black box that does physics simulations with perfect accuracy. Then you can predict exactly what will happen if you build any given thing, by asking the black box. But it won't, of itself, give you ideas about what things to ask it about, or understanding of why it produces the results it does beyond "that's how the physics works out".
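A toy sketch of that distinction (the objective function below is a made-up stand-in for the black box): a perfect simulate() oracle scores any candidate you hand it, exactly and for free, but coming up with candidates worth scoring is a separate search problem that the oracle does nothing to solve.

import random

def simulate(design):
    # perfect "physics": returns exactly how well this design performs
    # (stand-in objective with an unknown sweet spot the designer must find)
    return -sum((x - 0.3) ** 2 for x in design)

def blind_designer(oracle, n_dims=10, n_tries=1000):
    # even with a perfect oracle, this designer can only guess and check
    best, best_score = None, float("-inf")
    for _ in range(n_tries):
        candidate = [random.random() for _ in range(n_dims)]
        score = oracle(candidate)   # prediction is free and exact...
        if score > best_score:      # ...but generating good candidates is not
            best, best_score = candidate, score
    return best, best_score

design, score = blind_designer(simulate)
print("best score found by blind guessing:", round(score, 4))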
(To be good at design you do, I think, need to be pretty good at prediction.)
Replies from: morganism, Gurkenglas
↑ comment by morganism · 2016-10-30T23:22:38.054Z · LW(p) · GW(p)
I remember a paper from an AI conference just a couple of weeks ago. Researchers asked an AI to design a circuit board, and it included a few components that weren't connected to the circuit at all, but they still assisted in the circuit's functioning.
Replies from: gjm
↑ comment by Gurkenglas · 2016-10-28T21:15:27.528Z · LW(p) · GW(p)
(To be good at design you do, I think, need to be pretty good at prediction.)
Then this whole endeavour is doomed, because part of the point of designing AGI is that we don't know what it'll do.
Replies from: gjm
↑ comment by gjm · 2016-10-30T22:18:22.599Z · LW(p) · GW(p)
Those who speak of FAI generally understand by it that we should be able to predict various things about what an AI will do: e.g., it will not bring about a future in which all that we hold dear is destroyed.
Clearly that's difficult. It may be impossible. But your objection seems to apply equally to designing a chess-playing program with the intention that it will play much better chess than its creator, which is a thing that has been done successfully many times.
↑ comment by TheAncientGeek · 2016-10-25T15:37:01.209Z · LW(p) · GW(p)
You can design things based on a priori prediction, but you don't have to in many cases...you can also use trial and error instead.
↑ comment by TheAncientGeek · 2016-10-25T15:03:43.846Z · LW(p) · GW(p)
But AI taking over isn't the negative outcome we are trying to avoid...we are trying to avoid takeover by AIs that are badly misaligned with our values. What's the problem with an AI that runs complex technology in accordance with our values, better than us?
Replies from: Houshalter
↑ comment by Houshalter · 2016-10-25T20:21:30.209Z · LW(p) · GW(p)
No it's not necessarily a negative outcome. I think it could go both ways, which is why I said it was "my greatest fear/hope".
comment by SchizophrenicElsa · 2016-10-24T22:10:20.133Z · LW(p) · GW(p)
If you're in the field of Computer Science and you've got a PhD in ML/AI, you can expect hefty (and soon to be heftier) salaries. I think it is certainly possible.