Comments

Comment by RobbyRe on Superintelligence 28: Collaboration · 2015-03-25T14:42:32.803Z · LW · GW

I'd be interested to read others' more free-ranging impressions of where Bostrom gets it right in Superintelligence, and what he may have missed or not emphasized enough.

Comment by RobbyRe on Superintelligence 27: Pathways and enablers · 2015-03-19T20:14:54.653Z · LW · GW

It's also possible that FAI necessarily requires the ability to form human-like moral relationships, not only with humans but also with nature. Such an FAI might not treat the universe as its cosmic endowment, and any von Neumann probes it sent out might remain inconspicuous.

Like the great filter arguments, this would also reduce the probability of "rogue singletons" under the Fermi paradox (and it would also weigh against oracles, since the human morality they defer to is unreliable).

Comment by RobbyRe on Superintelligence 26: Science and technology strategy · 2015-03-13T16:43:22.855Z · LW · GW

Bostrom lists a number of serious potential risks from technologies other than AI on page 231, but he apparently stops short of saying that science in general may soon reach a point where it will be too dangerous to be allowed to develop without strict controls. He considers whether AGI could be the tool that prevents these other technologies from being used catastrophically, but the unseen elephant in this room is the total surveillance state that would be required to prevent their misuse, both in the near future and for as long as humans remain recognizably human and there is still something left to lose to UFAI. Is centralized surveillance of everything, everywhere, the future with the least existential risk?