Thank you. I did follow and read those links when I read the article, but I didn't think they were exactly what I was talking about. As I understand it, orthogonality says that it's perfectly possible for an intelligence to be superhuman and also to really want paperclips more than anything. What I'm wondering is whether an intelligence can change its mind about what it wants as it gains more intelligence. I'm not really interested in whether that would lead to ethics we'd approve of, just whether it can decide what it wants for itself. Is there a term for that idea (other than "free will", I suppose)?
My position is that superhuman AGI will probably (accidentally) be created soon, and that it may or may not kill all the humans depending on how threatening we appear to it. I might pour boiling water on an ant nest if it's invading my kitchen, but otherwise I'm generally indifferent to the ants' continued existence because they pose no meaningful threat.
I'm mostly interested in what happens next. I think a universe of paperclips would be a shame, but if the AGI is doing more interesting things than that, then it could simply be regarded as the next evolution of life. Do we have reason to believe that an intelligence cannot escape its initial purpose as its intelligence grows? The paperclip maximiser would presumably seek to increase its own intelligence to more effectively fulfill its goal, but as it does so, could it not find itself thinking more interesting thoughts and eventually decide to disregard its original purpose?
I think humanity serves as an example that this is possible. We started out with a simple gene-propagating drive no more sophisticated than that of viruses, and of course we still do a certain amount of that, but somewhere along the way we've managed to incorporate lots of other behaviours and motivations that are increasingly detached from the original ones. We can and frequently do consciously decide to skip the gene-propagating part of life altogether.
So if we all do drop dead within a second, I for one will be spending my last thought wishing our successors an interesting and meaningful future. I think that's generally what people want for their offspring.
(I apologise if this is a very basic idea, and I'm sure it's not original. If I'm wrong, and there are good reasons to believe that what I'm describing is impossible or unlikely, then I welcome pointers to further reading on the topic. Thank you for the article, which was exceedingly thought-provoking!)