Comments
I'm curious: what would you say about the writings of Paul Graham on this topic? It seems like he has a lot of evidence and experience in the field, and his opinion differs drastically from yours. http://www.paulgraham.com/venturecapital.html
Well, heck. At least he's being honest. Maybe a little blunt, but definitely honest.
This idea may be contaminated by optimism, but to avoid the risk of destroying humanity with AI, would it not be sufficient to make the AI more or less impotent? If it were essentially a brain-in-a-jar type of thing that showcased everything humanity could create in terms of intelligence, without the disastrous options of writing its own code or having access to a factory for creating death-bots? I suppose this is also anthropomorphizing the AI, because if it were really that super-intelligent it could come up with a way to do its optimization beyond the constraints we think we are imposing. Surely building a toothless though possibly "un-Friendly" AI is a more attainable goal than building an unrestricted Friendly AI?
I don't understand why it must be a given that things like love, truth, beauty, murder, etc. are universal moral truths that are right or wrong independent of the person computing the morality function. I know you frown upon mentioning evolutionary psychology, but is it really a huge stretch to surmise that the more even-keeled, loving and peaceful tribes of our ancestors would out-survive the wilder warmongers who killed each other off? Even if their good behavior was not genetic, the more "moral" leaders would teach/impart their morality to their culture until it became a general societal truth. We find cannibalism morally repugnant, yet for some long-isolated islander tribes it was totally normal and acceptable; what does this say about the universal morality of cannibalism?
In short, I really enjoyed reading your insight on evaluating morality by looking backwards from results, and your idea of a hidden function that we all approximate is a very elegant one, but I still don't understand how your saying "murder is wrong no matter whether I think it's right or not" does not amount to a list of universal moral postulates sitting somewhere in the sky.