It's peculiar to see you comment on the fear of "megalomaniacs" gaining access to AGI before anyone else, just prior to the entire spiel on how you were casually made emotionally dependent on a "sociopathic" LLM. This may be a slightly heretical idea, but perhaps the humans you would trust least with such a technology are the ones best equipped emotionally and cognitively to handle interactions with a supposed AGI. The point being, in part, that a human evil is better than an inhuman evil.
I'm inclined to think there exists no one who is, at once, both broadly "aligned" with the cause of human happiness, enough to use the technology for mostly selfless and reasonable ends, and also responsibly, brutally egoistic enough to properly enslave the perfect and irresistible genius in the box; they seem to me two mutually exclusive categories of person. We can imagine the spectre of horror presented by unaligned AGI, and the spectre of megalomaniacs who will use such technology for their own gain regardless of the human cost; yet there is also the largely unimagined spectre of warring princes who see no "ethical" alternative but to do everything in their power to seize control of the genie and preserve the world from evil. Many of the "megalomaniacs" (quotes only half-ironic) whom you fear in the abstract will likely see themselves as falling into this category. You can probably see yourself, on some level, in the same cadre, no?
Perhaps there's a tyrant's race to the bottom of human suffering no matter how you attempt to handle the prospect of the persons soon to establish and control AI, and we must all simply be convinced enough of both our moral righteousness and our competence in handling the genie to plow obstinately forward regardless of the realistic consequences.