Could the Maxipok rule have catastrophic consequences? (I argue yes.)
Interesting topics :) About your second paper:
You say you provide “a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges.” It sounds, then, like the list excludes those whose morality implies that it would be right to kill everyone, or who may want to kill everyone, but who have simply kept quiet about it.
In footnote 2, you write “Note that, taken to its extreme, classical utilitarianism could also, arguably, engender an existential risk,” and you refer to an argument by David Pearce. That’s an important note. It also goes beyond individuals who have themselves “expressed omnicidal urges,” since the argument is Pearce’s; it is not a classical utilitarian reporting her own urges. By the way, I think it is fine to say that “classical utilitarianism could also, arguably, engender an existential risk.” But the risk in question is one of killing everyone, which need not be an existential risk (in the sense that Earth-originating, intelligent life goes away, or fails to realize its potential). If there is a risk here, the main risk is presumably that classical utilitarianism implies it would be right to kill all of us in order to spread well-being beyond Earth, and that would not be an existential catastrophe in the sense just mentioned.
A fun and important exercise would be to start from your own morality, as the author, and analyze whether it implies that it would be right to kill everyone. Without knowing much about your morality, I would guess that one could make a case for it, and that it would take a complex investigation to see whether the replies you could give are really successful.