Comments
I know this is a few days late, but I couldn't help but notice that no one mentioned how your "Zenlike but not Zen" philosophy is basically just a weak version of Stoicism (weak in that you seem to desire some passion, whereas a Stoic would advocate distancing yourself from all highs and lows). There is no need to create techniques for this from scratch; the path has already been laid out. I would encourage anyone interested in the topic to research Stoic teachings, particularly Epictetus, if you haven't done so already.
[I recommend Epictetus because there is an unfortunate tendency in ancient Stoic philosophers toward more mystical, almost religious, thinking. Epictetus refrains from that, for the most part, and concentrates on practical aspects.]
"How different is it when soldiers are at war? They must kill or be killed."
I think there is an important distinction between "kill or die" and "kill or be killed." The wolf's life may be at stake, but the rabbit clearly isn't attacking the wolf. If I need a heart transplant, I would still not be justified in killing someone to obtain the organ.
Why not? Because it's wrong. I can sense that it's wrong, even if I have no other evidence, and the fact that nearly everyone else agrees is pretty good confirmation. That's not proof, I suppose, but I'm willing to behave as if I had proof because there is not enough disconfirmatory information.
I believe in the moon because I can see the moon, I trust my eyes, and everyone else seems to agree. I would continue to believe in the absence of any other evidence. If I couldn't see the moon I might not believe, and that would also be rational. I can see, however, and I trust my moral sense as much as I trust any other sensory organ. As with the sociopath, I think you would have a hard time proving the existence of a specific lunar crater to a blind person, but the fact that she lacks the capability to see it isn't evidence that it isn't there. People who can see can see it, and that will have to be enough.
That is, assuming they are both still people, as I said before, not merely animals.
"Your argument might convince a rabbit, but I doubt a fox would buy it"
Change the fox's name to Beverly and the rabbit's name to Herman. I don't care how much smarter or better looking Beverly is, she still doesn't have the right to kill and eat Herman.
I'm not sure I see what is so hard to understand about the Rabbit/Fox scenario. If they were both intelligent creatures, it seems pretty clear that there would be no moral justification for eating the rabbit, and the fox would be obligated to seek out another source of food. If you were to stipulate that the rabbit is the only source of nourishment available to the fox, this still in no way justifies murder. The fox would have a moral obligation to starve to death. The only remaining problem would be whether the fox has an obligation to his species to survive and procreate, but that is a claim Eliezer has already explicitly rejected.
Of course this reasoning only works with species largely similar to our own. I'm still not sure whether it would apply to species that exhibit no sense of individualism.
It is pretty clever to suggest objective morality without specifying an actual moral code, as it is always the specifics that cause problems.
My issue would be how Eliezer appears to suggest that human morality and alien morality could be judged separately from the genetics of each. Would super-intelligent alien bees have the same notions of fairness as we do, and could we simply transplant our morality onto them and judge them accordingly, with no adjustments made for biological differences? I think it is very likely that such a species would consider the most fair distribution of a found pie to be one that involved a sizeable portion going to the queen, and that a worker who disagreed would be acting immorally. Is this something that we can safely say is objectively wrong?
I'm not sure I understand at what point the torture would no longer be justified. It's easy to say that a googolplex of people getting dust specks is worse than one person being tortured, but there has to be some number at which this is no longer the case. At some point even your preferences should flip, yet you never suggest where that point is. Would it be somewhere around 1.5-1.6 billion, assuming each dust speck is worth 1 second of pain? Is it acceptable if it is just 2 people affected? How many dust specks go into 1 year of torture? I think people would be more comfortable with your conclusion if you had some way to quantify it; right now all we have is your assertion that the math is in the dust specks' favor.
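For what it's worth, the 1.5-1.6 billion figure seems to come from converting 50 years of torture into seconds. Here is a rough back-of-the-envelope check, a sketch only under my own assumptions: one dust speck equals one second of mild pain, the torture lasts the 50 years from the original post, and disutility adds linearly across people.

```python
# Rough sanity check of the 1.5-1.6 billion figure above (my assumptions,
# not the original post's): one dust speck = one second of mild pain,
# the torture lasts 50 years, and pain adds linearly across people.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # about 31.6 million seconds

specks_per_year_of_torture = SECONDS_PER_YEAR            # ~3.16e7
specks_per_50_years_of_torture = 50 * SECONDS_PER_YEAR   # ~1.58e9

print(f"1 year of torture   ~ {specks_per_year_of_torture:,.0f} speck-seconds")
print(f"50 years of torture ~ {specks_per_50_years_of_torture:,.0f} speck-seconds")
```

Even granting those assumptions, this only locates a crossover for a 1-second speck; whether pain actually aggregates linearly is exactly the part of the math that is asserted rather than shown.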