Comment by jacy-reese on Preliminary thoughts on moral weight · 2018-08-15T12:41:02.413Z · score: 0 (3 votes) · LW · GW

I think most thinkers on this topic wouldn't think of those weights as arbitrary (I know you and I do, as hardcore moral anti-realists), and they wouldn't find it prohibitively difficult to introduce those weights into the calculations. Not sure if you agree with me there.

I do agree with you that you can't do moral weight calculations without those weights, assuming you are weighing moral theories and not just empirical likelihoods of mental capacities.

I should also note that I do think intertheoretic comparisons become an issue in other cases of moral uncertainty, such as with infinite values (e.g. a moral framework that absolutely prohibits lying). But those cases seem much harder than moral weights between sentient beings under utilitarianism.

Comment by jacy-reese on Preliminary thoughts on moral weight · 2018-08-15T11:48:28.228Z · score: 4 (5 votes) · LW · GW

I don't think the two-envelopes problem is as fatal to moral weight calculations as you suggest (e.g. "this doesn't actually work"). The two-envelopes problem isn't a mathematical impossibility; it's just an interesting example of mathematical sleight-of-hand.

Brian's discussion of two-envelopes is just to point out that moral weight calculations require a common scale across different utility functions (e.g. fixing the moral weight of a human at 1 regardless of whether you're using brain size, all-animals-are-equal, unity-weighting, or any other weighting approach). It's not to say that there's a philosophical or mathematical impossibility in doing these calculations, as far as I understand.
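To make the common-scale point concrete, here's a minimal numerical sketch (hypothetical numbers, not from Brian's post): two theories with 50/50 credence, where a brain-size theory weights an elephant at 0.1 humans and an equal-weight theory at 1.0. Averaging with the human fixed at 1 gives a different elephant/human ratio than averaging with the elephant fixed at 1, which is exactly the two-envelopes effect:

```python
# Two theories of an elephant's moral weight relative to a human,
# held with 50/50 credence (illustrative numbers only):
#   brain-size theory:   elephant = 0.1 humans
#   equal-weight theory: elephant = 1.0 humans
credence = 0.5
ratios = [0.1, 1.0]  # elephant/human ratio under each theory

# Scale A: fix the human at 1 and average the elephant's weight.
elephant_human_fixed = sum(credence * r for r in ratios)  # 0.55

# Scale B: fix the elephant at 1 and average the human's weight,
# then convert back to an elephant/human ratio.
human_elephant_fixed = sum(credence * (1 / r) for r in ratios)  # 5.5
implied_ratio = 1 / human_elephant_fixed  # ~0.18, not 0.55

print(elephant_human_fixed, implied_ratio)
```

The two scales disagree (0.55 vs. ~0.18) even though the underlying credences and theories are identical, so the choice of which species to pin at 1 is doing real work; picking and sticking with one common scale dissolves the paradox rather than exposing an impossibility.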

FYI, I discussed this a little with Brian before commenting, and he subsequently edited his post a bit, though I'm not yet sure whether we're in agreement on the topic.