Comments
"Utilitarians have unsurprisingly been aware of these issues for a very long time and have answers to them. Happiness being the sole good (for humans at least) is in no way invalidated by the complexity of relationship bonds." (Toby Ord)
Toby, one can be utilitarian and pluralist, so "happiness" need not be the only good on a utilitarian theory. Right? (I contradict only to corroborate.)
Eliezer, when you say you think morality is "subjectively objective," I take that to mean that a given morality is "true" relative to this or that agent -- not "relative" in the pejorative sense, but in the "objective" sense somewhat analogous to that connoted by relativity theory in physics: In observing moral phenomena, the agent is (part of) the frame of reference, so that the moral facts are (1) agent-relative but (2) objectively true. (Which is why, as a matter of moral theory, it would probably be more fruitful to construe 'moral relativity' merely as the denial of moral universality instead of as the denial of normative facts-of-the-matter tout court -- particularly since no one really buys moral relativity in the conventional sense.)
"[I]t's a simple unprobabilistic phase inversion topography manifold calculation..."
Tosh. This ignores the salience of the linear data elicitation projected over dichotomous variables with a fully specified joint distribution.
In a slogan, one wants to be both happy and worthy of happiness. (One needn't incorporate Kant's own criteria of worthiness to find his formulation useful.)
The dangers of a "little learning" are easily offset by pointing out the ways the relevant "simple math" fails in a given case. Cf., for example, Feynman's use of analogies: he'd state the analogy, then point out the ways in which it is wrong or misleading, the specific features that fail to map, and so on. This strategy gets you the pedagogical benefits of structure mapping while minimizing the risk (that Bill Swift warns against, supra) that a little learning will be mistaken for a great deal.