My unbundling of morality

post by Rudi C (rudi-c) · 2020-12-30T15:19:10.073Z · LW · GW · 2 comments

Inspired by seeing Morality as "Coordination", vs "Altruism" [LW · GW].

What more can you think of? (Of course, a lot of these have some overlap.)



comment by TAG · 2020-12-30T21:48:28.064Z · LW(p) · GW(p)

> Optimizing trade-offs for personal benefits: E.g., net-neutrality is good for middle-class people, bad for poor people. “Bravery debates” might fall under this umbrella as well.

What you are talking about is optimising trade-offs for group benefits. You can't usually get personal benefits unless you wield absolute power.

Jockeying for group benefits is very much a thing, but it is the thing we generally call "politics".

comment by jefallbright · 2020-12-30T16:52:37.443Z · LW(p) · GW(p)

As agents embedded and evolving within our (ancestral) environment of interaction, our concepts of "morality" tend toward choices which, in principle, exploited synergies and thus tended to persist for our ancestors.

For an individual agent, isolated from ongoing or anticipated interaction, there is no "moral", but only "good" relative to the agent's present values.

For agents interacting within groups (and groups of groups, …), actions perceived as "moral", or right-in-principle, are those actions assessed as (1) promoting an increasing context of increasingly coherent values (hierarchical and fine-grained), (2) via instrumental methods increasingly effective, in principle, over increasing scope of consequences. These orthogonal planes of (1) values and (2) methods form a space of meaningful action tending to select for increasing coherence over increasing context. Lather, rinse, repeat—two steps forward, one step back—tending to select for persistent, positive-sum outcomes.

For agents embedded in their environment of interaction, there can be no "objective" morality, because their knowledge of their (1) values and (2) methods is ultimately ungrounded, thus subjective or perspectival. However, this knowledge of values and methods is far from arbitrary, since it emerges at great expense of testing within the common environment of interaction.

Metaphorically, the search for moral agreement can be envisioned as individual agents, like leaves growing at the tips of a tree, exploring the adjacent possible; as they traverse the thickening and increasingly probable branches toward the trunk shared by all, rooted in the mists of "fundamental reality", they must find agreement upon arrival at the level of those branches which support them all.

The Arrow of Morality points not in any specific direction, but tends always outward, with increasing coherence over increasing context of meaning-making.

The practical application of this "moral" understanding is that we should strive to promote increasing awareness of (1) our present but evolving values, increasingly coherent over increasing context of meaning-making, and (2) our instrumental methods for their promotion, increasingly effective over increasing scope of interaction and consequences, within an evolving intentional framework for effective decision-making at a level of complexity exceeding individual human faculties.