comment by Roven Skyfal · 2022-08-12T11:52:30.613Z · LW(p) · GW(p)
Interesting question, and one I have thought a lot about. I hold a moral anti-realist view and think most people speaking about "morality" are really discussing normative views. For this reason, a Virtue Ethics framework would translate really well into a better world under your scenario: a framework designed to optimize for people being better rather than doing better would hopefully be an improvement one step earlier in the process of living. I am not familiar with much criticism of virtue ethics, but am open to reconsidering. Additionally, a moral anti-realist position isn't necessary for virtue ethics to be an ideal framework in my mind.
Replies from: NinaR
↑ comment by Nina Panickssery (NinaR) · 2022-08-12T13:11:51.708Z · LW(p) · GW(p)
Interesting! I can see where you are coming from with this idea. The question gets me thinking about what the optimal framework would be based on how the whole system would behave and evolve, as opposed to the usual individualistic view of morality.
comment by Dagon · 2022-08-12T15:17:26.399Z · LW(p) · GW(p)
I'd be a bad god. I'd probably encode some mix of kindness and responsibility, and likely a more static enjoyment of what is than a striving for change. I presume they'd never get out of hunter/gatherer mode.
And now I'm wondering exactly what my limits are. I don't think "a theory of morality" is something that stands alone in people. I translated that in my mind to a set of personality traits and behaviors that would be automatically enforced somehow and would not evolve over time (in individuals or across generations). But if you mean the more constrained cognitive moral theories that most people don't actually follow very well, I'm not sure what I'd choose.
Note that none of this applies to real humans, nor perhaps any agents in this universe.
comment by jefallbright · 2022-08-13T18:15:48.821Z · LW(p) · GW(p)
As evolved (and evolving) agents, we would benefit from increasing awareness of (1) our values, hierarchical and fine-grained, and (2) our methods for promoting those present but evolving values in the world around us. Perceived consequences feed back and are selected for increasing coherence over an increasing context of meaning-making (values) and an increasing scope of instrumental effectiveness (methods). Lather, rinse, repeat…
As inherently perspectival agents acting to express our present but evolving nature within the bounds of our presently perceived environment of interaction, we can find moral agreement as if we were (metaphorically) individual leaves on the tips of the growing branches of a tree. By traversing the (increasingly probable) branches of that tree toward the (most probable) trunk, rooted in what we know as the physics of our world, we find agreement at the level(s) of the branches that support our values-in-common.
I am not a god, but this is the advice I would provide to the next one I happen to meet, and thereby hope to expedite our current haphazard progress (2.71828 steps forward, 1 step back) in the domain of social decision-making assessed as increasingly "moral", or right in principle.
The arrow of morality points not toward any imagined goal, but rather, outward, with increasing coherence over increasing context.
comment by Jesse Kanner (jesse-kanner) · 2022-08-12T11:22:57.165Z · LW(p) · GW(p)
Foundational questions to ponder: am I really God, or do I just think I'm God? How would I test this premise? I'd take a very long time to figure this out. Do I (or the humans) incur any penalty for a delay in encoding morality?
Also, are the humans in question subject to the forces of evolution, or are we talking about a static landscape? If we mean literal Homo sapiens, then whatever we encode applies only to a finite window, as the creatures we manipulate will eventually evolve into something else.
Replies from: NinaR
↑ comment by Nina Panickssery (NinaR) · 2022-08-12T13:09:14.392Z · LW(p) · GW(p)
You’re able to set everyone’s moral framework and create new humans; however, once they are created, you cannot undo them or try again. You also cannot rely on being able to influence the world post-creation.
Assume humans will be placed on the planet in an evolved state (like current Homo sapiens). They can continue evolving, but will possess a pretty strong drive to follow the framework you embed (akin to the disgust response humans have to seeing gore or decay).
Replies from: jesse-kanner
↑ comment by Jesse Kanner (jesse-kanner) · 2022-08-14T11:34:28.707Z · LW(p) · GW(p)
I apologize for the simplistic response: if we're talking about a version of current Homo sapiens, then they already have a perfectly functional meta-ethical system encoded into them; otherwise they would not have evolved into humans. The quality of being human must necessarily include all the iterative development that got the creatures there.
I must therefore conclude that if I indeed had the power of God and felt the need to intervene in a disruptive manner to rewire the poor humans' ethical system, I must actually be the Devil... and any action taken along this path would therefore be inherently evil. Evil actors typically think they are gods and cannot tell the difference.
Replies from: NinaR
↑ comment by Nina Panickssery (NinaR) · 2022-08-14T19:21:51.276Z · LW(p) · GW(p)
Fair enough! I like the spirit of this answer and probably broadly agree, although it makes me think “surely I’d want to modify some people’s moral beliefs”…
Replies from: jesse-kanner
↑ comment by Jesse Kanner (jesse-kanner) · 2022-08-15T11:09:59.225Z · LW(p) · GW(p)
Of course you do. Me too! Humans are compelled by a need for mutual domestication; it's what sustains our bonds and long-term survival. In many ways, culture and society are a kind of marketplace of morality modification.