The Moral Copernican Principle

post by Legionnaire · 2023-05-02T03:25:40.142Z · LW · GW · 7 comments

You ever see people arguing about whether some facet of another culture is good or bad, when suddenly one of them declares the moral high ground can't exist because moral relativism is obviously true? Well, that line of reasoning is nonsense, but I don't think people know how to respond to it very well, so it often wins the argument. Consider this article a countermeasure you can reach for.

Moral absolutists say things like "killing is always wrong" and believe that aliens and AI will converge on our moral beliefs.[1]
Moral relativists say things like "it's just their culture, who are we to say" and believe it's not wrong for other cultures to have practices we deem immoral.

I think they often both manage to be wrong. Let's translate these archetypes to their physical equivalents. Imagine two people talking about whether to go to Mars:
"Earth is the center of civilization and life. It is objectively the best location to be! Mars is very far away and inhospitable."
"Relativity tells us there is no special location, or even reference frame, in the universe! To Mars, we're the ones that are very far away."

We know physical relativity is true, but that doesn't stop us from talking about locations in terms of their utility to us, or from treating distance as a valid concept.[2] Earth really is objectively easier to get to and more hospitable for every human, since we share important traits like wanting to breathe oxygen. Yet that doesn't prove Mars has no useful resources, or that there are no exoplanets even better than Earth. Plus, there are likely aliens that would enjoy Mars more than Earth. We may not occupy a "privileged position", but we humans have bodies that evolved for certain conditions, and thus a few positions are special to us.[3]

Similarly, moral relativism does not prove that all moral locations are equally hospitable for humans, because we have brains that evolved for specific conditions. Different locations can be objectively better or worse for every single Homo sapiens who finds themselves there. When a tribe on the other side of the world burns a witch at the stake for crop failure, the humans there are not that different from us. We're correct in saying that's a bad idea, especially having already tried that particular solution ourselves.

I would consider myself a moral relativist, but I see a lot of moral relativists make the mistake of assuming no location is better than any other for human flourishing, while moral absolutists fail to notice how many better locations exist for humans, or to realize how many alien forms are possible. Either of these common mistakes will likely have lasting negative impacts on the development of AI minds.

  1. ^

    Scott Aaronson and many others.

  2. ^

    Quippy version: Einstein's relativity didn't prove Mt. Everest isn't tall. 

  3. ^

     I think this is what Sam Harris is trying to say in The Moral Landscape and when arguing against the Is-Ought distinction. But the Is-Ought distinction is still correct, and science isn't going to tell us about a universal moral framework that aliens will be on board with. Science can, however, tell us about common human values.

7 comments


comment by RamblinDash · 2023-05-02T13:41:38.342Z · LW(p) · GW(p)

I have seen this idea presented as Natural Law Theory. The dumb, unsophisticated version of Natural Law Theory purports that certain laws are universal because they are derived from some kind of abstract first principles. The smart versions of Natural Law Theory posit that these "natural laws" are more like laws of engineering than like laws of physics: if you want a bridge that will stand up, you need to follow these kinds of rules. If you want a human society that will "stand up", you need to follow these kinds of rules. But those rules are derived from facts about humans, not from abstract universal principles or formal logic.

Replies from: Legionnaire
comment by Legionnaire · 2023-05-04T18:38:48.508Z · LW(p) · GW(p)

Good to know! I'll look more into it.

comment by Shmi (shminux) · 2023-05-02T06:03:05.666Z · LW(p) · GW(p)

Remember, morality is nothing but a useful proxy for boundedly rational agents to act in the interest of the society they are part of. There is nothing special about it. It is neither objective nor subjective. It is constructed. The closest view in metaethics and normative ethics is Moral Constructivism. Sean Carroll describes it as "human beings construct their ethical stances starting from basic impulses, logical reasoning, and communicating with others". Here is a good podcast (and a transcript) where he interviews Molly Crockett.

Replies from: Legionnaire, Mitchell_Porter
comment by Legionnaire · 2023-05-02T17:19:52.033Z · LW(p) · GW(p)

I agree that's all it is, but you can make all the same general statements about any algorithm.

The problem is that some people hear you say "constructed" and "nothing special", and then conclude they can reconstruct it any way they wish. It may be constructed and not special in a cosmic sense, but it's not arbitrary. Not all heuristics are equally suited to any given goal.

Replies from: shminux
comment by Shmi (shminux) · 2023-05-02T17:37:37.415Z · LW(p) · GW(p)

Yes, I agree with you there: constructed does not mean arbitrary. It has to be fit for a purpose.

comment by Mitchell_Porter · 2023-05-04T06:53:39.243Z · LW(p) · GW(p)

morality is nothing but a useful proxy for boundedly rational agents to act in the interest of the society they are part of

I feel like there's truth in this, but it also leaves a lot unanswered. For example, what are the "interests of society"? Are they constructed too? Or: if someone faces a moral dilemma, and they're trying to figure out the right thing to do, the psychologically relevant factors may include a sense of duty or responsibility. What is that? Is it a "basic impulse"? And so on. 

Replies from: shminux
comment by Shmi (shminux) · 2023-05-04T16:44:12.045Z · LW(p) · GW(p)

Yeah, I was a bit vague there; it's definitely worth going deeper. One would start by comparing societies that survive/thrive with those that do not, and compare their prevailing ethics and how it responds to external and internal changes. Basically, "moral philosophy" would be more useful as a descriptive, observational science, not a prescriptive one. I guess in that sense it is more like decision theory. And yes, it interfaces with psychology, education, and whatnot.