What do the baby eaters tell us about ethics?
post by spookyuser · 2019-10-06T22:27:47.656Z · LW · GW · 29 comments
This is a question post.
I just finished the baby eater sequence (https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8) [LW · GW], and aside from it being an incredibly engaging story, I feel like there is a deeper message about ethical systems that I don't fully understand yet.
In the sequence introduction Eliezer says it makes points about "naturalistic metaethics", but I wonder what these points are specifically, since after reading the SEP page on moral naturalism (https://plato.stanford.edu/entries/naturalism-moral/) I can't really figure out what the mind-independent moral facts in the story are.
Another thought I've had since I read the story is that it seems like a lot of human-human interactions are really human-babyeater interactions. If you're religious and talking to an atheist about God, both of you will look like baby eaters to the other. Likewise, if you watch Fox News, everyone on CNN or MSNBC will look like baby eaters, but the same is true in reverse: everyone watching CNN will think Fox News are the baby eaters.
I have to say, this feels like some kind of ethical nihilism, but I would be curious to know if there are any canonical meta-ethical or ethical theories that correspond to the _message_ of the baby eater sequence, because if there is one, I think I agree with it.
Answers
answer by countedblessings
I'm tempted to put it like this: ethics is a rule for producing something called a "total order" [LW · GW] that tells you what to do in any and every situation. Basically, you have a list of all the things that could happen, and ethics puts them in an order, with the most ethical conceivable thing at the top and the least ethical conceivable thing at the bottom.
From there, you go to the top and start chopping off things that you can't do. For example, maybe your ethics has "give everyone an immortality pill" really high up in the order. But you can't do that, so you chop it off and keep going down. Eventually you run into something that you can do, so you do it, because it's the most ethical thing remaining.
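To make the procedure concrete, here is a minimal Python sketch; the outcome list, ranking, and feasible set are all invented for illustration, not taken from the story or the sequence:

```python
# A minimal sketch of "ethics as a total order plus a feasibility filter".
# The outcomes and their ranking here are purely illustrative assumptions.
ethical_order = [
    "give everyone an immortality pill",   # most ethical, but not doable
    "donate to effective charities",
    "do nothing",
    "eat the babies",                      # least ethical
]

def most_ethical_feasible(order, feasible):
    """Walk down the order and return the first action we can actually do."""
    for action in order:
        if action in feasible:
            return action
    raise ValueError("no feasible action")

print(most_ethical_feasible(ethical_order,
                            {"donate to effective charities", "do nothing"}))
# -> donate to effective charities
```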
What the humans, Babyeaters, and Super Happy people all find out when they meet in literature-space is that you can define an ethical rule for producing any total order from the unordered list of all things that could happen. Say that list is really small, just A, B, and C. Well, A < B < C is clearly one valid order. But so is C < B < A. And B < A < C. Etc.
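If it helps to see the combinatorics: every permutation of the outcome list is a distinct total order, so n outcomes admit n! candidate "ethics". A quick illustrative check:

```python
from itertools import permutations

# Three outcomes yield 3! = 6 distinct total orders, each a candidate "ethics".
for order in permutations(["A", "B", "C"]):
    print(" < ".join(order))
# A < B < C, A < C < B, B < A < C, B < C < A, C < A < B, C < B < A
```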
The humans are following one rule, the Babyeaters another, and the Super Happy people yet a third. Because of the way algorithms feel from the inside [LW · GW], they all perceive each other as monstrous.
Ethical nihilism is an easy mistake, I think. Label the rule producing the order A < B < C as "moral." Then it is an objective fact that the rule producing C < B < A is "not moral." It's also possible that you live in a big universe with lots of stuff in it, like chess, genocide, and chocolate, so your ethics rule is really complicated, and you might not have fully derived the order it gives you. Thus you might find yourself asking questions like "Is it moral to do [insert action here]?" Still, the order that is "moral" is the order that is "moral." [LW · GW]
That's my initial guess after skimming the set of links given by riceissa. We'll be discussing orders (albeit usually partial, not total) in my category theory series of posts, so if this interests you, follow along....
↑ comment by TAG · 2019-10-07T08:40:32.942Z · LW(p) · GW(p)
Label the rule producing the order A < B < C as “moral.”
Can you do more than just label? Is there a way of finding the one true order?
↑ comment by clone of saturn · 2019-10-07T08:58:17.679Z · LW(p) · GW(p)
How would you know if you had?
↑ comment by TAG · 2020-06-29T19:50:52.976Z · LW(p) · GW(p)
Label the rule producing the order A < B < C as “moral.”
As a way of explaining "The Meaning of Right", that is pretty unhelpful. EY says the true morality is a blob of computation that doesn't vary between persons. But it is capable of varying from other blobs of computation. So is calling that particular computation Moral the recognition of a pre-existing fact about it, or the stipulation of a meaning for the word "morality"?
Likewise: is the label a recognition (not arbitrary), or a stipulation that is arbitrary at the point that it is made?
↑ comment by spookyuser · 2019-10-07T08:35:39.757Z · LW(p) · GW(p)
Thank you very much for that explanation; the idea of differing ethical orderings makes a lot of sense, especially in relation to the story.
I'll be sure to check out your category theory posts, sounds very interesting!
↑ comment by Pattern · 2019-10-07T18:12:32.150Z · LW(p) · GW(p)
More relevant to the moral of the story: a number of the reasons the humans thought the baby eaters were evil were also reasons the super happies thought the humans were evil, though to a lesser extent.
While the humans didn't find the baby eaters noble w.r.t. that different trait, or appreciate their ethos, the humans thought what they did was right. The main character, though, was shocked by the super happies because he'd never considered that path (the change w.r.t. pain, as opposed to just pursuing 'happiness').
29 comments
comment by Shmi (shminux) · 2019-10-07T01:17:37.672Z · LW(p) · GW(p)
I liked the story, but could never relate to its Eliezer-imposed "universal morality" of forcing others to conform to your own norms. To me the message of the story is "expansive metaethics leads to trouble, stick to your own and let others live the way they are used to, while being open to learning about each other's ways non-judgmentally".
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-07T12:12:27.228Z · LW(p) · GW(p)
I don't understand what you're saying here. Do you think it's impossible for aliens to exist that would want to impose their norms on us? Or, you're saying that it was human!wrong of the humans in the story to consider meddling with the baby eaters? What position do you think Eliezer endorses that you disagree with?
↑ comment by Shmi (shminux) · 2019-10-07T15:45:21.045Z · LW(p) · GW(p)
The near-universal reaction of the crew to the baby-eaters' customs is not just horror and disgust, but also a moral imperative to act to change them. It's as if there existed a species-wide objective "species!wrong", which is an untenable position, and, even less believably, a "universal!meta-wrong" where anyone not adhering to your moral norms must be changed in some way to make them palatable (the super-happies are willing to go the extra mile and change themselves in their haste to fix things that are "wrong" with others).
This position is untenable because it would lead to constant internal infighting, as customs and morals naturally drift apart in any sufficiently diverse society, unless you impose a central moral authority and ruthlessly weed out all deviants.
I am not sure how much of the anti-prime-directive morality is endorsed by Eliezer personally, as opposed to merely being described by Eliezer the fiction writer.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-08T11:31:25.589Z · LW(p) · GW(p)
In what sense is this position "untenable"? In the evolutionary sense (anyone who holds this position would go extinct) or in the logical sense (the position can be proved wrong)?
In evolution, there are two competing forces: on the one hand, creating more conflict increases the probability of being destroyed; on the other hand, imposing your norms on other groups obviously causes those norms to propagate more and thus makes them more evolutionarily fit. For real humans in the real world, our history kind of is constant internal infighting.
If interpreted in the logical sense, I don't think your argument makes sense: it seems like trying to derive an "ought" from an "is".
Also, the actual distance between those diverging morals matters, and baby eating surely seems like an extreme example.
I don't claim that leaving the Baby-eaters alone is necessarily we!wrong, but it is not obvious to me that it is we!right, and it is even less obvious that it doesn't make sense to postulate a future human culture that would consider it right (especially since we already know it is supposed to be a "weird" culture by modern standards), much less an alien culture like the Super-Happies.
↑ comment by Shmi (shminux) · 2019-10-08T15:46:40.153Z · LW(p) · GW(p)
Re "tenability", today's SMBC captures it well: https://www.smbc-comics.com/comic/multiplanetary
If interpreted in the logical sense, I don't think your argument makes sense: it seems like trying to derive an "ought" from an "is".
Hmm, in my reply to OP I expressed what the moral of the story is for me, and in my reply to you I tried to justify it by appealing to the expected stability of the species as a whole. The "ought", if any, is purely utilitarian: to survive, a species has to be slow to act against the morals it finds abhorrent.
Also, the actual distance between those diverging morals matters, and baby eating surely seems like an extreme example.
Uh. If you live in a city, there is a 99% chance that there is a little girl within a mile of you being raped and tortured by her father/older brother daily for their own pleasure, yet no effort is made to find and save her. I don't find the babyeaters' morals all that divergent from humans'; at least the babyeaters had a justification for their actions based on the need for the whole species to survive.
I don't claim that leaving the Baby-eaters alone is necessarily we!wrong, but it is not obvious to me that it is we!right
My point is that there is no universal we!right and we!wrong in the first place, yet the story was constructed on this premise, which led to the whole species being hoisted on its own petard.
it is supposed to be a "weird" culture by modern standards), much less an alien culture like the Super-Happies
Oh. It never struck me as weird, let alone alien. The babyeaters are basically Spartans and the super-happies are hedonists.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-08T17:45:43.366Z · LW(p) · GW(p)
The "ought", if any, is purely utilitarian: to survive, a species has to be slow to act against the morals it finds abhorrent.
I still don't understand, is your claim descriptive or prescriptive?
If you live in a city, there is a 99% chance that there is a little girl within a mile of you being raped and tortured by her father/older brother daily for their own pleasure, yet no effort is made to find and save her.
I don't understand what you're saying here at all. Obviously we have laws against rape, and these laws are enforced, although ofc there are perpetrators that don't get caught. The reason these things still happen is clearly not because we, as a species, are tolerant towards different moralities.
My point is that there is no universal we!right and we!wrong in the first place, yet the story was constructed on this premise, which led to the whole species being hoisted on its own petard.
"Universal we!right" is a contradiction in terms. The reason I added "we" there is because I am talking about our* values, not "universal" values. I agree that there are no universal values. Moreover it seems clear to me that, contrary to your claim, the premise of the story is precisely that there are no universal values. But maybe I just don't understand what you're saying here.
*I was vague about who exactly is "we", but feel free to draw the line around the participants of this conversation anywhere you want. Strictly speaking, each person probably has somewhat different values, but in a given debate about ethics there might be hope that the participants can come to a consensus. Indeed, if we did not believe there was such hope in this conversation then there would be no point having it (at least, to the extent the debate is prescriptive rather than descriptive, which I am not sure about atm).
↑ comment by Shmi (shminux) · 2019-10-09T06:56:49.122Z · LW(p) · GW(p)
I still don't understand, is your claim descriptive or prescriptive?
Neither... Or maybe descriptive? I am simply stating the implication, not prescribing what to do.
I don't understand what you're saying here at all.
Yes, we do have plenty of laws, but no one goes out of their way to find and hunt down the violators. If anything, the more horrific something is, the more we try to pretend it does not exist. You can argue and point at law enforcement, whose job it is, but that doesn't change the simple fact that you can sleep soundly at night ignoring what is going on somewhere not far from you, let alone in the babyeaters' world.
"Universal we!right" is a contradiction in terms.
We may not have agreed on the meaning. I meant "human universal", not some species-independent morality.
in a given debate about ethics there might be hope that the participants can come to a consensus
I find it too optimistic a statement for a large "we". The best one can hope for is that logical people can agree with an implication like "given this set of values, this is the course of action someone holding these values ought to take to stay consistent", without necessarily agreeing with the set of values themselves. In that sense, again, it describes self-consistent behaviors without privileging a specific one.
In general, it feels like this comment thread has failed to get to the crux of the disagreement, and I am not sure if anything can be done about it, at least without using a more interactive medium.
↑ comment by Walker Vargas · 2019-10-11T00:24:59.011Z · LW(p) · GW(p)
Vigilantism has been found to be lacking. If I wanted to help with that problem in particular, I'd become a cop, or vote for politicians to put a higher priority on it. That seems directly comparable to what the humans in the story intended to do for most of it.
What the baby eaters are doing is worse by most people's standards than anything in our history, at least if scale counts for something. Humans don't even need a shared utility function; there just needs to be a cluster around what most people would reflectively endorse. Paperclip maximizers might fight each other over the exact definition, but a pencil maximizer is clearly not helping by any of their standards.
Also, the baby eaters aren't Spartans. If you gave the Spartans a cure-all for birth defects, they would stop killing their kids, and certainly wouldn't complain about the cure.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-11T20:47:33.484Z · LW(p) · GW(p)
I still don't understand, is your claim descriptive or prescriptive?
Neither... Or maybe descriptive? I am simply stating the implication, not prescribing what to do.
Then I don't understand what you're saying at all. If you are stating an implication, then I don't understand (i) what exactly the premise is, (ii) what exactly the conclusion is, and (iii) how this implication is violated in the story Three Worlds Collide.
Yes, we do have plenty of laws, but no one goes out of their way to find and hunt down the violators.
So, your argument is (correct me if I'm wrong): in the real world, people put only so much effort into hunting down criminals; therefore it is unrealistic that in the story people put so much effort into thinking about what to do with the Baby-eaters. I am not convinced. In the real world, you need to allocate your limited resources between the many problems you need to deal with. The Baby-eaters are a heretofore unknown problem on a huge scale (possibly dwarfing all human criminality), so it makes perfect sense that the protagonists would put a lot of effort into dealing with it. Moreover, we are talking about a future humanity in which there is much less violent crime (IIRC this is stated explicitly in the story) and people are much more sensitive to ethical issues.
I meant "human universal" not some species-independent morality.
I don't think the story obviously postulates a human universal morality. It only implies that many people living at the same time period have similar views on certain ethical questions, which doesn't strike me as unrealistic?
In general, it feels like this comment thread has failed to get to the crux of the disagreement, and I am not sure if anything can be done about it, at least without using a more interactive medium.
Well, if you feel this is not productive we can stop?
↑ comment by Shmi (shminux) · 2019-10-12T02:02:55.057Z · LW(p) · GW(p)
It's frustrating when an honest exchange fails to achieve any noticeable convergence... Might try once more, and if not, well, Aumann does not apply here anyhow.
My main point: "to survive, a species has to be slow to act against the morals it finds abhorrent". I am not sure if this is the disagreement; maybe you think that it's not a valid implication (and by implication I mean the converse, "intolerant => stunted").
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-12T09:50:51.262Z · LW(p) · GW(p)
If I understood correctly, your objection to Three Worlds Collide is (mostly?) descriptive rather than prescriptive: you think the story is unrealistic, rather than dispute some normative position that you believe it defends. However, depending on the interpretation of that maxim you formulated, it is (IMO) either factually wrong or entirely consistent with the story of Three Worlds Collide.
Do you believe real world humans are "slow to act against the morals it finds abhorrent"? If your answer is positive, how do you explain all (often extremely violent) conflicts over religion and political ideology over the course of human history? Whatever explanation you propose to these conflicts, what prevents it from explaining the conflict with the Baby-Eaters described in Three Worlds Collide? If your answer to the first question is negative, how do you explain the survival of the human species so far? Whatever explanation you provide to this survival, what prevents it from explaining the continued survival of the human species until the imaginary future in the story?
↑ comment by Shmi (shminux) · 2019-10-12T21:19:08.984Z · LW(p) · GW(p)
If I understood correctly, your objection to Three Worlds Collide is (mostly?) descriptive rather than prescriptive: you think the story is unrealistic, rather than dispute some normative position that you believe it defends.
I am not a moral realist, so I cannot dispute someone else's morals, even if I don't relate to them, as long as they leave me alone. So, yes, descriptive, and yes, I find the story a great read, but that particular element, moral expansionism, does not match the implied cohesiveness of the multi-world human species.
Do you believe real world humans are "slow to act against the morals it finds abhorrent"?
Yes. Definitely.
how do you explain all (often extremely violent) conflicts over religion and political ideology over the course of human history?
Generally, economic or some other interests in disguise, like distracting the populace from internal issues. You can read up on the reasons behind the Crusades, the Holocaust, etc. You can also notice that when morals do lead the way, extreme religious zealotry leads to internal instability, like the fractures inside Christianity and Islam. So, my model that you call "factually wrong" seems to fit the observations rather well, though I'm sure not perfectly.
Whatever explanation you provide to this survival, what prevents it from explaining the continued survival of the human species until the imaginary future in the story?
My point is that humans are behaviorally both much more and much less tolerant of the morals they find deviant than they profess. In the story I would have expected humans to express extreme indignation over babyeaters' way of life, but do nothing about it beyond condemnation.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-13T19:48:51.585Z · LW(p) · GW(p)
Alright, now I finally understand your claim. I still disagree with it: I think that your cynicism about human motivations is unsupported by evidence. But, that's not a debate I'm interested to start atm. Thank you for explaining your views.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-10-07T07:56:44.055Z · LW(p) · GW(p)
Indeed.
As far as I’m concerned, the humans in the story would be entirely justified in treating the Superhappies as mortal enemies who constitute an existential threat to humanity, and against whom any amount of force may reasonably be applied to stop them from making good on their intentions toward us… and, likewise, the Babyeaters would be equally justified in treating the humans in the same way. It would be foolishness to the point of sheer insanity, for the Superhappies (or, respectively, the humans) to expect the humans (or, respectively, the Babyeaters) to respond otherwise. (Any engagement with negotiations, by each respective weaker party, should in such a case be understood only as appeasement, forced only by threat of brute force, and only as permanent and reliable as that threat.)
Since this is hardly a productive way for civilizations to interact with each other, the much more sensible thing to do is just to leave each other alone, and to interact on mutually consensual terms only—making no attempt to meddle in one another’s internal affairs.
↑ comment by Walker Vargas · 2020-01-26T17:34:01.481Z · LW(p) · GW(p)
The story brings up the possibility that the disutility of the babyeaters might outweigh the utility of humanity. There's certainly nothing logically impossible about this.
↑ comment by Said Achmiz (SaidAchmiz) · 2020-01-26T17:44:10.475Z · LW(p) · GW(p)
I don’t see how this is responsive to anything I said. Could you elaborate?
↑ comment by Walker Vargas · 2020-02-22T15:31:04.360Z · LW(p) · GW(p)
Sorry this is so late. I haven't been on the site for a while. My last post was a reply to the idea that non-interference is always better than fighting it out. Most of the characters seem to think that stopping the baby eaters has more utility than letting the superhappies do the same thing to us would cost.
comment by Viliam · 2019-10-07T21:44:26.198Z · LW(p) · GW(p)
If you're religious and talking to an atheist about God, both of you will look like baby eaters to the other. Likewise, if you watch Fox News, everyone on CNN or MSNBC will look like baby eaters, but the same is true in reverse: everyone watching CNN will think Fox News are the baby eaters.
This seems like a lack of imagination about how the aliens could be truly different from us. Narcissism of small differences, on a cosmic scale.
comment by Lukas Finnveden (Lanrian) · 2019-10-07T13:03:32.664Z · LW(p) · GW(p)
In the sequence introduction Eliezer says it makes points about "naturalistic metaethics", but I wonder what these points are specifically, since after reading the SEP page on moral naturalism (https://plato.stanford.edu/entries/naturalism-moral/) I can't really figure out what the mind-independent moral facts in the story are.
I wouldn't necessarily expect Eliezer's usage to be consistent with Stanford's entry. LW in general and Eliezer in particular are not great at using words from academic philosophy in the same way that philosophers do (see e.g. "utilitarianism").
comment by riceissa · 2019-10-06T23:08:20.827Z · LW(p) · GW(p)
Eliezer has written a sequence on meta-ethics. I wonder if you're aware of it? (If you are, my next question is why you don't consider it an answer to your question.)
Another thought I've had since I read the story is that it seems like a lot of human-human interactions are really human-babyeater interactions.
I think Under-acknowledged Value Differences [LW · GW] makes the same point.
↑ comment by spookyuser · 2019-10-07T08:38:10.119Z · LW(p) · GW(p)
Thanks for that. I will check out Under-acknowledged Value Differences. I do know about the meta-ethics sequence, but I have not read the entire thing, just a couple of posts here and there. Though it is on my list, and I have been meaning to read it all.
comment by countedblessings · 2019-10-07T00:16:33.720Z · LW(p) · GW(p)
You're playing chess. You're white, so it's your move first. It's a big board, and there's a lot of pieces, so you're not quite sure what to do.
Luckily for you, there definitely exists a rule that tells you the best possible move to play for every given configuration of pieces—the rule that tells you the move that maximizes the probability of victory (or since draws exist and may be acceptable, the move that minimizes the probability of defeat. Or maximizes the points you gain, 1 for a win and 0.5 for a draw, over an infinite number of games against the opponent in question—whatever.).
But many of these configurations of pieces will have more than one possible move to play, so it's not like this rule is just a given. You have to figure it out.
So what is a rule? It's something that tells you what move to play in any and every given position. Two rules are equal when, for every possible position, they tell you to play the same move.
When two rules are equal, we just merge them into one rule—they're literally the same, after all. So let's consider the list of all unequal rules: rules that differ from every other rule in at least one move recommendation for at least one position.
How many of these rules are there? Mathematically speaking, the answer is "a super-huge amount that would literally cause your mind to explode if you tried to hold them all in your head." After all, there is a universe-eating number of possible chess positions (remember, this is a rule that works for all possible chess positions, even ones that would never happen in a real game). And in every chess position, the number of ways rules can be distinguished equals the number of possible moves in that position, so the total number of distinct rules is the product, over all positions, of the number of moves available in each.
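A toy calculation may help. The position count and branching factors below are made up, but the structure is right: a rule picks one move per position, so the rules multiply.

```python
from math import prod

# Hypothetical toy numbers: five "positions", each with this many legal moves.
# A rule chooses one move per position, so distinct rules = the product.
moves_per_position = [20, 28, 31, 33, 35]
print(prod(moves_per_position))  # 20 * 28 * 31 * 33 * 35 = 20,050,800

# Real chess has roughly 10^44 legal positions by some estimates, so the
# actual product is beyond astronomical.
```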
So imagine each rule as a black ball, each attached to this really big wall in this vast infinite array. Out of this huge infinity of black balls, one of these balls gets a little pink dot placed on its backside, so you can't see that it's there.
Now, out of this huge infinite array, you have to find the one ball with the little pink dot on it. That is the challenge of finding the rule that tells you the best chess move for each position.
Is ethics just as hard? No! Ethics is insanely harder. Ethics tells you the most ethical move possible for each chess position, and its menu includes illegal moves, so there are more black balls corresponding to just the chess-position subset of ethical choices alone!
Now consider that Go also exists, and checkers, and Fortnite, and situations that aren't games at all, like most of everything in the universe.
There's a ball for each way a rule can be distinguished from all other rules over the full list of all possible situations, including but hardly limited to chess situations.
One of them has a little pink dot on its backside. Go find it.
That is the challenge of ethics.
Yes, people disagree about which ball has the little pink dot on it! Yes, you can search your heart and soul and still not know which ball has the little pink dot on it! That does not mean there is not a ball with a little pink dot on it!
The pink-dotted ball exists!
Alas, the Babyeaters were looking for the ball with the little red dot on it, and the Super Happy people were looking for the ball with the little blue dot on it. Looking for different colored dots, or disagreeing about which ball has the pink/red/blue dot on it, is the stuff that wars and trade are made of.
↑ comment by Jiro · 2019-10-10T21:58:40.745Z · LW(p) · GW(p)
Luckily for you, there definitely exists a rule that tells you the best possible move to play for every given configuration of pieces—the rule that tells you the move that maximizes the probability of victory (or since draws exist and may be acceptable, the move that minimizes the probability of defeat.
If your opponent is a perfect player, each move has a 0% or 100% probability of victory. You can only maximize it in a trivial sense.
If your opponent is an imperfect player, your best move is the one that maximizes the probability of victory given your opponent's pattern of imperfection. Depending on what this pattern is, this may also mean that each move has a 0% or 100% probability of victory.
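To make the perfect-play point concrete: under perfect play every position has a definite game value, which is what collapses "probability of victory" to 0% or 100% (or a forced draw). A minimal negamax sketch over an invented toy tree, not real chess:

```python
# Positions are either terminal values from the perspective of the player to
# move (1 = win, 0 = draw, -1 = loss) or lists of successor positions.
def value(position):
    """Exact game value for the player to move, assuming perfect play."""
    if isinstance(position, int):
        return position                              # terminal outcome
    return max(-value(child) for child in position)  # best reply, negated

toy_tree = [
    -1,        # move 1: the opponent, now to move, has already lost
    [0, 1],    # move 2: the opponent picks between a draw and gifting us a win
]
print(value(toy_tree))  # 1: perfect play just takes the forced win
```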
↑ comment by Said Achmiz (SaidAchmiz) · 2019-10-07T08:18:14.426Z · LW(p) · GW(p)
The pink-dotted ball exists!
How do you know? What even makes you think that “the pink-dotted ball exists”? How did you come to believe this? “What do you think you know, and how do you think you know it?”
Notice that in life, unlike in chess, there is no agreed-upon metric for how well you’ve done. It’s not just that we don’t agree on which rule maximizes the expected score at game’s end; we also don’t agree just what exactly constitutes the ‘score’! (For that matter, we don’t even agree on what constitutes “game’s end”…)
In other words, suppose you somehow find the one ball with a pink dot on it. “Eureka!”, you shout, grabbing the ball and turning it around, “Look! The pink dot!”
Whereupon your friend Alice looks at the ball you’re holding and says “Eh? That dot isn’t pink at all. What, are you blind or something? It’s clearly orange.” And your other friend, Bob, asks, confused, “Why are we looking for a pink dot, anyway? It’s a green triangle we should be looking for, isn’t it?”
And so on. In short, ethics (and metaethics) is actually much, much harder than you make it out to be. In fact, it’s about as hard as looking for the proverbial black cat in the dark room…
↑ comment by TAG · 2019-10-07T13:50:57.104Z · LW(p) · GW(p)
Go, chess, and Fortnite are all amoral. They are not morally relevant.
If ethics had to guide you in every situation, not just a subset, then it would be insanely complex. Likewise, if any value counted as moral value. Make different assumptions, and suddenly things get easier. Maybe as few as ten rules.
↑ comment by Slider · 2019-10-11T21:18:39.207Z · LW(p) · GW(p)
Such games are not guaranteed to be morality-free.
If you are playing against a chess player who will kill themselves if they lose, and their death is morally relevant, then chess strategy becomes morally relevant (even if only for how to lose effectively).
comment by Pattern · 2019-10-06T23:32:34.294Z · LW(p) · GW(p)
Two things:
1. The babyeaters were supposed to be weird; they were supposed to be alien. In my opinion, to the extent that they have something in common (w.r.t. that extreme) with humans, it should not be "I disagree with this person politically, so this is what it's like meeting an alien" but rather the observation that the word "cannibals" exists.
2. What everyone had in common was this: everyone wanted everyone else to change.* That was the source of conflict. (They also all had spaceships, and the more "loving" the species, the more advanced the tech.)
*Sort of. It was less clear with the humans, but more detail was available. It wouldn't have been an issue (to that extent) if they were all the same.
a deeper message about ethical systems
Maybe the assertion that they evolved?