tivelen's Shortform
post by tivelen · 2021-11-11T00:31:27.311Z · LW · GW · 13 comments
comment by tivelen · 2021-11-11T00:31:27.590Z · LW(p) · GW(p)
Supererogatory morality had never made sense to me before. Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn't, in which case you should instead do the optimally moral thing. Surely you are morally blameworthy for explicitly choosing not to do good regardless. You cannot simply buy a video game instead of mosquito nets because the latter is "optional", right?
I read about slack [LW · GW] recently. I nodded and made affirmative noises in my head, excited to have learned a new concept that surely had use in the pursuit of rationality. Obviously we cannot be at 100% at all times, for all these good reasons and in all these good cases! I then clicked off and found another cool concept on LessWrong.
I then randomly stumbled upon an article that offhandedly made a supererogatory moral claim. Something clicked in my brain and I thought, "That's just slack applied to morality, isn't it?" Enthralled by the insight, I decided this was as good an opportunity as any to make my first Shortform. I had failed to think deeply enough about slack to actually integrate it into my beliefs. This was something to work on in the future to up my rationalist game, but I also got to pat myself on the back for realizing it.
Isn't my acceptance of slack still in direct conflict with my current non-acceptance of supererogatory morality? And wasn't I just about to conclude without actually reconciling the two positions?
Oh. Looks like I still have some actual work ahead of me, and some more learning to do.
Replies from: JBlack, Vladimir_Nesov, TAG
↑ comment by JBlack · 2021-11-11T02:44:16.820Z · LW(p) · GW(p)
I suspect that it's a combination of a lot of things. Slack, yes. Also Goodhart's law, in that optimizing directly for any particular expression of morality is liable to collapse it.
There are also second- and higher-order effects from such moral principles: people who truly believe that everyone must always do the single most moral thing are likely to fail to convince others to live the same way, and so reduce the total amount of good that could be done. They may also disagree on what the single most moral thing is, and suffer from factionalization and other serious breakdowns of coordination that would be less likely among people who are less dogmatic about moral necessity.
It's a difficult problem, and certainly not one that we are going to solve any time soon.
Replies from: Viliam
↑ comment by Viliam · 2021-11-11T23:41:43.088Z · LW(p) · GW(p)
Second-order effects, indeed.
Instead of being (the only) moral agent, imagine yourself in the role of a coordinator of moral agents. You specify their algorithm; they execute it. Your goal is to create maximum good, but you must do it indirectly, through the agents.
Your constraints on maximizing good are the following: Doing good requires spending some resources; sometimes a trivial amount, sometimes a significant amount. So you aim at a balance between spending resources to do good now, and saving resources to keep your agents alive and strong for a long time, so they can do more good over their entire lifetimes. Furthermore, the agents do kind of volunteer for the role, and stricter rules statistically make them less likely to volunteer. So you again aim at a balance between more good done per agent, and more agents doing good.
The result is a heuristic where some rules are mandatory to follow and other rules are optional. The optional rules do not slow down your recruitment of moral agents, while the agents who do not mind strict rules are given the option to do more good.
↑ comment by Vladimir_Nesov · 2021-11-11T02:29:08.474Z · LW(p) · GW(p)
It's useful, but likely not valuable-in-itself for people to strive to be primarily morality optimizers. Thus the optimally moral thing could be to care about the optimally moral thing substantially less than sustainably feasible.
↑ comment by TAG · 2021-11-11T00:47:29.926Z · LW(p) · GW(p)
Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn’t, in which case you should instead do the optimally moral thing.
That's all downstream of an implicit definition of "what I am obliged to do" as "the optimally moral thing". If what you are obliged to do is defined less demandingly, then there is space for the supererogatory.
Replies from: tivelen
↑ comment by tivelen · 2021-11-11T01:38:03.598Z · LW(p) · GW(p)
If I am not obliged to do something, then why ought I do it, exactly? If it's morally optimal, then how could I justify not doing it?
Replies from: JBlack, TAG
↑ comment by JBlack · 2021-11-11T02:48:58.726Z · LW(p) · GW(p)
Many systems of morality are built more like "do no harm" than "do the best possible good at all times".
That is, you are morally obliged to choose actions from a particular set in some circumstances, but the system does not prescribe which action from that set.
Replies from: tivelen
↑ comment by tivelen · 2021-11-12T00:24:49.137Z · LW(p) · GW(p)
Such a system doesn't prescribe which action from that set, but in order for it to contain supererogatory actions, it has to say that some are more "morally virtuous" than others, even within that narrowed set. These are not prescriptive moral claims, though. Even if you follow this moral system, the statement "X is more morally virtuous but not prescribed" coming from it is not relevant to you. The system might as well say "X is more fribble". You won't care either way, unless the moral system also prescribes X, in which case X isn't supererogatory.
comment by tivelen · 2021-12-16T21:18:04.206Z · LW(p) · GW(p)
As of now, we cannot unfreeze people who have been cryogenically frozen and successfully revive them. However, we can freeze 5-day-old fertilized eggs and revive them successfully years later. When exactly does an embryo become unrevivable?
Identical twins split at around one week after fertilization, so if it were possible to revive embryos frozen past that point, we could freeze one twin and let the other gestate, effectively cloning the gestated twin whenever desired. Since we can artificially induce twinning, we could give every newly born person the ability to be cloned, seemingly with none of the downsides of current methods of cloning, although with the substantial overhead of IVF treatment.
Is this currently possible under current scientific understanding? Is it more ethical than other methods of cloning? What ethical issues remain? Would anyone even want to do this if it were legally available?
Replies from: gwern
↑ comment by gwern · 2021-12-16T22:24:14.719Z · LW(p) · GW(p)
When exactly does an embryo become unrevivable?
When it becomes roughly rabbit-kidney-sized, I think the answer is ~12g, so maybe around week 10?
Since we can artificially induce twinning, then we could give every newly born person the ability to be cloned
Sure, they could be 'cloned' (once). But it's a weird scenario. If you freeze development of one embryo but not the other, what motivation would you or the grownup one have to implant it later? (Outside, of course, of agricultural applications like cattle where one could use "sib testing with embryo transfer" - the point would be sib-testing to see how the first sibling performs compared to predictions to decide whether to implant more embryos/clones of it into surrogate mothers.)
Replies from: tivelen
↑ comment by tivelen · 2021-12-16T22:49:03.288Z · LW(p) · GW(p)
Human sib-testing seems like it would be useful, for one thing. There was a post here about cloning great people from the past. We will be able to do that in the future if most moderately-well-off people keep pre-emptive copies.
In theory, this would have the same use cases as typical cloning, with an upfront cost and time delay. The main benefit it has over current cloning tech is that it avoids the health issues for the clones, which currently make it unviable.
We could clone people successfully with no further advances in science and no unusual costs. If true, the ethical issues with cloning are no longer theoretical. This seems like a big deal, but maybe cloning really isn't much of an ethical issue at all, and few people will be interested.
Replies from: gwern
↑ comment by gwern · 2021-12-16T23:46:24.237Z · LW(p) · GW(p)
We will be able to do that in the future if most moderately-well-off people keep pre-emptive copies.
You would get one copy, but why not just use embryo selection instead to shift the population up? The point of cloning in breeding programs is typically to enable a single elite donor to make a huge and disproportionate contribution to the population, when you've already started using selection strategies like truncation selection to the top 10%. I hardly need to explain why no such things are at all viable for human populations.
The main benefit it has over current cloning tech is that it avoids the health issues for the clones, which currently make it unviable.
That didn't seem like it was all that big a deal. Even Dolly's siblings were fine.
Human cloning hasn't been tried and found hard; it hasn't been tried. It's like CRISPR editing of human babies: if your inference from the absence of CRISPR babies pre-He Jiankui was "gosh, this must be very hard", you were very wrong. The true problems with human cloning were never any objections about it being super-difficult or hard to figure out. We could get human cloning to work with very little collective research effort. (How much did those Chinese primates cost? A few million bucks?) Compared to many things researchers accomplish... This is one reason the Raelians weren't laughed out of the building when they announced they had cloned someone: the experts knew deep down that a serious effort would definitely succeed. It's just that no one wants to look like a Raelian or a eugenicist, and there is little demand compared to more ordinary ways of having kids. (Who wants a clone of themself? Gay couples? No, because then one is left out; they'd rather have gametogenesis so they can have a true genetic child of them both.) So it doesn't happen. And it'll continue not happening for who knows how long. Lots of things are like that.