Self-sacrifice is a scarce resource

post by mingyuan · 2020-06-28T05:08:05.010Z · score: 62 (37 votes) · LW · GW · 9 comments


  High school: Naive morality
  Effective altruism, my early days
  You can’t make a policy out of self-sacrifice
“I just solved the trolley problem.... See, the trolley problem forces you to choose between two versions of letting other people die, but the actual solution is very simple. You sacrifice yourself.”

- The Good Place

High school: Naive morality

When I was a teenager, I had a very simple, naive view of morality. I thought that the right thing to do was to make others happy. I also had a naive view of how this was accomplished - I spent my time digging through trash cans to sort out the recyclables, picking up litter on the streets, reading the Communist Manifesto, going to protests, you know, high school kind of stuff. I also poured my heart into my dance group, which was composed almost entirely of disadvantaged students - mainly poor, developmentally disabled, or severely depressed, though we had all sorts. They were good people, for the most part, and I liked many of them simply as friends, but I probably also had some sort of intelligentsia savior complex going on with the amount of effort I put into that group.

The moment of reckoning for my naive morality came when I started dating a depressed, traumatized, and unbelievably pedantic boy with a superiority complex even bigger than his voice was loud. I didn’t like him. I think there was a time when I thought I loved him, but I always knew I didn’t like him. He was deeply unkind, and it was like there was nothing real inside of him. But my naive morality told me that dating him was the right thing to do, because he liked me, and because maybe if I gave enough of myself I could fix him, and then he would be kind to others like he was to me. Needless to say this did not work. I am much worse off for the choices I made at that time, with one effect being that I have trouble distinguishing between giving too much of myself and just giving basic human decency.

And even if it were true that pouring all of my love and goodness out for a broken person could make them whole again, what good would it be? There are millions of sad people in the world, and with that method I would only be able to save a few at most (or in reality, one, because of how badly pouring kindness into a black hole burns you out). If you really want to make people’s lives better, that is, if you really care about human flourishing, you can’t give your whole self to save one person. You only have one self to give.

Effective altruism, my early days

When I first moved to the Bay, right after college, I lived with five other people in what could perhaps practically but certainly not legally be called a four-bedroom apartment. Four of the others were my age, and three of us (including me) were vegan. The previous tenants had left behind a large box of oatmeal and a gallon of cinnamon, so that was most of what I ate, though I sometimes bought a jar of peanut butter to spice things up or mooched food off of our one adult housemate. I was pretty young and pretty new to EA and I didn’t think it was morally permissible to spend money, and many of my housemates seemed to think likewise. Crazy-burnout-guy work was basically the only thing we did - variously for CEA, CHAI, GiveWell, LessWrong, and an EA startup. My roommate would be gone when I woke up and not back from work yet when I fell asleep, and there was work happening at basically all hours. One time my roommate and I asked Habryka if he wanted to read Luke’s report on consciousness with us on Friday night and he told us he would be busy; when we asked with what he said he’d be working.

One day I met some Australian guys who had been there in the really early days of EA, who told us about eating out of the garbage (really!) and sleeping seven to a hallway or something ridiculous like that, so that they could donate fully 100% of their earnings to global poverty. And then I felt bad about myself, because even though I was vegan, living in a tenement, half-starving myself, and working for an EA org, I could have been doing more.

It was a long and complex process to get from there to where I am now, but suffice it to say I now realize that being miserable and half-starving is not an ideal way to set oneself up for any kind of productive work, world-saving or otherwise.

You can’t make a policy out of self-sacrifice

I want to circle back to the quote at the beginning of this post. (Don’t worry, there won’t be any spoilers for The Good Place.) It’s supposed to be a touching moment, and in some ways it is, but it’s also frustrating. Whether or not self-sacrifice was correct in that particular situation is beside the point; the problem is that self-sacrifice cannot be the answer to the trolley problem.

Let’s say, for simplicity’s sake, that me jumping in front of the trolley will stop it. So I do that, and boom, six lives saved. But if the trolley problem is a metaphor for any real-world problem, there are millions of trolleys hurtling down millions of tracks, and whether you jump in front of one of those trolleys yourself or not, millions of people are still going to die. You still need to come up with a policy-level answer for the problem, and the fact remains that the policy that will result in the fewest deaths is switching tracks to kill one person instead of five. You can’t jump in front of a million trolleys.

There may be times when self-sacrifice is the best of several bad options. Like, if you’re in a crashing airplane with Eliezer Yudkowsky and Scott Alexander (or substitute your morally important figures of choice) and there are only two parachutes, then sure, there’s probably a good argument to be made for letting them have the parachutes. But the point I want to make is, you can’t make a policy out of self-sacrifice. Because there’s only one of you, and there’s only so much of you that can be given, and it’s not nearly commensurate with the amount of ill in the world.


I am not attempting to argue that, in doing your best to do the right thing, you will never have to make decisions that are painful for you. I know many a person working on AI safety who, if the world were different, would have loved nothing more than to be a physicist. I’m glad for my work in the Bay, but I also regret not living nearer to my parents as they grow older. We all make sacrifices at the altar of opportunity cost, but that’s true for everyone, whether they’re trying to do the right thing or not.

The key thing is that those AI safety researchers are not making themselves miserable with their choices, and neither am I. We enjoy our work and our lives, even if there are other things we might have enjoyed that we’ve had to give up for various reasons. Choosing the path of least regret doesn’t mean you’ll have no regrets on the path you go down.

The difference, as I see it, is that the “self-sacrifices” I talked about earlier in the post made my life strictly worse. I would have been strictly better off if I hadn’t poured kindness into someone I hated, or if I hadn’t lived in a dark converted cafe with a nightmare shower and tried to subsist off of stale oatmeal with no salt.

You’ll most likely have to make sacrifices if you’re aiming at anything worthwhile, but be careful not to follow policies that deplete the core of yourself. You won’t be very good at achieving your goals if you’re burnt out, traumatized, or dead. Self-sacrifice is generally thought of as virtuous, in the colloquial sense of the word, but moralities that advocate it are unlikely to lead you where you want to go.

Self-sacrifice is a scarce resource.


Comments sorted by top scores.

comment by Dagon · 2020-06-28T15:46:16.057Z · score: 13 (8 votes)
Self-sacrifice is a scarce resource.

I frame it a little differently. "Self" is the scarce resource. Self-sacrifice can be evaluated just like spending/losing (sacrificing) any other scarce and valuable resource. Is the benefit/impact greater than the next-best thing you could do with that resource?

As you point out in your examples, the answer is mostly "no". You're usually better off accumulating more self (becoming stronger), and then leveraging that to get more result with less sacrifice. The balance may change as you age, and the future rewards of self-preservation get smaller as your expected future self-hours decrease. But even toward end-of-life, the things often visible as self-sacrifice remain low-impact and don't rise above the alternate uses of self.

comment by Viliam · 2020-06-28T19:36:53.484Z · score: 11 (5 votes)
you can’t make a policy out of self-sacrifice

Taking this from a Kantian-ish perspective: what would actually happen if many people adopted this policy? From a third-person perspective, this policy would translate to: "The proper way to solve an ethical problem is to kill those people who take ethics most seriously." I can imagine some long-term problems with this, such as running out of ethical people rather quickly. If ethics means something other than virtue signaling, it should not be self-defeating.

comment by orthonormal · 2020-06-28T17:29:19.364Z · score: 10 (5 votes)

This is super important, and I'm curious what your process of change was like.

(I'm working on an analogous change: I've been terrified of letting people down for my whole adult life.)

comment by Wei_Dai · 2020-06-28T22:49:34.881Z · score: 5 (3 votes)

If you find yourself doing too much self-sacrifice, injecting a dose of normative and meta-normative uncertainty might help. (I've never had this problem, and I attribute it to my own normative/meta-normative uncertainty. :) Not sure which arguments you heard that made you extremely self-sacrificial, but try Shut Up and Divide? if it was "Shut Up and Multiply", or Is the potential astronomical waste in our universe too small to care about? if it was "Astronomical Waste".

comment by Slider · 2020-06-28T19:23:00.204Z · score: 1 (1 votes)

If millions of trolleys are hurtling along and millions of people self-sacrifice to stop them, then suicidal fixing can be a valid policy. Baneling ants exist and are selected for.

The impulse to value self-sacrifice might come from the default assumption that people are very good at looking after their own interests, so at a coarse level any "self-detrimental" effect is likely to come from complicated or abstract moral reasoning. But then there is the identity-blind kind of reasoning. If you think that people who help others should not be tired all the time, then if person A helps others and is tired, you should arrange for their relaxation. This remains true if person A is yourself. The basic instinct is to favour giving yourself a break because it is hedonically pleasing, but the reasoning that "persons in your position should arrange their affairs in a certain way" is a kind of "cold" basis for possibly the same outcome.

A policy that good people should die just because they are good is a very terrible policy. But the flip side is that some bad people will pay unspeakable costs to gain real percentage points of survival. People have a right to life, even in an extended "smaller things than life-and-death" way. But life can be overvalued, and most real actions carry a slight chance of death.

Then there is the issue of private matters versus public matters. Suppose a community of 1000 people has one shared issue involving the life and death of 100 people, and each member also has a private matter involving 1 different person. By one logic, everybody sticking to their own business saves 1000 people versus 100; by another, any one person doing public work over private work saves 100 people versus 1. However, if 100 persons do public work at the cost of their private work, it becomes a choice between 100 and 100 people. Each of those public workers can think they are being a super-efficient 100:1 hero, and those who choose a select few close ones can seem like super-inefficient 1:100 ones.

comment by gjm · 2020-06-30T10:44:57.510Z · score: 3 (2 votes)

Your last paragraph doesn't make much sense to me. I think you need to specify how much needs to be done in order to resolve that one shared issue. If it requires the same investment from all 1000 people as they'd have put into saving those single individual lives, then it's 1000 people versus 100 people and they should do the individual thing. If it requires just one person to do it, then (provided there's some way of selecting that person) it's 1 person versus 100 people and someone should do the shared thing. If it requires 100 people to do it, then as you say it's a choice of 100 versus 100 and other considerations besides "how many people saved?" will dominate. But none of this is really about private versus public, and whether someone's being efficient or inefficient in making a particular choice depends completely on that how-much-needs-to-be-done question that you left unspecified.

(There are public-versus-private issues, and once you nail down how much public effort it takes to resolve the shared issue then they become relevant. Coordination is hard! Public work is more visible and may motivate others! People care more about people close to them! Etc., etc.)
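gjm's case analysis can be made concrete with a toy calculation. This is only an illustrative sketch of the numbers in the thread: it assumes each private matter saves exactly 1 life, the shared issue saves 100 lives if and only if at least k of the 1000 people work on it, and coordination is perfect. The function name and the threshold parameter k are inventions for this example.

```python
def best_total_lives_saved(population, shared_lives, k):
    """Toy model: each person either works on their own private matter
    (saving 1 life) or joins the shared issue, which saves `shared_lives`
    lives iff at least k people work on it. Returns the best achievable
    total under perfect coordination."""
    # Option A: exactly k people do the shared task, everyone else goes private.
    do_shared = shared_lives + (population - k)
    # Option B: everyone sticks to their private matter.
    all_private = population
    return max(do_shared, all_private)

# gjm's three cases, with population=1000 and shared_lives=100:
print(best_total_lives_saved(1000, 100, 1000))  # needs everyone: private wins, 1000
print(best_total_lives_saved(1000, 100, 1))     # needs one person: 1099
print(best_total_lives_saved(1000, 100, 100))   # needs 100 people: a wash, 1000
```

As gjm says, the answer flips entirely depending on k, which is why leaving it unspecified makes the comparison ill-posed.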

comment by Slider · 2020-06-30T11:56:09.102Z · score: 1 (1 votes)

Why is it mandatory? What happens if I don't specify?

I wrote it as weighing the importance, but I had an inkling it is more a question of how much gets done. If one has access to accurate effort information, then utilitarian calculus is easy. However, sometimes there are uncertainties about effort, and some logics do not require or access this information. Or, say, you know exactly how cool it would be to be on the moon, but you have no idea whether it would be expensive or super duper expensive, and you need to undertake a research program during which the costs become clear. Or you could improve healthcare, or increase the equanimity of justice. Does that mean that, because costs are harder to estimate in one field than in others, predictable costs get selected over more nebulous ones? Decisions under big cost uncertainty, with difficulty in comparing values, are not super rare. But still, a principle of "if you use a lot of resources for something, it had better be laudable in some sense" survives.

For example, if an effective selection mechanism is not found, there is a danger that 1 person actually does the job, 1 tries to help but is only half effective, and 98 people stand and watch as the two struggle. In the other direction, a high probability of being a useless bystander might mean that 0 people attempt the job. If everybody just treated jobs as jobs, without distinction by how many others might try them, the jobs with the most "visibility" would likely be overcrowded, or at least overcrowded relative to their importance. In a way, what has sometimes been described as a "bias" - dilution of responsibility - can be seen as a hack / heuristic that solves this situation. It tries to balance things so that in a typical-size crowd the expected number of people taking action is a small finite number, by raising the bar for action according to how big a crowd you are in. It is a primitive kind of coordination, but even that helps a lot.

Overtly sacrificial behaviour could be analysed as giving far too much importance to other people's worries, that is, removing the dilution of responsibility without replacing it with anything more advanced. Somebody who tries to help everybody in a village will, as a small detail, spend a lot of time canvassing across the village, and the transit time alone might cut into the efficiency, even before considering factors like greater epistemological distance (you spend a lot of time interviewing people about whether they are fine or not) and not being fit for every kind of need (you might be good at carpentry, but this one requires masonry). To take these somewhat arbitrary effects into account effectively, you could limit yourself to a small geographical area (less travelling), do stuff only upon request (people know what their own needs are), or only do stuff you know how to do (do the carpentry for the whole country but no masonry for anyone). All of these move in the direction that some need somebody has will go unaddressed by you personally.

comment by gjm · 2020-06-30T14:13:09.532Z · score: 2 (1 votes)

Mandatory? It's not mandatory. But if you don't specify then you're making an argument with vital bits missing.

I agree that utilitarian decision making (or indeed any decision making) is harder when you don't have all the information about e.g. how much effort something takes.

I also agree that in practice we likely get more efficiency if people care more about themselves and others near to them than about random people further away.

comment by Slider · 2020-06-30T15:00:43.214Z · score: 1 (1 votes)

Well, the specification would be "jobs of roughly equal effort", which I guess I left implicit in a bad way.

I think you are arguing that the essence will depend on the efficiency ratios, but I think the shared vs. not-shared property will overwhelm efficiency considerations. That is, if job efficiency varies between 0.1 and 10 and the populations are around 10,000 and 100,000, then 1000 public-effort lives at typically bad efficiency will seem comparable to 1 private life at good efficiency, while at the population level, doing the private option even at bad efficiency would be comparable to getting the public option done. Thus any issue affecting the "whole" community will overwhelm any private option.

It is crucial that the public task is finite and shared. If you could start up independent "benefit all" extra projects (and get them done alone), the calculus would be right. One could also try to point out the error via "marginal result": yes, it is an issue of 1000 lives, but if your participation doesn't make or break the project, then it is of zero impact. So one should be indifferent rather than thinking it is of the utmost importance. If the project can partially succeed, then the impact is the increase in success, not the total success. Yet when you think about something like "hungry people in Africa", your mind probably refers to the total issue/success.

If I am asking for the circumference of a circle, a lot of people would accept pi as the answer. Somebody could insist that I tell them the radius, as essential information for determining how long the circumference is. Efficiency is not essential to the phenomenon I am trying to point out.