Defending One-Dimensional Ethics
post by HoldenKarnofsky · 2022-02-15T17:20:20.107Z · LW · GW
Previously, I introduced the idea of "future-proof ethics." Ethics based on "common sense" and the conventions of the time has a horrible track record; future-proof ethics is about trying to make ethical decisions that we can remain proud of in the future, after a great deal of (societal and/or personal) moral progress.
Here I'm going to examine some of the stranger aspects of the "future-proof" approach I outlined, particularly ways in which it pushes us toward being one-dimensional: allowing our ethical decision-making to be "taken over" by the opportunity to help a large number of persons in the same way.
- Utilitarianism implies that providing a modest benefit to a large enough number of persons can swamp all other ethical considerations, so the best way to make the world a better place may involve focusing exclusively on helping animals (who are extremely numerous and relatively straightforward to help), or on people who haven’t been born yet (e.g., via working to reduce existential risks).
- It also can (potentially) imply things like: "If you have $1 billion to spend, it might be that you should spend it all on a single global health intervention."
- These ideas can be disturbing and off-putting, but I think there is also a strong case for them, for those who wish their ethics to be principled and focused on the interests of others.
I'm genuinely conflicted about how "one-dimensional" my ethics should be, so I'm going to examine these issues via a dialogue between two versions of myself: Utilitarian Holden (UH) and Non-Utilitarian Holden (NUH). These represent actual dialogues I’ve had with myself (so neither side is a pure straw person), although this particular dialogue serves primarily to illustrate UH's views and how they are defended against initial and/or basic objections from NUH. In future dialogues, NUH will raise more sophisticated objections.
I think this topic is important, but it is undoubtedly about philosophy, so if you hate that, probably skip it.
Part 1: enough "nice day at the beach" benefits can outweigh all other ethical considerations
To keep it clear who's talking when, I'm using -UH- for "Utilitarian Holden" and -NUH- for "non-Utilitarian Holden." (In the audio version of this piece, my wife voices NUH.)
-UH-
To set the stage, I think utilitarianism is the best candidate for an other-centered ethics, i.e., an ethics that's based as much as possible on the needs and wants of others, rather than on my personal preferences and personal goals. If you start with some simple assumptions that seem implied by the idea of “other-centered ethics,” then you can derive utilitarianism.
This point is fleshed out more in an EA Forum piece about Harsanyi's Aggregation Theorem [EA · GW].
I don’t think this ethical approach is the only one we should use for all decisions. I’ll instead be defending thin utilitarianism, which says that it’s the approach we should use for certain kinds of decisions. I think utilitarianism is particularly good for actions that are “good but usually considered optional,” such as donating money to help others.
With that background, I'm going to defend this idea: "providing a modest benefit to a large enough number of persons can swamp all other ethical considerations."
-NUH-
Ethics is a complex suite of intuitions, many of them incompatible. There’s no master system for it. So a statement as broad as “Providing a modest benefit to a large enough number of persons can swamp all other ethical considerations” sounds like an overreach.
-UH-
I agree there are many conflicting ethical intuitions. But many such intuitions are distorted: they're intuitions that seem to be about what's right, but are often really about what our peers are pressuring us to believe, what would be convenient for us to believe, and more.
I want to derive my ethics from a small number of principles that I really believe in, and a good one is the “win-win” principle.
Say that you’re choosing between two worlds, World A and World B. Every single person affected either is better off in World B, or is equally well-off in both worlds (and at least one person is better off in World B).
In this case I think you should always choose World B. If you don’t, you can cite whatever rules of ethics you want, but you’re clearly making a choice that’s about you and your preferences, not about trying to help others. Do you accept that principle?
-NUH-
I’m pretty hesitant to accept any universal principle, but it sounds plausible. Let’s see where this goes next.
-UH-
Let’s start with two people, Person 1 and Person 2.
Let’s imagine a nice theoretically clean space, where you are - for some reason - choosing which button to press.
- If you press Button 1, Person 1 gets a modest benefit (not an epic or inspiring one) - say, a nice relaxing day on the beach added to their life.
- If you press Button 2, Person 2 gets a smaller benefit - say, a few hours of beach relaxation added to their life.
In this theoretically simplified setup, where there aren’t awkward questions about why you’re in a position to press this button, considerations of fairness, etc. - you should press Button 1, and this isn’t some sort of complex conflicted decision. Do you agree with that?
-NUH-
Yes, that seems clear enough.
-UH-
OK. Now, say that it turns out Person 2 is facing some extremely small risk of a very large, tragic cost - say, a 1 in 100 million chance of dying senselessly in the prime of their life. We’re going to add a Button 3 that removes this 1-in-100-million risk of tragedy.
To press Button 3, you have to abstain from pressing Button 1 or Button 2, which means no beach benefits. What do you do?
-NUH-
I press Button 3. The risk of a tragedy is more important than a day at the beach.
-UH-
What if Person 2 prefers Button 2?
-NUH-
Hm, at first blush that seems like an odd preference.
-UH-
I think it would be a nearly universal preference.
A few miles of driving in a car gives you a greater than 1 in 100 million chance of dying in a car accident.1 Anyone who enjoys the beach at all is happy to drive more than 3 miles to get there, no? And time is usually seen as the main cost of driving. The very real (but small) death risk is usually just ignored.
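A quick sanity check of that figure, using the micromort estimate cited in footnote 1 (a minimal sketch; the mileage number is approximate):

```python
# Rough check of the driving-risk claim (approximate figures from footnote 1).
miles_per_micromort = 240          # ~230-250 miles of driving per 1-in-1,000,000 death risk
risk_per_mile = 1e-6 / miles_per_micromort

trip_miles = 3                     # a short drive to the beach
trip_risk = trip_miles * risk_per_mile
print(f"{trip_risk:.2e}")          # ~1.25e-08, i.e. a bit more than 1 in 100 million
```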
-NUH-
OK, in fact most people in Person 2’s situation would prefer that I pressed Button 2, not 3. But that doesn’t mean it’s rational to do so. The risks of death from 3 miles of driving must just feel so small that we don’t notice them, even though we should?
-UH-
So now that you’ve thought about them, are you personally going to be unwilling to drive 3 miles to get a benefit as good as a nice day at the beach?
-NUH-
I’m not. But maybe I’m not being rational.
-UH-
I think your version of rationality is going to end up thinking you should basically never leave your house.
-NUH-
All right, let’s say it’s rational for Person 2 to prefer Button 2 to Button 3 - meaning that Button 2 really is "better for" them than Button 3. I still wouldn’t feel right pressing Button 2 instead of Button 3.
-UH-
Then you’re failing to be other-centered. We’re back to the “win-win” principle I mentioned above: if Button 2 is better than Button 3 for Person 2, and they're equally good for Person 1, and those are all the affected parties, you should prefer Button 2.
-NUH-
All right, let’s see where this goes.
Say I accept your argument and say that Button 2 is better than Button 3. And since Button 1 is clearly better than Button 2 (as above), Button 1 is the best of the three. Then what?
-UH-
Then we’re almost done. Now let’s add a Button 1A.
Instead of giving Person 1 a nice day at the beach, Button 1A has a 1 in 100 million chance of giving 100 million people just like Person 1 a nice day at the beach. It’s otherwise identical to Button 1.
I claim Button 1 and Button 1A are equivalently good, and hence Button 1A is also better than Button 2 and Button 3 in this case. Would you agree with that?
| Button | Result |
| --- | --- |
| Button 1 | Person 1 gets a nice day at the beach (modest benefit) |
| Button 1A | There's a 1-in-100-million chance that 100 million people each get a nice day at the beach (modest benefit) |
| Button 2 | Person 2 gets a few hours at the beach (smaller benefit) |
| Button 3 | Person 2 avoids a 1-in-100-million chance of a horrible tragedy (say, dying senselessly in the prime of their life) |
-NUH-
I’m not sure - is there a particular reason I should think that “a 1 in 100 million chance of giving 100 million people just like Person 1 a nice day at the beach” is equally good compared to “giving Person 1 a nice day at the beach”?
-UH-
Well, imagine this from the perspective of 100 million people who all could be affected by Button 1A.
You can imagine that none of the 100 million know which of them will be “Person 1,” and think of this as: “Button 1 gives one person out of the 100 million a nice day at the beach; Button 1A has a 1 in 100 million chance of giving all 100 million people a nice day at the beach. From the perspective of any particular person, who doesn’t know whether they’re in the Person 1 position or not, either button means the same thing: a 1 in 100 million chance that that person, in particular, will have a nice day at the beach.”
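Here's that equivalence as a minimal numeric sketch (illustrative only; the inputs are just the numbers already given above):

```python
# Button 1 vs. Button 1A, from the perspective of any one of the 100 million
# people who might turn out to be "Person 1".
population = 100_000_000

# Button 1: exactly one (not-yet-identified) person gets a beach day.
p_beach_button_1 = 1 / population              # chance that *you* are that person

# Button 1A: with probability 1-in-100-million, all 100 million people get a beach day.
p_beach_button_1a = (1 / population) * 1.0     # chance the button fires, times the
                                               # probability you benefit if it does

assert p_beach_button_1 == p_beach_button_1a   # same chance of a beach day either way
# The expected total number of beach days is also 1 in both cases.
print(population * p_beach_button_1, population * p_beach_button_1a)
```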
-NUH-
That one was a little convoluted, but let’s say that I do think Button 1 and Button 1A are equivalently good - now what?
-UH-
OK, so now we’ve established that a 1 in 100 million chance of “100 million people get a nice day at the beach” can outweigh a 1 in 100 million chance of “1 person dying senselessly in the prime of their life.” If you get rid of the 1 in 100 million probability on both sides, we see that 100 million people getting a nice day at the beach can outweigh 1 person dying senselessly in the prime of their life.
Another way of thinking about this: two people are sitting behind a veil of ignorance [EA · GW] such that each person doesn’t know whether they’ll end up being Person 1 or Person 2. Let's further assume that these people are, while behind the veil of ignorance, "rational" and "thinking clearly" such that whatever they prefer is in fact better for them (this is basically a simplification that makes the situation easier to think about).
In this case, I expect both people would prefer that you choose Button 1 or 1A, rather than Button 2 or 3. Because both would prefer a 50% chance of turning out to be Person 1 and getting a nice day at the beach (Button 1), rather than a 50% chance of turning out to be Person 2 and getting only a few hours at the beach (Button 2) - or worse, a mere prevention of a 1 in 100 million chance of dying senselessly in their prime (Button 3).
For the rest of this dialogue, I’ll be using the “veil of ignorance” metaphor to make my arguments because it’s quicker and simpler, but every time I use it, you could also construct a parallel argument along the lines of the first one I gave.2
You can do this same exercise for any “modest benefit” you want - a nice day at the beach, a single minute of pleasure, etc. And you can also swap in whatever “large tragic” cost you want, incorporating any elements you like of injustices and indignities suffered by Person 2. The numbers will change, but there will be some number where the argument carries - because for a low enough probability, it’s worth a risk of something arbitrarily horrible for a modest benefit.
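To make that scaling move explicit, here's a minimal sketch (the specific risk number is just the driving example from earlier; everything else follows from it):

```python
# If someone would accept a small probability p of a large harm in exchange for a
# modest benefit (e.g. ~1e-8 for a few miles of driving to the beach), that revealed
# preference implies the harm is "worth" at most benefit / p. Dividing the probability
# out of both sides, roughly 1/p modest benefits can outweigh one instance of the harm.
benefit = 1.0                 # value of one modest benefit (arbitrary units)
accepted_risk = 1e-8          # probability of the harm people accept for that benefit

harm_upper_bound = benefit / accepted_risk      # implied max value of avoiding the harm
people_needed = harm_upper_bound / benefit      # modest benefits needed to outweigh it
print(f"{people_needed:,.0f}")                  # 100,000,000
```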
-NUH-
Arbitrarily horrible? What about being tortured to death?
-UH-
I mean, you could get kidnapped off the street and tortured to death, and there are lots of things you could do to reduce that risk that you're probably not doing. So: no matter how bad something is, I think you’d correctly take some small (not even astronomically small) risk of it for some modest benefit. And that, plus the "win-win" principle, leads to the point I’ve been arguing.
-NUH-
I want to come back to my earlier statement about morality. There is a lot in morality. We haven’t talked about when it’s right and wrong to lie, what I owe someone when I’ve hurt them, and many other things.
-UH-
That's true - but we’ve established that whatever wrong you’re worried about committing, it’s worth it if you help a large enough number of persons achieve a modest benefit.
-NUH-
You sound like the villain of a superhero movie right now. Surely that’s a hint that you’ve gone wrong somewhere? “The ends justify the means?”
-UH-
In practice, I endorse avoiding “ends justify the means” thinking, at least in complex situations like you see in the movies. That’s a different matter from what, in principle, makes an action right.
I’m not saying the many other moral principles and debates are irrelevant. For example, lying might tend to hurt people, including indirectly (e.g., it might damage the social order, which might lead over time to more people having worse lives). It might be impossible in practice to understand all the consequences of our actions, such that we need rules of thumb like “don’t lie.” But ultimately, as long as you’re accepting the "win-win" principle, there’s no wrong you can’t justify if it truly helps enough persons. And as we’ll see, some situations present pretty simple opportunities to help pretty huge numbers of persons.
-NUH-
That’s a very interesting explanation of why your supervillain-like statements don’t make you a supervillain, but I wouldn’t say it’s conclusive or super satisfying. Shouldn’t you feel nervous about the way you’re going off the rails here? This is just not what most people recognize as morality.
-UH-
I think that “what most people recognize as morality” is a mix of things, many of which have little or nothing to do with making the world better for others. Conventional morality shifts with the winds, and it has often included things like “homosexuality is immoral” and “slavery is fine” and God knows what all else.
There are lots of moral rules I might follow to fit in or just to not feel bad about myself … but when it comes to the things I do to make the world a better place for others, the implications of the "win-win" principle seem clear and rigorously provable, as well as just intuitive.
We’re just using math normally and saying that if you care at all about benefiting one person, you should care hugely about benefiting huge numbers of persons.
As a more minor point, it’s arguably only fairly recently in history that people like you and I have had the opportunity to help massive numbers of persons. The technological ability to send money anywhere, and quantitatively analyze how much good it’s doing, combined with massive population and inequality (with us on the privileged end of the inequality), is a pretty recent phenomenon. So I don’t think the principle we’re debating has necessarily had much chance to come up in the past anyway.
-NUH-
I just pull back from it all and I envision a world where we’ve both got a lot of money to give. And I’m dividing my giving between supporting my local community, and fighting systematic inequities and injustices in my country, and alleviating extreme suffering … and you found some charity that can just plow it all into getting people nice days at the beach at a very cost-effective rate. And I’m thinking, “What happened to you, how did you lose sight of basic moral intuitions and turn all of that money into a bunch of beach?”
-UH-
And in that world, I’m thinking:
“I’m doing what everyone would want me to do, if we all got together and reasoned it out under the veil of ignorance. If you assembled all the world’s persons and asked them to choose whether I should give like you’re giving or give like I’m giving, and each person didn’t know whether they were going to be one of the persons suffering from injustices that you’re fighting or one of the (far more numerous) persons enjoying a day at the beach that I’m making possible, everyone would say my philanthropy was the one they wanted to see more of. I am benefiting others - what are you doing?
“You’re scratching your own moral-seeming itches. You’re making yourself feel good. You’re paying down imagined debts that you think you owe, you’re being partial toward people around you. Ultimately, that is, your philanthropy is about you and how you feel and what you owe and what you symbolize. My philanthropy is about giving other people more of the lives they’d choose.
“My giving is unintuitive, and it's not always 'feel-good,' but it's truly other-centered. Ultimately, I'll take that trade.”
For further reading on this topic, see Other-Centered Ethics and Harsanyi’s Aggregation Theorem [EA · GW].
Part 2: linear giving
-UH-
Now I'm going to argue this:
It’s plausible (probably not strictly true, but definitely “allowed” philosophically) that: if you had $1 billion to spend, you should spend it all on delivering basic global health interventions in developing countries, à la GiveWell’s top charities, before you spend any of it on other things aimed at benefiting humans in the near term.
-NUH-
Even if there is some comically large number of modest benefits that could make up for a great harm, it doesn’t at all follow that today, in the world we live in, we should be funding some particular sort of charity. So you’ve got some work to do.
-UH-
Well, this dialogue is about philosophy - we’re not going to try to really get into the details of how one charity compares to another. Instead, the main focus of this section will be about whether it’s OK to give exclusively to “one sort of thing.”
So I’ll take one hypothetical (but IMO ballpark realistic) example of how delivering basic global health interventions compares to another kind of charity, and assume that that comparison is pretty representative of most comparisons you could make with relatively-easily-available donation opportunities. We’re going to assume that the numbers work out as I say, and argue about what that means about the right way to spend $1 billion.
-NUH-
OK.
-UH-
So let’s say you have $1 billion to split between two kinds of interventions:
1 - Delivering bednets to prevent malaria. For every $2000 you spend on this, you avert one child’s death from malaria.
2 - Supporting improvements in US schools in disadvantaged areas. For every $15,000 you spend on this, one student gets a much better education for all of grades K-12.3 For concreteness, let’s say the improved education is about as good as “graduating high school instead of failing to do so” and that it leads to increased earnings of about $10,000 per year for the rest of each student’s life.4
Finally, let’s ignore flow-through effects (the fact that helping someone enables them to help others). There could be flow-through effects from either of these, and it isn’t immediately clear which is the bigger deal. We'll talk more about the long-run impacts of our giving in a future dialogue. For now let’s simplify and talk about the direct effects I outlined above.
Well, here’s my claim - the “averting a death” benefit is better and cheaper than the “better education” benefit. So you should keep going with option 1 until it isn’t available anymore. If it can absorb $1 billion at $2000 per death averted, you should put all $1 billion there.
And if it turns out that all other ways of helping humans in the near term are similarly inferior to the straightforward global health interventions, then similar logic applies, and you should spend all $1 billion on straightforward global health interventions before spending a penny on anything else.
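Under the hypothetical numbers above (and only under those assumed numbers), the arithmetic looks like this:

```python
# Hypothetical cost-effectiveness comparison from the dialogue (not real estimates).
budget = 1_000_000_000

cost_per_death_averted = 2_000        # bednets, per the stated assumption
cost_per_better_education = 15_000    # US schools intervention, per the stated assumption
extra_earnings_per_year = 10_000      # assumed earnings gain from the better education

deaths_averted_if_all_bednets = budget / cost_per_death_averted
educations_if_all_schools = budget / cost_per_better_education

print(f"{deaths_averted_if_all_bednets:,.0f} deaths averted")      # 500,000
print(f"{educations_if_all_schools:,.0f} improved educations")     # 66,667
# UH's claim: averting a death benefits its recipient more than an improved education
# benefits its recipient, and it's also cheaper per person helped - so, absent
# diminishing returns, the whole budget goes to option 1.
```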
-NUH-
What do you mean by “better” in this context, i.e., in what sense is averting a death “better” than giving someone a better education?
-UH-
It means that most people would benefit more from having their premature death averted than from having a better education. Or if it’s too weird to think about that comparison, it means most people would benefit more from avoiding a 10% chance of premature death than from getting a 10% chance of a better education. So behind the veil of ignorance, if people were deciding where you should give without knowing whether they’d end up as beneficiaries of the education programs or as (more numerous) beneficiaries of the bednets, they’d ~all ask you to spend all of the $1 billion on the bednets.
-NUH-
It’s pretty clear in this case that the bednet intervention indeed has something going for it that the education one doesn’t, that “money goes farther there” in some sense. The thing that’s bugging me is the idea of giving all $1 billion there.
Let’s start with the fact that if I were investing my money, I wouldn’t put it all into one stock. And if I were spending money on myself, I wouldn’t be like “Bananas are the best value for money of all the things I buy, so I’m spending all my money on bananas.” Do you think I should? How far are you diverging from the conventional wisdom here about “not putting all your eggs in one basket”?
-UH-
I think it’s reasonable to diversify your investments and your personal spending. The reason I think it’s reasonable is essentially because of diminishing marginal returns:
- The first banana you buy is a great deal, but your 100th banana of the week isn’t. Rent, food, entertainment, etc. are all categories where you gain a lot by spending something instead of nothing, but then you benefit more slowly as you spend more. So if we were doing cost-effectiveness calculations on everything, we’d observe phenomena like “Before I’ve bought any food for the week, food is the best value-for-money I can get; after I’ve bought some food, entertainment is the best value-for-money I can get; etc.” The math would actually justify diversifying.
- Investing is similar, because money itself has diminishing returns: losing half of your savings would hurt you much more than gaining that same dollar amount would help you. By diversifying, you reduce both your upside and your downside, and that’s good for your goals.
- But in this hypothetical, you can spend the entire $1 billion on charity without diminishing marginal returns. It’s $2000 per death averted, all the way down.
- Of course, it would be bad if everyone in the world tried to give to this same charity - they would, in fact, hit diminishing returns. When it comes to helping the world, the basic principles of diversification still apply, but they apply to the whole world’s collective “charity portfolio” rather than yours. If the world “portfolio” has $10 billion less in global health than it should, and you have $1 billion to spend, it’s reasonable for you to put all $1 billion toward correcting that allocation.
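A tiny sketch of that contrast, with made-up utility curves for the personal-spending case (nothing here comes from the post itself except the "constant $2,000 per death averted" assumption):

```python
# Personal spending: each category has diminishing marginal returns, so a greedy
# dollar-by-dollar allocation naturally spreads the budget across categories.
def marginal_value(spent):
    return 1 / (1 + spent)          # illustrative diminishing-returns curve

budget = 99
spend = {"food": 0, "rent": 0, "entertainment": 0}
for _ in range(budget):
    best = max(spend, key=lambda category: marginal_value(spend[category]))
    spend[best] += 1
print(spend)                        # roughly an even split across the three categories

# The charity in UH's hypothetical has *constant* returns ($2,000 per death averted,
# "all the way down"), so the same greedy logic puts every dollar into that one option.
```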
-NUH-
But some degree of “risk aversion” still applies - the idea of giving all to one intervention that turns out to not work out the way I thought it did, and thus having zero impact, scares me.
-UH-
It scares you, but if all the potential beneficiaries were discussing how they wanted you to donate, it shouldn’t particularly scare them. Why would they care if your particular $1 billion was guaranteed to help N people, instead of maybe-helping 2N people and maybe-helping zero? From the perspective of one of the 2N people, they have about a 50% chance of being helped either way.
Risk aversion is a fundamentally selfish impulse - it makes sense in the context of personal spending, but in the context of donating, it’s just another way of making this about the donor.
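To spell out why the beneficiaries shouldn't particularly care (a minimal sketch; N is just an illustrative number):

```python
# From a potential beneficiary's perspective, "guaranteed to help N of the 2N people
# who might be helped" and "50% chance of helping all 2N" look the same.
N = 500_000
pool = 2 * N

p_helped_if_guaranteed = N / pool      # N of the 2N are helped for certain
p_helped_if_gamble = 0.5 * 1.0         # 50% chance the intervention works and helps everyone

print(p_helped_if_guaranteed, p_helped_if_gamble)    # 0.5 0.5
```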
-NUH-
Well, my thinking isn’t just about “risk aversion,” it’s also about the specific nature of the charities we’re talking about.
I live in an unfair society. A key area where things are unfair is that some of us are raised in safe, wealthy neighborhoods and go to excellent schools, while others experience the opposite. As someone who has benefited from this unfair setup, I have a chance to make a small difference pushing things in the opposite direction. If I find myself blessed with $1 billion to give, shouldn’t I spend some of it that way?
-UH-
That doesn’t sound like an argument about what the people you’re trying to help would (in the “veil of ignorance” sense I’ve been using) prefer. It sounds more like you’re trying to “show you care” about a number of things.
Perhaps, to you, “$100 million to ten different things” sounds about ten times as good as “$1 billion to one thing” - you don’t intuitively feel the difference between $1 billion and $100 million, due to scope neglect.
-NUH-
I’m not sure what it’s about. Some of it is that I feel a number of “debts” for ways in which I’ve been unfairly privileged, which I acknowledge is about my own debts rather than others’ preferences.
For whatever reason, it feels exceedingly strange to plow all of $1 billion into a single sort of charity, while there are injustices all around me that I ignore.
-UH-
There are a number of responses I might give here, such as:
- One reason the bednets have higher value-for-money is that they’re more neglected in some sense. If everyone reasoned the way I’m reasoning, everyone would have a bednet by now, and the world would have moved on to other interventions.
- Not all problems are equally fit for all kinds of solutions. Lack of bednets is a problem that’s very responsive to money. To improve education, you might be more effective working in the field yourself.
- I think you’re kind of imagining that “giving all $1 billion to bednets” means “the problem of education gets totally ignored.” But you aren’t the world. Instead, imagine yourself as part of a large society of people working on all the problems you’re concerned about, some getting more attention than others. By giving all $1 billion to bednets, you’re just deciding that’s the best thing you can do to do your part.
But I think those responses would miss the point of this particular dialogue, which is about utilitarianism. So I’ll instead repeat my talking point from last time: if your giving doesn’t conform to what the beneficiaries would want under the veil of ignorance, then it has to be in some sense about you rather than about them. You have an impulse to feel that you’re “doing your part” on multiple causes - but that impulse is about your feelings of guilt, debt, etc., not about how to help others.
For further reading on diversification in giving, see:
- Giving Your All, a short article against diversification
- How Many Causes Should You Give To? - briefly explores arguments for and against diversification
- Worldview diversification - gives some arguments for diversification, but mostly in the context of very large amounts of giving and mostly for practical reasons
Hopefully this has given a sense of the headspace and motivations behind some of the stranger things utilitarianism tells one to do. As noted above, I ultimately have very mixed feelings on the whole matter, and NUH will have some stronger objections in future pieces (but the next couple of dialogues will continue to defend some of the strange views motivated by the attempt to have future-proof ethics).
Footnotes
1. https://en.wikipedia.org/wiki/Micromort#Travel states that traveling 230-250 miles in a car gives a 1 in 1 million chance of death by accident, implying that traveling 2.3-2.5 miles would give a 1 in 100 million chance. ↩
2. Note that the "veil of ignorance" refers to what a person would choose in a hypothetical situation, whereas most of the dialogue up to this point has used the language of what is better for a person. These can be distinct, since people might not always want what's best for them. I'm using the veil of ignorance as a simplification; we should generally assume that people behind the veil of ignorance are being rational, i.e., choosing what is actually best for them. What ultimately matters is what's best for someone, not what they prefer, and that's what I've talked about throughout the early part of the dialogue. ↩
3. I think total costs per student tend to be about $10-20k per year; here I’m assuming you can “significantly improve” someone’s education with well-targeted interventions for $1k per year. Based on my recollections of education research, I think I’m more likely to be overstating than understating the available impact here. ↩
4. According to this page, people who lack a HS diploma earn about $592 per week. If we assume that getting the diploma brings them up to the overall median earnings of $969 per week, that implies $377 per week in additional earnings, or a bit under $20k per year. I think this is a very aggressive way of estimating the value of a high school diploma, since graduating high school is likely correlated with lots of other things that predict high earnings (such as being from a higher-socioeconomic-status family), so I cut it in half. This isn’t meant to be a real estimate of the value of a high-school diploma; it’s still meant to be on the aggressive/generous side, because I’ll still be claiming the other benefit is better. ↩
9 comments
comment by cousin_it · 2022-02-15T22:45:45.974Z · LW(p) · GW(p)
“You’re scratching your own moral-seeming itches. You’re making yourself feel good. You’re paying down imagined debts that you think you owe, you’re being partial toward people around you. Ultimately, that is, your philanthropy is about you and how you feel and what you owe and what you symbolize. My philanthropy is about giving other people more of the lives they’d choose."
“My giving is unintuitive, and it’s not always ‘feel-good,’ but it’s truly other-centered. Ultimately, I’ll take that trade.”
I think the Stirnerian counterargument would be that global utilitarianism wouldn't spare me a red cent, because there are tons of people with higher priority than me, so basically you're asking me to be altruist toward something that is overall egoist (or indistinguishable from egoist) toward me. Not saying I subscribe to this argument 100%, but what do you think of it?
↑ comment by cousin_it · 2023-05-15T15:23:52.794Z · LW(p) · GW(p)
Coming back to this idea again after a long time, I recently heard a funny argument against morality-based vegetarianism: no animal ever showed the slightest moral scruple against eating humans, so why is it wrong for us to eat animals? I go back and forth on whether this "Stirnerian view" makes sense or not.
↑ comment by PeterMcCluskey · 2022-02-17T03:29:10.576Z · LW(p) · GW(p)
If you follow other centered ethics, then the counterargument seems irrelevant.
The post is excellent at explaining the implications of other centered ethics, but it doesn't seem intended to explain why I should adopt those ethics.
↑ comment by HoldenKarnofsky · 2022-03-31T22:47:02.225Z · LW(p) · GW(p)
I agree with Peter's comment.
comment by Dave Orr (dave-orr) · 2022-02-15T19:48:58.668Z · LW(p) · GW(p)
I love how useless the headings on the left are. I guess it's not really set up for dialogs.
ANYway, I thought this bit was interesting to think about more:
I think it’s reasonable to diversify your investments and your personal spending. The reason I think it’s reasonable is essentially because of diminishing marginal returns.
There's another key consideration and that's variance. If you have all your investments in the highest EV stock, there's some chance that it will go to zero and you'll lose everything, and for most people, that's a super bad outcome worth paying something to avoid. Also, more subtly, variance erodes returns -- if you have $100 and it goes up 10% and down 10% in either order, you have $99. If it goes up and down 50%, you end up with $75.
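A quick check of that variance-drag arithmetic (a minimal sketch):

```python
# Up-then-down swings of equal size erode returns: the bigger the swing, the bigger the loss.
for swing in (0.10, 0.50):
    final_value = 100 * (1 + swing) * (1 - swing)
    print(swing, round(final_value, 2))    # 0.1 -> 99.0, 0.5 -> 75.0
```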
I think you could make a similar argument for large charitable contributions. Suppose it turns out that the insecticide in bednets has large bad downstream effects -- now all your gains are wiped out. Whereas if you also funded the second best thing as well, your overall EV might be lower, but in return you reduce the chance of really bad outcomes.
I expect that the gains for saving a life are so large and the cost so low that in practice it still makes sense to focus on the top very few opportunities with marginal dollars. But I suspect that if someone is skeptical of that argument, the rejoinder here around diversification is missing something.
↑ comment by HoldenKarnofsky · 2022-03-31T22:47:25.281Z · LW(p) · GW(p)
Sorry about the table of contents! The LessWrong versions of my posts are auto-generated (the originals appear here).
I think your comments about variance could technically be cast in terms of diminishing marginal returns. If having zero (or negative) impact is "especially bad", this implies that going from zero to small positive impact is "more valuable" to you than going from small positive to large positive impact (assuming we have some meaningful units of impact we're using). UH's argument is that this shouldn't be the case.
The point about variance eroding returns is an interesting one and not addressed in the piece. I think the altruistic equivalent would be something like: "If humanity stakes all of its resources on something that doesn't work out, we get wiped out and don't get to see future opportunities; if humanity simply loses a large amount in such fashion, this diminishes its ability to try other things that might go well." But I think the relevant actor here is mostly/probably humanity, not an altruistic individual - humanity would indeed "erode its returns" by putting too high a percentage of its resources into particular things, but it's not clear that a similar dynamic applies for an altruistic individual (that is, it isn't really clear that one can "reinvest" the altruistic gains one realizes, or that a big enough failure to have impact wipes someone "out of the game" as an altruistic actor).
comment by Pattern · 2022-02-15T19:59:28.374Z · LW(p) · GW(p)
Contents:
1. Other centered ethics?
2. A quick synthesis of utilitarianism and other things
1. Other centered ethics?
To set the stage, I think utilitarianism is the best candidate for an other-centered ethics, i.e., an ethics that's based as much as possible on the needs and wants of others, rather than on my personal preferences and personal goals. If you start with some simple assumptions that seem implied by the idea of “other-centered ethics,” then you can derive utilitarianism.
It also seems ethically concerned with you as well.
The benefit of having such a division is that it seems like, if you spend effort/time/resources/thought/other things on others, having stuff that you can direct resources to, and ways of operating that can take into account what other people want in order to help them...well, you may already know what you want. To some extent, when people talk about utilitarianism they identify areas/projects that, in principle, derive a lot from general knowledge (about people). For example, 'malaria is bad' -> 'look for ways to help with that'. In other words, it largely uses abstraction, and broadly beneficial criteria for improving the world.
Stuff more derived from, say, your interests can also be applicable to people who share those interests. An obvious example is that lots of people want to read Harry Potter 7, so when it comes out, the public library has a lot of copies. More generally, public goods* have a lot to offer - both on their own, and where they interface with other types.
*From Wikipedia:
In economics, a public good (also referred to as a social good or collective good)[1] is a good that is both non-excludable and non-rivalrous. For such goods, users cannot be barred from accessing or using them for failing to pay for them. Also, use by one person neither prevents access of other people nor does it reduce availability to others.[1] Therefore, the good can be used simultaneously by more than one person.[2] This is in contrast to a common good,
2. A quick synthesis of utilitarianism and other things
Ethics is a complex suite of intuitions, many of them incompatible. There’s no master system for it. So a statement as broad as “Providing a modest benefit to a large enough number of persons can swamp all other ethical considerations” sounds like an overreach.
However, “Providing a modest benefit to a large enough number of persons may do well on many ethical scales.”
comment by Ben (ben-lang) · 2022-02-17T18:58:55.831Z · LW(p) · GW(p)
Enjoyable dialogues. I am convinced of the "all in one place" argument from a strictly efficiency standpoint. Even more so if we have a lot less money than in the example, so that diminishing returns (some beds that need bednets are out in the wilderness far from roads; you do those last) won't matter.
A (fairly unoriginal) utilitarianism problem that I find myself often toying with is something like the following:
What if we could spend the $1 billion on a brain in a jar connected to the matrix. We can feed that brain the sensation of a day at the beach. Then wipe its memory (to avoid diminishing returns) and then feed it that exact sensation again. Then repeat. Even better, (Using science!) we can overclock this process like crazy and can lead this brain through its day at the beach millions of times per second. In some sense this is a staggeringly good return on investment. It also does something weird to the veil of ignorance "you could be any one of the instances of that brain! They will soon outnumber the population of real humans so most likely you either get a nice day at the beach or you don't exist at all".
I would be interested in what you think.