Does donating to EA make sense in light of the mere addition paradox?

post by George3d6 · 2020-02-19T14:14:51.569Z · LW · GW · No comments

This is a question post.

TL;DR: Assuming that utilitarianism as a moral philosophy has flaws, such that one can follow it only to some unknown extent, what would be the moral imperative for donating to EA?

I've not really found a discussion around this on the internet, so I wonder if someone around here has thought about it.

It seems to me that EA makes perfect sense in light of a utilitarian view of morals. But a utilitarian view of morals seems pretty shaky in that, if you follow it to its conclusion, you get to the mere addition paradox or the "utility monster".

So, in light of that, does donating to an EA organisation (as in: one that tries to save or improve as many lives as possible for as little money as possible) really make any sense?

I can see it intuitively making sense, but barring a comprehensive moral system that can argue for the value of all human life, it seems intuition is not enough. As in, it also intuitively makes sense to put 10% of your income into low-yield bonds, so that in case one of your family members or friends has a horrible (deadly or severely life-quality diminishing) problem you can help them.

From an intuitive perspective, "helping my mother/father/best-friend/pet-dog" seems to trump "helping 10 random strangers" for most people, so it would seem that donating doesn't make sense unless you are someone who is very rich and can thus safely help anyone close to them and still have some wealth left over.

I can also see EA making sense from the perspective of other codes of ethics, but it seems like most people donating to EA don't really follow the other precepts of those codes that the codes hold to be more valuable.

E.g.

So basically, I'm kinda stuck understanding under which moral precepts it actually makes sense to donate to EA charities?

Answers

answer by Kaj_Sotala · 2020-02-19T15:59:45.846Z · LW(p) · GW(p)
I can see it intuitively making sense, but barring a comprehensive moral system that can argue for the value of all human life, it seems intuition is not enough. As in, it also intuitively makes sense to put 10% of your income into low-yield bonds, so that in case one of your family members or friends has a horrible (deadly or severely life-quality diminishing) problem you can help them.

Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn't either run into paradoxical answers, or be so vague that you'd need to fill in the missing gaps with intuition anyway.

Any decision that you make, ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness [LW · GW] but which are not themselves consciously accessible) favoring one decision or the other. You can try to formulate explicit principles (such as utilitarianism) which explain the principles behind those intuitions, but those explicit principles are always going to only capture a part of the story, because the full decision criteria are too complex to describe [LW · GW].

So the answer to

So basically, I'm kinda stuck understanding under which moral precepts it actually makes sense to donate to EA charities?

is just "the kinds where donating to EA charities makes more intuitive sense than not donating"; often people describe these kinds of moral intuitions as "utilitarian", but few people would actually endorse all of the conclusions of purely utilitarian reasoning.

comment by George3d6 · 2020-02-19T17:00:44.172Z · LW(p) · GW(p)
Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn't either run into paradoxical answers, or be so vague that you'd need to fill in the missing gaps with intuition anyway.

Agree, I wasn't trying to imply otherwise.

Any decision that you make, ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness [LW · GW] but which are not themselves consciously accessible) favoring one decision or the other. You can try to formulate explicit principles (such as utilitarianism) which explain the principles behind those intuitions, but those explicit principles are always going to only capture a part of the story, because the full decision criteria are too complex to describe [LW · GW].

Also agree, as in, this is how I usually formulate my moral decisions, and it's basically a pragmatic view of ethics, which is one I generally share.

is just "the kinds where donating to EA charities makes more intuitive sense than not donating"; often people describe these kinds of moral intuitions as "utilitarian", but few people would actually endorse all of the conclusions of purely utilitarian reasoning.

So basically, the idea here is that it actually makes intuitive moral sense for most EA donors to donate to EA causes? As in, it might be that they partially justify it with one moral system or another, but at the end of the day it seems "intuitively right" to them to do so.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2020-02-19T18:12:41.056Z · LW(p) · GW(p)
So basically, the idea here is that it actually makes intuitive moral sense for most EA donors to donate to EA causes?

Not sure whether every EA would endorse this description, but it's how I think of it, yes.

Replies from: Korz, George3d6
comment by Mart_Korz (Korz) · 2020-02-19T21:52:30.215Z · LW(p) · GW(p)

Regarding "intuitive moral sense", I would add that one's intuitions can be somewhat shaped by consciously thinking about their implications, noticing inconsistencies and settling on solutions/improvements.

For example, the realisation that I usually care about people more the better I know them made me realise that the only reason I do not care about strangers is the fact that I do not know them. As this collided with another intuition that rejects such a reason as arbitrary (I could have easily ended up knowing, and thus caring for, different people, which is evidence that this behaviour of my intuition does not reflect my 'actual' preferences), my intuitions updated towards valuing strangers.

I am not sure how strongly other EAs have reshaped their intuitions, but I think that using and accepting quantitative arguments for moral questions needs quite a bit of intuition-reshaping for most people.

comment by George3d6 · 2020-02-19T18:58:20.530Z · LW(p) · GW(p)

No worries, I wasn't assuming you were a speaker for the EA community here; I just wanted to better understand possible motivations for donating to EA given my current perspective on ethics. I think the answer you gave outlines one such line of reasoning quite well.

answer by Dagon · 2020-02-19T21:22:16.195Z · LW(p) · GW(p)

(note: I don't identify as Utilitarian, so discount my answer as appropriate)

You can split the question into multiple parts:

1) should I be an altruist, who gives up resources to benefit others more than myself?

2) if so, what does "benefit" actually mean for others?

3) How can I best achieve my desires, as defined by #1 and #2?


#1 is probably not answerable using only logic - this is up to you and your preferred framework for morals and decision-making.

#2 gets to the title of your post (though the content ranges further). Do you benefit others by reducing global population? By making some existing lives more comfortable or longer (and which ones)? There's a lot more writing on this, but no clear enough answers that it can be considered solved.

#3 is the focus of E in EA - if your goals match theirs (and if you believe their methodology for measuring), then EA helps identify the most efficient ways you can use resources for these goals.


To answer your direct question - maybe! To the extent that you're pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

To the extent that you care about topics they don't, don't. For instance, I also donate to local arts groups and city- and state-wide food charities, which I deeply understand are benefiting people who are already very lucky relative to global standards. If utility is fungible and there is declining utility for resources for any given recipient, this is not efficient. But I don't believe those things are smooth enough curves to overwhelm my other preferences.

comment by George3d6 · 2020-02-19T23:23:51.197Z · LW(p) · GW(p)
To the extent that you're pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

Well yes, this is basically the crux of my question.

As in, I obviously agree with the E and I tend to agree with the A, but my issue is with how A seems to be defined in EA (as in, mainly around improving the lives of people that you will never interact with or 'care' about on a personal level).

So I agree with: I should donate to some of my favorite writers/video-makers who are less popular and thus might be kept in business by $20 monthly on Patreon if another hundred people think like me (efficient as opposed to, say, donating to an org that helps all artists or donating to well-off creators).

I also agree with: It's efficient to save a life halfway across the globe for $x,000 as opposed to one in the EU where it would cost $x00,000 to achieve a similar addition in healthy life years.

Where I don't understand how the intuition really works is "Why is it better to save the life of a person you will never know/meet than to help 20 artists that you love" (or some such equivalence).

As in, I get that there's some intuition about it being "better", and I agree that it might be strong enough in some people that it's just "obvious", but my thinking was that there might be some sort of better ethics-rooted argument for it.

Replies from: Dagon
comment by Dagon · 2020-02-20T00:55:31.381Z · LW(p) · GW(p)

Nope, in the end it all comes down to your personal self-conception and intuition. You can back it up with calculations and testing your emotional reaction to intellectual counterfactuals ("how does it feel that I saved half a statistical life, but couldn't support my friend this month"). But all the moral arguments I've seen come down to either religious authority or assertion that some intuitions are (or should be) universal.
