'Effective Altruism' as utilitarian equivocation.

post by Dias · 2013-11-24T18:35:58.342Z · LW · GW · Legacy · 81 comments


Summary: The term 'effective altruist' invites confusion between 'the right thing to do' and 'the thing that most efficiently promotes welfare.' I think this creeping utilitarianism is a bad thing, and should at least be made explicit. This is not to accuse anyone of deliberate deception.

Over the last year or so, the term 'Effective Altruist' has come into use. I self-identified as one on the LW survey, so I speak as a friend. However, I think there is a very big danger with the terminology.

The term 'Effective Altruist' was born out of the need for a label for those people who were willing to dedicate their lives to making the world a better place in rational ways, even if that meant doing counter-intuitive things, like working as an Alaskan truck driver. The previous term, 'really super awesome hardcore people', was indeed a little inelegant.

However, 'Effective Altruist' has a major problem: it refers to altruism, not ethics. Altruism may be a part of ethics (though the etymology of the term gives some concern), but it is not all there is to ethics. Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.

A charity that very efficiently promoted beauty and justice, but only inefficiently produced happiness, would probably not be considered an EA organization. A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them this was sufficient to show 1) that it was not EA, and hence 2) that it was inferior to EA things.

Such thinking involves either an equivocation or a concealed premise. If 'EA' is interpreted literally, as 'the primary/driving goal is to help others', then the fact that something is not EA does not show that it is not the best thing you could do - there is more to ethics and the good than altruism and promoting welfare. Failing to promote one dimension of the good doesn't mean a cause isn't the optimal way of promoting their sum. On the other hand, if 'EA' is interpreted broadly, as being concerned with 'happiness, health, justice, fairness and/or other values', then merely failing to promote welfare/happiness does not mean a cause is not EA. Much EA discussion, like that on the popular Facebook group, equivocates between these two meanings.*

...Unless one thought that helping people was all there was to ethics, in which case this is not equivocation. As virtually all of CEA's leaders are utilitarians, it is plausible that this was the concealed premise in their argument. In this case, there is no equivocation, but a different logical fallacy, that of an omitted premise, has been committed. And we should be just as wary as in the case of equivocation.

Unfortunately, utilitarianism is false, or at least not obviously true. Something can be the morally best thing to do, while not being EA. Just because some utilitarians have popularized a term which cleverly equivocates between "promotes welfare" and "is the best thing" does not mean we should be taken in. Every fashionable ideology likes to blur the lines between its goals and its methods (Is Socialism about helping the working man or about state ownership of industry? Is libertarianism about freedom or low taxes?) in order to make people who agree with the goals forget that there might be other means of achieving them.

There are two options: recognize 'EA' as referring to only a subset of morality, or recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness.

* Yes, one might say that promoting X's honor thereby helped X, and thus there was no distinction. However, I think people who make this argument in theory are unlikely to observe it in practice - I doubt that there will be an EA organisation dedicated to pure retribution, even if it were both extremely cheap to promote and a part of ethics.

81 comments

Comments sorted by top scores.

comment by wdmacaskill · 2013-11-25T22:18:36.675Z · LW(p) · GW(p)

Hi,

Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.

So here are a few points:

1. EA does not equal utilitarianism.

Utilitarianism makes many claims that EA does not make:

EA does not take a stand on whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.

EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utilitarianism claims that it's always obligatory to act for the greater good.

EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.

Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven't asked).

2. Rather, EA is something that almost every plausible moral theory is in favour of.

Almost every plausible moral theory thinks that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it's not anti other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.

3. Is EA explicitly welfarist?

The term 'altruism' suggests that it is. And I think that's fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it's not effective altruism - it's "effective justice", "effective environmental preservation", or something. Note, though, that you may well think that there are non-welfarist values - indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone - but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.

So, to answer your dilemma:

EA is not trying to be the whole of morality.

It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality - an aspect that is very important for those living in affluent countries, and who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.

Replies from: Dias, None, lmm
comment by Dias · 2013-11-26T03:53:47.001Z · LW(p) · GW(p)

Thanks for the response. I agree with most of the territory covered, of course, but my objection here is to the framing, not the philosophy.

Maybe you want to do other things effectively, but then it's not effective altruism

So why does the website explicitly list fairness, justice and trying to do as much good as possible as EA goals in themselves? And why does user:weeatquince (whose identity we both know but I will not 'out' on a public forum) think that "actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good" are EA?

Replies from: wdmacaskill
comment by wdmacaskill · 2013-11-26T15:32:05.395Z · LW(p) · GW(p)

I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think that here is the right place for that.

On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; it doesn't make a claim about whether doing good is supererogatory or obligatory; it doesn't make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it's important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc), and then to encourage people to give to those charities. If so, then we've got an important activity that people of very many different ethical backgrounds can get behind - which is great!

comment by [deleted] · 2015-06-21T13:48:51.553Z · LW(p) · GW(p)

sd

comment by lmm · 2013-11-26T20:00:51.916Z · LW(p) · GW(p)

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.

That's rather a double standard there. Any specific form of EA does make a precise claim about what should be maximized.

comment by Nisan · 2013-11-24T20:07:45.189Z · LW(p) · GW(p)

I'm not a memetic architect of the EA movement, but speaking as an observer it seems pretty clear that EA is about doing good by helping others. If you care about other things in addition to helping others, there's still a place for you in the movement, as long as you want to set aside a portion of your resources and help others as much as possible with it. On the other hand, if you aren't interested in the charities that most effectively help people, GiveWell is of no use to you and the EA movement doesn't seem very relevant to you either.

Your interests are broader than the interests held in common by the EA community. This shouldn't be a problem, but reading between the lines it looks like you're disappointed by the movement because they were dismissive of a charitable cause that's important to you. I think it would be best if the EA movement framed its distinctions in terms of "effective (at helping people) vs. not effective (at helping people) charities", instead of "good vs. bad charities", at least in public statements. It was my impression that they're doing a good job of this, but I could be wrong.

Replies from: Dias
comment by Dias · 2013-11-24T21:07:47.661Z · LW(p) · GW(p)

it seems pretty clear that EA is about doing good by helping others.

I half agree... except they explicitly include "justice, fairness and/or other values" in the movement. Perhaps Luke was not speaking on behalf of the movement there, but it was posted on their website without disclaimer.

Replies from: Nisan
comment by Nisan · 2013-11-25T02:59:42.383Z · LW(p) · GW(p)

Oh I see. I wonder what Luke meant by including justice and fairness in there.

Replies from: somervta
comment by somervta · 2013-11-25T05:35:09.997Z · LW(p) · GW(p)

Perhaps that increasing justice and fairness will help others.

comment by Adele_L · 2013-11-24T19:18:16.735Z · LW(p) · GW(p)

So instead of being altruistic, you should be Friendly (in the AI sense).

Replies from: Benito, Gunnar_Zarncke, komponisto
comment by Ben Pace (Benito) · 2013-11-27T21:38:31.613Z · LW(p) · GW(p)

"Hey, we're Friendly Optimizers."

"Hey, we're Effectively Friendly."

"Hey, we're about as Friendly as a Friendly AI would be if it were human."

comment by Gunnar_Zarncke · 2013-11-24T21:24:56.221Z · LW(p) · GW(p)

'Friendly in the AI sense' is a quite compact summary of a precise (albeit non-constructive) definition of perfectly globally ethically correct behavior. Nice that you pointed it out. I'd hope for a more readable version. 'AI-friendly' will not do. Maybe 'total friendliness'? If it can be a goal for an AI, it can be an ideal for mere humans.

comment by Douglas_Knight · 2013-11-24T18:58:04.669Z · LW(p) · GW(p)

I think putting "altruist" in the name is more explicit about their utilitarianism than any disclaimer could possibly be.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-11-24T19:29:00.572Z · LW(p) · GW(p)

I agree. Every non-sentientist value that you add to your pool of intrinsic values needs an exchange rate (which can be non-linear and complex and whatever) that implies you'd be willing to let people suffer in exchange for said value. This seems egoistic rather than altruistic because you'd be valuing your own preference for tradition more than you value the well-being of others for their own sake. If other people value tradition intrinsically, then preference utilitarianism will output that tradition counts to the extent that it satisfies people's preferences for it. This would be the utilitarian way to include "complexity of value".
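
To make the "exchange rate" point concrete, here is a minimal sketch in Python (the linear form, the 0.5 weight, and the specific numbers are my illustrative assumptions, not anyone's actual proposal): any positive weight on a non-welfare value fixes some rate at which a loss of welfare is accepted for a gain in that value.

```python
# Toy illustration only: a linear value function with a hypothetical weight
# on tradition. Any positive weight implies that some trade which sacrifices
# welfare for tradition still counts as an improvement.

TRADITION_WEIGHT = 0.5  # hypothetical exchange rate between tradition and welfare

def utility(welfare: float, tradition: float) -> float:
    # Aggregate value: welfare plus tradition discounted by the exchange rate.
    return welfare + TRADITION_WEIGHT * tradition

# Giving up 1 unit of welfare (someone suffers slightly more) for 3 units
# of tradition comes out ahead under this function:
assert utility(welfare=99, tradition=13) > utility(welfare=100, tradition=10)
```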

Replies from: Dias, komponisto, Ghatanathoah
comment by Dias · 2013-11-24T21:03:20.247Z · LW(p) · GW(p)

This seems egoistic rather than altruistic because you'd be valuing your own preference for tradition more than you value the well-being of others for their own sake.

If you're a moral realist, you're not letting others suffer for the sake of your preference for tradition, you're letting them suffer for the sake of the moral value of tradition.

Otherwise, one could equally accuse the utilitarian of selfishly valuing their own preference for hedonism more than they value tradition for its own sake.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-11-24T21:36:11.366Z · LW(p) · GW(p)

If you're a moral realist, you're not letting others suffer for the sake of your preference for tradition, you're letting them suffer for the sake of the moral value of tradition.

This would only be an argument for being "moral" (whatever that may mean) rather than altruistic; it doesn't address my point that utilitarianism is systematized altruism. Utilitarianism is what results if you apply veil of ignorance type of reasoning without being risk-averse.

Otherwise, one could equally accuse the utilitarian of selfishly valuing their own preference for hedonism more than they value tradition for its own sake.

As someone with negative utilitarian inclinations, I sympathize with the "self-centered preference for hedonism" objection against classical utilitarianism.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-12-12T06:52:51.611Z · LW(p) · GW(p)

it doesn't address my point that utilitarianism is systematized altruism. Utilitarianism is what results if you apply veil of ignorance type of reasoning without being risk-averse.

Preference Utilitarianism, or Parfit's Success Theory, might be considered systematized altruism. But classical utilitarianism isn't altruistic at all. It doesn't care about anything or anyone. It only cares about certain types of feelings. It ignores all other personal goals people have in a monstrously sociopathic fashion.

As someone with negative utilitarian inclinations, I sympathize with the "self-centered preference for hedonism" objection against classical utilitarianism.

I occasionally desire to do activities that make me suffer because I value the end result of that activity more than I value not suffering. If you try to stop me you're at least as selfish as a hedonistic utilitarian who makes people suffer in order to generate a large amount of pleasure. (This is, of course, if you're a hedonistic negative utilitarian. If you're a negative preference utilitarian I presume you'd want me to do that activity to prevent one of my preferences from not being satisfied.)

In my view, both classical and negative (hedonistic) utilitarianism are sort of "selfish" because being unselfish implies you respect other people's desires for how their lives should go. If you make someone feel pleasure when they don't want to, or not feel pain when they do want to, you are harming them. You are making their lives worse.

In fact, I think that classical and negative (hedonistic) utilitarianism are misnamed, because "utilitarian" is derived from the word "utility," meaning "usefulness." Something is "utilitarian" if it is useful to people in achieving their goals. But classical and negative utilitarians do not consider people's goals to be valuable at all. All they value is Pleasure and NotPain. People are not creatures with lives and goals of their own, they are merely receptacles for containing Pleasure and NotPain.

Preference utilitarianism, Parfit's Success Theory, and other similar theories do deserve the title of "utilitarian." They do place value on the lives and desires of others. So I would say that they are "unselfish." But all purely pleasure-maximizing/pain-minimizing theories of value are, in the sense above, "selfish."

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-12-12T18:55:13.988Z · LW(p) · GW(p)

Preference Utilitarianism, or Parfit's Success Theory, might be considered systematized altruism.

Yes, certainly a strong case can be made if you have negative population ethics (i.e. don't intend to bring about new preferences just to satisfy them). However, I also have sympathies for those who criticize the moral relevance of preferences.

I occasionally desire to do activities that make me suffer because I value the end result of that activity more than I value not suffering. If you try to stop me you're at least as selfish as a hedonistic utilitarian who makes people suffer in order to generate a large amount of pleasure.

As you note, a negative preference utilitarian would let you go on. I think this view is plausible. I'm leaning towards a hedonistic view, though, and one reason for this has to do with my view on personal identity. I don't think the concept makes any sense. I don't think my present self has any privileged (normative) authority over my future selves, because when I just think in terms of consciousness-moments, I find it counterintuitive why preferences (as opposed to suffering) would be what is relevant.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-12-15T07:29:05.352Z · LW(p) · GW(p)

I'm leaning towards a hedonistic view, though, and one reason for this has to do with my view on personal identity. I don't think the concept makes any sense.

I consider nearly all arguments of the form "X is not a coherent concept, therefore we ought not to care about it" to be invalid. I don't mean to give offense, but such arguments seem to me to be a form of pretending to be wise. This is especially true if X has predictive power, if knowing something is X can cause you to correctly anticipate your experiences. And you have to admit, knowing someone is the same person as someone you've encountered before makes you more likely to be able to predict their behavior.

Arguments that challenge the coherency of a concept typically function by asking a number of questions that our intuitions about the concept cannot answer readily, creating a sense of dumbfoundment. They then do not bother to think further about the questions and try to answer them, instead taking the inability to answer the question readily as evidence of incoherence. These arguments also frequently appeal to the fallacy of the gray, assuming that because there is no clear-cut border between two concepts or things that no distinction between them must exist.

This fact was brought home to me when I came across discussions of racism that argued that racism was wrong because "race" was not a coherent concept. The argument initially appealed to me because it hit a few applause lights, such as "racism is bad," and "racists are morons." However, I became increasingly bothered because I was knowledgeable about biology and genetics, and could easily see several simple ways to modify the concept of "race" into something coherent. It also seemed to me that the reason racism was wrong was that preference violation and suffering were bad regardless of the race of the person experiencing them, not because racists were guilty of incoherent reasoning. I realized that the argument was a terrible one, and could only be persuasive if one was predisposed to hate racism for other reasons.

The concept of personal identity makes plenty of sense. I've read Parfit too, and read all the questions about the nature of personal identity. I then proceeded to actually answer those questions and developed a much better understanding of what exactly it is I value when I say I value personal identity. To put it (extremely) shortly:

There are entities that have preferences about the future. These preferences include preferences about how the entity itself will change in the future (Parfit makes a similar point when he discusses "global preferences"). These preferences constitute a "personal identity." If an entity changes we don't need to make any references to the changed entity being the "same person" as a past entity. We simply take into account whether the change is desirable or not. I write much more about the subject here.

I don't think my present self has any privileged (normative) authority over my future selves

I don't necessarily think that either. That's why I want to make sure that the future self that I turn into remains similar to my present self in certain ways, especially in his preferences. That way the issue won't ever come up.

because when I just think in terms of consciousness-moments

This might be your first mistake. We aren't just consciousness moments. We're utility functions, memories, personalities and sets of values. Our consciousness-moments are just the tip of the iceberg. That's one reason why it's still immoral to violate a person's preferences when they're unconscious, their values still exist somewhere in their brain, even when they're not conscious.

I find it counterintuitive why preferences (as opposed to suffering) would be what is relevant.

It seems obvious to me that they're both relevant.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-12-15T12:35:35.276Z · LW(p) · GW(p)

I consider nearly all arguments of the form "X is not a coherent concept, therefore we ought not to care about it" to be invalid.

I agree, I'm not saying you ought not care about it. My reasoning is different: I claim that people's intuitive notion of personal identity is nonsense, in a similar way as the concept of free will is nonsense. There is no numerically identical thing existing over time, because there is no way such a notion could make sense in the first place. Now, once someone realises this, he/she can either choose to group all the consciousness-moments together that trigger an intuitive notion of "same person" and care about that, even though it is now different from what they thought it was, or they can conclude that actually, now that they know it is something else, they don't really care about it at all.

I think your view is entirely coherent, by the way. I agree that a reductionist account of personal identity still leaves room for preferences, and if you care about preferences as opposed to experience-moments, you can keep a meaningful and morally important notion of personal identity via preferences (although this would be an empirical issue -- you could imagine beings without future-related preferences).

I guess the relevance for personal identity on the question of hedonism or preferences for me comes from a boost in intuitiveness of the hedonistic view after having internalized empty individualism.

It seems obvious to me that they're both relevant.

I'm 100% sure that there is something I mean by "suffering", and that it matters. I'm only maybe 10-20% sure that I'd also want to care about preferences if I knew everything there is to know.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-12-30T19:22:53.123Z · LW(p) · GW(p)

Now, once someone realises this, he/she can either choose to group all the consciousness-moments together that trigger an intuitive notion of "same person" and care about that, even though it is now different from what they thought it was

I don't know if your analysis is right or not, but I can tell you that that isn't what it felt like I was doing when I was developing my concepts of personal identity and preferences. What it felt like I was doing was elucidating a concept I already cared about, and figured out exactly what I meant when I said "same person" and "personal identity." When I thought about what such concepts mean I felt a thrill of discovery, like I was learning something new about myself I had never articulated before.

It might be that you are right and that my feelings are illusory, that what I was really doing was realizing a concept I cared about was incoherent and reaching about until I found a concept that was similar, but coherent. But I can tell you that's not what it felt like.

EDIT: Let me make an analogy. Ancient people had some weird ideas about the concept of "strength." They thought that it was somehow separate from the body of a person, and could be transferred by magic, or by eating a strong person or animal. Now, of course, we understand that that is not how strength works. It is caused by the complex interaction of a system of muscles, bones, tendons, and nerves, and you can't transfer that complex system from one entity to another without changing many of the properties of the entity you're sending it to.

Now, considering that fact, would you say that ancient people didn't want anything coherent when they said they wanted to be strong? I don't think so. They were mistaken about some aspects about how strength works, but they were working from a coherent concept. Once they understood how strength worked better they didn't consider their previous desire for strength to be wrong.

I see personal identity as somewhat analogous to that. We had some weird ideas about it in the past, like that it was detached from physical matter. But I think that people have always cared about how they are going to change from one moment to the next, and had concrete preferences about it. And I think when I refined my concepts of personal identity I was making preferences I already had more explicit, not swapping out some incoherent preferences and replacing them with similar coherent ones.

I'm 100% sure that there is something I mean by "suffering", and that it matters. I'm only maybe 10-20% sure that I'd also want to care about preferences if I knew everything there is to know.

I am 100% certain that there are things I want to do that will make me suffer (learning unpleasant truths for instance), but that I want to do anyway, because that is what I prefer to do.

Suffering seems relevant to me too. But I have to admit, sometimes when something is making me suffer, what dominates my thoughts is not a desire for it to stop, but rather annoyance that this suffering is disrupting my train of thought and making it hard for me to think and get the goals I have set for myself accomplished. And I'm not talking about mild suffering; the example in particular that I am thinking of is throwing up two days after having my entire abdomen cut open and sewn back together.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2014-01-08T21:38:44.916Z · LW(p) · GW(p)

This is interesting. I wonder what a CEV-implementing AI would do with such cases. There seems to be a point where you're inevitably going to hit the bottom of it. And in a way, this is at the same time going to be a self-fulfilling prophecy, because once you start identifying with this new image/goal of yours, it becomes your terminal value. Maybe you'd have to do separate evaluations of the preferences of all agent-moments and then formalise a distinction between "changing view based on valid input" and "changing view because of a failure of goal-preservation". I'm not entirely sure whether such a distinction will hold up in the end.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2014-01-18T05:06:41.980Z · LW(p) · GW(p)

I wonder what a CEV-implementing AI would do with such cases.

Even if it does turn out that my current conception of personal identity isn't the same as my old one, but is rather a similar concept I adopted after realizing my values were incoherent, the AI might still find that the CEVs of my past and present selves concur. This is because, if I truly did adopt a new concept of identity because of its similarity to my old one, this suggests I possess some sort of meta-value that values taking my incoherent values and replacing them with coherent ones that are as similar as possible to the original. If this is the case the AI would extrapolate that meta-value and give me a nice new coherent sense of personal identity, like the one I currently possess.

Of course, if I am right and my current conception of personal identity is based on my simply figuring out what I meant all along by "identity," then the AI would just extrapolate that.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2014-01-29T11:45:36.791Z · LW(p) · GW(p)

This is because, if I truly did adopt a new concept of identity because of its similarity to my old one, this suggests I possess some sort of meta-value that values taking my incoherent values and replacing them with coherent ones that are as similar as possible to the original. If this is the case the AI would extrapolate that meta-value and give me a nice new coherent sense of personal identity, like the one I currently possess.

Maybe, but I doubt whether "as similar as possible" is (or can be made) uniquely denoting in all specific cases. This might sink it.

comment by komponisto · 2013-11-24T19:45:10.291Z · LW(p) · GW(p)

If other people value tradition intrinsically, then preference utilitarianism will output that tradition counts to the extent that it satisfies people's preferences for it. This would be the utilitarian way to include "complexity of value".

If other people value tradition instead of helping other people, then the utilitarian thing to do is to get them to value helping other people more and tradition less. And on it goes, until you've tiled the universe with altruistic robots who only care about helping other altruistic robots (help other altruistic robots (help other altruistic robots (....(...(

Utilitarianism is fundamentally incompatible with value complexity.

Replies from: Viliam_Bur, drethelin, Lukas_Gloor
comment by Viliam_Bur · 2013-11-25T09:11:34.284Z · LW(p) · GW(p)

Utilitarianism is fundamentally incompatible with value complexity.

Could you explain why exactly? To me it seems that if you value multiple things, let's call them A, B, C, you could construct a function such as F = min(A, B, C), whose maximization supports all of these values.

In such a situation, imagine that currently, e.g., A = 10, B = 1000, C = 1500 - say A measures welfare (thousands of people are literally starving to death), while B and C measure music and film, both of which are flourishing. Trying to increase the function F then means fully focusing on increasing A and ignoring the values B and C (until A catches up with them). In the short term, this may look like not having complex values. But that's just a local situation.

In short: even if you have complex values, you may find that in the current situation the best way to increase the total outcome is to focus on one of these values.

Near mode: Imagine that you live in a village with 1000 citizens, where half of them are starving to death, and the other half is watching movies. One person proposes a new food program. Another person proposes making another movie (of which you already have a few dozens). As a mayor, you choose to spend the tax money on the former. The latter guy accuses you of not understanding the complexity of values. Do you think the accusation is fair?
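
To make the F = min(A, B, C) point concrete, here is a minimal sketch in Python (the labels and numbers are my illustrative assumptions, loosely following the village example): maximizing the minimum of several genuinely valued dimensions locally directs all marginal effort at whichever dimension is currently worst off.

```python
# Toy illustration only: a maximin aggregate over several valued dimensions.
values = {"welfare": 10, "music": 1000, "movies": 1500}

def F(vals: dict) -> int:
    # Aggregate utility: the worst-off dimension sets the score.
    return min(vals.values())

def best_marginal_spend(vals: dict) -> str:
    # The dimension whose improvement actually raises F right now.
    return min(vals, key=vals.get)

print(F(values))                    # 10
print(best_marginal_spend(values))  # 'welfare' -- the mayor funds food, not another movie
```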

Replies from: komponisto
comment by komponisto · 2013-11-25T15:25:00.931Z · LW(p) · GW(p)

Utilitarianism is fundamentally incompatible with value complexity.

To me it seems that if you value multiple things, let's call them A, B, C, you could construct a function

It sounds like you might be confusing utilitarianism with utility functions (a common mistake on LW). While utilitarianism always involves a utility function, not all utility functions are utilitarian.

Even if you have complex value, you may find that in current situation the best way to increase total outcome is to focus on one of these values.

Yes, that's always theoretically possible. In real life, however, humans are subject to value drift, and have to "practice" their values, lest they lose them.

One person proposes a new food program. Another person proposes making another movie (of which you already have a few dozens). As a mayor, you choose to spend the tax money on the former. The latter guy accuses you of not understanding the complexity of values.

That doesn't sound like the latter guy's true rejection. It sounds like he really means to accuse the mayor of undervaluing movies specifically. (After all, if the mayor had made the opposite choice, why couldn't the food program guy equally well accuse the mayor of not understanding the complexity of value?)

comment by drethelin · 2013-11-24T22:38:34.990Z · LW(p) · GW(p)

If you value something, the correct thing to do is to convince others to value it. OBVIOUSLY AND WHATEVER YOUR VALUE IS. This is not a problem with utilitarianism. It's a problem with Values. If you value tradition it helps your values to convince other people to value tradition until the universe is tiled with traditional robots.

Replies from: komponisto
comment by komponisto · 2013-11-24T23:07:54.715Z · LW(p) · GW(p)

It's a problem with simple values, not values in general. If you have a complex value system, it might contain detailed, not-concisely-summarizable specifications about exactly when it helps to convince other people to value tradition.

comment by Lukas_Gloor · 2013-11-24T19:59:30.975Z · LW(p) · GW(p)

Good points, but I think there are ways to still keep value complexity.

The population ethics of preference utilitarianism seem underdetermined. Rather than maximizing the total amount of satisfied preferences, you could also go for minimizing the amount of frustrated preferences. If there is no obligation to bring new preferences into existence just in order to satisfy them, then the originally existing preferences won't be overridden by future robots optimized for mutual preference-satisfaction.

And assuming that there is such a thing as a "terminal value", then getting someone to abandon their true value for tradition for something else would still count as a preference-violation if you assume idealized preference utilitarianism (which seems very similar to CEV) rather than straightforward preference utilitarianism that takes people's stated preferences at face value.

Replies from: komponisto
comment by komponisto · 2013-11-24T20:26:32.408Z · LW(p) · GW(p)

Rather than maximizing the total amount of satisfied preferences, you could also go for minimizing the amount of frustrated preferences.

This is a form of negative utilitarianism, and inherits the major problems with that theory (such as its endorsement of destroying the universe to stop all the frustrated preferences going on in it right now).

And assuming that there is such a thing as a "terminal value", then getting someone to abandon their true value for tradition for something else would still count as a preference-violation

It might, but it would be one that was outweighed by the larger number of preference-satisfactions to be gained from doing so, just like the disutility of torturing someone for 50 years is outweighed by the utility of avoiding 3^^^3 dust-speck incidents (for utilitarian utility functions).

Replies from: None, Lukas_Gloor, Ghatanathoah
comment by [deleted] · 2013-11-24T23:11:02.711Z · LW(p) · GW(p)

This is a form of negative utilitarianism, and inherits the major problems with that theory (such as its endorsement of destroying the universe to stop all the frustrated preferences going on in it right now).

Well hold on. Is destroying the universe easier than just eliminating the frustration but leaving the universe intact? I mean, surely forcibly wireheading everyone is easier than destroying the entire damned universe ;-).

It might, but it would be one that was outweighed by the larger number of preference-satisfactions to be gained from doing so, just like the disutility of torturing someone for 50 years is outweighed by the utility of avoiding 3^^^3 dust-speck incidents (for utilitarian utility functions).

True, but utility monsters and tile-the-universe-in-your-favorite-sapients also work for utilitarianism. Naive utilitarianism breaks down from the sheer fact that real populations are not Gaian hiveminds who experience each other's joy and suffering as one.

Even if you believe in such a thing as emotional utility that matters somehow at all, you can still point out that the dust-speckers are suffering at the absolute minimum level they can even notice, and that surely they can freaking cope with it to keep some poor bastard from being tortured horrifically for 50 years straight.

(Sorry, I'm a total bullet-dodger on ethical matters.)

Replies from: komponisto
comment by komponisto · 2013-11-24T23:17:07.400Z · LW(p) · GW(p)

Is destroying the universe easier than just eliminating the frustration but leaving the universe intact?

It could be, if (for example) building UFAI turns out to be easier than eliminating the frustration.

Replies from: None
comment by [deleted] · 2013-11-24T23:20:59.641Z · LW(p) · GW(p)

Yes, but we're talking about abstract ethical theories, so we're already playing as the AI. An AI designed to minimize frustrated preferences will find it easier (that is, a better ratio of value to effort) to wirehead than to kill, unless the frustration-reduction of killing an individual is greater than the frustration-creation happening to all the individuals who are now mourning, scared, screaming in pain from shrapnel, etc.

Replies from: Viliam_Bur, komponisto
comment by Viliam_Bur · 2013-11-25T16:27:08.526Z · LW(p) · GW(p)

Step 1: Wirehead all the people.

Step 2A: Continue caring about them.

Step 2B: Kill them.

How exactly could the option 2A be easier than 2B? No one is mourning, because everyone alive is wireheaded. And surely killing someone is less work than keeping them alive.

comment by komponisto · 2013-11-25T14:44:46.787Z · LW(p) · GW(p)

we're already playing as the AI

Doesn't matter. If humans can build an AI, an AI can build an AI as well.

Replies from: None
comment by [deleted] · 2013-11-25T14:59:40.023Z · LW(p) · GW(p)

Yes, but the point is not to speculate about AI, it's to speculate about the particular ethical system in question, that being negative utilitarianism. You can assume that we're modelling an agent who faithfully implements negative utilitarianism, not some random paper-clipper.

Replies from: komponisto
comment by komponisto · 2013-11-25T15:30:48.943Z · LW(p) · GW(p)

Yes, and my claim is that, given the amount of suffering in the world, negative utilitarianism says that building a paperclipper is a good thing to do (provided it's sufficiently easy).

Replies from: None
comment by [deleted] · 2013-11-25T16:04:02.098Z · LW(p) · GW(p)

Ok, again, let's assume we're already "playing as the AI". We are already possessed of superintelligence. Whatever we decide is negutilitarian good, we can feasibly do.

Given that, we can either wirehead everyone and eliminate their suffering forever, or rewrite ourselves as a paper-clipper and kill them.

Which one of these options do you think is negutilitarian!better?

Replies from: komponisto
comment by komponisto · 2013-11-25T16:28:24.665Z · LW(p) · GW(p)

Which one of these options do you think is negutilitarian!better?

If the first is easier (i.e. costs less utility to implement), or if they're equally easy to implement, the first.

If the second is easier, it would depend on how much easier it was, and the answer could well be the second.

A superintelligence is still subject to tradeoffs.

But even if it turns out that wireheading is better on net than paperclipping, (a) that's not an outcome I'm happy with, and (b) paperclipping is still better (according to negative utilitarianism) than the status quo. This is more than enough to reject negative utilitarianism.

Replies from: None
comment by [deleted] · 2013-11-25T19:00:04.223Z · LW(p) · GW(p)

Neither of us is happy with wireheading. Still, it's better to be accurate about why we're rejecting negutilitarianism.

Replies from: komponisto
comment by komponisto · 2013-11-25T19:16:24.334Z · LW(p) · GW(p)

The fact that it prefers paperclipping to the status quo is enough for me (and consistent with what I originally wrote).

comment by Lukas_Gloor · 2013-11-24T20:42:40.324Z · LW(p) · GW(p)

This is a form of negative utilitarianism, and inherits the major problems with that theory (such as its endorsement of destroying the universe to stop all the frustrated preferences going on in it right now).

Not if people have a strong preference to go on living. Killing them would frustrate this preference. This view would only imply that you shouldn't go on procreating indefinitely into the future if this would produce lots of unsatisfied preferences overall. So this conclusion depends on empirical circumstances, and it would be odd to reject a normative view because of this.

It might, but it would be one that was outweighed by the larger number of preference-satisfactions to be gained from doing so

Perhaps, that too is an empirical question. If people's preferences for tradition were strong enough, it likely wouldn't get outweighed. And if the preferences are weak, it being outweighed wouldn't pose much of a problem, given the framework in consideration.

Replies from: komponisto
comment by komponisto · 2013-11-24T21:10:43.663Z · LW(p) · GW(p)

Not if people have a strong preference to go on living.

The preference to go on living results in large part from the good things (i.e. opportunities for preference satisfaction) available in life. If we didn't care about those any more, the strength of the preference to go on living would presumably diminish considerably.

But yes, the policy recommendations of utilitarianism always depend on how the numbers actually come out. The point is that they're too dependent on a single parameter, or a small subset of parameters, contrary to complexity of value.

(I would go so far as to argue that this is by design: utilitarianism historically comes from an intellectual context in which people thought moral theories ought to be simple.)

comment by Ghatanathoah · 2013-12-12T07:13:37.885Z · LW(p) · GW(p)

This is a form of negative utilitarianism, and inherits the major problems with that theory (such as its endorsement of destroying the universe to stop all the frustrated preferences going on in it right now)

I'm pretty sure that most forms of negative preference utilitarianism are "timeless." Once a strong "terminal value" type preference is created it counts as always existing, forever. If you destroy the universe the frustrated preferences will still be there, even harder to satisfy than before.

It might, but it would be one that was outweighed by the larger number of preference-satisfactions to be gained from doing so, just like the disutility of torturing someone for 50 years is outweighed by the utility of avoiding 3^^^3 dust-speck incidents (for utilitarian utility functions).

To get around this I employ a sort of "selective negative utilitarianism." To put it bluntly, I count the creation of people with the sort of complex humane values I appreciate to be positive, for the most part,* but consider creating creatures with radically simpler values (or modifying existing complex creatures into them) to count as a negative.

This results in a sort of two-tier system, where I'm basically a preference utilitarian for regular ethics, and an ideal utilitarian for population ethics. In situations where the population is fixed I value all preferences fairly equally.** But when adding new people, or changing people's preferences, I consider it bad to add people who don't have preferences for morally valuable ideals like Truth , Freedom, Justice, etc.

*Of course, I also reject the Repugnant Conclusion. So I also consider adding complex creatures to be negative if it pushes the world in the direction of the RC.

**One exception is that I don't value extremely sadistic preferences at all. I'd rescue the person who is about to be tortured in the Thousand Sadists' Problem.

comment by Ghatanathoah · 2013-12-12T06:30:24.914Z · LW(p) · GW(p)

Every non-sentientist value that you add to your pool of intrinsic values needs an exchange rate (which can be non-linear and complex and whatever) that implies you'd be willing to let people suffer in exchange for said value....If other people value tradition intrinsically, then preference utilitarianism will output that tradition counts to the extent that it satisfies people's preferences for it. This would be the utilitarian way to include "complexity of value".

I am proceeding along this line of thought as well. I believe in something similar to G. E. Moore's "Ideal Utilitarianism." I believe that we should maximize certain values like Truth, Beauty, Curiosity, Freedom, etc. However, I also believe that these values are meaningless when divorced from the existence of sentient creatures to appreciate them. Unlike Moore, I would not place any value at all on a piece of lovely artwork no one ever sees. There would need to be a creature to appreciate it for it to have any value.

So basically, I would shift maximizing complex values from regular ethics to population ethics. I would give "extra points" to the creation of creatures who place intrinsic value on these ideals, and "negative points" to the creation of creatures who don't value them.

Now, you might argue that this does create scenarios where I am willing to create suffering to promote an ideal. Suppose I have the option of creating a wireheaded person that never suffers, or a person who appreciates ideals, and suffers a little (but not so much that their life is not worth living). I would gladly choose the idealistic person over the wirehead.

I do not consider this to be "biting a bullet" because that usually implies accepting a somewhat counterintuitive implication, and I don't find this implication to be counterintuitive at all. As long as the idealistic person's life is not so terrible that they wish they had never been born, I cannot truly be said to have hurt them.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-12-12T19:00:54.178Z · LW(p) · GW(p)

So would you accept the very repugnant conclusion for total preference utilitarianism? If you value the creation of new preferences (of a certain kind), would this allow for tradeoff-situations where you had to frustrate all the currently existing preferences, create some more beings with completely frustrated preferences, and then create a huge number of beings living lives just slightly above the boundary where the satisfaction-percentage becomes "valuable" in order to make up for all the suffering (and overall improve the situation)? This conclusion seems to be hard to block if you consider it morally urgent to bring new satisfied preferences into existence. And it, too, seems to be selfish in a way, although this would have to be argued for further.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-12-15T04:03:13.728Z · LW(p) · GW(p)

So would you accept the very repugnant conclusion for total preference utilitarianism?

I did not mention it because I didn't want to belabor my view, but no, I wouldn't. I think that one of the important Ideals that people seem to value is that a smaller population of people with highly satisfied preferences is better than a larger population with lives barely worth living, even if the total amount of preference satisfaction is higher in the large population. That's one reason why the repugnant conclusion is repugnant. This means that sometimes it is good to add people, at other times it is bad.

Of course, this view needs some qualifiers. First of all, once someone is added to a population, they count as being part of it even after they are dead, so you can't arrive at an ideal population size by killing people. This also entails accepting the Sadistic Conclusion, but that is an unavoidable part of all types of Negative Utilitarianism, whether they are of the normal variety, or the weird "sometimes negative sometimes positive depending on the context" variety I employ.

I think a helpful analogy would be Parfit's concept of "global preferences," which he discusses on page 3 of this article. Parfit argues that we have "Global Preferences," which are meta-preferences about what sort of life we should live and what sort of desires we should have. He argues that these Global Preferences dictate the goodness of whether we develop a new preference.

For instance, Parfit argues, imagine someone gets you addicted to a drug, and gives you a lifetime supply of the drug. You now have a strong desire to get more of the drug, which is satisfied by your lifetime supply. Parfit argues that this does not make you life better, because you have a global meta-preference to not get addicted to drugs, which has been violated. By contrast (my example, not Parfit's) if I enter into a romantic relationship with someone it will create a strong desire to spend time with that person, a desire much stronger than my initial desire to enter the relationship. However, this is a good thing, because I do have a global meta-preference to be in romantic relationships.

We can easily scale this up to population ethics. I have Global Moral Principles about the type and amount of people who should exist in the world. Adding people who fulfill these principles makes the world better. Adding people who do not fulfill these principles makes the world worse, and should be stopped.

And it, too, seems to be selfish in a way, although this would have to be argued for further.

Reading and responding to your exchange with other people about this sort of "moral selfishness" has gotten me thinking about what people mean and what concepts they refer to when they use that word. I've come to the conclusion that "selfish" isn't a proper word to use in these contexts. Now, obviously this is something of a case of "disputing definitions", but the word "selfish" and the concepts it refers to are extremely "loaded" and bring a lot of emotional and intuitive baggage with them, so I think it's good mental hygiene to be clear about what they mean.

To me what's become clear is that the word "selfish" doesn't refer to any instance where someone puts some value of theirs ahead of something else. Selfishness is when someone puts their preferences about their own life and about their own happiness and suffering ahead of the preferences, happiness, and suffering of others.

To illustrate this, imagine the case of a racial supremacist who sacrifices his life in order to enable others of his race to continue oppressing different races. He is certainly a very bad person. But it seems absurd to call him "selfish." In my view this is because, while he has certainly put some of his preferences ahead of the preferences of others, none of those preferences were preferences about his own life. They were preferences about the overall state of the world.

Now, obviously what makes a preference "about your own life" is a fairly complicated concept (Parfit discusses it in some detail here). But I don't see that as inherently problematic. Most concepts are extremely complex once we unpack them.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-12-15T12:45:30.476Z · LW(p) · GW(p)

I did not mention it because I didn't want to belabor my view, but no, I wouldn't. I think that one of the important Ideals that people seem to value is that a smaller population of people with highly satisfied preferences is better than a larger population with lives barely worth living, even if the total amount of preference satisfaction is higher in the large population.

It seems to me like your view is underdetermined in regard to population ethics. You introduce empirical considerations about which types of preferences people happen to have in order to block normative conclusions. What if people actually do want to bite the bullet - would that make it okay to do it? Suppose there were ten people, and they would be okay with getting tortured, adding a billion tortured people, plus adding a sufficiently large number of people with preferences more-satisfied-than-not. Would this ever be ok according to your view? If not, you seem to not intrinsically value the creation of satisfied preferences.

I agree with your analysis of "selfish".

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-12-30T19:00:22.981Z · LW(p) · GW(p)

If not, you seem to not intrinsically value the creation of satisfied preferences.

You're right that I do not intrinsically value the creation of all satisfied preferences. This is where my version of Moore's Ideal Utilitarianism comes in. What I value is the creation of people with satisfied preferences if doing so also fulfills certain moral ideals I (and most other people, I think) have about how the world ought to be. In cases where the creation of a person with satisfied preferences would not fulfill those ideals I am essentially a negative preference utilitarian, I treat the creation of a person who doesn't fulfill those ideals the same way a negative preference utilitarian would.

I differ from Moore in that I think the only way to fulfill an ideal is to create (or not create) a person with certain preferences and satisfy those preferences. I don't think, like he did, that you can (for example) increase the beauty in the world by creating pretty objects no one ever sees.

I think a good analogy would again be Parfit's concept of global preferences. If I read a book, and am filled with a mild preference to read more books with the same characters, such a desire is in line with my global preferences, so it is good for it to be created. By contrast, being addicted to heroin would fill me with a strong preference to use heroin. This preference is not in line with my global preferences, so I would be willing to hurt myself to avoid creating it.

Suppose there were ten people, and they would be okay with getting tortured, adding a billion tortured people, plus adding a sufficiently large number of people with preferences more-satisfied-than-not.

I have moral ideals about many things, which include how many people there should be, their overall level of welfare, and most importantly, what sort of preferences they ought to have. It seems likely to me that the scenario with the torture+new people scenario would violate those ideals, so I probably wouldn't go along with it.

To give an example where creating the wrong type of preference would be a negative, I would oppose the creation of a sociopath or a paperclip maximizer, even if their life would have more satisfied preferences than not. Such a creature would not be in line with my ideals about what sort of creatures should exist. I would even be willing to harm myself or others, to some extent, to prevent their creation.

This brings up a major question I have about negative preference utilitarianism, which I wonder if you could answer since you seem to have thought more about the subject of negative utilitarianism than I have. How much harm should a negative preference utilitarian be willing to inflict on existing people to prevent a new person from being born? For instance, suppose you had a choice between torturing every person on Earth for the rest of their lives, or creating one new person who will live the life of a rich 1st world person with a high happiness set point. Surely you wouldn't torture everyone on Earth? A hedonist negative utilitarian wouldn't of course, but we're talking about negative preference utilitarianism.

A similar question I have is, if a creature with an unbounded utility function is created, does that mean that infinite wrong has been done, since such a creature essentially has infinite unsatisfied preferences? How does negative preference utilitarianism address this?

The best thing I can come up with is to give the creation of such a creature a utility penalty equal to "however much utility the creature accumulates over its lifetime, minus x," where x is a moderately sized number. However, it occurs to me that someone who's thought more about the subject than I have might have figured out something better.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2014-01-08T20:38:01.167Z · LW(p) · GW(p)

Something you wrote in a comment further above:

This also entails accepting the Sadistic Conclusion, but that is an unavoidable part of all types of Negative Utilitarianism, whether they are of the normal variety, or the weird "sometimes negative sometimes positive depending on the context" variety I employ.

I don't think so; neither negative preference nor negative hedonistic utilitarianism implies the Sadistic Conclusion. Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:

The Sadistic Conclusion: In some circumstances, it would be better with respect to utility to add some unhappy people to the world (people with negative utility), rather than creating a larger number of happy people (people with positive utility).

Now, according to classical utilitarianism, the large number of happy beings would each be of "positive utility". However, given the evaluation function of the negative view, their utility is neutral if their lives are perfect, and worse than neutral if their lives contain suffering. The Sadistic Conclusion is avoided, although only persuasively so if you find the axiology of the negative view convincing. Otherwise, you're still left with an outcome that seems counterintuitive, but this seems much less worrisome than having something that looks messed up even at the theoretical level. You say you're okay with the Sadistic Conclusion because there are no alternatives, but I would assume that, if you did not yet know that there are no alternatives you'd want to go with, you would have a strong inclination to count it as a serious deficiency of your stated view.
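As a purely illustrative toy calculation (the numbers and scoring functions are my own invention, not anything stated above), the formal point can be made concrete like this:

```python
# Toy populations: each tuple is one life's (suffering, positive welfare).
small_terrible = [(-100, 0)] * 10            # a few beings with terrible lives
huge_near_ideal = [(-1, 1000)] * 1_000_000   # very many beings with almost ideal lives

def classical_value(pop):
    # Classical utilitarianism: positive welfare and suffering both count.
    return sum(pos + suf for suf, pos in pop)

def negative_value(pop):
    # Negative utilitarianism: only suffering counts; a perfect life scores 0.
    return sum(suf for suf, pos in pop)

# Classical view: the near-ideal population counts as "positive utility", so
# preferring the terrible population over it is the Sadistic Conclusion as
# Arrhenius formulates it.
print(classical_value(small_terrible), classical_value(huge_near_ideal))  # -1000 999000000

# Negative view: neither population has positive utility, so the letter of the
# Sadistic Conclusion cannot apply, although the negative view still prefers
# the small terrible population here (-1000 > -1000000).
print(negative_value(small_terrible), negative_value(huge_near_ideal))    # -1000 -1000000
```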

Addressing the comment right above now:

How much harm should a negative preference utilitarian be willing to inflict on existing people to prevent a new person from being born?

Negative utilitarians try to minimize the total amount of preference-frustrations, or suffering. Whether this is going to happen to a new person that you'll bring into existence, or whether it is going to happen to a person that already exists, does not make a difference. (No presence-bias, as I said above.) So a negative preference utilitarian should be indifferent between killing an existing person and bringing a new person (fully developed, with memories and life-goals) into existence if this latter person is going to die or be killed soon as well. (Also note that being killed is only a problem if you have a preference to go on living, and that even then, it might not be the thing considered worst that could happen to someone.)

This implies that the preferences of existing people may actually make bringing new people into existence the best action. If humans have a terminal value of having children, then these preferences of course count as well, and if the children are guaranteed perfect lives, you should bring them all into existence. You should even bring them into existence if some of them are going to suffer horribly, as long as the frustration of the existing people's preferences, were the children not created, would altogether be even greater.
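A minimal sketch of that decision rule, with invented numbers (this is just my reading of the paragraph above, not a statement of the actual procedure being endorsed):

```python
def should_create_children(children_total_frustration: float,
                           parents_frustration_if_denied: float) -> bool:
    # A negative preference utilitarian picks whichever branch contains less
    # total preference-frustration; whose frustration it is makes no difference.
    return children_total_frustration < parents_frustration_if_denied

print(should_create_children(0, 10))    # True: guaranteed perfect lives, frustrated would-be parents
print(should_create_children(8, 10))    # True: some suffering, but less than the parents' frustration
print(should_create_children(50, 10))   # False: the children's suffering outweighs it
```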

A similar question I have is, if a creature with an unbounded utility function is created, does that mean that infinite wrong has been done, since such a creature essentially has infinite unsatisfied preferences? How does negative preference utilitarianism address this?

You will need some way of normalizing all preferences, setting the difference between "everything fulfilled" and "everything frustrated" equal for beings of the same "type". Then the question is whether all sentient beings fall under the same type, or whether you want to discount according to intensity of sentience, or some measure of agency or something like that. I have not yet defined my intuitions here, but I think I'd go for something having to do with sentience.
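One way to make the normalization idea concrete is sketched below; the weighting scheme and numbers are assumptions of mine for illustration, since no specific scheme is proposed above.

```python
def normalized_frustration(frustrated: float, total: float) -> float:
    # Scale each being so that "everything frustrated" = 1 and
    # "everything fulfilled" = 0, regardless of how many preferences it has.
    return frustrated / total if total else 0.0

def weighted_frustration(frustrated: float, total: float,
                         sentience_weight: float = 1.0) -> float:
    # Optionally discount by intensity of sentience (or agency, etc.).
    return sentience_weight * normalized_frustration(frustrated, total)

# A being with 3 of its 10 preferences frustrated and a being with 3 million
# of its 10 million preferences frustrated both register 0.3 before any
# sentience weighting is applied.
print(weighted_frustration(3, 10))                        # 0.3
print(weighted_frustration(3_000_000, 10_000_000))        # 0.3
print(weighted_frustration(3, 10, sentience_weight=0.5))  # 0.15
```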

Replies from: Ghatanathoah
comment by Ghatanathoah · 2014-01-17T22:53:09.195Z · LW(p) · GW(p)

Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:

When I read the formulation of the Sadistic Conclusion I interpreted "people with positive utility" to mean either a person whose life contained no suffering, or a person whose satisfied preferences/happiness outweighed their suffering. So I would consider adding a small population of terrible lives instead of a large population of almost ideal lives to be the Sadistic Conclusion.

If I understand you correctly, you are saying that negative utilitarianism technically avoids the Sadistic Conclusion because it considers a life with any suffering at all to be a life of negative utility, regardless of how many positive things that life also contains. In other words, it avoids the SC because its criteria for what makes a life positive or negative are different from the criteria Arrhenius used when he first formulated the SC. I suppose that is true. However, NU does not avoid the (allegedly) unpleasant scenario Arrhenius wanted to avoid (adding a tortured life instead of a large number of very positive lives).

Negative utilitarians try to minimize the total amount of preference-frustrations, or suffering....(Also note that being killed is only a problem if you have a preference to go on living, and that even then, it might not be the thing considered worst that could happen to someone.)

Right, but if someone has a preference to live forever does that mean that infinite harm has been done if they die? If so, you might as well do whatever you like afterwards, since infinite harm has already occurred. Should you torture everyone on Earth for decades to prevent such a person from being added? That seems weird.

The best solution I can currently think of is to compare different alternatives, rather than try to measure things in absolute terms. So if a person who would have lived to 80 dies at 75, that generates 5 years of unsatisfied preferences, not infinity, even if the person would have preferred to live forever. But that doesn't solve the problem of adding people who wouldn't have existed otherwise.
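A minimal sketch of that comparative approach (my own illustration; "years of unsatisfied preference" is just a stand-in unit):

```python
def comparative_harm(actual_lifespan: float, best_alternative_lifespan: float) -> float:
    # Harm is measured against the best outcome actually available,
    # not against the (possibly infinite) content of the preference itself.
    return max(best_alternative_lifespan - actual_lifespan, 0.0)

# Dying at 75 when living to 80 was attainable counts as 5 years of
# unsatisfied preference, even for someone who wanted to live forever.
print(comparative_harm(75, 80))   # 5.0
# For a newly added person there is no obvious alternative to compare against,
# which is exactly the remaining problem noted above.
```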

What I'm trying to say is, people have an awful lot of preferences, and generally only manage to satisfy a small fraction of them before they die. So how many unsatisfied preferences should adding a new person count as creating? How big a disutility is it compared to other disutilities, like thwarting existing preferences and inflicting pain on people?

A couple of possibilities occur to me off the top of my head. One would be to find the difference in satisfaction between the new people and the old people, and then compare it to the difference in satisfaction between the old people and the counterfactual old people in the universe where the new people were never added.

Another possibility would be to set some sort of critical level based on the maximum level of utility it is possible to give the new people given our society's current level of resources, without inflicting greater disutilities on others than the utility you give to the new people. Then weigh the difference between the new people's actual utility and their "critical possible utility" and compare that to the dissatisfaction the existing people would suffer if the new people are not added.
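A rough formalization of the two possibilities just described may make them easier to compare; this is only my own reading of them, with invented numbers, and the decision rules are left as raw quantities rather than verdicts.

```python
# Possibility 1: compare two differences in satisfaction.
def possibility_one(new_people, old_with_addition, old_without_addition):
    # How much worse (or better) off are the new people than the old people?
    gap_new_vs_old = new_people - old_with_addition
    # How much did adding them cost (or benefit) the old people?
    cost_to_old = old_with_addition - old_without_addition
    return gap_new_vs_old, cost_to_old

# Possibility 2: measure the new people against a "critical possible utility",
# the best they could feasibly have been given, then weigh the shortfall
# against the dissatisfaction existing people would feel if they are not added.
def possibility_two(new_actual, critical_possible, existing_dissatisfaction_if_not_added):
    shortfall = critical_possible - new_actual
    return shortfall, existing_dissatisfaction_if_not_added

# Invented numbers purely for illustration:
print(possibility_one(new_people=60, old_with_addition=70, old_without_addition=75))  # (-10, -5)
print(possibility_two(new_actual=60, critical_possible=80,
                      existing_dissatisfaction_if_not_added=30))                      # (20, 30)
```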

Does either of these possibilities sound plausible to you, or do you have another idea?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2014-01-29T11:24:42.297Z · LW(p) · GW(p)

I agree with your points on the Sadistic Conclusion issue. Arrhenius acknowledges that his analysis depends on the (to him trivial) assumption that there are "positive" welfare levels. I don't think this axiom is trivial, because it interestingly implies that non-consciousness somehow becomes "tarnished" and non-optimal. Under a Buddhist view of value, this would be different.

Right, but if someone has a preference to live forever does that mean that infinite harm has been done if they die?

If all one person cared about was to live for at least 1'000 years, and all a second person cared about was to live for at least 1'000'000 years (and after their desired duration they would become completely indifferent), would the death of the first person at age 500 be less tragic than the death of the second person at age 500'000? I don't think so, because assuming that they value partial progress on their ultimate goal the same way, they both ended up reaching "half" of their true and only goal. I don't think the first person would somehow care less in overall terms about achieving her goal than the second person.
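The comparison comes down to one line of arithmetic, assuming (as the thought experiment stipulates) that both people value partial progress on their single goal linearly:

```python
def goal_fraction_achieved(lived_years: float, desired_years: float) -> float:
    # Each person is scored by the fraction of their one ultimate goal they
    # actually reached, not by the absolute number of years they missed.
    return lived_years / desired_years

print(goal_fraction_achieved(500, 1_000))          # 0.5
print(goal_fraction_achieved(500_000, 1_000_000))  # 0.5
# On this way of comparing, the two deaths come out equally tragic, even
# though one person "missed" 500 years and the other 500,000.
```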

To what extent would this way of comparing preferences change things?

What I'm trying to say is, people have an awful lot of preferences, and generally only manage to satisfy a small fraction of them before they die.

I think the point you make here is important. It seems like there should be a difference between beings who have only one preference and beings who have an awful lot of preferences. Imagine a chimpanzee with a few preferences and compare him to a sentient AGI, say. Would both count equally? If not, how would we determine how much their total preference (dis)satisfaction is worth? The example I gave above seems intuitive because we were talking about humans who are (as specified by the unwritten rules of thought experiments) equal in all relevant respects. With chimps vs. AI it seems different.

I'm actually not sure how I would proceed here, and this is of course a problem. Since I'd (in my preference-utilitarianism mode) only count the preferences of sentient beings and not e.g. the revealed preferences of a tree, I would maybe weight the overall value by something like "intensity of sentience". However, I suspect that I'm inclined to do this because I have strong leanings towards hedonistic views, so it would not necessarily fit elegantly with a purely preference-based view on what matters. And that would be a problem because I don't like ad hoc moves.

Or maybe a better way to deal with it would be the following: preferences ought to be somewhat specific. If people just say "infinity", they still aren't capable of envisioning what this would actually mean. So maybe a chimpanzee could only envision a certain number of things because of some limit on brain complexity, while typical humans could envision slightly more, but nothing close to infinity. In order for someone, at a given moment, to have the preference to live forever, that person would in this case need an infinitely complex brain to properly envision all that this implies. So you'd get an upper bound that prevents the problems you mentioned from arising.

You could argue that humans actually want to live for infinity by making use of personal identity and transitivity (e.g. "if I ask in ten years, the person will want to live for the next ten years and be able to give you detailed plans", and keep repeating that every ten years), but here I'd say we should just try to minimize the preference-dissatisfaction of all consciousness-moments, not of persons. I might be talking nonsense with the word "envision", but something along these lines seems plausible to me too.

The two possibilities you propose don't seem plausible to me. I have a general aversion to things you'd only come up with in order to fix a specific problem and that wouldn't seem intuitive from the beginning / from a top-down perspective. I need to think about this further.

comment by weeatquince · 2013-11-25T21:32:21.797Z · LW(p) · GW(p)

1. As an EA I strongly resist any attempt to say that EA is utilitarianism, as I would see doing so as harmful to the movement, and it would exclude many of the non-utilitarian EAs I know.

EA is not utilitarianism. There is no reason why you cannot apply rationality to doing good and be an EA and believe in Christian ethics / ethical anti-realism / virtue ethics / deontological ethics / etc. For example, I have an EA friend who would never kill one person to save 5 people, but believes strongly that we should research and give to the very best charities and so on. I see the above point as unequivocal, insofar as I

2. I would recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good. E.g. if someone truly believed in some Rawlsian concept of justice and supported the charity that best advanced that idea. HOWEVER:

  • I have some arbitrary, ill-defined limits on what counts as good. E.g. I would never accept as an EA someone who believed that killing Jews is the good.
  • If I meet someone with a very strange view (e.g. that the best cause is saving snails) I would assume that they are being irrational rather than that they just have a different understanding of morality.

3. I think it is bad of CEA to push the OP away on utilitarian grounds. That said, I find it hard to conceive of any moral view that would lead someone to believe that the best action they could take would be to create a charity to promote promise-keeping, so I have some sympathy for CEA. (Also, I would be interested to hear an elaboration of why a promise-keeping charity is the best thing to do.)

Replies from: Dias, savageorange
comment by Dias · 2013-11-26T03:48:58.011Z · LW(p) · GW(p)

I would recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good.

You're a CEA employee, if I remember correctly? If so, your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I would be interested to hear an elaboration of why a promise-keeping charity is the best thing to do

I'm far from certain it is. But as far as I'm aware no effort at all is put into it at present, so there could be very low-hanging fruit.

Replies from: weeatquince, wdmacaskill
comment by weeatquince · 2013-12-02T23:37:33.686Z · LW(p) · GW(p)

This sort of mixed messaging is exactly what I was objecting to

Firstly, could you elaborate on how what I said differs from what Will has said, please? I am fairly sure we both agree on what EA is.

You're a CEA employee

Incorrect, although I do volunteer for them in ways that help spread EA.

comment by wdmacaskill · 2013-11-26T15:40:08.427Z · LW(p) · GW(p)

your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I think you've revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:

  1. EA is utilitarianism in disguise (which I think is demonstrably false).

But now the post reads more like the main conclusion is:

  1. EA is vague on a crucial issue, which is whether the effective pursuit of non-welfarist goods counts as effective altruism (which is a much more reasonable thing to say).

Replies from: Dias
comment by Dias · 2013-11-26T23:42:25.981Z · LW(p) · GW(p)

I haven't revised the post subsequent to anyone commenting. I did make a ninja edit to clear up some formatting immediately after submitting.

comment by savageorange · 2013-11-26T04:22:49.260Z · LW(p) · GW(p)

I see the above point as unequivocal, insofar as I

I see the above sentence as incomplete, and it's not obvious what the ending would be. You might want to fix that.

comment by komponisto · 2013-11-24T19:10:00.665Z · LW(p) · GW(p)

My sentiments exactly. Thank you for this well-written and badly-needed post. (Also for correctly understanding the meaning of "utilitarianism".)

comment by Lukas_Gloor · 2013-11-24T19:09:31.968Z · LW(p) · GW(p)

Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.

I think your critique would have a higher chance of improving something (in your view) if you framed it as a concern about your personal values not being included adequately, rather than as a two-line "refutation" of utilitarianism (plus an overused link that begs the question as well) that also implicitly includes the controversial premise of moral realism.

Replies from: Dias
comment by Dias · 2013-11-24T21:04:56.575Z · LW(p) · GW(p)

The truth of utilitarianism doesn't matter to my argument. A strategy can be intellectually dishonest even if its goal is correct.

comment by fubarobfusco · 2013-11-25T08:19:55.535Z · LW(p) · GW(p)

A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them, this was sufficient to show 1) that it was not EA - and hence 2) inferior to EA things.

Keeping promises is a good thing.

I've heard the claim that societies and subcultures where people expect promises to be kept will prosper over those where people don't. (I've also heard the claim that members of the latter groups tend to regard members of the former as hopelessly gullible ... and also the claim that the former two claims are used to justify social prejudices and thus to weaken people by discouraging them from cooperating.)

However ...

I'm not sure how effectively a charity can go about promoting promise-keeping in general. A traditional way to promote virtues is by preaching, in one form or another. The Mormons used to run ads during Saturday morning cartoons, promoting virtues such as honesty. That one was very memorable (at least to me) ... but I'm not sure how we would check whether it actually made people tell fewer lies.

comment by buybuydandavis · 2013-11-25T02:18:09.390Z · LW(p) · GW(p)

I found I agreed with the summary, but I think for different reasons than the OP.

It would be more accurate to label what goes on around here in the name of Effective Altruism as Effective Utilitarianism, as an equal weighting between people is usually baked into the analysis. That doesn't have to be the case for Altruism.

comment by drethelin · 2013-11-24T19:41:03.264Z · LW(p) · GW(p)

Most people do not have identical values. This means that if you're trying to help a lot of people, you have to rely on things you can assess most easily. It's a lot harder to tell how much truth, beauty, or honor (ESPECIALLY honor) someone has access to than how much running water they have or whether they have malaria. I say we should concentrate on welfare and let people take care of their own needs for abstract morality, especially considering how much they will disagree on what they want.

Effective altruism doesn't say anything about general ethics, and I don't know why you're claiming it tries to. It's about how to best help the most people. It's about charity and reducing worldsuck. I think this is pretty obvious to everyone involved, and I don't think people are being fooled.

Replies from: komponisto, Dias
comment by komponisto · 2013-11-24T20:05:06.384Z · LW(p) · GW(p)

The issue is whether people like the OP and myself, who are interested in reducing worldsuck, but not necessarily in the same kind of way as utilitarians, belong in the EA community or not.

I'm quite confused about this. I think my values are pretty compatible with Yudkowsky's, but Yudkowsky seems to think he's an EA. On the other hand, my values seem incompatible with those of e.g. Paul Christiano, who I think everyone would agree clearly is an EA. Yet those two seem to act as though they believed their values were compatible with each other. Now both of them are as intelligent as I, maybe more. So if I update on their apparent beliefs about what sets of values are compatible, should I conclude that I'm an EA, despite my non-endorsement of utilitarianism or any other kind of extreme altruism, or should I instead conclude that I don't want Yudkowskian FAI after all, and start my own rival world-saving project?

Replies from: jkaufman, drethelin
comment by jefftk (jkaufman) · 2013-11-25T18:00:38.585Z · LW(p) · GW(p)

Could you expand more on the incompatibility you see between Yudkowsky and Christiano's values?

Replies from: komponisto
comment by komponisto · 2013-11-25T19:07:22.806Z · LW(p) · GW(p)

Christiano strikes me as the sort of person who would embrace the Repugnant Conclusion; whereas I think Yudkowsky would ultimately dodge any bullet that required him to give up turning the universe into an interesting sci-fi world whose inhabitants did things like write fanfiction stories.

Nobody actually acts like they believe in total utilitarianism, but Christiano comes as close as anyone I know of to at least threatening to act as if they believe in it. Yudkowsky, having written about complexity of value, doesn't give me the same worry.

comment by drethelin · 2013-11-24T22:31:45.713Z · LW(p) · GW(p)

utilitarianism isn't extreme altruism. It's just a way of trying to quantify morality. It doesn't decide what you care about. I'm pretty tired of people reacting to the concept of Utilitarianism with "Oh shit does that mean I need to give away all my money and live subsistence style to be a good person!?" A selfish utilitarian is just as possible as an extremely altruistic one or as one who's moderately altruistic. Effective altruism is about your ALTRUISM being EFFECTIVE, not about you NEEDING to be an effective altruist. When you decide to give to a charity based on its efficiency and the percentage that goes to overhead, you are making an effective altruism decision. This is the case regardless of whether your life is dedicated to altruism or you're just giving 100 bucks because it's Christmas.

Replies from: komponisto, Nornagest
comment by komponisto · 2013-11-24T23:00:38.493Z · LW(p) · GW(p)

A selfish utilitarian is just as possible as an extremely altruistic one

Not on the traditional usage of the term, it isn't -- and more to the point, not as the term is being used both in the grandparent and the OP.

You're confusing utilitarianism with plain old instrumental rationality.

comment by Nornagest · 2013-11-26T19:17:01.639Z · LW(p) · GW(p)

utilitarianism isn't extreme altruism. It's just a way of trying to quantify morality. It doesn't decide what you care about. I'm pretty tired of people reacting to the concept of Utilitarianism with "Oh shit does that mean I need to give away all my money and live subsistence style to be a good person!?" A selfish utilitarian is just as possible as an extremely altruistic one or as one who's moderately altruistic.

There's enough ambiguity here that I'm not totally sure, but it sounds like you're describing consequentialist ethics, not utilitarianism as such. Utilitarianisms vary in their details, but they all imply that people's utility is fungible, including that of their adherents; that a change in (happiness, fulfillment, preference satisfaction) is just as significant whether it applies to you or to, say, a bricklayer's son living in a malarial part of Burkina Faso.

It's certainly possible to claim utilitarian ethics and still prioritize your own utility in practice. But that's inconsistent -- aside from a few quibbles regarding asymmetric information -- with being a good person by that standard, if the standard means anything at all.

Replies from: drethelin
comment by drethelin · 2013-11-26T19:27:24.678Z · LW(p) · GW(p)

I've always thought of utilitarianism as an effort to quantify "good" and a framework for making moral decisions rather than an imperative. E.g., the term "utility function" is part of utilitarian theory but does not presuppose utilitarian base motivations. Someone's utility function consists of their desire to maximize welfare as well as their desires for hope and honor and whatnot.

It's become increasingly clear that very few people think about it this way.

Replies from: Nisan
comment by Nisan · 2013-11-27T00:13:23.290Z · LW(p) · GW(p)

Yep, see the SEP on Utilitarianism and the LW wiki on utility functions.

comment by Dias · 2013-11-24T21:11:26.836Z · LW(p) · GW(p)

Effective altruism doesn't say anything about general ethics

Except when it talks about fairness, justice and trying to do as much good as possible without restriction.

comment by tog · 2015-07-12T16:43:34.994Z · LW(p) · GW(p)

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism

comment by Arepo · 2013-11-25T14:11:31.530Z · LW(p) · GW(p)

‘A charity that very efficiently promoted beauty and justice’ would still be a utilitarian charity (where the form of util defined utility as beauty and justice), so if that’s not EA, then EA does not = utilitarianism, QED.

Also, as Ben Todd and others have frequently pointed out, many non-utilitarian ethics subsume the value of happiness. A deontologist might want more happiness and less suffering, but feel that he also has a personal injunction against violating certain moral rules. So long as he didn't violate those codes, he might well want to promote welfare as efficiently as possible.

comment by hyporational · 2013-11-25T06:11:15.582Z · LW(p) · GW(p)

truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition

None of these can be easily measured, and to have any of these you need to have basic wellbeing covered. You can't really tell other people you're being effective if you've got nothing to show for it, so I think your criticism is misplaced.

Replies from: Lumifer
comment by Lumifer · 2013-11-25T06:24:45.347Z · LW(p) · GW(p)

to have any of these you need to have basic wellbeing covered

Don't think so -- traditionally the test for true honor and loyalty and justice, etc. was whether you'd be willing to stick with them when your "basic wellbeing" is not covered.

Sure, globally and in the long term Maslow's hierarchy takes over, but locally you can very well lack basic stuff and still insist on the higher concepts.

Replies from: hyporational
comment by hyporational · 2013-11-25T06:38:06.619Z · LW(p) · GW(p)

Then again, people who'd defend their honor while starving, dying of malaria, and covered in filth probably don't need more of that stuff delivered to them, do they? :)

Replies from: Lumifer
comment by Lumifer · 2013-11-25T06:42:57.622Z · LW(p) · GW(p)

Loyalty and tradition, probably no, but justice delivered might come in handy :-)