post by [deleted] · GW

This is a link post for

Comments sorted by top scores.

comment by Qiaochu_Yuan · 2018-08-14T05:16:00.774Z · LW(p) · GW(p)

Relevant reading: gwern's The Narrowing Circle. He makes the important point that moral circles have actually narrowed in various ways, and also that it never feels that way because the things outside the circle don't seem to matter anymore. Two straightforward examples are gods and our dead ancestors.

Replies from: habryka4
comment by habryka (habryka4) · 2018-08-14T20:52:10.372Z · LW(p) · GW(p)

Great article, can strongly recommend. Hadn't read it before and got quite a bit of value out of it.

comment by Said Achmiz (SaidAchmiz) · 2018-08-13T19:16:50.121Z · LW(p) · GW(p)

There are a few ideas that seem obvious to me, but which seem, perplexingly, to elude (or simply not to have occurred to) many folks I consider quite intelligent. One of these ideas is:

We can widen our circle of concern. But there’s no reason we must do so; and, indeed, there is—by definition!—no moral obligation to do it. (In fact, it may even be morally blameworthy to do so, in much the same way that it is morally blameworthy to take a pill that turns you into a murdering psychopath, if you currently believe that murder is immoral.)

It would be good[1], I think, for many in the rationalist community to substantially narrow their circles of moral concern (or, to be more precise, to shift from a simple “circle” of concern to a more complex model based on concentric circles / gradients).

[1] Here take “good” to mean something like “in accord with personal extrapolated volition”.

Replies from: habryka4
comment by habryka (habryka4) · 2018-08-13T19:38:43.964Z · LW(p) · GW(p)

I agree with this, and also think it's not obvious that we should widen our moral circles. I do think that when many people reflect on their preferences, they will find that they do want to expand their moral circle, though it's also not obvious that they would come to the opposite conclusion if they first heard some arguments against expansion.

I would be curious to hear more detail about your reasoning for why, for many people, narrowing their moral circles has a good chance of being more in accordance with their CEV.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-13T20:43:43.427Z · LW(p) · GW(p)

I would be curious to hear more detail about your reasoning for why, for many people, narrowing their moral circles has a good chance of being more in accordance with their CEV.

I will try and take some time to formulate my thoughts and post a useful response, but for now, a terminological quibble which I think is relevant:

“CEV”, i.e. “coherent extrapolated volition”, refers (as I understand it) to the notion of aggregating the extrapolated volition across many (all?) individuals (humans, usually), and to the idea that this aggregated EV will “cohere rather than interfere”. (Aside: please don’t anyone quibble with this hasty definition; I’ve read Eliezer’s paper on CEV and much else about it besides, I know it’s complicated. I’m just pointing at the concept.)

I left out the ‘C’ part of ‘CEV’ deliberately. Whether our preferences and values cohere or not is obviously of great interest to the FAI builder, but it isn’t to me (in this context). I was, deliberately, referring only to the personal values and preferences of people in our community (and, perhaps, beyond). My intent was to refer not to what anyone prefers or values now; nor, on the other hand, to what they “should” prefer or value, on the basis of some interpersonal aggregation (such as, for instance, the oft-cited “judgment of history”, a.k.a. “trans-temporal peer pressure”); but rather, to what they, themselves, would prefer and value, if they learned more, considered more, etc. (In short, I am referring to the “best curve”—in Hanson’s “curve-fitting” model—that represents a person’s “ideal”, in some sense, morality.)

“Personal extrapolated volition” seems like as good a term as any. If there’s existing terminology I should be using instead, I’m open to suggestions.

Replies from: habryka4, steven0461
comment by habryka (habryka4) · 2018-08-13T20:55:40.077Z · LW(p) · GW(p)

Seems like a reasonable quibble. I tend to use CEV to also refer to personal extrapolation, since I have similar uncertainty about whether the values of a single person will cohere as I have about whether the values of multiple people will cohere, but it seems reasonable to still have different words to refer to the different processes. PEV does seem as good as any.

Replies from: Raemon
comment by Raemon · 2018-08-13T20:58:25.625Z · LW(p) · GW(p)

FWIW it’s a pet peeve of mine when people use CEV to refer to personal extrapolated volition - it makes a complicated concept harder to refer to.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2018-08-14T01:13:16.560Z · LW(p) · GW(p)

People have been using CEV to refer to both "Personal CEV" and "Global CEV" for a long time -- e.g., in the 2013 MIRI paper "Ideal Advisor Theories and Personal CEV."

I don't know of any cases of Eliezer using "CEV" in a way that's clearly inclusive of "Personal" CEV; he generally seems to be building into the notion of "coherence" the idea of coherence between different people. On the other hand, it seems a bit arbitrary to say that something should count as CEV if two human beings are involved, but shouldn't count as CEV if one human being is involved, given that human individuals aren't perfectly rational, integrated, unitary agents. (And if two humans is too few, it's hard to say how many humans should be required before it's "really" CEV.)

Eliezer's original CEV paper did on one occasion use "coherence" to refer to intra-agent conflicts:

When people know enough, are smart enough, experienced enough, wise enough, that their volitions are not so incoherent with their decisions, their direct vote could determine their volition. If you look closely at the reason why direct voting is a bad idea, it’s that people’s decisions are incoherent with their volitions.

See also Eliezer's CEV Arbital article:

Helping people with incoherent preferences
What if somebody believes themselves to prefer onions to pineapple on their pizza, prefer pineapple to mushrooms, and prefer mushrooms to onions? In the sense that, offered any two slices from this set, they would pick according to the given ordering?
(This isn't an unrealistic example. Numerous experiments in behavioral economics demonstrate exactly this sort of circular preference. For instance, you can arrange 3 items such that each pair of them brings a different salient quality into focus for comparison.)
One may worry that we couldn't 'coherently extrapolate the volition' of somebody with these pizza preferences, since these local choices obviously aren't consistent with any coherent utility function. But how could we help somebody with a pizza preference like this?
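
To make the quoted example concrete: a minimal Python sketch (the names and the brute-force check are purely illustrative, not anything from the quoted article) that tries every possible ranking of the three toppings and confirms that none of them reproduces the circular pairwise choices.

```python
from itertools import permutations

# The circular pizza preference from the quoted passage:
# onions over pineapple, pineapple over mushrooms, mushrooms over onions.
items = ["onions", "pineapple", "mushrooms"]
stated_preferences = [("onions", "pineapple"),
                      ("pineapple", "mushrooms"),
                      ("mushrooms", "onions")]

def some_ranking_fits(prefs, items):
    """Return True if at least one ranking of the items (i.e. at least one
    utility function over them) satisfies every stated pairwise preference."""
    for ranking in permutations(items):
        # Earlier in the ranking means higher utility.
        utility = {item: len(items) - position for position, item in enumerate(ranking)}
        if all(utility[preferred] > utility[other] for preferred, other in prefs):
            return True
    return False

print(some_ranking_fits(stated_preferences, items))  # False
```

No ranking works, which is the sense in which these local choices aren't consistent with any coherent utility function.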

I think that absent more arguing about why this is a bad idea, I'll probably go on using "CEV" to refer to several different things, mostly relying on context to make it clear which version of "CEV" I'm talking about, and using "Personal CEV" or "Global CEV" when it's really essential to disambiguate.

Replies from: SaidAchmiz, steven0461
comment by Said Achmiz (SaidAchmiz) · 2018-08-14T04:34:40.060Z · LW(p) · GW(p)

On the other hand, it seems a bit arbitrary to say that something should count as CEV if two human beings are involved, but shouldn’t count as CEV if one human being is involved, given that human individuals aren’t perfectly rational, integrated, unitary agents. (And if two humans is too few, it’s hard to say how many humans should be required before it’s “really” CEV.)

Conversely, it seems odd to me to select / construct our terminology on the basis of questionable—and, more importantly, controversial—frameworks/views like the idea that it makes sense to view a human as some sort of multiplicity of agents.

The standard, “naive” view is that 1 person = 1 agent. I don’t see any reason not to say, nor anything odd about saying, that the concept of “CEV” applies when, and only when, we’re talking about two or more people. One person: personal extrapolated volition. Two people, three people, twelve million people, etc.: coherent extrapolated volition.

Anything other than this, I’d call “arbitrary”.

Replies from: Unnamed
comment by Unnamed · 2018-08-14T07:53:29.520Z · LW(p) · GW(p)

You could think of CEV applied to a single unitary agent as a special case where achieving coherence is trivial. It's an edge case where the problem becomes easier, rather than an edge case where the concepts threaten to break.

Although this terminology makes it harder to talk about several agents who each separately have their own extrapolated volition (as you were trying to do in your original comment in this thread). Though replacing it with Personal Extrapolated Volition only helps a little, if we also want to talk about several separate groups who each have their own within-group extrapolated volition (which is coherent within each group but not between groups).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-14T17:44:06.270Z · LW(p) · GW(p)

Yes, as you noted, I used “personal extrapolated volition” because the use case called for it. It seems to me that the existence of use cases that call for a term (in order to have clarity) is, in fact, the reason to have that term.

comment by steven0461 · 2018-08-14T20:08:07.543Z · LW(p) · GW(p)

If it were up to me, I'd use "CEV" to refer to the proposal Eliezer calls "CEV" in his original article (which I think could be cashed out either in a way where applying the concept to subselves makes sense or in a way where that does not make sense), use "extrapolated volition" to refer to the more general class of algorithms that extrapolate people's volitions, and use something like "true preferences" or "ideal preferences" or "preferences on reflection" when the algorithm for finding those preferences isn't important, like in the OP.

If I'm not mistaken, "CEV" originally stood for "Collective Extrapolated Volition", but then Eliezer changed the name when people interpreted it in more of a "tyranny of the majority" way than he intended.

comment by steven0461 · 2018-08-14T20:13:57.639Z · LW(p) · GW(p)

“CEV”, i.e. “coherent extrapolated volition”, refers (as I understand it) to the notion of aggregating the extrapolated volition across many (all?) individuals (humans, usually), and to the idea that this aggregated EV will “cohere rather than interfere”. (Aside: please don’t anyone quibble with this hasty definition; I’ve read Eliezer’s paper on CEV and much else about it besides, I know it’s complicated. I’m just pointing at the concept.)

I'll quibble with this definition anyway because I think many people get it wrong. The way I read CEV, it doesn't claim that extrapolated preferences cohere, but specifically picks out the parts that cohere, and it does so in a way that's interleaved with the extrapolation step instead of happening after the extrapolation step is over.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-14T20:41:26.348Z · LW(p) · GW(p)

Yes. I know.

Yours is exactly the kind of comment that I specifically hoped would not get made, and which I therefore explicitly requested that people restrain themselves from making.

Replies from: steven0461
comment by steven0461 · 2018-08-15T01:43:05.806Z · LW(p) · GW(p)

It didn't look to me like my disagreement with your comment was caused by hasty summarization, given how specific your comment was on this point, so I figured this wasn't among the aspects you were hoping people wouldn't comment on. Apparently I was wrong about that. Note that my comment included an explanation of why I thought it was worth making despite your request and the implicit anti-nitpicking motivation behind it, which I agree with.

comment by Andaro · 2018-08-14T21:50:56.643Z · LW(p) · GW(p)

The moral circle is not ever expanding, and I consider that a good thing.

A very wide moral circle is actually very costly to a person. Not only can it cause a lot of stress to think of the suffering of beings in the far future or nonhuman animals in farming or in the wild, but it also requires a lot of self-sacrifice to actually live up to this expanded circle.

In addition, it can put you at odds with other well-meaning people who care about the same beings, but in a different way. For example, when I still cared about future generations, I mostly cared about them in terms of preventing their nonconsensual suffering and victimization. However, the common far-future altruism narrative is that we ought to make sure they exist, not that they be prevented from suffering or being victimized without their consent. This is cause for conflict, as exemplified by the -25 karma points or so I gathered on the Effective Altruism Forum for it at the time.

Since then, my moral circle has contracted massively, and I consider this to be a huge improvement. It now contains only me and the people who have made choices that benefit me (or at least benefit me more than they harm me). There is also a circle of negative concern now, containing all the people who have harmed me more than they benefit me. I count their harm as a positive now.

My basic mental heuristic is: how much did a being net-benefit or net-harm me through deliberate choices and intent, how much did I already reciprocate in harming or benefitting them, and how cheap or expensive is it for me to harm or benefit them further on the margin? These questions get integrated into an intuitive heuristic that shifts my indifference curves for everyday choices.
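
To make the three inputs explicit, here is a toy sketch of how such a heuristic might be scored; the function name, the inputs, and the particular "outstanding balance discounted by marginal cost" form are illustrative assumptions, not a formula I actually use.

```python
def reciprocity_score(net_benefit_received: float,
                      already_reciprocated: float,
                      marginal_cost_of_acting: float) -> float:
    """Toy scoring rule: positive output leans toward benefiting the other
    party, negative toward withholding or harming; the magnitude shrinks as
    acting further on the margin gets more expensive (cost assumed >= 0)."""
    outstanding_balance = net_benefit_received - already_reciprocated
    return outstanding_balance / (1.0 + marginal_cost_of_acting)
```

Any number of other weightings would fit the verbal description equally well; the point is only that the heuristic takes these three quantities and shifts the indifference curve accordingly.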

The psychological motivation for this contracted circle is based on the simple truth that the utility of others is not my utility, and the self-awareness that I have an intrinsic desire for reciprocity.

There is yet another cost to a wide circle of moral concern, and that is the discrepancy with people who have a smaller circle. If you're my compatriot or family member or fellow present human being, and you have a small circle of concern, I can expect you to allocate more of your agency to my benefit. If you have a wide circle of concern that includes all kinds of entities who can't reciprocate, I benefit less from having you as an ally.

When people have a wide circle of concern and advocate for its widening as a norm, this makes me nervous because it implies huge additional costs forced on me, through coercive means like taxation or regulations, or simply by spreading benevolence onto a large number of non-reciprocators instead of me and the people who've benefitted me. That actually makes me worse off, and people who make me worse off are more likely to receive negative reciprocity than positive reciprocity.

I love human rights because they're a wonderful coordination instrument that makes us all better off, but I now see animal rights as a huge memetic mistake. Similarly, there is little reason to care about far-future generations whose existence will never overlap with ours in terms of reciprocity, and yet we're surrounded by memes that require we pay massive costs for their wellbeing.

Moralists who advocate this often use moralistic language to justify it. This gives them high social status and serves as an excuse to impose costs on people who don't intrinsically care, like me. If I reciprocate this harm against them, I am instantly a villain who deserves to be shunned for being a villain. This dynamic has made me understand the weird, paradoxical finding that some people punish ostensibly prosocial behavior. Moralism can really harm us, and the moralists should be forced to compensate us for this harm.

Replies from: philh, Oscar_Cunningham
comment by philh · 2018-08-17T21:03:45.898Z · LW(p) · GW(p)

I think that, from a wide-circle perspective, the things you're talking about don't look like a cost of a wide circle so much as just reasons the problem is hard. From a wide-circle perspective, the cost of a narrow circle is that you try to solve a problem that's easier than the real problem, and you don't solve the real problem, and children in Africa continue to die of malaria. It sounds like you're telling me that I shouldn't care about children dying of malaria because they're far away and can't do anything for me and I could spend that money on myself and people close to me... and my reaction is that none of that stops children from dying of malaria, which is really actually a thing I care about and don't want to stop caring about.

There is yet another cost to a wide circle of moral concern, and that is the discrepancy with people who have a smaller circle. If you’re my compatriot or family member or fellow present human being, and you have a small circle of concern, I can expect you to allocate more of your agency to my benefit. If you have a wide circle of concern that includes all kinds of entities who can’t reciprocate, I benefit less from having you as an ally.

To be precise, this seems like a cost to Alice of Bob having a wide circle, if Alice and Bob are close. If they aren't, and especially if we bring in a veil of ignorance, then Alice is likely to benefit somewhat from Bob having a wide circle. Not definite, but this still seems like a thing to note.

Replies from: Andaro
comment by Andaro · 2018-08-18T11:46:09.588Z · LW(p) · GW(p)

To be precise, this seems like a cost to Alice of Bob having a wide circle, if Alice and Bob are close. If they aren't, and especially if we bring in a veil of ignorance, then Alice is likely to benefit somewhat from Bob having a wide circle.

Yes, but Alice doesn't benefit from Bob's having a circle so wide it contains nonhuman animals, far future entities or ecosystems/biodiversity for their own sake.

and my reaction is that none of that stops children from dying of malaria, which is really actually a thing I care about and don't want to stop caring about

The OP asks us to reexamine our moral circle. Having done that, I find that nonhuman animals and far future beings are actually a thing I don't care about and don't want to start caring about.

comment by Oscar_Cunningham · 2018-08-15T07:04:31.147Z · LW(p) · GW(p)

When people have a wide circle of concern and advocate for its widening as a norm, this makes me nervous because it implies huge additional costs forced on me, through coercive means like taxation or regulations

At the moment I (and many others on LW) am experiencing the opposite. We would prefer to give money to people in Africa, but instead we are forced by taxes to give to poor people in the same country as us. Since charity to Africa is much more effective, this means that (from our point of view) 99% of the taxed money is being wasted.
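
The arithmetic behind the 99% figure, writing r for how many times more good a marginal dollar does at the more effective target (the roughly 100:1 ratio below is the implied assumption here, not a measured number):

$$\text{fraction wasted} = 1 - \frac{1}{r} \approx 1 - \frac{1}{100} = 99\%$$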

Replies from: Andaro
comment by Andaro · 2018-08-15T12:22:24.476Z · LW(p) · GW(p)

Yes, it certainly cuts both ways. Of course, your country's welfare system is also available to you and your family if you ever need it, and you benefit more directly from social peace and democracy in your country, which is helped by these transfers. It is hard to see how you could have a functioning democracy without poor people voting for some transfers, so unless you think democracy has no useful function for you, that's a cost in your best interest to pay.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2018-08-15T13:34:41.626Z · LW(p) · GW(p)

Really, the fact that different sizes of moral circle can incentivize coercion is just a trivial corollary of the fact that value differences in general can incentivize coercion.

Replies from: Andaro
comment by Andaro · 2018-08-15T14:39:30.812Z · LW(p) · GW(p)

But an expanding circle of moral concern increases value differences. If the choice is between paying for a welfare system, or paying for a welfare system and also biodiversity maintenance and also animal protection and also development aid and also a Mars mission without a business model and also far-future climate change prevention, I'd rather just pay for the welfare system. Other ideological conflicts would also go away, such as the conflict between preventing animal suffering and maintaining pristine nature, ethical natalism vs. ethical anti-natalism, and so on.

comment by Donald Hobson (donald-hobson) · 2018-08-13T21:25:05.680Z · LW(p) · GW(p)

(Epistemic status: plausible conjecture)

I suspect the reason the moral circle seems to be expanding is that we actually have a moral cone. Put someone in a situation where it's the life of a close friend vs. a distant stranger, and they will care about the close friend more. As the world's standard of living rises, many people find that all their friends are already fine, so spare effort goes into helping the strangers. What we are seeing is our weaker preferences becoming more relevant as our strongest ones are satisfied.

Actually, in some social circles, you seem to draw a lot less criticism with an unusually wide circle than with an unusually narrow one. This could be because, if you only care about humans and they also care about chimps, they will still help humans nearly as much as you do; whereas if you care about chimps and they don't, they are likely to harm many chimps for little gain.

comment by steven0461 · 2018-08-14T20:19:36.052Z · LW(p) · GW(p)

Moral circle widening groups together two processes that I think mostly shouldn't be grouped together:

1. Changing one's values so the same kind of phenomenon becomes equally important regardless of whom it happens in (e.g. suffering in a human who lives far away)

2. Changing one's values so more different phenomena become important (e.g. suffering in a squid brain)

Maybe if you do it right, #2 reduces to #1, but I don't think that should be assumed.

comment by Dagon · 2018-08-14T18:16:29.281Z · LW(p) · GW(p)

I think it's an error to think of this as a circle or binary classification. It's not "care or don't", it's "how much will I sacrifice". It's more of an electron orbital than a well-defined space, with probability being roughly analogous to caring intensity.

Also, our unstoppable imagination is a problem rather than an aid here. We often care about fictional characters more than many real people, and we often care about our preferred/projected desires in others more than their actual desires.

comment by Pattern · 2018-08-14T00:30:30.333Z · LW(p) · GW(p)

So far, all references seem to be to 'one circle', as opposed to multiple circles of which you are a member.