[SEQ RERUN] Circular Altruism

post by MinibearRex · 2011-12-30T05:20:39.802Z · LW · GW · Legacy · 25 comments


Today's post, Circular Altruism, was originally published on 22 January 2008. A summary (taken from the LW wiki):

 

Our moral preferences shouldn't be circular. If a policy A is better than B, and B is better than C, and C is better than D, and so on, then policy A really should be better than policy Z.
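
To make the transitivity claim concrete, here is a minimal sketch (an illustration added here, not from the original post or summary): if pairwise preferences are encoded as a directed "better than" graph, a circular preference shows up as a cycle, which a simple depth-first search can detect.

```python
def has_preference_cycle(prefers):
    """prefers: dict mapping each option to the options it is preferred over.

    Returns True if the "better than" relation contains a cycle, i.e. the
    preferences are circular.
    """
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited, on current path, finished
    color = {}

    def visit(node):
        color[node] = GRAY
        for worse in prefers.get(node, ()):
            state = color.get(worse, WHITE)
            if state == GRAY:          # back edge: A > ... > A, a circular preference
                return True
            if state == WHITE and visit(worse):
                return True
        color[node] = BLACK
        return False

    return any(color.get(node, WHITE) == WHITE and visit(node)
               for node in prefers)

# Hypothetical toy preferences: A > B, B > C, C > A is circular;
# dropping C > A leaves an ordering that can be made transitive.
circular = {"A": ["B"], "B": ["C"], "C": ["A"]}
consistent = {"A": ["B"], "B": ["C"], "C": []}
print(has_preference_cycle(circular))    # True
print(has_preference_cycle(consistent))  # False
```

If the check returns True, the preferences have the form A > B > ... > A, which is exactly the pattern the post argues against.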


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Against Discount Rates, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

25 comments

Comments sorted by top scores.

comment by [deleted] · 2011-12-30T16:43:29.257Z · LW(p) · GW(p)

I think there is something fundamentally wrong with the whole morality discussion on Less Wrong, though I fear that I lack the philosophical skill to describe it accurately. I will try nonetheless. If this reads like incoherent rambling, I apologize in advance.

My problem is that the question of morality is normally framed as a universal preference for one situation over another. In the trolley problem, for example, the question is framed as "is it preferable to have one person die or five?". And when someone prefers to pull the switch but not to push the fat man, this is framed as preferring the one-person-dies situation in one case and the five-people-die situation in the other.

I think, however, that there are two different uses of the word "moral" that need to be distinguished, and only one of them allows for this preference framing. Eliezer hints at this distinction as "sacred" vs. "non-sacred" values, but he doesn't seem to go further with it; in fact, I don't understand how the distinction between these values relates to the rest of this sequence post. Calling both of them "values" still frames the issue as a problem of preference, only introducing one group of values which must always be preferred over the others.

Instead, I suggest that there is a fundamental morality that is not about preferring some situations over others, but about "things that you just don't do to other people", and reframing that as "I prefer situations in which these things are not done to other people" is not the same thing at all. I think morality should fundamentally be about agency, and reframing the question as one of preference removes the question of agency.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-30T17:55:16.840Z · LW(p) · GW(p)

Do you think that each individual's private notion of "things you just don't do" is fundamental?

Or do you think that we derive our individual "don't do" lists from something else?

Replies from: None, mcclay
comment by [deleted] · 2011-12-30T18:39:10.655Z · LW(p) · GW(p)

I think morality cannot be a private thing. A lone person on a deserted island has no need for morality. I would say the "things you don't do" should probably be the minimal set of rules that make peaceful coexistence possible. Of course this only pushes the burden of definition onto "peaceful coexistence" and "minimal set of rules", but the takeaway is that these rules need to be derived somehow from interaction with other people.

Replies from: TheOtherDave, TheOtherDave, arundelo
comment by TheOtherDave · 2011-12-30T20:04:46.611Z · LW(p) · GW(p)

So if today I have a set of rules R1 that meets (as far as I know) the "minimal set of rules that make peaceful coexistence possible" standard, and tomorrow someone demonstrates to me that adopting a different set of rules R2 meets that standard better, it follows that tomorrow I should stop following R1 and follow R2 instead?

If so, then it seems clear that what we ultimately care about is not the rules, but the consequences of following those rules. (If it were the other way around, we would continue to follow R1 no matter the consequences.)

Would you agree?

Replies from: None
comment by [deleted] · 2011-12-30T21:02:41.583Z · LW(p) · GW(p)

Edit: I now consider this comment to be completely irrelevant to the question asked. See my answer to TheOtherDave's answer to this one below.

If so, then it seems clear that what we ultimately care about is not the rules, but the consequences of following those rules.

The point I'm trying to make is that it's not about whether we care about the rules or about their consequences; "caring about" is already the wrong question. (Here I take "care" to be another word for "value".)

Let me give an example. Say we both live in the same apartment; I like to play loud music and you like to study. Our only option is to make a rule about when I can play my music and when you can study. There is no way for us to measure my happiness from playing music against yours from studying.

Now, if neither of us valued peaceful coexistence, having such a rule would accomplish nothing, because neither of us would stick to it. The consequence of the rule would then not be peaceful coexistence, and from a consequentialist point of view, as I understand it, the rule would have no moral impact.

From a deontological point of view, however, it's still the right thing to do, because it is what would make our mutual agency in this situation possible. Again, that doesn't quite sound like what I am trying to convey, but the idea is this: even if neither of us actually valued that thing I call agency, an agent cannot choose to deny his own agency, because agency is a prerequisite for his making any choice in the first place. So the necessary preconditions of agency are not a matter of valuation, or of caring about, or even of the consequence of actually enabling agency.

Now that I have formulated this paragraph, I notice two things: first, it sounds quite a lot like objectivist morality, and second, I am confused.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-30T21:38:38.011Z · LW(p) · GW(p)

I also am confused by what you're saying here.

You say that certain choices make our mutual agency possible, which suggests that other choices make it impossible. But you also say we can't choose to deny our own agency, which suggests that our choices don't affect whether our mutual agency exists or doesn't. I'm not really sure what "mutual agency" is in this context, though.

If it helps, I agree that in the situation you describe, it's important that we both be willing to stick to whatever agreement we make; without that, the agreements have no value. Whether that willingness derives from us valuing peaceful coexistence, or us valuing our reputations as word-keepers, or us believing that there's a powerful third party likely to punish noncompliance, or whatever, doesn't change the importance.

Replies from: None
comment by [deleted] · 2011-12-30T22:49:22.824Z · LW(p) · GW(p)

While writing a reply I realized that I was arguing a completely different point than I originally made. :-( And that's bad.

While writing a (German) wikibook on rationality, I came to the conclusion that in addition to epistemic and instrumental rationality there should be a separate notion of "discourse rationality", that is, how to conduct a rational discussion. And this is not it.

Now I'm not sure to what point to go back to start again.

Edit:

Ok, got it, it was my previous post that was confused.

What you were asking is this: if I say I have a rule that white people must not be made to suffer the presence of black people, and you come along and say that my rule is wrong, that we should in fact welcome anybody's participation in our society no matter their color, how do we decide who's right?

And I am not entirely sure that the rules' actual ability to enable peaceful coexistence is the correct way to decide, since it is conceivable that racial segregation might actually be more peaceful, as if we could build an impenetrable wall in Northern Ireland to separate the religious groups. But I would say that in doing so we would be solving the wrong problem.

Instead, I would base my decision on an idealized concept of an agent. With this idealized concept it is clear that skin color, or identification as belonging to a certain tribe, does not affect agency and therefore cannot be morally relevant. But a right to life, liberty, property, etc. does affect agency, because it defines the things an agent can make choices about, so these are moral questions. So in the above scenario I would have to agree that your rule is the better one, not because of its consequences but because of its properties.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-31T03:03:23.336Z · LW(p) · GW(p)

This notion of "agency" is doing a lot of work in this account, and I have to admit I don't really understand it.

I understand that it's not the same thing as preference, and it's not the same thing as volition, and it's not related to things like ethnicity or nationality or upbringing and thus is not the same thing as values (which do depend on those things). But those are all negative statements, which are only marginally helpful.

Approaching it the other way: R2 is better than R1, you say, because of properties of R2... which are based on an idealized concept of an agent. All we know about R2 is that it meets the "minimal set of rules that make peaceful coexistence possible" standard better than R1 does. So those two things are presumably related in some way... but I don't grasp the relation.

I'm still pretty confused, here.

Replies from: None
comment by [deleted] · 2011-12-31T10:49:24.307Z · LW(p) · GW(p)

All we know about R2 is that it meets the "minimal set of rules that make peaceful coexistence possible" standard better than R1 does.

It's difficult to think about that without an example. Ideally the reason why you don't do certain things to other people should imply what those things are.

This notion of "agency" is doing a lot of work in this account, ...

Yes, I was using agency to replace "being human". I think we are moral because we recognize other people as humans like ourselves and use the same brain circuits to model what is happening to them that we would use if it were happening to us; hence golden-rule morality.

From that I asked: is there something that makes us humans special which could actually justify such an approach? I came up with agency, which I take to be the ability to make a conscious choice of action. So there are three parts to agency: a set of actions; the ability to choose not just based upon the current state of the world, but based on preferences over expected consequences; and thirdly, consciousness as the ability to make meta-level choices. Maybe there needs to be something about learning in there as well.

If I think about this notion of agency and try to come up with moral ground rules suggested by it, I arrive at "life, liberty, and the pursuit of happiness", which is of course not a set of rules, but it is a more or less direct translation of my three parts of agency into moral-ish language. How to get from there to a specific set of rules is something I don't know, but I think it should be possible. And how to choose between different sets of rules that would all satisfy the purpose is also something I don't know. It could be that in this case it doesn't actually matter.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-31T18:10:04.077Z · LW(p) · GW(p)

So, you've described a human preference for having the things we'd want to happen to us also happen to systems we recognize as sufficiently like ourselves. Call that preference P.

A preference utilitarian would say that the moral value of a choice is proportional to the degree to which P is satisfied by that choice. (All else being equal.)

If I've understood you correctly, you reject preference utilitarianism as a moral framework. Instead, you suggest a deontological framework based on "agency." And agency is a concept you came up with to encapsulate whatever properties humans have that "justify" preferring P.

Have I followed you so far?

OK. Can you say more about how a preference is justified?

For example, you conclude that humans are justified in preferring P on the basis of various attributes of humans (the ability to take action based on expected consequences, the ability to make "meta-level" choices, "a set of actions," and maybe something about learning). I infer you believe we're _un_justified in preferring P on the basis of other attributes (say, skin color, or height above sea level, or tendency to slaughter other humans).

Is that right?

How did you arrive at those particular attributes?

Replies from: None
comment by [deleted] · 2012-01-01T14:27:53.181Z · LW(p) · GW(p)

So, you've described a human preference for having the things we'd want to happen to us also happen to systems we recognize as sufficiently like ourselves. Call that preference P.

A preference utilitarian would say that the moral value of a choice is proportional to the degree to which P is satisfied by that choice. (All else being equal.)

I was starting from my own intuitions about my moral preferences. But if you stop at treating morality as a preference, you run into problems when people don't share those preferences. A common variation, for example, is the belief that it is good for the strong to prey on the weak. But since morality is an interpersonal thing, any morality must account for differences in preferences and therefore cannot be based on preferences. That's why I reject preference utilitarianism.

And agency is a concept you came up with to encapsulate whatever properties humans have that "justify" preferring P.

My agency-based morality does justify my moral preferences, but it doesn't "just" justify them. I have my moral preferences only as a starting point. From these I construct an abstract moral framework, check whether that framework satisfies conditions of consistency and plausibility, and, once I'm convinced it does, use it to justify or adjust my moral preferences.

Other people might come to different conclusions using this process, but since our moral frameworks are now removed from mere preferences, we can use properties of the frameworks in question to try to integrate them or decide between them. A preference utilitarian would have to resort to some unjustified selection method like majority vote.

So how do I come up with the properties of a moral framework that make it better than another? I don't know yet. I would suggest that minimalism is a good property: with a non-minimal framework, people could always ask "why should we adopt this policy?", while with a minimal framework it's either adopt all of it or don't adopt it at all. I also justify agency as the primary motivation, since our agency is what creates the problem in the first place. Without choice we have no use for morality; without deliberation we couldn't follow it; without meta-level reasoning we couldn't adopt it; and so on. In short, agency is the very thing that creates a solvable problem of morality, and thus it is the best place to solve it. If we start to argue that point, then we reach a place where we run into Gödelian incompleteness.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-01T15:44:26.239Z · LW(p) · GW(p)

You keep tossing the word "justified" around, and I am increasingly unclear on how the work that you want that word to do is getting done.

For example: I agree with you that a preference utilitarian needs some mechanism for resolving situations where preferences conflict, but I'm not sure on what basis you conclude that such a mechanism must be unjustified, nor on what basis you conclude that your agency-based moral frameworks support a more justifiable method for integrating or deciding between different people's conflicting framework-based-conclusions.

I find your "without X we wouldn't have a problem and therefore X is the solution" argument unconvincing. Mostly it sounds to me like you've decided that your framework is cool, and now you're looking for arguments to support it.

Replies from: None
comment by [deleted] · 2012-01-01T16:17:01.152Z · LW(p) · GW(p)

but I'm not sure on what basis you conclude that such a mechanism must be unjustified

I was thinking that it needs to be justified separately, and is not justified by the principle of preference utilitarianism itself.

I find your "without X we wouldn't have a problem and therefore X is the solution" argument unconvincing.

It's a basic principle of engineering to solve a problem where it occurs. I think we've reached the point where I am not prepared to argue any further and don't think it would be fruitful to try. I thank you for the challenge.

Mostly it sounds to me like you've decided that your framework is cool, and now you're looking for arguments to support it.

That might be the case, but I don't think it likely. I am enough of an asshole to do what I want even without moral justification, and enough of a cynic not to expect anything else from other people. I wrote my original comment merely as an addition to the morality debate on Less Wrong, because I believe that if Eliezer created his FAI tomorrow, it wouldn't be friendly towards me. The rest was just me trying to answer your questions, because I really think they helped me think it through.

comment by TheOtherDave · 2012-01-01T06:41:54.338Z · LW(p) · GW(p)

Thinking about this some more... if I am an object with moral weight -- for example, if I have "agency" on your account -- then how I ought to treat myself is a question with a moral dimension, and attempting to answer that question without a moral theory (or with an incorrect moral theory) makes it less likely that I will treat myself the way I ought.

So I think "A lone person on a deserted island has no need for morality" is simply false.

Replies from: None
comment by [deleted] · 2012-01-01T14:37:15.690Z · LW(p) · GW(p)

My original point was that there are two different kinds of morality. There is a core morality about human interaction, which I am arguing for here and which does not exist for a single individual. There is, however, a second use of the word "morality" that is closer to the word's etymology in the Latin "mores": the conduct that is expected of someone in some society. That second kind definitely is a matter of preferences, and it could extend to someone on a deserted island inasmuch as we still consider that person to be part of a "human society".

I think the importance of keeping your agency could be of the second kind. Maybe when people reject their agency it's their prerogative, or maybe we should expect other people to act as persons. I myself regard people who reject their agency as beneath me, but I don't care too much.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-01T15:19:06.798Z · LW(p) · GW(p)

Does "core morality" have anything to say about whether I should keep someone from dying, given the choice?

Replies from: None
comment by [deleted] · 2012-01-01T15:52:11.203Z · LW(p) · GW(p)

No. That's preference. Unless you were the reason he was dying; and of course you are not free to keep other people from helping.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-01T16:02:52.439Z · LW(p) · GW(p)

OK. So what kinds of choices does "core morality" contribute to making decisions about?

Replies from: None
comment by [deleted] · 2012-01-01T16:26:31.290Z · LW(p) · GW(p)

Now I feel like I'm repeating myself. According to "core morality" you are not free to destroy someone's agency except in defense; e.g., you are not free to make choices that are expected to cause someone's death, unless in defense when he violates this principle.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-01T18:39:23.735Z · LW(p) · GW(p)

I don't know how to reconcile your comments in this thread.

It seems to me that if someone is dying, and I choose to let them die, that's a choice that's expected to cause their death. So by your account, core morality says I'm not free to make that choice. Also by your account, core morality doesn't have anything to say about whether I should keep someone from dying given the choice.

Does my confusion make sense?
Can you help resolve it?

Replies from: None
comment by [deleted] · 2012-01-01T19:23:43.157Z · LW(p) · GW(p)

The test of whether a choice causes something is to see whether, if I did not exist or could not act (e.g. was unconscious), the thing would not occur.

In the given example, if I poison someone, I cause his death, since if I could not act for whatever reason there would be no poison. But if someone is drowning through no doing of mine and I just happen to come along and choose not to throw a life belt, I am not causing his death, since he would drown anyway if I could not act.
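
As a minimal sketch of the counterfactual test described in the comment above (an illustration added here, not the commenter's; the function and parameter names are hypothetical):

```python
def caused_by_my_choice(outcome_given_my_choice: bool,
                        outcome_if_i_could_not_act: bool) -> bool:
    """The test as stated: my choice causes an outcome only if the outcome
    occurs given my choice but would not occur if I could not act at all."""
    return outcome_given_my_choice and not outcome_if_i_could_not_act

# Poisoning: the death would not occur if I could not act -> caused by me.
print(caused_by_my_choice(True, False))  # True

# Withholding the life belt: the drowning occurs either way -> not caused by me.
print(caused_by_my_choice(True, True))   # False
```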

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-01T19:28:52.457Z · LW(p) · GW(p)

Ah, I see. OK, I understand you now. Thanks for clarifying.

comment by arundelo · 2011-12-31T00:46:24.861Z · LW(p) · GW(p)

The lone person on a deserted island will die if they don't choose their actions carefully.

Replies from: MinibearRex
comment by MinibearRex · 2011-12-31T05:55:31.833Z · LW(p) · GW(p)

But that could just be rationality. Such an individual would be perfectly able to survive without anything like what we would call morality.

comment by mcclay · 2012-01-11T18:14:44.616Z · LW(p) · GW(p)

"Altruism is an exalted human feeling, and its source is love. Whoever has the greatest share in this love is the greatest hero of humanity; these people have been able to uproot any feelings of hatred and rancor in themselves." (Fethullah Gulen) http://www.fethullah-gulen.org http://www.fethullah-gulen.net http://www.rumiforum.org/about/fethullah-gulen.html