On Caring

post by So8res · 2014-10-15T01:59:05.567Z · LW · GW · Legacy · 276 comments

This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.

1

I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".

Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million Earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.

The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.

I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.

This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.

For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.

The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.

Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.

And this is a problem.

2

It's a common trope that courage isn't about being fearless, it's about being afraid but doing the right thing anyway. In the same sense, caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.

My internal care-o-meter was calibrated to deal with about a hundred and fifty people, and it simply can't express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn't go up that high.

Humanity is playing for unimaginably high stakes. At the very least, there are billions of people suffering today. At the worst, there are quadrillions (or more) of potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, all the experience and art and beauty that is possible in the future, depend upon the present.

When you're faced with stakes like these, your internal caring heuristics — calibrated on numbers like "ten" or "twenty" — completely fail to grasp the gravity of the situation.

Saving a person's life feels great, and it would probably feel just about as good to save one life as it would feel to save the world. It surely wouldn't be many billion times more of a high to save the world, because your hardware can't express a feeling a billion times bigger than the feeling of saving a person's life. But even though the altruistic high from saving someone's life would be shockingly similar to the altruistic high from saving the world, always remember that behind those similar feelings there is a whole world of difference.

Our internal care-feelings are woefully inadequate for deciding how to act in a world with big problems.

3

There's a mental shift that happened to me when I first started internalizing scope insensitivity. It is a little difficult to articulate, so I'm going to start with a few stories.

Consider Alice, a software engineer at Amazon in Seattle. Once a month or so, those college students with clipboards will show up on street corners, looking ever more disillusioned as they struggle to convince people to donate to Doctors Without Borders. Usually, Alice avoids eye contact and goes about her day, but this month they finally manage to corner her. They explain Doctors Without Borders, and she actually has to admit that it sounds like a pretty good cause. She ends up handing them $20 through a combination of guilt, social pressure, and altruism, and then rushes back to work. (Next month, when they show up again, she avoids eye contact.)

Now consider Bob, who has been given the Ice Bucket Challenge by a friend on Facebook. He feels too busy to do the Ice Bucket Challenge, and instead just donates $100 to the ALS Association (ALSA).

Now consider Christine, who is in the college sorority ΑΔΠ. ΑΔΠ is engaged in a competition with ΠΒΦ (another sorority) to see who can raise the most money for the National Breast Cancer Foundation in a week. Christine has a competitive spirit and gets engaged in fund-raising, and gives a few hundred dollars herself over the course of the week (especially at times when ΑΔΠ is especially behind).

All three of these people are donating money to charitable organizations… and that's great. But notice that there's something similar in these three stories: these donations are largely motivated by a social context. Alice feels obligation and social pressure. Bob feels social pressure and maybe a bit of camaraderie. Christine feels camaraderie and competitiveness. These are all fine motivations, but notice that these motivations are related to the social setting, and only tangentially to the content of the charitable donation.

If you asked any of Alice, Bob, or Christine why they aren't donating all of their time and money to these causes they apparently believe are worthwhile, they'd look at you funny and probably think you were being rude (with good reason!). If you pressed, they might tell you that money is a little tight right now, or that they would donate more if they were a better person.

But the question would still feel kind of wrong. Giving all your money away is just not what you do with money. We can all say out loud that people who give all their possessions away are really great, but behind closed doors we all know that those people are crazy. (Good crazy, perhaps, but crazy all the same.)

This is a mindset that I inhabited for a while. There's an alternative mindset that can hit you like a freight train when you start internalizing scope insensitivity.

4

Consider Daniel, a college student shortly after the Deepwater Horizon BP oil spill. He encounters one of those college students with clipboards on a street corner, soliciting donations to the World Wildlife Fund (WWF). They're trying to save as many oiled birds as possible. Normally, Daniel would simply dismiss the charity as Not The Most Important Thing, or Not Worth His Time Right Now, or Somebody Else's Problem, but this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.

He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can. They simply don't have the resources to clean all the available birds. A pathetic young bird flops towards his feet, slick with oil, eyes barely able to open. He kneels down to pick it up and help it onto the table. One of the bird-cleaners informs him that they won't have time to get to that bird themselves, but he could pull on some gloves and could probably save the bird with three minutes of washing.

[Image: an oiled bird being cleaned (source: blog.bird-rescue.org)]

Daniel decides that he would spend three minutes of his time to save the bird, and that he would also be happy to pay at least $3 to have someone else spend a few minutes cleaning the bird. He introspects and finds that this is not just because he imagined a bird right in front of him: he feels that it is worth at least three minutes of his time (or $3) to save an oiled bird in some vague platonic sense.

And, because he's been thinking about scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of birds: the internal feeling of caring can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about de-oiling lots of birds, he shuts up and multiplies.

Thousands and thousands of birds were oiled by the BP spill alone. After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is lower-bounded by two months of hard work and/or fifty thousand dollars. And that's not even counting wildlife threatened by other oil spills.
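
For concreteness, here is a minimal back-of-the-envelope sketch of that multiplication in Python. The oiled-bird count below is an illustrative assumption (the essay says only "thousands and thousands"), so the exact figures are not from the post:

```python
# A rough sketch of Daniel's "shut up and multiply" step.
# The bird count is an assumption chosen for illustration only.

minutes_per_bird = 3        # time Daniel judged one bird to be worth
dollars_per_bird = 3        # or its rough money-equivalent
oiled_birds = 17_000        # assumed: "thousands and thousands" of birds

total_dollars = dollars_per_bird * oiled_birds       # 51,000 dollars
total_hours = minutes_per_bird * oiled_birds / 60    # 850 hours

# At a "hard work" pace of roughly 90-100 hours per week, ~850 hours comes
# to about two months, in the ballpark of the essay's stated lower bound of
# "two months of hard work and/or fifty thousand dollars".
print(f"${total_dollars:,} or about {total_hours:.0f} hours of cleaning")
```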

And if he cares that much about de-oiling birds, then how much does he actually care about factory farming, never mind hunger, or poverty, or sickness? How much does he actually care about wars that ravage nations? About neglected, deprived children? About the future of humanity? He actually cares about these things to the tune of much more money than he has, and much more time than he has.

For the first time, Daniel sees a glimpse of how much he actually cares, and how poor a state the world is in.

This has the strange effect that Daniel's reasoning goes full-circle, and he realizes that he actually can't care about oiled birds to the tune of 3 minutes or $3: not because the birds aren't worth the time and money (and, in fact, he thinks that the economy produces things priced at $3 which are worth less than the bird's survival), but because he can't spend his time or money on saving the birds. The opportunity cost suddenly seems far too high: there is too much else to do! People are sick and starving and dying! The very future of our civilization is at stake!

Daniel doesn't wind up giving $50k to the WWF, and he also doesn't donate to ALSA or NBCF. But if you ask Daniel why he's not donating all his money, he won't look at you funny or think you're rude. He's left the place where you don't care far behind, and has realized that his mind was lying to him the whole time about the gravity of the real problems.

Now he realizes that he can't possibly do enough. After adjusting for his scope insensitivity (and the fact that his brain lies about the size of large numbers), even the "less important" causes like the WWF suddenly seem worthy of dedicating a life to. Wildlife destruction and ALS and breast cancer are suddenly all problems that he would move mountains to solve — except he's finally understood that there are just too many mountains, and ALS isn't the bottleneck, and AHHH HOW DID ALL THESE MOUNTAINS GET HERE?

In the original mindstate, the reason he didn't drop everything to work on ALS was because it just didn't seem… pressing enough. Or tractable enough. Or important enough. Kind of. These are sort of the reason, but the real reason is more that the concept of "dropping everything to address ALS" never even crossed his mind as a real possibility. The idea was too much of a break from the standard narrative. It wasn't his problem.

In the new mindstate, everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.

Alice and Bob and Christine usually aren't spending time solving all the world's problems because they forget to see them. If you remind them — put them in a social context where they remember how much they care (hopefully without guilt or pressure) — then they'll likely donate a little money.

By contrast, Daniel and others who have undergone the mental shift aren't spending time solving all the world's problems because there are just too many problems. (Daniel hopefully goes on to discover movements like effective altruism and starts contributing towards fixing the world's most pressing problems.)

5

I'm not trying to preach here about how to be a good person. You don't need to share my viewpoint to be a good person (obviously).

Rather, I'm trying to point at a shift in perspective. Many of us go through life understanding that we should care about people suffering far away from us, but failing to. I think that this attitude is tied, at least in part, to the fact that most of us implicitly trust our internal care-o-meters.

The "care feeling" isn't usually strong enough to compel us to frantically save everyone dying. So while we acknowledge that it would be virtuous to do more for the world, we think that we can't, because we weren't gifted with that virtuous extra-caring that prominent altruists must have.

But this is an error — prominent altruists aren't the people who have a larger care-o-meter, they're the people who have learned not to trust their care-o-meters.

Our care-o-meters are broken. They don't work on large numbers. Nobody has one capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.

You don't get to feel the appropriate amount of "care", in your body. Sorry — the world's problems are just too large, and your body is not built to respond appropriately to problems of this magnitude. But if you choose to do so, you can still act like the world's problems are as big as they are. You can stop trusting the internal feelings to guide your actions and switch over to manual control.

6

This, of course, leads us to the question of "what the hell do you do, then?"

And I don't really know yet. (Though I'll plug the Giving What We Can pledge, GiveWell, MIRI, and the Future of Humanity Institute as a good start.)

I think that at least part of it comes from a certain sort of desperate perspective. It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.

I'm not trying to guilt you into giving more money away — becoming a philanthropist is really really hard. (If you're already a philanthropist, then you have my acclaim and my affection.) First it requires you to have money, which is uncommon, and then it requires you to throw that money at distant invisible problems, which is not an easy sell to a human brain. Akrasia is a formidable enemy. And most importantly, guilt doesn't seem like a good long-term motivator: if you want to join the ranks of people saving the world, I would rather you join them proudly. There are many trials and tribulations ahead, and we'd do better to face them with our heads held high.

7

Courage isn't about being fearless, it's about being able to do the right thing even if you're afraid.

And similarly, addressing the major problems of our time isn't about feeling a strong compulsion to do so. It's about doing it anyway, even when internal compulsion utterly fails to capture the scope of the problems we face.

It's easy to look at especially virtuous people — Gandhi, Mother Teresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.

Nobody gets to comprehend the scope of these problems. The closest we can get is doing the multiplication: finding something we care about, putting a number on it, and multiplying. And then trusting the numbers more than we trust our feelings.

Because our feelings lie to us.

When you do the multiplication, you realize that addressing global poverty and building a brighter future deserve more resources than currently exist. There is not enough money, time, or effort in the world to do what we need to do.

There is only you, and me, and everyone else who is trying anyway.

8

You can't actually feel the weight of the world. The human mind is not capable of that feat.

But sometimes, you can catch a glimpse.

276 comments

comment by Shmi (shminux) · 2014-10-07T18:50:49.052Z · LW(p) · GW(p)

I agree with others that the post is very nice and clear, as most of your posts are. Upvoted for that. I just want to provide a perspective not often voiced here. My mind does not work the way yours does and I do not think I am a worse person than you because of that. I am not sure how common my thought process is on this forum.

Going section by section:

  1. I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know. I cannot bring myself to care (and I don't really want to) about a random person half-way around the world, except in the non-scalable general sense that "it is sad that bad stuff happens, be it to 1 person or to 1 billion people". I care about the humanity surviving and thriving, in the abstract, but I do not feel the connection between the current suffering and future thriving. (Actually, it's worse than that. I am not sure whether humanity existing, in Yvain's words, in a 10m x 10m x 10m box of computronium with billions of sims is much different from actually colonizing the observable universe (or the multiverse, as the case might be). But that's a different story, unrelated to the main point.)

  2. No disagreement there, the stakes are high, though I would not say that a thriving community of 1000 is necessarily worse than a thriving community of 1 googolplex, as long as their probability of long-term survival and thriving is the same.

  3. I occasionally donate modest amounts to this cause or that, if I feel like it. I don't think I do what Alice, Bob or Christine did, and donate out of pressure or guilt.

  4. I spend (or used to spend) a lot of time helping out strangers online with their math and physics questions. I find it more satisfying than caring for oiled birds or stray dogs. Like Daniel, I see the mountain ridges of bad education all around, of which the students asking for help on IRC are just tiny pebbles. Unlike Daniel, I do not feel that I "can't possibly do enough". I help people when I feel like it and I don't pretend that I am a better person because of it, even if they thank me profusely after finally understanding how a free-body diagram works. I do wish someone more capable worked on improving the education system to work better than at 1% efficiency, and I have seen isolated cases of it, but I do not feel that it is my problem to deal with. Wrong skillset.

  5. I have read a fair amount of EA propaganda, and I still do not feel that I "should care about people suffering far away", sorry. (Not really sorry, no.) It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless. I am happy that other people care, in case I am in the situation where I need their help. I am also happy that some people give money to those who care, for the same reason. I might even chip in, if it hits close to home.

  6. I do not feel that I would be a better person if I donated more money or dedicated my life to solving one of the "biggest problems", as opposed to doing what I am good at, though I am happy that some people feel that way; humanity's strength is in its diversity.

  7. Again, one of the main strengths of humankind is its diversity, and the Bell-curve outliers like "Gandhi, Mother Teresa, Nelson Mandela" tend to have more effect than those of us within 1 standard deviation. Some people address "global poverty", others write poems, prove theorems, shoot the targets they are told to, or convince other people to do what they feel is right. No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.

  8. I don't feel the weight of the world. Because it does not weigh on me.

Note: having reread what I wrote, I suspect that some people might find it kind of Objectivist. I actually tried reading Atlas Shrugged and quit after 100 pages or so, getting extremely annoyed by the author belaboring an obvious and trivial point over and over. So I only have a vague idea what the movement is all about. And I have no interest in finding out more, given that people who find this kind of writing insightful are not ones I want to associate with.

Replies from: So8res, kalium, Kaj_Sotala, Richard_Kennaway, None, pianoforte611, ShardPhoenix
comment by So8res · 2014-10-10T06:51:20.484Z · LW(p) · GW(p)

I don't disagree, and I don't think you're a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)

A few things, though:

No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.

This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won't (e.g. after a syn bio proof of concept that kills 1/4 of the race I would hope that diversity in problem-selection would decrease). "It is best to diversify and hope" seems like a platitude that dodges the fun parts.

I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know.

I also have this feeling, in a sense. I interpret it very differently, and I am aware of the typical mind fallacy, but I also caution against the "you must be Fundamentally Different" fallacy. Part of the theme behind this post is "you can interpret the internal caring feelings differently if you want", and while I interpret my care-senses differently, I do empathize with this sentiment.

That's not to say that you should come around to my viewpoint, by any means. But if you (or others) would like to try, for one reason or another, consider the following points:

  1. Do you care only about the people who are currently close friends, or also the people who could be close friends? Is the value a property of the person, or a property of the fact that that person has been brought to your awareness?
  2. Would you care more about humans in a context where humanity is treated as the 'in-group'? For example, consider a situation where an alien race is at war with humans, and a roving band of alien brutes have captured a human family and are torturing them for fun. Does this boil your blood? Or do you not really care?
  3. I assume that you wouldn't push a friend in front of the trolley to save ten strangers. However, if you and a friend were in a room with ten strangers behind a veil of uncertainty, and were informed that the twelve of you were about to play in a trolley game, would you sign a contract which stated that (assuming unanimous agreement) the pusher agrees to push the pushee?

In my case, much of my decision to care about the rest of the world is due to an adjustment upwards of the importance of other people (after noticing that I tend to care significantly about people after I have gotten to know them very well, and deciding that people don't matter less just because I'm not yet close to them). There's also a significant portion of my caring that comes from caring about others because I would want others to care about me if the positions were reversed, and this seeming like the right action in a timeless sense.

Finally, much of my caring comes from treating all of humanity as my in-group (everyone is a close friend, I just don't know most of them yet; see also the expanding circle).

I mess with my brother sometimes, but anyone else who tries to mess with my brother has to go through me first. Similarly there is some sense in which I don't "care" about most of the nameless masses who are out of my sight (in that I don't have feelings for them), but there's a fashion in which I do care about them, in that anyone who fucks with humans fucks with me.

Disease, war, and death are all messing with my people, and while I may not be strong enough to do anything about it today, there will come a time.

Replies from: Jiro
comment by Jiro · 2014-10-16T19:06:24.228Z · LW(p) · GW(p)

Do you care only about the people who are currently close friends, or also the people who could be close friends?

There may be a group of people, such that it is possible for any one individual of the group to become my close friend, but where it is not possible for all the individuals to become my close friends simultaneously.

In that case, saying "any individual could become a close friend, so I should multiply 'caring for one friend' by the number of individuals in the group" is wrong. Instead, I should multiply "caring for one friend" by the number of individuals in the group who can become my friends simultaneously, and not take into account the individuals in excess of that. In fact, even that may be too strong. It may be possible for one individual in the group to become my close friend only at the cost of reducing the closeness to my existing friends, in which case I should conclude that the total amount I care shouldn't increase at all.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-17T15:01:36.274Z · LW(p) · GW(p)

The point is that the fact that someone happens to be your close friend seems like the wrong reason to care about them.

Let's say, for example, that:

  1. If X was my close friend, I would care about X
  2. If Y was my close friend, I would care about Y
  3. X and Y could not both be close friends of mine simultaneously.

Why should whether I care for X or care for Y depend on which one I happen to end up being close friends with? Rather, why shouldn't I just care about both X and Y regardless of whether they are my close friends or not?

Replies from: Jiro, Lumifer
comment by Jiro · 2014-10-17T15:23:55.871Z · LW(p) · GW(p)

Perhaps I have a limited amount of caring available and I am only able to care for a certain number of people. If I tried to care for both X and Y I would go over my limit and would have to reduce the amount of caring for other people to make up for it. In fact, "only X or Y could be my close friend, but not both" may be an effect of that.

It's not "they're my close friend, and that's the reason to care about them", it's "they're under my caring limit, and that allows me to care about them". "Is my close friend" is just another way to express "this person happened, by chance, to be added while I was still under my limit". There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don't affect their merit as a person, such as living closer to you).

Of course, this sounds bad because of platitudes we like to say but never really mean. We like to say that our friends are special. They aren't; if you had lived somewhere else or had different random experiences, you'd have had different close friends.

Replies from: Vaniver, lackofcheese
comment by Vaniver · 2014-10-17T16:46:48.256Z · LW(p) · GW(p)

Is my close friend" is just another way to express "this person happened, by chance, to be added while I was still under my limit". There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don't affect their merit as a person, such as living closer to you).

I think I would state a similar claim in a very different way. Friends are allies; both of us have implicitly agreed to reserve resources for the use of the other person in the friendship. (Resources are often as simple as 'time devoted to a common activity' or 'emotional availability.') Potential friends and friends might be indistinguishable to an outside observer, but to me (or them) there's an obvious difference in that a friend can expect to ask me for something and get it, and a potential friend can't.

(Friendships in this view don't have to be symmetric: there are people whose complaining I'd listen to without expecting that they'd listen to me complain, and the reverse exists as well.)

They aren't; if you had lived somewhere else or had different random experiences, you'd have had different close friends.

I think that it's reasonable to call facts 'special' relative to counterfactuals: yes, I would have had different college friends if I had gone to a different college, but I did actually go to the college I went to, and actually did make the friends I did there.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-18T07:20:26.370Z · LW(p) · GW(p)

That's a solid point, and to a significant extent I agree.

There are quite a lot of things that people can spend these kinds of resources on that are very effective at a small scale. This is an entirely sufficient basis to justify the idea of friends, or indeed "allies", which is a more accurate term in this context. A network of local interconnections of such friends/allies who devote time and effort to one another is quite simply a highly efficient way to improve overall human well-being.

This also leads to a very simple, unbiased moral justification for devoting resources to your close friends; it's simply that you, more so than other people, are in a unique position to affect the well-being of your friends, and vice versa. That kind of argument is also an entirely sufficient basis for some amount of "selfishness": ceteris paribus, you yourself are in a better position to improve your own well-being than anyone else is.

However, this is not the same thing as "caring" in the sense So8res is using the term; I think he's using the term more in the sense of "value". For the above reasons, you can value your friends equally to anyone else while still devoting more time and effort to them. In general, you're going to be better able to help your close friends than you are a random stranger on the street.

comment by lackofcheese · 2014-10-18T06:57:20.138Z · LW(p) · GW(p)

The way you put it, it seems like you want to care for both X and Y but are unable to.

However, if that's the case then So8res's point carries, because the core argument in the post translates to "if you think you ought to care about both X and Y but find yourself unable to, then you can still try to act the way that you would if you did, in fact, care about both X and Y".

Replies from: Jiro
comment by Jiro · 2014-10-18T09:26:24.422Z · LW(p) · GW(p)

The way you put it, it seems like you want to care for both X and Y but are unable to.

"I want to care for an arbitrarily chosen person from the set of X and Y" is not "I want to care for X and Y". It's "I want to care for X or Y".

comment by Lumifer · 2014-10-17T15:24:33.399Z · LW(p) · GW(p)

the fact that someone happens to be your close friend seems like the wrong reason to care about them

Why do you think so? It seems to me the fact that someone is my close friend is an excellent reason to care about her.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-18T07:33:11.681Z · LW(p) · GW(p)

I think it depends on what you mean by "care".

If you mean "devote time and effort to", sure; I completely agree that it makes a lot of sense to do this for your friends, and you can't do that for everyone.

If you mean "value as a human being and desire their well-being", then I think it's not justifiable to afford special privilege in this regard to close friends.

Replies from: Lumifer
comment by Lumifer · 2014-10-18T21:20:03.323Z · LW(p) · GW(p)

I think it depends on what you mean by "care".

By "care" I mean allocating a considerably higher value to his particular human compared to a random one.

I think it's not justifiable

Yes, I understand you do, but why do you think so?

Replies from: lackofcheese
comment by lackofcheese · 2014-10-19T04:04:11.047Z · LW(p) · GW(p)

I don't think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself. If anything it should be a function of the qualities or the nature of that person, or perhaps even blanket equality.

If I believe that my friends are more valuable, it should be because of the qualities that led to them being my friend rather than simply the fact that they are my friends. However, if that's so then there are many, many other people in the world who have similar qualities but are not my friends.

Replies from: Jiro, Lumifer
comment by Jiro · 2014-10-19T07:57:31.594Z · LW(p) · GW(p)

I don't think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself.

I assume you would pay your own mortgage. Would you mind paying my mortgage as well?

Replies from: lackofcheese
comment by lackofcheese · 2014-10-19T09:52:15.455Z · LW(p) · GW(p)

I can't pay everyone's mortgage, and neither can anyone else, so different people will need to pay for different mortgages.

Which approach works better, me paying my mortgage and you paying yours, or me paying your mortgage and you paying mine?

Replies from: Jiro, elharo
comment by Jiro · 2014-10-19T15:46:19.240Z · LW(p) · GW(p)

If you care equally for two people, your money should go to the one with the greatest need. It is very unlikely that in a country with many mortgage-payers, the person with the greatest need is you. So you should be paying down other people's mortgages until the mortgages of everyone in the world leave them no worse off than you with respect to mortgages; only then should you pay anything toward your own.

And even if it's impractical to distribute your money to all mortgage payers in the world, surely you could find a specific mortgage payer who is so bad off that paying the mortgage of just this one person satisfies a greater need than paying off your own.

But you don't. And you can't. And everyone doesn't and can't, not just for mortgages, but for, say, food or malaria nets. You don't send all your income above survival level to third-worlders who need malaria nets (or whatever other intervention people need the most); you don't care for them and yourself equally.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-19T18:31:16.798Z · LW(p) · GW(p)

Yes, if I really ought to value other human beings equally then it means I ought to devote a significant amount of time and/or money to altruistic causes, but is that really such an absurd conclusion?

Perhaps I don't do those things, but that doesn't mean I can't and it doesn't mean I shouldn't.

Replies from: Jiro
comment by Jiro · 2014-10-20T03:42:51.609Z · LW(p) · GW(p)

You can say either

  1. You ought to value other human beings equally, but you don't.
  2. You do value other human beings equally, and you ought to act in accordance with that valuation, but you don't.

You appear to be claiming 2 and denying 1. However, I don't see a significant difference between 1 and 2; 1 and 2 result in exactly the same actions by you and it ends up just being a matter of semantics.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-20T04:22:04.505Z · LW(p) · GW(p)

I agree; I don't see a significant difference between thinking that I ought to value other human beings equally but failing to do so, and actually viewing them equally and not acting accordingly. If I accept either (1) or (2) it's still a moral failure, and it is one that I should act to correct. In either case, what matters is the actions that I ought to take as a result (i.e. effective altruism), and I think the implications are the same in both cases.

That being said, I guess the methods that I would use to correct the problem would be different in either hypothetical. If it's (1) then there may be ways of thinking about it that would result in a better valuation of other people, or perhaps to correct for the inaccuracy of the care-o-meter as per the original post.

If it's (2), then the issue is one of akrasia, and there are plenty of psychological tools or rationalist techniques that could help.

Of course, (1) and (2) aren't the only possibilities here; there are at least two more that are important.

Replies from: Jiro
comment by Jiro · 2014-10-20T14:29:43.261Z · LW(p) · GW(p)

You seem to be agreeing by not really agreeing. What does it even mean to say "I value other people equally but I don't act on that"? Your actions imply a valuation, and in that implied valuation you clearly value yourself more than other people. It's like saying "I prefer chocolate over vanilla ice cream, but if you offer me both I'll always pick the vanilla". Then you don't really prefer chocolate over vanilla, because that's what it means to prefer something.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-20T18:59:43.379Z · LW(p) · GW(p)

My actions alone don't necessarily imply a valuation, or at least not one that makes any sense.

There are a few different levels at which one can talk about what it means to value something, and revealed preference is not the only one that makes sense.

Replies from: hyporational
comment by hyporational · 2014-10-21T05:14:48.865Z · LW(p) · GW(p)

My actions alone don't necessarily imply a valuation, or at least not one that makes any sense.

Is this basically another way of saying that you're not the king of your brain, or something else?

Replies from: lackofcheese
comment by lackofcheese · 2014-10-21T05:19:09.306Z · LW(p) · GW(p)

That's one way to put it, yes.

comment by elharo · 2014-10-19T10:16:26.858Z · LW(p) · GW(p)

As usual, the word "better" hides a lot of relevant detail. Better for whom? By what measure?

Shockingly, in at least some cases by some measures, though, it works better for us if I pay your debt and you pay my debt, because it is possible for a third party to get much, much better terms on repayment than the original borrower. In many cases, debts can be sold for pennies on the dollar to anyone except the original borrower. See any of these articles

comment by Lumifer · 2014-10-20T16:37:49.639Z · LW(p) · GW(p)

the worth of a human being

Ah. It seems we have been talking about somewhat different things.

You are talking about the worth of a human being. I'm talking about my personal perception of the value of a human being under the assumption that other people can and usually do have different perceptions of the same value.

I try not to pass judgement on the worth of humans, but I am quite content with assigning my personal values to people based, in part, on "their proximity and/or relation to myself".

Replies from: lackofcheese
comment by lackofcheese · 2014-10-20T17:30:42.013Z · LW(p) · GW(p)

I'm not entirely sure what a "personal perception of the value of a human being" is, as distinct from the value or worth of a human being. Surely the latter is what the former is about?

Granted, I guess you could simply be talking about their instrumental value to yourself (e.g. "they make me happy"), but I don't think that's really the main thrust of what "caring" is.

Replies from: Lumifer
comment by Lumifer · 2014-10-20T17:37:09.030Z · LW(p) · GW(p)

I'm not entirely sure what a "personal perception of the value of a human being" is, as distinct from the value or worth of a human being.

The "worth a human being" implies that there is one, correct, "objective" value for that human being. We may not be able to observe it directly so we just estimate it, with some unavoidable noise and errors, but theoretically the estimates will converge to the "true" value. The worth of a human being is a function with one argument: that human being.

The "personal perception of the value of a human being" implies that there are multiple, different, "subjective" values for the same human being. There is no single underlying value to which the estimates converge. The personal perception of a value is a function with two arguments: who is evaluated and who does the evaluation.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-20T19:24:27.281Z · LW(p) · GW(p)

So, either there is such a thing as the "objective" value and hence, implicitly, you should seek to approach that value, or there is not.

I don't see any reason to believe in an objective worth of this kind, but I don't really think it matters that much. If there is no single underlying value, then the act of assigning your own personal values to people is still the same thing as "passing judgement on the worth of humans", because it's the only thing those words could refer to; you can't avoid the issue simply by calling it a subjective matter.

In my view, regardless of whether the value in question is "subjective" or "objective", I don't think it should be determined by the mere circumstance of whether I happened to meet that person or not.

Replies from: Lumifer
comment by Lumifer · 2014-10-20T20:35:43.443Z · LW(p) · GW(p)

So, for example, you believe that to a mother the value of her own child should be similar to that of a random person anywhere on Earth -- right? It's a "mere circumstance" that this particular human happens to be her child.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-21T03:09:43.477Z · LW(p) · GW(p)

Probably not just any random person, because one can reasonably argue that children should be valued more highly than adults.

However, I do think that the mother should hold other people's children as being of equal value to her own. That doesn't mean valuing her own children less, it means valuing everyone else's more.

Sure, it's not very realistic to expect this of people, but that doesn't mean they shouldn't try.

Replies from: hyporational
comment by hyporational · 2014-10-21T03:25:58.358Z · LW(p) · GW(p)

one can reasonably argue that children should be valued more highly than adults.

One can reasonably argue the other way too. New children are easier to make than new adults.

However, I do think that the mother should hold other people's children as being of equal value to her own. That doesn't mean valuing her own children less, it means valuing everyone else's more.

Since she has finite resources, is there a practical difference?

It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-21T04:30:05.936Z · LW(p) · GW(p)

One can reasonably argue the other way too. New children are easier to make than new adults.

True. However, regardless of the relative value of children and adults, it is clear that one ought to devote significantly more time and effort to children than to adults, because they are incapable of supporting themselves and are necessarily in need of help from the rest of society.

Since she has finite resources, is there a practical difference?

Earlier I specifically drew a distinction between devoting time and effort and valuation; you don't have to value your own children more to devote yourself to them and not to other people's children.

That said, there are some practical differences. First of all, it may be better not to have children if you could do more to help other people's children. Secondly, if you do have children and still have spare resources over and above what it takes to properly care for them, then you should consider where those spare resources could be spent most effectively.

It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.

If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse. Besides, what exactly do you mean by "extreme altruism"?

Replies from: hyporational
comment by hyporational · 2014-10-21T05:05:03.463Z · LW(p) · GW(p)

If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse.

A good point. By abuse I wouldn't necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people.

Besides, what exactly do you mean by "extreme altruism"?

Valuing people equally by default when their instrumental value isn't considered. I hope I didn't misunderstand you. That's about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-21T06:42:23.266Z · LW(p) · GW(p)

A good point. By abuse I wouldn't necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people.

Sure, and there isn't really anything wrong with that as long as the person receiving the resources really needs them.

Valuing people equally by default when their instrumental value isn't considered. I hope I didn't misunderstand you. That's about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.

The term "altruism" is often used to refer to the latter, so the clarification is necessary; I definitely don't agree with that extreme.

In any case, it may not be reasonable to expect people (or yourself) to hold to that valuation, or to act in complete recognition of what that valuation implies even if they do, but it seems like the right standard to aim for. If you are likely biased against valuing distant strangers as much as you ought to, then it makes sense to correct for it.

comment by kalium · 2014-10-08T09:19:53.153Z · LW(p) · GW(p)

My view is similar to yours, but with the following addition:

I have actual obligations to my friends and family, and I care about them quite a bit. I also care to a lesser extent about the city and region that I live in. If I act as though I instead have overriding obligations to the third world, then I risk being unable to satisfy my more basic obligations. To me, if for instance I spend my surplus income on mosquito nets instead of saving it and then have some personal disaster that my friends and family help bail me out of (because they also have obligations to me), I've effectively stolen their money and spent it on something they wouldn't have chosen to spend it on. While I clearly have some leeway in these obligations and get to do some things other than save, charity falls into the same category as dinner out: I spend resources on it occasionally and enjoy or feel good about doing so, but it has to be kept strictly in check.

comment by Kaj_Sotala · 2014-10-08T07:28:32.099Z · LW(p) · GW(p)

I feel like I'm somewhere halfway between you and so8res. I appreciate you sharing this perspective as well.

comment by Richard_Kennaway · 2014-10-08T00:18:06.407Z · LW(p) · GW(p)

Thank you for posting that. My views and feelings about this topic are largely the same. (There goes any chance of my being accepted for a CFAR workshop. :))

On the question of thousands versus gigantic numbers of future people, what I would value is the amount of space they explore, physical and experiential, rather than numbers. A single planetful of humans is worth almost the same as a galaxy of them, if it consists of the same range of cultures and individuals, duplicated in vast numbers. The only greater value in a larger population is the more extreme range of random outliers it makes available.

comment by [deleted] · 2014-10-08T19:45:32.009Z · LW(p) · GW(p)

Thank you for stating your perspective and opinion so clearly and honestly. It is valuable. Now allow me to do the same, and follow by a question (driven by sincere curiosity):

I do not think I am a worse person than you because of that.

I think you are.

It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless.

You are heartless.

I care about the humanity surviving and thriving, in the abstract

Here's my question, and I hope you take the time to answer as honestly as you wrote your comment:

Why?

After all you've rejected to care about, why in the world would you care about something as abstract as "humanity surviving and thriving"? It's just an ape species, and there have already been billions of them. In addition, you clearly don't care about numbers of individuals or quality of life. And you know the heat death of the universe will kill them all off anyway, if they survive the next few centuries.

I don't mean to convince you otherwise, but it seems arbitrary - and surprisingly common - that someone who doesn't care about the suffering or lives of strangers would care about that one thing out of the blue.

Replies from: TheOtherDave, shminux, Bugmaster, Jiro
comment by TheOtherDave · 2014-10-08T21:01:02.150Z · LW(p) · GW(p)

I can't speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn't seem at all arbitrary or puzzling to me.

I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?

It doesn't seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?

Replies from: None
comment by [deleted] · 2014-10-09T02:08:36.970Z · LW(p) · GW(p)

I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?

Yes, rereading it, I think I misinterpreted response 2 as saying it doesn't matter whether a population of 1,000 people has a long future or a population of one googolplex [has an equally long future]. That is, that population scope doesn't matter, just durability and survival. I thought this defeated the usual Big Future argument.

But even so, his 5 turns it around: Practically all people in the Big Future will be strangers, and if it is only "nicer" if they don't suffer (translation: their wellbeing doesn't really matter), then in what way would the Big Future matter?

I care a lot about humanity's future, but primarily because of its impact on the total amount of positive and negative conscious experiences that it will cause.

comment by Shmi (shminux) · 2014-10-08T21:57:08.577Z · LW(p) · GW(p)

...Slow deep breath... Ignore inflammatory and judgmental comments... Exhale slowly... Resist the urge to downvote... OK, I'm good.

First, as usual, TheOtherDave has already put it better than I could.

Maybe to elaborate just a bit.

First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous "après nous, le déluge" ("after us, the flood") attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and it is predicated on humanity in general having one.

Second, a good life and a bright future for the people I care about does not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to them. Other people, including you, care a lot. Good for them.

Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people's lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it's up to them. Live and let live and such.

Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.

Replies from: Weedlayer, gjm, pianoforte611
comment by Weedlayer · 2014-10-09T08:32:50.186Z · LW(p) · GW(p)

I'm actually having difficulty understanding the sentiment "I get annoyed at those who think their morals are better than mine". I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn't everyone think their morals are better than other people's?

That's the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don't really care and certainly don't think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don't, then of course I think my morality is better. Morals differ from tastes in that people believe that it's not just different, but WRONG to not follow them. If you remove that element from morality, what's left? The sentiment "I have these morals, but other people's morals are equally valid" sounds good, all egalitarian and such, but it doesn't make any sense to me. People judge the value of things through their moral system, and saying "System B is as good as System A, based on System A" is borderline nonsensical.

Also, as an aside, I think you should avoid rhetorical statements like "call me heartless if you like" if you're going to get this upset when someone actually does.

Replies from: Lumifer
comment by Lumifer · 2014-10-09T14:51:42.434Z · LW(p) · GW(p)

but doesn't everyone think their morals are better than other people's?

I don't.

Replies from: hyporational, Weedlayer
comment by hyporational · 2014-10-09T17:55:52.377Z · LW(p) · GW(p)

Would you make that a normative statement?

Replies from: Lumifer
comment by Lumifer · 2014-10-09T18:06:16.658Z · LW(p) · GW(p)

Well, kinda-sorta. I don't think the subject is amenable to black-and-white thinking.

I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However I don't feel that people who think their morals are bad are to be admired and emulated either.

There is some similarity to how smart do you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn't good either.

Replies from: hyporational
comment by hyporational · 2014-10-09T18:17:53.275Z · LW(p) · GW(p)

So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems? What happens if you know to profess the former kind of a moral system and agree with the whole statement? :)

Replies from: Lumifer
comment by Lumifer · 2014-10-09T18:22:27.308Z · LW(p) · GW(p)

So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems?

In one particular aspect, yes. There are many aspects.

The barber shaves everyone who doesn't shave himself..? X-)

comment by Weedlayer · 2014-10-09T15:44:22.256Z · LW(p) · GW(p)

So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is?

I know it's possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.

Replies from: Lumifer, hyporational
comment by Lumifer · 2014-10-09T15:55:39.350Z · LW(p) · GW(p)

You are conflating two very different statements:

(1) I don't think that my morals are (always, necessarily) better than other people's.

(2) I have no basis whatsoever for judging morality and/or behavior of other people.

Replies from: Weedlayer
comment by Weedlayer · 2014-10-09T17:07:11.144Z · LW(p) · GW(p)

What basis do you have for judging others' morality other than your own morality? And if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own?

I mean, it's the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I'm evaluating others' beliefs I basically judge them by how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.

Which of course is similar to the argument people sometimes bring up about "moral progress", claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).

My question, though, is: how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?

Replies from: Lumifer
comment by Lumifer · 2014-10-09T17:29:35.904Z · LW(p) · GW(p)

if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own?

No, I don't think so.

Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.

When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.

Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However there's no reason to consider your own value system to be the very best there is, especially given that it's your conscious mind that makes such comparisons, while part of morality is submerged and usually unseen by consciousness. Looking at an exact copy of your own morals you will evaluate them as just fine, but not necessarily perfect.

Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.

Replies from: Weedlayer
comment by Weedlayer · 2014-10-09T21:40:08.118Z · LW(p) · GW(p)

This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I'll make two points and see if they move the conversation forward:

1: "There's no reason to consider your own value system to be the very best there is"

This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren't the absolute best there is. The same logic holds true for morals. I know I'm making some mistakes, but I don't know where those mistakes are. On any individual issue, I think I'm right, and therefore logically if someone disagrees with me, I think they're wrong. This is what I mean by "thinking that one's own morals are the best". I know I might not be right on everything, but I think I'm right about every single issue, even the ones I might really be wrong about. After all, if I was wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary; I have many beliefs I consider to be only approximations -- the best of any explanation I have heard so far. Not perfect, but "least wrong").

Which brings me to point 2.

2: "Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were."

I'm absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I've been equivocating between the two, that's why). I know I can't alter my moral beliefs on a whim, but that's because I have no reason to want to. Consider self-modifying to want to murder innocents. I can't do this, primarily because I don't want to, and CAN'T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn't get a million dollars?). I suppose modifying instrumental values to terminal values (which morals are) to enhance motivation is a possible reason, but that's an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying "You should do X". So wishing I had a different morality is like saying "I wish I thought I should do X". What does that mean?

Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.

In short, I'm with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there's a discontinuity either in my reading or your writing.

Replies from: Lumifer, hyporational
comment by Lumifer · 2014-10-10T04:22:21.609Z · LW(p) · GW(p)

we both seem to agree on what morality is

That's already an excellent start :-)

To me, a moral belief and a factual belief are approximately equal

Ah. It seems we approach morals from somewhat different angles. To you, morals are somewhat like physics -- a system of "hard" facts which, generally speaking, are either correct or not. As you say, "On any individual issue, I think I'm right, and therefore logically if someone disagrees with me, I think they're wrong."

To me, morals are more like preferences -- a flexible system for evaluating choices. You can have multiple ways to do that, and they don't have to be either correct or incorrect.

Consider a simple example: eating meat. I am a carnivore and think that eating meat is absolutely fine from a moral point of view. Let's take Alice, who is an ideological vegetarian. She feels that eating meat is morally wrong.

My moral position is different from (in fact, diametrically opposed to) Alice's, but I'm not going to say that Alice's morals are wrong. They are just different and she has every right to have her own.

That does not apply to everything, of course. There are "zones" where I'm fine with opposite morals and there are "zones" where I am not. But even when I would not accept a sufficiently different morality I would hesitate to call it wrong. It seems an inappropriate word to use when there is no external, objective yardstick one could apply. It probably would be better to say that there is a range of values/morals that I consider acceptable and there is a range which I do not.

If I wished I held certain moral beliefs, I already have them.

No, I don't think so. Morals are values, not desires. It's not particularly common to wish to hold different values (I think), but I don't see why this is impossible. For example, consider somebody who values worldly success, winning, being at the top. But he has a side which isn't too happy with this constant drive, the trampling of everything in the rush to be the first, the sacrifices it requires. That side of his would prefer him to value success less.

In general, people sometimes wish to radically change themselves (religious (de)conversions, acceptance of major ideologies, etc.) and that usually involves changing their morality. That doesn't happen in a single moment.

Replies from: Weedlayer
comment by Weedlayer · 2014-10-10T05:02:53.011Z · LW(p) · GW(p)

My moral position is different from (in fact, diametrically opposed to) Alice's, but I'm not going to say that Alice's morals are wrong

You do realize she's implicitly calling you complicit in the perpetuation of the suffering and deaths of millions of animals, right? I'm having difficulty understanding how you can NOT say that her morality is wrong. Her ACTIONS are clearly unobjectionable (eating plants is certainly not worse than eating meat under the vast majority of ethical systems) but her MORALITY is quite controversial. I have a feeling that you accept this case because she is not doing anything that violates your own moral system, while you are doing something that violates hers. To use a (possibly hyperbolic and offensive) analogy, this is similar to a case where a murderer calls the morals of someone who doesn't accept murder "just different", something they have every right to have.

No, I don't think so. (and following text)

I don't think your example works. He values success, AND he values other things (family, companionship, etc.). I'm not sure why you're calling different values "Different sides" as though they are separate agents. We all have values that occasionally conflict. I value a long life, even biological immortality if possible (I know, what am I doing on lesswrong with a value like that? /sarcasm), but I wouldn't sacrifice 1000 lives a day to keep me alive atop a golden throne. This doesn't seem like a case of my "Don't murder" side wanting me to value immortality less; it's more a case of considering the expected utility of my actions and coming to a conclusion about what collateral damage I'm willing to accept. It's a straight calculation, no value readjustment required.

As for your last point, I've never experienced such a radical change (I was raised religiously, but outside of weekly mass my family never seemed to take it very seriously and I can't remember caring too much about it). I actually don't know what makes other people adopt ideologies. For me, I'm a utilitarian because it seems like a logical way to formalize my empathy and altruistic desires, and to this day I have difficulty grokking deontology like natural law theology (you would think being raised Catholic would teach you some of that. It did not).

So, to summarize my ramblings: I think your first example only LOOKS like reasonable disagreement because Alice's actions are unobjectionable to you, and you would feel differently if positions were reversed. I think your example of different sides is really just explaining different values, which have to be weighed against each other but need not cause moral distress. And I have no idea what to make of your last point.

If I ignored or misstated any of your points, or am just completely talking over you and not getting the point at all, please let me know.

Replies from: Lumifer
comment by Lumifer · 2014-10-10T05:44:42.429Z · LW(p) · GW(p)

I'm having difficulty understanding how you can NOT say that her morality is wrong.

I think the terms "acceptable" and "not acceptable" are much better here than right and wrong.

If the positions were reversed, I might find Alice's morality unacceptable to me, but I still wouldn't call it wrong.

I'm not sure why you're calling different values "Different sides" as though they are separate agents.

No, I'm not talking about different values here. Having different conflicting values is entirely normal and commonplace. I am here implicitly accepting the multi-agent theory of mind and saying that a part of Bob's (let's call the guy Bob) personality would like to change his values. It might even be a dominant part of Bob's conscious personality, but it still is having difficulty controlling his drive to win.

Or let's take a different example, with social pressure. Ali Ababwa emigrated from Backwardistan to the United States. His original morality was that women are... let's say inferior. However Ali went to school in the US, got educated and somewhat assimilated. He understands -- consciously -- that his attitude towards women is neither adequate nor appropriate and moreover, his job made it clear to him that he ain't in Backwardistan any more and noticeable sexism will get him fired. And yet his morals do not change just because he would prefer them to change. Maybe they will, eventually, but it will take time.

I've never experienced such a radical change

Sure, but do you accept that other people have?

comment by hyporational · 2014-10-10T03:16:06.205Z · LW(p) · GW(p)

I think akrasia could also be an issue of being mistaken about your beliefs, not all of which you're conscious of at any given time.

comment by hyporational · 2014-10-09T18:06:36.069Z · LW(p) · GW(p)

It's not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.

So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is?

So while I wouldn't murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn't seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn't seem to provide any new information.

Replies from: Weedlayer
comment by Weedlayer · 2014-10-09T21:53:24.733Z · LW(p) · GW(p)

There's no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"? I think this is basically the point EY makes about the "psychological unity of humankind".

Of course, this dream goes out the window with UFAI and aliens. Let's hope we don't have to deal with those.

Replies from: Decius, army1987
comment by Decius · 2014-10-15T07:43:27.359Z · LW(p) · GW(p)

Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"?

Yes, it should. However, in the hypothetical case involved, that reason does not hold; the hypothetical brain does not have the quality "has empathy and values survival and survival is impaired by murder".

We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and "Has a moral system that prohibits murder" is a quality that successfully creates offspring that typically have the quality "Has a moral system that prohibits murder".

The different quality "Commits wanton murder" is less successful at creating offspring in modern society, because convicted murderers don't get to teach children that committing wanton murder is something to do.

comment by A1987dM (army1987) · 2014-10-11T09:32:09.485Z · LW(p) · GW(p)

I think those similarities are much less strong than EY appears to suggest; see e.g. “Typical Mind and Politics”.

comment by gjm · 2014-10-08T23:33:34.744Z · LW(p) · GW(p)

inflammatory and judgmental comments

It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to "call [you] heartless", you should not then complain of someone else's "inflammatory and judgmental comments" when they take you up on the offer.

And it doesn't seem to me that Hedonic_Treader's response was particularly thoughtless or disrespectful.

(For what it's worth, I don't think your comments indicate that you're heartless.)

comment by pianoforte611 · 2014-10-08T23:55:01.809Z · LW(p) · GW(p)

It's interesting because people will often accuse a low-status outgroup of "thinking they are better than everyone else".* But I had never actually seen anyone claim that their ingroup is better than everyone else; the accusation was always made of straw... until I saw Hedonic_Treader's comment.

I do sort of understand the attitude of the utilitarian EAs. If you really believe that everyone must value everyone else's life equally, then you'd be horrified by people's brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism.

* I've seen people make this accusation against nerds, atheists, fedora wearers, feminists, left-leaning persons, Christians, etc.

Replies from: gjm, None
comment by gjm · 2014-10-09T12:41:27.936Z · LW(p) · GW(p)

the accusation was always made of straw

I expect that's correct, but I'm not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:

  • A thinks her group is better than others.
  • A's thinking this is obvious enough for B to be able to discern it with some confidence.
  • A never explicitly says that her group is better than others.

and I think people who say (e.g.) that atheists think they're smarter than everyone else would claim that that's what's happening.

I repeat, I agree that these accusations are usually pretty strawy, but it's a slightly more complicated variety of straw than simply claiming that people have said things they haven't. More specifically, I think the usual situation is something like this:

  • A really does think that, to some extent and in some respects, her group is better than others.
  • But so does everyone else.
  • B imagines that he's discerned unusual or unreasonable opinions of this sort in A.
  • But really he hasn't; at most he's picked up on something that he could find anywhere if he chose to look.

[EDITED to add, for clarity:] By "But so does everyone else" I meant that (almost!) everyone thinks that (many of) the groups they belong to are (to some extent and in some respects) better than others. Most of us mostly wouldn't say so; most of us would mostly agree that these differences are statistical only and that there are respects in which our groups are worse too; but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that's partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).

Replies from: CCC, army1987
comment by CCC · 2014-10-09T13:51:19.121Z · LW(p) · GW(p)

I do imagine that the first situation is more common, in general, than the second.

This is entirely because of the point:

  • But so does everyone else.

A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.

Replies from: gjm
comment by gjm · 2014-10-09T13:54:27.296Z · LW(p) · GW(p)

Sorry, I wasn't clear enough. By "so does everyone else" I meant "everyone else considers the groups they belong to to be, to some extent and in some respects, better than others".

Replies from: CCC
comment by CCC · 2014-10-09T18:17:58.420Z · LW(p) · GW(p)

Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I'm not sure that it's actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.

comment by A1987dM (army1987) · 2014-10-11T09:38:12.042Z · LW(p) · GW(p)

but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that's partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).

Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream -- by my standards; it doesn't follow that I also believe it's better by your standards or by objective standards (whatever they might be) and feel smug about it.

Replies from: gjm
comment by gjm · 2014-10-11T12:33:28.000Z · LW(p) · GW(p)

Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance.

(Even food preferences -- a thing so notoriously subjective that the very word "taste" is used in other contexts to indicate something subjective and personal -- can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)

comment by [deleted] · 2014-10-09T01:58:47.735Z · LW(p) · GW(p)

Perhaps to avoid confusion, my comment wasn't intended as an in-group out-group thing or even as a statement about my own relative status.

"Better than" and "worse than" are very simple relative judgments. If A rapes 5 victims a week and B rapes 6, A is a better person than B. If X donates 1% of his income potential to good charities and Y donates 2%, X is a worse person than Y (all else equal). It's a rather simple statement of relative moral status.

Here's the problem: If we pretend - like some in the rationalist community do - that all behavior is morally equivalent and all morals are equal, then there is no social incentive to behave prosocially when possible. Social feedback matters and moral judgments have their legitimate place in any on-topic discourse.

Finally, caring about not caring is self-defeating: one cannot logically judge judgmentalism without being judgmental oneself.

Replies from: Lumifer, Jiro
comment by Lumifer · 2014-10-09T04:43:07.969Z · LW(p) · GW(p)

If we pretend - like some in the rationalist community do - that all behavior is morally equivalent and all morals are equal

That's a strawman. I haven't seen anyone say anything like that. What some people do say is that there is no objective standard by which to judge various moralities (that doesn't make them equal, by the way).

there is no social incentive to behave prosocially when possible

Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.

moral judgments have their legitimate place in any on-topic discourse.

Why is that?

Replies from: army1987, hyporational
comment by A1987dM (army1987) · 2014-10-11T09:42:18.500Z · LW(p) · GW(p)

Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.

What do you mean by “morality”? Were the incentives the Heartstone wearer was facing when deciding whether to kill the kitten about morality, or not?

Replies from: Lumifer
comment by Lumifer · 2014-10-14T17:40:56.969Z · LW(p) · GW(p)

By morality I mean a particular part of somebody's system of values. Roughly speaking, morality is the socially relevant part of the value system (though that's not a hard definition, but rather a pointer to the area where you should search for it).

comment by hyporational · 2014-10-09T05:38:51.861Z · LW(p) · GW(p)

It seems self-termination was the most altruistic way of ending the discussion. A tad over the top, I think.

comment by Jiro · 2014-10-09T02:04:05.541Z · LW(p) · GW(p)

One can judge "judgmentalism on set A" without being "judgmental on set A" (while, of course, still being judgmental on set B).

comment by Bugmaster · 2014-10-08T23:20:18.513Z · LW(p) · GW(p)

You are saying that shminux is "a worse person than you" and also "heartless", but I am not sure what these words mean. How do you measure which person is better compared to another person? If the answer is, "whoever cares about more people is better", then all you're saying is, "shminux cares about fewer people because he cares about fewer people". This is true, but tautologically so.

Replies from: roryokane
comment by roryokane · 2014-10-16T19:27:40.166Z · LW(p) · GW(p)

All morals are axioms, not theorems, and thus all moral claims are tautological.

Whatever morals we choose, we are driven to choose them by the morals we already have – the ones we were born with and raised to have. We did not get our morals from an objective external source. So no matter what your morals, if you condemn someone else by them, your condemnation will be tautological.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-17T14:57:07.590Z · LW(p) · GW(p)

I don't agree.

Yes, at some level there are basic moral claims that behave like axioms, but many moral claims are much more like theorems than axioms.

Derived moral claims also depend upon factual information about the real world, and thus they can be false if they are based on incorrect beliefs about reality.

comment by Jiro · 2014-10-08T22:40:10.390Z · LW(p) · GW(p)

It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless. You are heartless.

Then every human being in existence is heartless.

Replies from: CBHacking
comment by CBHacking · 2014-11-29T13:21:12.163Z · LW(p) · GW(p)

I disagree. There are degrees of caring, and appropriate responses to them. Admittedly, "nice" is a term with no specific meaning, but most of us can probably put it on a relative ranking with other positive terms, such as "non-zero benefit" or "decent" (which I, and probably most people, would rank below "nice") and "excellent", "wonderful", "the best thing in the world" (in the hyperbolic "best thing I have in mind right now" sense), or "literally, after months of introspection, study, and multiplying, I find that this is the best thing which could possibly occur at this time"; I suspect most native English speakers would agree that those are stronger sentiments than "nice". I can certainly think of things that are more important than merely "nice" yet less important than a reduction in death and suffering.

For example, I would really like a Tesla car, with all the features. In the category of remotely-feasible things somebody could actually give me, I actually value that higher than there's any rational reason for. On the other hand, if somebody gave me the money for such a car, I wouldn't spend it on one... I don't actually need a car, in fact don't have a place for it, and there are much more valuable things I could do with that money. Donating it to some highly-effective charity, for example.

Leaving aside the fact that "every human being in existence" appears to require excluding a number of people who really are devoting their lives to bringing about reductions in suffering and death, there are lots of people who would respond to a cessation of some cause of suffering or death more positively than to simply think it "nice". Maybe not proportionately more positively - as the post says, our care-o-meters don't scale that far - but there would still be a major difference. I don't know how common, in actual numbers, that reaction is vs. the "It would be nice" reaction (not to mention other possible reactions), but it is absolutely a significant number of people even among those who aren't devoting their whole life towards that goal.

Replies from: Jiro
comment by Jiro · 2014-11-29T18:37:20.465Z · LW(p) · GW(p)

Pretty much every human being in existence who thinks that stopping death and suffering is a good thing still spends resources on themselves and their loved ones beyond the bare minimum needed for survival. They could spend some money to buy poor Africans malaria nets, but have something which is not death or suffering which they consider more important than spending the money to alleviate death and suffering.

In that sense, it's nice that death and suffering are alleviated, but that's all.

it is absolutely a significant number of people even among those who aren't devoting their whole life towards that goal

"Not devoting their whole life towards stopping death and suffering" equates to "thinks something else is more important than stopping death and suffering".

Replies from: CBHacking
comment by CBHacking · 2014-12-01T08:43:25.950Z · LW(p) · GW(p)

False dichotomy. You can have (many!) things which are more than merely "nice" yet less than the thing you spend all available resources on. To take a well-known public philanthropist as an example, are you seriously claiming that because he does not spend every cent he has eliminating malaria as fast as possible, Bill Gates' view on malaria eradication is that "it's nice that death and suffering are alleviated, but that's all"?

We should probably taboo the word "nice" here; since we seem likely to be operating on different definitions of it. To rephrase my second sentence of this post, then: You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.

Also, your final sentence does not follow logically. To show that a particular goal is the most important thing to you, you only need to devote more resources (including time) to it than to any other particular goal. If you allocate 49% of your resources to ending world poverty, 48% to being a billionaire playboy, and 3% to personal/private uses that are not strictly required for either of those goals, that is probably not the most efficient possible manner to allocate your resources, but there is nothing you value more than ending poverty (a major cause of suffering and death) even though it doesn't even consume a majority of your resources. Of course, this assumes that the value of your resources is fixed wherever you spend them; in the real world, the marginal value of your investments (especially in things like medicine) goes down the more resources you pump into them in a given time frame; a better use might be to invest a large chunk of your resources into things that generate more resources, while providing as much towards your anti-suffering goals as they can efficiently use at once.

Replies from: gjm, Richard_Kennaway
comment by gjm · 2014-12-01T12:39:49.569Z · LW(p) · GW(p)

Let's be a bit more concrete here. If you devote approximately half your resources to ending poverty and half to being a billionaire playboy, that means something like this: you value saving 10000 Africans' lives less than you value having a second yacht. I'm sure that second yacht is fun to have, but I think it's reasonable to categorize something that you value less than 1/10000 of the increment from "one yacht" to "two yachts" as no more important than "nice".

This is of course not a problem unique to billionaire playboys, but it's maybe a more acute problem for them; a psychologically equivalent luxury for an ordinarily rich person might be a second house costing $1M, which corresponds to 1/100 as many African lives and likely brings a bigger gain in personal utility; one for an ordinarily not-so-rich person might be a second car costing $10k, another 100x fewer dead Africans and (at least for some -- e.g., two-income families living in the US where getting around without a car can be a biiiig pain) a considerable gain in personal utility. There's still something kinda indecent about valuing your second car more than a person's life, but at least to my mind it's substantially less indecent than valuing your second megayacht more than 10000 people's lives.

Suppose I have a net worth of $1M and you have a net worth of $10B. Each of us chooses to devote half our resources to ending poverty and half to having fun. That means that I think $500k of fun-having is worth the same as $500k of poverty-ending, and you think $5B of fun-having is worth the same as $5B of poverty-ending. But $5B of poverty-ending is about 10,000 times more poverty-ending than $500k of poverty-ending -- but $5B of fun-having is nowhere near 10,000 times more fun than $500k of fun-having. (I doubt it's even 10x more.) So in this situation it is reasonable to say that you value poverty-ending much less, relative to fun-having, than I do.

Pedantic notes: I'm supposing that your second yacht costs you $100M and that you can save one African's life for $10k; billionaires' yachts are often more expensive and the best estimates I've heard for saving poor people's lives are cheaper. Presumably if you focus on ending poverty rather than on e.g. preventing malaria then you think that's a more efficient way of helping the global poor, which makes your luxury trade off against more lives. I am using "saving lives" as a shorthand; presumably what you actually care about is something more like time-discounted aggregate QALYs. Your billionaire playboy's luxury purchase might be something other than a yacht. Offer void where prohibited by law. Slippery when wet.
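
A minimal sketch of the arithmetic above (in Python), taking gjm's illustrative figures at face value -- $100M for the second yacht, roughly $10k to save a life, and the $1M vs. $10B net worths with each person giving away half. These are placeholders from the comment, not data:

```python
# Illustrative only: all figures are assumed from the comment above, not real data.
yacht_cost = 100_000_000            # assumed cost of a second megayacht
cost_per_life = 10_000              # assumed cost to save one life via an effective charity

# The "second yacht vs. lives" trade-off.
print(yacht_cost / cost_per_life)   # 10000.0 lives forgone for the yacht

# The net-worth comparison: each person gives away half.
modest_giving = 500_000             # half of a $1M net worth
billionaire_giving = 5_000_000_000  # half of a $10B net worth

print(billionaire_giving / modest_giving)  # 10000.0x more poverty-ending
print(modest_giving / cost_per_life)       # 50.0 lives saved
print(billionaire_giving / cost_per_life)  # 500000.0 lives saved
```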

And, for the avoidance of doubt, I strongly endorse devoting half your resources to ending poverty and half to being a billionaire playboy, if the alternative is putting it all into being a billionaire playboy. The good you can do that way is tremendous, and I'd take my hat off to you if I were wearing one. I just don't think it's right to describe that situation by saying that poverty is the most important thing to you.

Replies from: Jiro
comment by Jiro · 2014-12-01T15:49:16.599Z · LW(p) · GW(p)

Thank you, that's what I would have said.

comment by Richard_Kennaway · 2014-12-01T12:24:57.458Z · LW(p) · GW(p)

You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.

What about the argument from marginal effectiveness? I.e. unless the best thing for you to work on is so small that your contribution reduces its marginal effectiveness below that of the second-best thing, you should devote all of your resources to the best thing.

I don't myself act on the conclusion, but I also don't see a flaw in the argument.

comment by pianoforte611 · 2014-10-08T23:38:17.266Z · LW(p) · GW(p)

This is exactly how I feel. I would slightly amend 1 to "I care about family, friends, some other people I know, and some other people I don't know but have some other connection to". For example, I care about people who are where I was several years ago and I'll offer them help if we cross paths - there are TDT reasons for this. Are they the "best" people for me to help on utilitarian grounds? No, and so what?

comment by ShardPhoenix · 2014-10-10T13:24:39.110Z · LW(p) · GW(p)

Personally I see EA* as kind of a dangerous delusion, basically people being talked into doing something stupid (in the sense that they're probably moving away from maximizing their own true utility function, to the extent that such a thing exists). When I hear about someone giving away 50% of their income when they're only middle class to begin with, I feel more pity than admiration.

* Meaning the extreme, "all human lives are equally valuable to me" version, rather than just a desire to not waste charity money.

Replies from: leplen
comment by leplen · 2014-10-27T16:44:18.356Z · LW(p) · GW(p)

I don't understand this. Why should my utility function value me having a large income or having a large amount of money? What does that get me?

I don't have a good logical reason for why my life is a lot more valuable than anyone else's. I have a lot more information about how to effectively direct resources into improving my own life vs. improving the lives of others, but I can't come up with a good reason to have a dominantly large "Life of leplen" term in my utility function. Much of the data suggests that happiness/life quality isn't well correlated with income above a certain income range and that one of the primary purposes of large disposable incomes is status signalling. If I have cheaper ways of signalling high social status, why wouldn't I direct resources into preserving/improving the lives of people who get much better life quality/dollar returns than I do? It doesn't seem efficient to keep investing in myself for little to no return.

I wouldn't feel comfortable winning a 500 dollar door prize in a drawing where half the people in the room were subsistence farmers. I'd probably tear up my ticket and give someone else a shot to win. From my perspective, just because I won the lottery on birth location and/or abilities doesn't mean I'm entitled to hundreds of times as many resources as someone else who may be more deserving but less lucky.

With that being said, I certainly don't give anywhere near half of my income to charity and it's possible the values I actually live may be closer to what you describe than the situation I outline. I'm not sure, and not sure how it changes my argument.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2014-10-28T08:44:17.184Z · LW(p) · GW(p)

I don't understand this. Why should my utility function value me having a large income or having a large amount of money?

With that being said, I certainly don't give anywhere near half of my income to charity and it's possible the values I actually live may be closer to what you describe than the situation I outline. I'm not sure, and not sure how it changes my argument.

Sounds like you answered your own question!

(It's one thing to have some simplistic far-mode argument about how this or that doesn't matter, or how we should sacrifice ourselves for others, but the near-mode nitty-gritty of the real world is another thing.)

comment by VAuroch · 2014-10-08T09:01:54.346Z · LW(p) · GW(p)

I accept all the argument for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4, and reach the state of mind where

everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.

and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says "OK, then, that's what I'll do, as best I can"; from my perspective, it's swallowing the bullet. At this point, your modus ponens is my modus tollens; I can't deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won't be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.

I don't think you're wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.

Replies from: Kaj_Sotala, NancyLebovitz, torekp, Gunnar_Zarncke, John_Maxwell_IV, AnthonyC, None
comment by Kaj_Sotala · 2014-10-09T09:00:30.002Z · LW(p) · GW(p)

This is why I prefer to frame EA as something exciting, not burdensome.

Replies from: NancyLebovitz, John_Maxwell_IV, VAuroch
comment by NancyLebovitz · 2014-10-15T14:18:58.824Z · LW(p) · GW(p)

Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think "we can actually make things better!", it's exciting. If you think "if you haven't succeeded immediately, it's all your fault", it's burdensome.

This just might have more general application.

Replies from: Capla, None
comment by Capla · 2014-10-21T01:17:50.955Z · LW(p) · GW(p)

If I'm working at my capacity, I don't see how it's my fault for not having the world fixed immediately. I can't do any more than I can do and I don't see how I'm responsible for more than what my efforts could change.

comment by [deleted] · 2014-10-15T22:29:16.562Z · LW(p) · GW(p)

From my perspective, it's "I have to think about all the problems in the world and care about them." That's burdensome. So instead I look vaguely around for 100% solutions to these problems, things where I don't actually need to think about people currently suffering (as I would in order to determine how effective incremental solutions are), things sufficiently nebulous and far-in-the-future that I don't have to worry about connecting them to people starving in distant lands.

comment by John_Maxwell (John_Maxwell_IV) · 2014-10-09T23:32:06.152Z · LW(p) · GW(p)

Do we have any data on which EA pitches tend to be most effective?

comment by VAuroch · 2014-10-09T22:27:32.296Z · LW(p) · GW(p)

I've read that. It's definitely been the best argument for convincing me to try EA that I've encountered. Not convincing, currently, but more convincing than anything else.

comment by NancyLebovitz · 2014-10-08T15:44:23.949Z · LW(p) · GW(p)

I've seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.

Replies from: Richard_Kennaway, VAuroch
comment by Richard_Kennaway · 2014-10-09T09:07:05.072Z · LW(p) · GW(p)

Once you've decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?

Peter Singer, to take one prominent example, argues that whether you do or not (and most people do), morally you cannot. To buy an expensive pair of shoes (he says) is morally equivalent to killing a child. Yvain has humorously suggested measuring sums of money in dead babies. At least, I think he was being humorous, but he might at the same time be deadly serious.

Replies from: Lumifer, tog, Dentin
comment by Lumifer · 2014-10-09T14:56:38.292Z · LW(p) · GW(p)

To buy an expensive pair of shoes (he says) is morally equivalent to killing a child.

I always find it curious how people forget that equality is symmetrical and works in both directions.

So, killing a child is morally equivalent to buying an expensive pair of shoes? That's interesting...

Replies from: army1987, Richard_Kennaway, pianoforte611
comment by A1987dM (army1987) · 2014-10-10T16:02:22.134Z · LW(p) · GW(p)

I always find it curious how people forget that equality is symmetrical and works in both directions.

See also http://xkcd.com/1035/, last panel.

So, killing a child is morally equivalent to buying an expensive pair of shoes? That's interesting...

One man's modus ponens... I don't lose much sleep when I hear that a child I had never heard of before was killed.

comment by Richard_Kennaway · 2014-10-09T16:32:18.894Z · LW(p) · GW(p)

No, except by interpreting the words "morally equivalent" in that sentence in a way that nobody does, including Peter Singer. Most people, including Peter Singer, think of a pair of good shoes (or perhaps the comparison was to an expensive suit, it doesn't matter) as something nice to have, and the death of a child as a tragedy. These two values are not being equated. Singer is drawing attention to the causal connection between spending your money on the first and not spending it on preventing the second. This makes buying the shoes a very bad thing to do: its value is that of (a nice thing) - (a really good thing); saving the child has the value (a really good thing) - (a nice thing).

The only symmetry here is that of "equal and opposite".

Did anyone actually need that spelled out?

Replies from: Lumifer
comment by Lumifer · 2014-10-09T17:13:35.982Z · LW(p) · GW(p)

These verbal contortions do not look convincing.

The claimed moral equivalence is between buying shoes and killing -- not saving -- a child. It's also claimed equivalence between actions, not between values.

Replies from: None, dthunt
comment by [deleted] · 2014-10-15T22:50:30.188Z · LW(p) · GW(p)

A lot of people around here see little difference between actively murdering someone and standing by while someone dies when we could easily save them. This runs contrary to the general societal view that it's much worse to kill someone by your own hand than to let them die without interfering, or even to interfere in a way that is sufficiently removed from the actual death.

For instance, what do you think George Bush Sr's worst action was? A war? No; he enacted an embargo against Iraq that extended over a decade and restricted basic medical supplies from going into the country. The infant mortality rate jumped up to 25% during that period, and other people didn't fare much better. And yet few people would think an embargo makes Bush more evil than the killers at Columbine.

This is utterly bizarre on many levels, but I'm grateful too -- I can avoid thinking of myself as a bad person for not donating any appreciable amount of money to charity, when I could easily pay to cure a thousand people of malaria per year.

Replies from: gjm
comment by gjm · 2014-10-15T23:26:38.943Z · LW(p) · GW(p)

When you ask how bad an action is, you can mean (at least) two different things.

  • How much harm does it do?
  • How strongly does it indicate that the person who did it is likely to do other bad things in future?

Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn't some specific person who's dying. So actually killing someone is "worse", if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there's no difference in harm done.

In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone's going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who'd put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.

That's perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one's own actions on the basis of harm done rather than evidence of character.

Replies from: None
comment by [deleted] · 2014-10-16T02:46:25.352Z · LW(p) · GW(p)

How strongly does it indicate that the person who did it is likely to do other bad things in future?

But this recurses until all the leaf nodes are "how much harm does it do?" so it's exactly equivalent to how much harm we expect this person to inflict over the course of their lives.

Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn't some specific person who's dying. So actually killing someone is "worse", if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there's no difference in harm done.

By the same token, it's easier to kill people far away and indirectly than up close and personal, so someone using indirect means and killing lots of people will continue to have an easy time killing more people indirectly. So this doesn't change the analysis that the embargo was ten thousand times worse than the school shooting.

Replies from: gjm
comment by gjm · 2014-10-16T20:21:53.263Z · LW(p) · GW(p)

But this recurses [...] so it's exactly equivalent to how much harm we expect [...]

For an idealized consequentialist, yes. However, most of us find that our moral intuitions are not those of an idealized consequentialist. (They might be some sort of evolution-computed approximation to something slightly resembling idealized consequentialism.)

So this doesn't change the analysis that the embargo was ten thousand times worse [...]

That depends on the opportunities the person in question has to engage in similar indirectly harmful behaviour. GHWB is no longer in a position to cause millions of deaths by putting embargoes in place, after all.

For the avoidance of doubt, I'm not saying any of this in order to deny (1) that the embargo was a more harmful action than the Columbine massacre, or (2) that the sort of consequentialism frequently advocated (or assumed) on LW leads to the conclusion that the embargo was a more harmful action than the Columbine massacre. (It isn't perfectly clear to me whether you think 1, or think 2-but-not-1 and are using this partly as an argument against full-on consequentialism.)

But if the question is "who is more evil, GHWB or the Columbine killers?", the answer depends on what you mean by "evil", and most people most of the time don't mean "causing harm"; they mean something they probably couldn't express in words but that probably ends up being close to "having personality traits that in our environment of evolutionary adaptedness correlate with being dangerous to be closely involved with" -- which would include, e.g., a tendency to respond to (real or imagined) slights with extreme violence, but probably wouldn't include a tendency to callousness when dealing with the lives of strangers thousands of miles away.

comment by dthunt · 2014-10-09T17:35:25.140Z · LW(p) · GW(p)

Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.

I'm somewhat persuaded by arguments that choices not made, which have consequences (like X preventably dying), can have moral costs.

Not INFINITELY EXPLODING costs, which is what you would need in order to experience the full brunt of the responsibility of "we are the last two people alive, and you're dying right in front of me, and I could help you, but I'm not going to" when deciding whether or not to buy shoes, when there are 7 billion of us, and you're actually dying over there, and someone closer to you is not helping you.

Replies from: tog
comment by tog · 2014-10-09T19:21:43.744Z · LW(p) · GW(p)

Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.

In case anyone else was curious about this, here's a quote:

Barbara Ann Radnofsky, a Houston lawyer and Democratic candidate for attorney general, says that a 22-word clause in a 2005 constitutional amendment designed to ban gay marriages erroneously endangers the legal status of all marriages in the state.

The amendment, approved by the Legislature and overwhelmingly ratified by voters, declares that “marriage in this state shall consist only of the union of one man and one woman.” But the troublemaking phrase, as Radnofsky sees it, is Subsection B, which declares:

“This state or a political subdivision of this state may not create or recognize any legal status identical or similar to marriage.”

Oops.

comment by pianoforte611 · 2014-10-10T01:58:05.235Z · LW(p) · GW(p)

Under utilitarianism, every instance of buying an expensive pair of shoes is the same as killing a child, but not every case of killing a child is equivalent to buying an expensive pair of shoes.

Replies from: Lumifer
comment by Lumifer · 2014-10-10T04:26:08.300Z · LW(p) · GW(p)

Are some cases of killing a child equivalent to buying expensive shoes?

Replies from: gjm, William_Quixote
comment by gjm · 2014-10-12T00:43:17.370Z · LW(p) · GW(p)

Those in which the way you kill the child is by spending money on luxuries rather than saving the child's life with it.

Replies from: Lumifer
comment by Lumifer · 2014-10-14T17:50:06.606Z · LW(p) · GW(p)

the way you kill the child is by spending money on luxuries

Do elaborate. How exactly does that work?

For example, I have some photographic equipment. When I bought, say, a camera, did I personally kill a child by doing this?

Replies from: gjm
comment by gjm · 2014-10-14T19:19:21.471Z · LW(p) · GW(p)

(I have the impression that you're pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we're discussing. But I'm going to take what you say at face value anyway.)

The context here is the idea (stated forcefully by Peter Singer, but he's by no means the first) that you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things, and that spending money on luxuries is ipso facto choosing not to give it to effective charities.

In which case: if you spent, say, $2000 on a camera (some cameras are much cheaper, some much more expensive) then that's comparable to the estimated cost of saving one life in Africa by donating to one of the most effective charities. In which case, by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.

(Not necessarily specifically a child. It may be more expensive to save children's lives, in which case it would need to be a more expensive camera.)

Of course there isn't a specific child you have killed all by yourself personally, but no one suggested there is.

So, that was the original claim that Richard Kennaway described. Your objection to this wasn't to argue with the moral principles involved but to suggest that there's a symmetry problem: that "killing a child is morally equivalent to buying an expensive luxury" is less plausible than "buying an expensive luxury is morally equivalent to killing a child".

Well, of course there is a genuine asymmetry there, because there are some quantifiers lurking behind those sentences. (Singer's claim is something like "for all expensive luxury purchases, there exists a morally equivalent case of killing a child"; your proposed reversal is something like "for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury".) Hence pianoforte611's response.

You seemed happy to accept an amendment that attempts to fix up the asymmetry. And (I assumed) you were still assuming for the sake of argument the Singer-ish position that buying luxury goods is like killing children, and aiming to show that there's an internal inconsistency in the thinking of those who espouse it because they won't accept its reversal.

But I think there isn't any such inconsistency, because to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.

Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong. Which would be fair enough if you weren't saying that what's wrong with the original principle is that its reversal is no good.

Replies from: Lumifer, army1987
comment by Lumifer · 2014-10-15T00:15:05.221Z · LW(p) · GW(p)

I have the impression that you're pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we're discussing.

Nope. I express my rhetorical contempt in, um, more obvious ways. It's not exactly that I don't understand, it's rather that I see multiple ways of proceeding and I don't know which one you have in mind (you, of course, do).

By the way, as a preface I should point out that we are not discussing "right" and "wrong" which, I feel, are anti-useful terms in this discussion. Morals are value systems and they are not coherent in humans. We're talking mostly about implications of certain moral positions and how they might or might not conflict with other values.

you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things

Yes, I accept that.

by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.

Not quite. I don't think you can make a causal chain there. You can make a probabilistic chain of expectations with a lot of uncertainty in it. Averages are not equal to specific actions -- for a hypothetical example, choosing a lifestyle which involves enough driving that in 10 years you cover the average number of miles per traffic fatality does not mean you kill someone every 10 years.
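
(For concreteness, a minimal sketch of that distinction, assuming purely for illustration that fatalities along a driving history follow a Poisson process: an expected value of one fatality over the period is not the same as one fatality actually happening.)

    import math

    # Illustrative assumption: driving "the average miles per fatality" over
    # the period gives an expected count of exactly one fatality, with the
    # actual count Poisson-distributed.
    expected_fatalities = 1.0

    p_none = math.exp(-expected_fatalities)   # chance of causing no fatality at all
    p_at_least_one = 1 - p_none

    print(f"P(no fatality)  = {p_none:.2f}")          # ~0.37
    print(f"P(at least one) = {p_at_least_one:.2f}")  # ~0.63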

However in this thread I didn't focus on that issue -- for the purposes of this argument I accepted the thesis and looked into its implications.

Your objection to this wasn't to argue with the moral principles involved but to suggest that there's a symmetry problem

Correct.

"killing a child is morally equivalent to buying an expensive luxury" is less plausible than "buying an expensive luxury is morally equivalent to killing a child"

It's not an issue of plausibility. It's an issue of bringing to the forefront the connotations and value conflicts.

Singer goes for shock value by putting an equals sign between what is commonly considered heinous and what's commonly considered normal. He does this to make the normal look (more) heinous, but you can reduce the gap from both directions -- making the heinous more normal works just as well.

your proposed reversal is something like "for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury".

I am not exactly proposing it; I am pointing out that the weaker form of this reversal (for some cases) logically follows from Singer's proposition and if you don't think it does, I would like to know why it doesn't.

to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.

Well, to accept the Singer position means that you kill a child every time you spend the appropriate amount of money (and I don't see what "luxuries" have to do with it -- you kill children by failing to max out your credit cards as well).

In common language, however, "killing a child" does not mean "fail to do something which could, we think, on the average, avoid one death somewhere in Africa". "Killing a child" means doing something which directly and causally leads to a child's death.

Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong.

No. I think the original principle is wrong, but that's irrelevant here -- in this context I accept the Singerian principle in order to more explicitly show the problems inherent in it.

Replies from: gjm
comment by gjm · 2014-10-15T21:38:54.566Z · LW(p) · GW(p)

Averages are not equal to specific actions

Taking that position conveniently gets one out of having to see buying a TV as equivalent to letting a child die -- but I don't see how it's a coherent one. (Especially if, as seems to be the case, you agree with the Singerian position that you're as responsible for the consequences of your inactions as of your actions.)

Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1/5, so that on average 20 children die but no particular child will definitely die. (Perhaps what it does is to increase their chances of dying in some fashion, so that even the ones that do die can't be known to be the result of your action.) Which do you prefer?

I say the first is clearly better, even though it might be more unpleasant to contemplate. On average, and the large majority of the time, it results in fewer deaths.
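
(A minimal sketch of the arithmetic behind that claim, assuming for illustration that the 100 deaths are independent:)

    from math import comb

    # Action A kills exactly 10 children; action B kills each of 100 children
    # independently with probability 1/5. Compare the two.
    n, p = 100, 0.2
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

    expected_b = sum(k * pk for k, pk in enumerate(pmf))       # 20.0
    p_b_worse = sum(pk for k, pk in enumerate(pmf) if k > 10)  # ~0.99

    print(f"Expected deaths under B: {expected_b:.1f}")
    print(f"P(B kills more than A's 10): {p_b_worse:.3f}")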

In which case, taking an action (or inaction) that results in the second is surely no improvement on taking an action (or inaction) that results in the first.

Incidentally, I'm happy to bite the bullet on the driving example. Every mile I drive incurs some small but non-zero risk of killing someone, and what I am doing is trading off the danger to them (and to me) against the convenience of driving. As it happens, the risk is fairly small, and behind a Rawlsian veil of ignorance I'm content to choose a world in which people drive as much as I do rather than one in which there's much less driving, much more inconvenience, and fewer deaths on the road. (I'll add that I don't drive very much, and drive quite carefully.)

making the heinous more normal works just as well.

I think that when you come at it from that direction, what you're doing is making explicit how little most people care in practice about the suffering and death of strangers far away. Which is fair enough, but my impression is that most thoughtful people who encounter the Singerian argument have (precisely by being confronted with it) already seen that.

the weaker form of this reversal [...] logically follows from Singer's proposition and if you don't think it does, I would like to know why it doesn't.

I agree: it does. The equivalence seems obvious enough to me that I'm not sure why it's supposed to change anyone's mind about anything, though :-).

I don't see what "luxuries" have to do with it

Only the fact that trading luxuries against other people's lives seems like a worse problem than trading "necessities" against other people's lives.

"Killing a child" means doing something which directly and causally leads to a child's death.

Sure. Which is why the claim people actually make (at least when they're being careful about their words) is not "buying a $2000 camera is killing a child" but "buying a $2000 camera is morally equivalent to killing a child".

Replies from: Lumifer
comment by Lumifer · 2014-10-16T16:41:19.239Z · LW(p) · GW(p)

but I don't see how it's a coherent one.

I said upfront that human morality is not coherent.

However I think that the root issue here is whether you can do morality math.

You're saying you can -- take the suffering of one person, multiply it by a thousand and you have a moral force that's a thousand times greater! And we can conveniently think of it as a number, abstracting away the details.

I'm saying morality math doesn't work, at least it doesn't work by normal math rules. "A single death is a tragedy; a million deaths is a statistic" -- you may not like the sentiment, but it is a correct description of human morality. Let me illustrate.

First, a simple example of values/preferences math not working (note: it's not a seed of a new morality math theory, it's just an example). Imagine yourself as an interior decorator and me as a client.

You: Welcome to Optimal Interior Decorating! How can I help you?
I: I would like to redecorate my flat and would like some help in picking a colour scheme.
You: Very well. What is your name?
I: Lumifer!
You: What is your quest?
I: To find out if strange women lyin' in ponds distributin' swords are a proper basis for a system of government!
You: What is your favourite colour?
I: Purple!
You: Excellent. We will paint everything in your flat purple.
I: Errr...
You: Please show me your preferred shade of purple so that we can paint everything in this particular colour and thus maximize your happiness.

And now back to the serious matters of death and dismemberment. You offered me a hypothetical:

Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1/5

Let me also suggest one for you.

You're in a boat, somewhere offshore. Another boat comes by and it's skippered by Joker, relaxing from his tussles with Batman. He notices you and cries: "Hey! I've got an offer for you!" Joker's offer looks as follows. Some time ago he put a bomb with a timer under a children's orphanage. He can switch off the bomb with a radio signal, but if he doesn't, the bomb will go off (say, in a couple of hours) and many dozens of children will be killed and maimed. Joker has also kidnapped a five-year-old girl who, at the moment, is alive and unharmed in the cabin.

Joker says that if you go down into the cabin and personally kill the five-year-old girl with your bare hands -- you can strangle her or beat her to death or something else, your choice -- he, Joker, will press the button and deactivate the bomb. It will not go off and you will save many, many children.

Now, in this example the morality math is very clear. You need to go down into the cabin and kill that little girl. Shut up, multiply, and kill.

And yet I have doubts about your ability to do that. I consider that (expected) lack of ability to be a very good thing.

Consider a concept such as decency. It's a silly thing, there is no place for it in the morality math. You got to maximize utility, right? And yet...

I suspect there were people who didn't like the smell of burning flesh and were hesitant to tie women to stakes on top of firewood. But then they shut up and multiplied by the years of everlasting torment the witch's soul would suffer, and picked up their torches and pitchforks.

I suspect there were people who didn't particularly enjoy dragging others to the guillotine or helping arrange an artificial famine to kill off the enemies of the state. But then they shut up and multiplied by the number of poor and downtrodden people in the country, and picked up their knives and guns.

In a contemporary example, I suspect there are people who don't think it's a neighbourly thing to scream at pregnant women walking to a Planned Parenthood clinic and shove highly realistic bloody fetuses into their face. But then they shut up and multiplied by the number of unborn children killed each day, and they picked up their placards and megaphones.

So, no, I don't think shut up and multiply is good advice always. Sometimes it's appropriate, but some other times it's a really bad idea and has bloody terrible failure modes. Often enough these other times are when people believe that morality math trumps all other considerations. So they shut up, multiply, and kill.

Replies from: lackofcheese
comment by lackofcheese · 2014-10-21T03:47:16.213Z · LW(p) · GW(p)

Accounting for possible failure modes and the potential effects of those failure modes is a crucial part of any correctly done "morality math".

Granted, people can't really be relied upon to actually do it right, and it may not be a good idea to "shut up and multiply" if you can expect to get it wrong... but then failing to shut up and multiply can also have significant consequences. The worst thing you can do with morality math is to only use it when it seems convenient to you, and ignore it otherwise.

However, none of this talk of failure modes represents a solid counterargument to Singer's main point. I agree with you that there is no strict moral equivalence to killing a child, but I don't think it matters. The point still holds that by buying luxury goods you bear moral responsibility for failing to save children who you could (and should) have saved.

comment by A1987dM (army1987) · 2014-10-14T20:47:08.471Z · LW(p) · GW(p)

$2000 ... that's comparable to the estimated cost of saving one life in Africa by donating to one of the most effective charities.

Now that the funding gap of the AMF has closed, I'm not sure this is still the case.

Replies from: gjm, Capla
comment by gjm · 2014-10-14T21:06:24.108Z · LW(p) · GW(p)

Yeah, I wondered about adding a note to that effect. But it seems unlikely to me that the AMF is that much more effective than everything else out there. Maybe it's $4000 now. Maybe it always was $4000. Or $1000. I don't think the exact numbers are very critical.

comment by Capla · 2014-10-21T01:26:05.179Z · LW(p) · GW(p)

Then tell me where I can most cheaply save a life.

Replies from: army1987
comment by A1987dM (army1987) · 2014-10-22T11:32:01.419Z · LW(p) · GW(p)

I don't know, and I wouldn't be surprised if there's no way to reliably do it with less than $5000.

comment by William_Quixote · 2014-10-10T11:47:43.098Z · LW(p) · GW(p)

Presumably if you stole a child's lunch money and bought a pair of shoes with it

comment by tog · 2014-10-09T19:18:55.192Z · LW(p) · GW(p)

NancyLebovitz:

I've seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.

RichardKennaway:

Once you've decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?

Richard's question is a good one, but even if there's no good answer, it's a psychological fact that people can be convinced to redirect their existing donations to cost-effective charities more easily than they can be convinced that charity should crowd out other spending. So the framing of EA that Nancy describes has practical value.

comment by Dentin · 2014-10-17T16:12:56.720Z · LW(p) · GW(p)

The biggest problem I have with 'dead baby' arguments is that I value babies significantly below a high-functioning adult. Given the opportunity to save one or the other, I would pick the adult; I don't find that babies have a whole lot of intrinsic value until they're properly programmed.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-10-21T03:08:49.791Z · LW(p) · GW(p)

If you don't take care of babies, you'll eventually run out of adults. If you don't have adults, the babies won't be taken care of.

I don't know what a balanced approach to the problem would look like.

comment by VAuroch · 2014-10-08T23:10:43.691Z · LW(p) · GW(p)

I'm not sure why one would optimize one's charitable donations for QALYs/utilons if one's goal wasn't improving the world. If you care about acquiring warm fuzzies, and donating to marginally improve the world is a means toward that end, then EA doesn't seem to affect you much, except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.

Replies from: hyporational
comment by hyporational · 2014-10-09T00:47:16.599Z · LW(p) · GW(p)

except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.

For me the idea of EA just made those lesser causes not generate fuzzies anymore, no guilt involved. It's difficult to enjoy a delusion you're conscious of.

comment by torekp · 2014-10-10T02:05:00.496Z · LW(p) · GW(p)

Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I've usually seen called "sympathy" and "personal distress" in the psych literature. Personal distress involves seeing the problem (primarily, or at least importantly) as one's own. Sympathy involves seeing it as that person's. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever - I feel your pain. Sorry, couldn't resist.)

Hey I just realized - if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.

Replies from: SaidAchmiz, VAuroch
comment by Said Achmiz (SaidAchmiz) · 2014-10-13T16:25:15.959Z · LW(p) · GW(p)

apply the sympathy-without-personal-distress trick to yourself

If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don't feel distress, what, exactly, is there to sympathize with?

Wouldn't you just shrug and dismiss the misfortune as irrelevant?

Replies from: hyporational, torekp
comment by hyporational · 2014-10-13T18:40:57.557Z · LW(p) · GW(p)

If you could switch off pain at will would you consider the tissue damage caused by burning yourself irrelevant?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-10-13T22:25:54.802Z · LW(p) · GW(p)

I would not. This is a fair point.

Follow-up question: are all things that we consider misfortunes similar to the "burn yourself" situation, in that there is some sort of "damage" that is part of what makes the misfortune bad, separately from and additionally to the distress/discomfort/pain involved?

Replies from: CCC
comment by CCC · 2014-10-14T07:32:55.671Z · LW(p) · GW(p)

Consider a possible invention called a neuronic whip (taken from Asimov's Foundation series). The neuronic whip, when fired at someone, does no direct damage but triggers all of the "pain" nerves at a given intensity.

Assume that Jim is hit by a neuronic whip, briefly and at low intensity. There is no damage, but there is pain. Because there is pain, Jim would almost certainly consider this a misfortune, and would prefer that it had not happened; yet there is no damage.

So, considering this counterexample, I'd say that no, not every possible misfortune includes damage. Though I imagine that most do.

Replies from: Lumifer, hyporational
comment by Lumifer · 2014-10-14T18:00:21.352Z · LW(p) · GW(p)

Consider a possible invention called a neuronic whip (taken from Asimov's Foundation series).

No need for sci-fi.

comment by hyporational · 2014-10-14T09:53:01.091Z · LW(p) · GW(p)

Much of what could be called damage in this context wouldn't necessarily happen within your body; you can take damage to your reputation, for example.

You can certainly be deluded about receiving damage especially in the social game.

Replies from: CCC
comment by CCC · 2014-10-14T14:29:33.506Z · LW(p) · GW(p)

That is true; but it's enough to create a single counterexample, so I can simply specify the neuronic whip being used under circumstances where there is no social damage (e.g. the neuronic whip was discharged accidentally, and no-one knew Jim was there to be hit by it).

Replies from: hyporational
comment by hyporational · 2014-10-14T14:58:57.761Z · LW(p) · GW(p)

Yes. I didn't mean to refute your idea in any way and quite liked it. Forgot to upvote it though. I merely wanted to add a real world example.

comment by torekp · 2014-10-13T21:01:38.480Z · LW(p) · GW(p)

Let's say you cut your finger while chopping vegetables. If you don't feel distress, you still feel the pain. But probably less pain: the CNS contains a lot of feedback loops affecting how pain is felt. For example, see this story from Scientific American. So sympathize with whatever relatively-attitude-independent problem remains, and act upon that. Even if there would be no pain and just tissue damage, as hyporational suggests, that could be sufficient for action.

comment by VAuroch · 2014-10-11T01:10:56.487Z · LW(p) · GW(p)

Huh, that sounds like the sympathy/empathy split, except I think reversed; empathy is feeling pain from others' distress, while sympathy is understanding others' pain as it reflects your own distress. Specifically mitigating 'feeling pain from others' distress' as applied to a broad sphere of 'others' has been a significant part of my turn away from an altruistic outlook; this wasn't hard, since human brains naturally discount distant people and I already preferred getting news through text, which keeps distant people's distress viscerally distant.

comment by Gunnar_Zarncke · 2014-10-08T21:00:40.214Z · LW(p) · GW(p)

and find it literally unbearable.

But you don't have to bear it alone. It's not as if one person has to care about everything (nor does each single person have to care for everyone).

Maybe the multiplication (in the example the care for a single bird multiplied by the number of birds) should be followed by a division by the number of persons available to do the caring (possibly adjusted by the expected amount of individual caring).
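
(A toy version of that adjustment, with made-up numbers throughout:)

    # Multiply the per-bird value by the number of birds, then divide the
    # total by the number of people available to do the caring.
    value_per_bird = 3.0      # assumed dollars of caring per bird
    birds = 1_000_000         # birds affected by the spill
    carers = 50_000           # assumed number of people willing to help

    total_care = value_per_bird * birds      # $3,000,000 of caring called for
    share_per_person = total_care / carers   # $60 per person

    print(f"Total: ${total_care:,.0f}; per person: ${share_per_person:,.0f}")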

Replies from: VAuroch, Lumifer
comment by VAuroch · 2014-10-08T23:05:05.637Z · LW(p) · GW(p)

Intellectually, I know that you are right; I can take on some of the weight while sharing it. Intuitively, though, I have impossibly high standards, for myself and for everything else. For anyone I take responsibility for caring for, I have the strong intuition that if I was really trying, all their problems would be fixed, and that they have persisting problems means that I am inherently inadequate. This is false. I know it is false. Nonetheless, even at the mild scales I do permit myself to care about, it causes me significant emotional distress, and for the sake of my sanity I can't let it expand to a wider sphere, at least not until I am a) more emotionally durable and b) more demonstrably competent.

Or in short, blur out the details and this is me:

"Yeah," said the Boy-Who-Lived, "that pretty much nails it. Every time someone cries out in prayer and I can't answer, I feel guilty about not being God."

Neville didn't quite understand that, but... "That doesn't sound good."

Harry sighed. "I understand that I have a problem, and I know what I need to do to solve it, all right? I'm working on it."

Replies from: AnthonyC
comment by AnthonyC · 2014-10-09T13:02:33.433Z · LW(p) · GW(p)

Also, I forget which post (or maybe HPMOR chapter) I got this from, but... it is not useful to assign fault to a part of the system you cannot change, and dividing by the size of the pre-existing altruist (let alone EA) community still leaves things feeling pretty huge.

Replies from: dthunt, dthunt, Jiro
comment by dthunt · 2014-10-09T18:11:59.764Z · LW(p) · GW(p)

Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring is not an instinct to quash.

That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix the problems in the absence of the distress. You can test that by finding something you viscerally do not care for and seeing how well your problem-finder works on it; if it's working fine, the emotional reaction is not helpful, and fixing it will make you feel better, and it won't come at the cost of smashing your instincts to fix the world.

comment by dthunt · 2014-10-09T18:16:29.147Z · LW(p) · GW(p)

It's Harry talking about Blame, chapter 90. (It's not very spoily, but I don't know how the spoiler syntax works and failed after trying for a few minutes)

"That's not how responsibility works, Professor." Harry's voice was patient, like he was explaining things to a child who was certain not to understand. He wasn't looking at her anymore, just staring off at the wall to her right side. "When you do a fault analysis, there's no point in assigning fault to a part of the system you can't change afterward, it's like stepping off a cliff and blaming gravity. Gravity isn't going to change next time. There's no point in trying to allocate responsibility to people who aren't going to alter their actions. Once you look at it from that perspective, you realize that allocating blame never helps anything unless you blame yourself, because you're the only one whose actions you can change by putting blame there. That's why Dumbledore has his room full of broken wands. He understands that part, at least."

I don't think I understand what you wrote there, AnthonyC; world-scale problems are hard, not immutable.

comment by Jiro · 2014-10-16T18:54:16.523Z · LW(p) · GW(p)

"A part of the system that you cannot change" is a vague term (and it's a vague term in the HPMOR quote as well). We think we know what it means, but then you can ask questions like "if there are ten things wrong with the system and you can change only one, but you get to pick which one, which ones count as a part of the system that you can't change?"

Besides, I would say that the idea is just wrong. It is useful to assign fault to a part of the system that you cannot change, because you need to assign the proper amount of fault as well as just assigning fault, and assigning fault to the part that you can't change affects the amounts that you assign to the parts that you can change.

comment by Lumifer · 2014-10-09T00:32:23.111Z · LW(p) · GW(p)

But you don't have to bear it alone.

That's one way for people to become religious.

Replies from: Weedlayer
comment by Weedlayer · 2014-10-09T08:14:55.195Z · LW(p) · GW(p)

I'm not sure what point is being made here. Distributing burdens is a part of any group, why is religion exceptional here?

Replies from: Lumifer
comment by Lumifer · 2014-10-09T14:36:41.403Z · LW(p) · GW(p)

Theory of mind, heh... :-)

The point is that if you actually believe in, say, Christianity (that is, you truly internally believe and not just go to church on Sundays so that neighbors don't look at you strangely), it's not your church community which shares your burden. It's Jesus who lifts this burden off your shoulders.

Replies from: Weedlayer
comment by Weedlayer · 2014-10-09T15:49:49.242Z · LW(p) · GW(p)

Ah, that's probably not what the parent meant then. What he was referring to was analogous to sharing your burden with the church community (or, in context, the effective altruism community).

Replies from: Lumifer
comment by Lumifer · 2014-10-09T15:51:55.200Z · LW(p) · GW(p)

that's probably not what the parent meant then

Yes, of course. I pointed out another way through which you don't have to bear it alone.

Replies from: Weedlayer
comment by Weedlayer · 2014-10-09T16:48:04.146Z · LW(p) · GW(p)

Ah, I understand. Thanks for clearing up my confusion.

comment by John_Maxwell (John_Maxwell_IV) · 2014-10-09T23:51:18.172Z · LW(p) · GW(p)

Here's a weird reframing. Think of it like playing a game like Tetris or Centipede. Yep, you are going to lose in the end, but that's not an issue. The idea is to score as many points as possible before that happens.

If you save someone's life on expectation, you save someone's life on expectation. This is valuable even if there are lots more people whose lives you could hypothetically save.

comment by AnthonyC · 2014-10-08T16:18:53.667Z · LW(p) · GW(p)

I accept all the argument for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4.

Ditto, though I diverged differently. I said, "Ok, so the problems are greater than available resources, and in particular greater than resources I am ever likely to be able to access. So how can I leverage resources beyond my own?"

I ended up getting an engineering degree and working for a consulting firm advising big companies on what emerging technologies to use/develop/invest in. Ideal? Not even close. But it helps direct resources in the direction of efficiency and prosperity, in some small way. I have to shut down the part of my brain that tries to take on the weight of the world, or my broken internal care-o-meter gets stuck at "zero, despair, crying at every news story." But I also know that little by little, one by one, painfully slowly, the problems will get solved as long as we move in the right direction, and we can then direct the caring that we do have in a bit more concentrated way afterwards. And as much as it scares me to write this, in the far future, when there may be quadrillions of people? A few more years of suffering by a few billion people here and now won't add or subtract much from the total utility of human civilization.

comment by [deleted] · 2015-05-05T20:24:34.489Z · LW(p) · GW(p)

I concluded that I am not a good person and won't be for the foreseeable future

Super relevant slatestarcodex post: Nobody Is Perfect, Everything is Commensurable.

Replies from: VAuroch
comment by VAuroch · 2015-05-10T05:48:30.187Z · LW(p) · GW(p)

Read that at the time and again now. Doesn't help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.

Replies from: None
comment by [deleted] · 2015-05-11T04:20:04.319Z · LW(p) · GW(p)

But what about the quantitative way? :(

Edit: Forget that... I finally get it. Like, really get it. You said:

and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders

Oh, my gosh... I think that's why I gave up Christianity. I wish I could say I gave it up because I wanted to believe what's true, but that's probably not true. Honestly, I probably gave it up because having the power to impact someone else's eternity through outreach or prayer, and sometimes not using that power, was literally unbearable for me. I considered it selfish to do anything that promoted mere earthly happiness when the Bible implied that outreach and prayer might impact someone's eternal soul.

And now I think that, personally, being raised Christian might have been an incredible blessing. Otherwise, I might have shared your outlook. But after 22 years of believing in eternal souls, actions with finite effects don't seem nearly as important as they probably would had I not come from the perspective that people's lives on earth are just specks, just one-infinitieth of their total existence.

comment by kilobug · 2014-10-09T12:27:04.827Z · LW(p) · GW(p)

Interesting article; it sounds like a very good introduction to scope insensitivity.

Two points where I disagree :

  1. I don't think birds are a good example of it, at least not for me. I don't care much for individual birds. I definitely wouldn't spend $3 or any significant time to save a single bird. I'm not a vegetarian, it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then going to eat a chicken at dinner. On the other hand, I do care about ecological disasters, massive bird death, damage to natural reserves, threats to a whole species, ... So a massive death of birds is something I'm ready to invest resources to prevent, but not the death of a single bird.

  2. I know it's quite taboo here, and most will disagree with me, but to me, the answer to how big the problems are is not charity, even "efficient" charity (which seems a very good idea on paper but I'm quite skeptical about the reliability of it), but more into structural changes - politics. I can't fail to notice that two of the "especially virtuous people" you named, Gandhi and Mandela, both were active mostly in politics, not in charity. To quote another one often labeled "especially virtuous people", Martin Luther King, "True compassion is more than flinging a coin to a beggar. It comes to see that an edifice which produces beggars needs restructuring."

Replies from: MugaSofer, Vaniver, CCC
comment by MugaSofer · 2014-10-10T13:47:24.205Z · LW(p) · GW(p)

I'm not a vegetarian, it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then going to eat a chicken at dinner.

This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism?

(Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)

Replies from: AmagicalFishy, dthunt
comment by AmagicalFishy · 2014-11-23T20:12:14.192Z · LW(p) · GW(p)

It may be more accurate to say something along the lines of "I mind large numbers of animals dying for no good reason. Food is a good reason, and thus I do not mind eating chicken. An oil spill is not a good reason."

comment by dthunt · 2014-10-17T18:08:22.383Z · LW(p) · GW(p)

Hey, I just wanted to chime in here. I found the moral argument against eating animals compelling for years but lived fairly happily in conflict with my intuitions there. I was literally saying, "I find the moral argument for vegetarianism compelling" while eating a burger, and feeling only slightly awkward doing so.

It is in fact possible (possibly common) for people to 'reason backward' from behavior (eat meat) to values ("I don't mind large groups of animals dying"). I think that particular example CAN be consistent with your moral function (if you really don't care about non-human animals very much at all) - but by no means is that guaranteed.

Replies from: MugaSofer, MugaSofer
comment by MugaSofer · 2014-10-18T17:32:29.623Z · LW(p) · GW(p)

That's a good point. Humans are disturbingly good at motivated reasoning and compartmentalization on occasion.

comment by MugaSofer · 2014-10-18T17:07:07.208Z · LW(p) · GW(p)

Double-post.

comment by Vaniver · 2014-10-09T15:22:25.859Z · LW(p) · GW(p)

I don't think birds are a good example of it, at least not for me.

Birds are the classic example, both in the literature and (through the literature) here.

comment by CCC · 2014-10-09T13:54:11.653Z · LW(p) · GW(p)

I know it's quite taboo here, and most will disagree with me, but to me, the answer to how big the problems are is not charity, even "efficient" charity (which seems a very good idea on paper but I'm quite skeptical about the reliability of it), but more into structural changes - politics.

I very strongly agree with your point here, but would like to add that the problem of finding a political structure which properly maximises the happiness of the people living under it is a very difficult one, and missteps are easy.

comment by blacktrance · 2014-10-20T00:14:57.944Z · LW(p) · GW(p)

Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.

comment by timujin · 2014-10-12T08:27:53.120Z · LW(p) · GW(p)

Did the oil bird mental exercise. Came to conclusion that I don't care at all about anyone else, and am only doing good things for altruistic high and social benefits. Sad.

Replies from: Capla, Richard_Kennaway
comment by Capla · 2014-10-21T01:10:13.000Z · LW(p) · GW(p)

If you actually think it's sad (Do you?), then you have a higher-order set of values that wants you to want to care about others.

If you want to want to care, you can do things to change yourself so that you do care. Even more importantly, you can begin to act as if you care, because "caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway."

All I know is that I want to be the sort of person who cares. So, I act as that sort of person, and thereby become her.

Replies from: Philip_W
comment by Philip_W · 2014-12-09T15:39:32.173Z · LW(p) · GW(p)

you can do things to change yourself so that you do care.

Would you care to give examples or explain what to look for?

Replies from: Capla
comment by Capla · 2014-12-09T17:07:40.855Z · LW(p) · GW(p)

The biggest thing is just to act like you are already the sort of person who does care. Go do the good work.

Find people who are better than you. Hang out with them. "You become like the 6 people you spend the most time with" and all that. (I remember reading the chapter on penetrating Azkaban in HP:MoR, and feeling how much I didn't care. I knew that there are places in the world where the suffering is as great as in that fictional place, but it didn't bother me; I would just go about my day and go to sleep, whereas the fictional Harry is deeply shaken by his experience. I felt, "I'm not Good [in the moral sense] enough" and then thought that if I'm not good enough, I need to find people who are, who will help me be better. I need to find my Hermiones.)

I'm trying to find the most Good people of my generation, but I realized long ago that I shouldn't be looking for Good people, so much as I should be looking for people who are actively seeking to be better than they are. (If you want to be as Good as you can be, please message me. Maybe we can help each other.)

My feeling of moral inadequacy compared to Harry's feelings towards Azkaban (fictional) isn't really fair. My brain isn't designed to be moved by abstract concepts. Harry (fictional) saw that suffering first hand and was changed by it; I only mentally multiply. I'm thinking that I need to put myself in situations where I can experience the awfulness of the world viscerally. People make fun of teenagers going to "help" build houses in the third world: it's pretty massively inefficient to ship untrained teenagers to Mexico to do manual labor (or only sort of do it), when their hourly output would be much higher if they just got a college degree and donated. Yet I know at least one person (someone who I respect, one of my "Hermiones") who went to build houses in Mexico for a month and was heavily impacted by it, and it spurred her to be of service more generally. (She told me that on the flight back to the States she was emotionally upset because, while she was homesick and tired of eating beans and rice for every meal (she's vegan), she knew that life would get in the way, and she would lose the perspective she had in Mexico. The test tomorrow has a way of seeming all-important, and she was afraid of losing that perspective of how much worse other people had it, and what the Truly important things are. She got a tattoo that reads "Gratitude" in Spanish, as a permanent and perpetual reminder.)

Maybe you need to go see squalor? I haven't, so I can't say. I have thought that I should choose someone concrete to help, perhaps on a weekly basis, so that when I'm considering buying something I don't need, my thought process isn't "If I buy this, that's 4 dollars less that I can give to charity", but instead, "If I buy this, Annie won't get that vaccine." I haven't implemented this yet, so I can't say how effective it will be. Social pressure might help: let me know if you want to try something like this with me.

Does that help?

Replies from: Lumifer
comment by Lumifer · 2014-12-09T18:30:00.817Z · LW(p) · GW(p)

Maybe you need to go see squalor? I haven't, so I can't say.

I have seen squalor, and in my particular case it did not recalibrate my care-o-meter at all. YMMV, of course.

Replies from: TomStocker
comment by TomStocker · 2015-05-14T13:01:45.776Z · LW(p) · GW(p)

Living in pain sent my care-o-meter from below average to full. Seeing squalor definitely did something. I think it probably depends how you see it - did you talk to people as equals, or see them as different types of people you couldn't relate to / who didn't fit certain criteria? Being surrounded by suffering from a young age doesn't seem to make people care - it's being shocked by suffering after not having had much of it around that is occasionally very powerful - like the story about the Buddha growing up in the palace and then seeing sickness, death and age for the first time?

comment by Richard_Kennaway · 2014-10-12T19:11:00.060Z · LW(p) · GW(p)

Came to conclusion that I don't care at all about anyone else, and am only doing good things for altruistic high and social benefits.

What is the difference between an altruistic high and caring about other people? Isn't the former what the latter feels like?

Replies from: PeterisP, JoshuaMyer, timujin, hyporational
comment by PeterisP · 2014-10-15T16:07:25.516Z · LW(p) · GW(p)

The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.

The built-in care-o-meter of your body has known faults and biases, and it measures something often related to (at least in the classic hunter-gatherer society model), but generally different from, actually caring about other people.

comment by JoshuaMyer · 2014-10-19T21:54:19.129Z · LW(p) · GW(p)

I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was "how should I prevent this from happening in the future?" Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by genuine altruism, but it doesn't take place on the beach. I certainly never owned an oil rig, and couldn't really competently discuss the problems associated with actual large high-pressure systems. Does anyone here know if oil spills are an unavoidable consequence of the best long-term strategy for human development? That might be important to an informed decision on how much value to place on the cost of the accident, which would inform my decision about how much of my resources I should devote to cleaning the birds.

From another perspective, it's a lot easier to quantify the cost for some outcomes ... This makes it genuinely difficult to define truly altruistic strategies for entities experiencing scope insensitivity. And along that line, giving away money because of scope insensitivity IS amoral. It defers judgement to a poorly defined entity which might manage our funds well or deplorably. Founding a cooperative for the purpose of beach restoration seems like a more ethically sound goal, unless of course you have more information about the bird cleaners. The sad truth is that making the right choice often depends on information not readily available, and the lesson I take from this entire discussion is simply how important it is that humankind evolve more sophisticated ways of sharing large amounts of information efficiently, particularly where economic decisions are concerned.

comment by timujin · 2014-10-13T06:26:50.972Z · LW(p) · GW(p)

Because I wouldn't actually care if my actions actually help, as long as my brain thinks they do.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-10-13T08:17:12.642Z · LW(p) · GW(p)

Are you favouring wireheading then? (See hyporational's comment.) That is, finding it oppressively tedious that you can only get that feeling by actually going out and helping people, and wishing you could get it by a direct hit?

Replies from: Jiro, timujin, Philip_W
comment by Jiro · 2014-10-13T14:34:50.627Z · LW(p) · GW(p)

I think he wants to do things for which his brain whispers "this is altruistic" right now. It is true that wireheading would lead his brain to whisper that about everything. But from his current position, wireheading is not a benefit, because he values future events according to his current brain state, not his future brain state.

comment by timujin · 2014-10-15T09:04:15.425Z · LW(p) · GW(p)

No: just as I eat sweets for the sweet pleasure, not for getting sugar into my body, I still wouldn't wirehead into constantly feeling sweetness in my mouth.

Replies from: lmm
comment by lmm · 2014-10-17T21:03:59.983Z · LW(p) · GW(p)

I find this a confusing position. Please expand

Replies from: timujin
comment by timujin · 2014-10-18T18:42:43.197Z · LW(p) · GW(p)

Funny thing. I started out expanding this, trying to explain it as thoroughly as possible, and, all of a sudden, it became confusing to me. I guess, it was not a well thought out or consistent position to begin with. Thank you for a random rationality lesson, but you are not getting this idea expanded, alas.

comment by Philip_W · 2014-12-09T10:38:06.303Z · LW(p) · GW(p)

Assuming his case is similar to mine: the altruism-sense favours wireheading - it just wants to be satisfied - while other moral intuitions say wireheading is wrong. When I imagine wireheading (like timujin imagines having a constant taste of sweetness in his mouth), I imagine still having that part of the brain which screams "THIS IS FAKE, YOU GOTTA WAKE UP, NEO". And that part wouldn't shut up unless I actually believed I was out (or it's shut off, naturally).

When modeling myself as sub-agents, then in my case at least the anti-wireheading and pro-altruism parts appear to be independent agents by default: "I want to help people/be a good person" and "I want it to actually be real" are separate urges. What the OP seems to be appealing to is a system which says "I want to actually help people" in one go - sympathy, perhaps, as opposed to satisfying your altruism self-image.

comment by hyporational · 2014-10-13T05:39:23.386Z · LW(p) · GW(p)

What is the difference between an altruistic high and caring about other people? Isn't the former what the latter feels like?

If there's no difference we arrive at the general problem of wireheading. I suspect very few people who identify themselves as altruists would choose being wireheaded for altruistic high. What are the parameters that would keep them from doing so?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-10-13T08:17:02.237Z · LW(p) · GW(p)

If there's no difference we arrive at the general problem of wireheading.

Yes. Let me change my question. If (absent imaginary interventions with electrodes or drugs that don't currently exist) an altruistic high is, literally, what it feels like when you care about others and act to help them, then saying "I don't care about them, I just wanted the high" is like saying "I don't enjoy sex, I just do it for the pleasure", or "A stubbed toe doesn't hurt, it just gives me a jolt of pain." In short, reductionism gone wrong, angst at contemplating the physicality of mind.

Replies from: hyporational
comment by hyporational · 2014-10-13T15:01:39.304Z · LW(p) · GW(p)

It seems to me you can care about having sex without having the pleasure as well as care about not stubbing your toe without the pain. Caring about helping other people without the altruistic high? No problem.

It's not clear to me where the physicality of mind or reductionism gone wrong enter the picture, not to mention angst. Oversimplification is aesthetics gone wrong.

ETA: I suppose it would be appropriately generous to assume that you meant altruistic high as one of the many mind states that caring feels like, but in many instances caring in the sense that I'm motivated to do something doesn't seem to feel like anything at all. Perhaps there's plenty of automation involved and only novel stimuli initiate noticeable perturbations. It would be an easy mistake to only count the instances where caring feels like something, which I think happened in timujin's case. It would also be a mistake to think you only actually care about something when it doesn't feel like anything.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-10-15T08:14:58.497Z · LW(p) · GW(p)

It seems to me you can care about having sex without having the pleasure as well as care about not stubbing your toe without the pain. Caring about helping other people without the altruistic high? No problem.

I was addressing timujin's original comment, where he professed to desire the altruistic high while being indifferent to other people, which on the face of it is paradoxical. Perhaps, I speculate, noticing that the feeling is a thing distinct from what the feeling is about has led him to interpret this as discovering that he doesn't care about the latter.

Or, it also occurs to me, perhaps he is experiencing the physical feeling without the connection to action, as when people taking morphine report that they still feel the pain, but it no longer hurts.

Brains can go wrong in all sorts of ways.

comment by NancyLebovitz · 2014-10-07T14:07:52.955Z · LW(p) · GW(p)

It's easy to look at especially virtuous people — Gandhi, Mother Theresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.

Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.

Replies from: None
comment by [deleted] · 2014-10-07T14:45:03.640Z · LW(p) · GW(p)

Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.

Yes, that is how adults help in real life. In science we chop off little sub-sub-problems we think we can address to do our part to address larger questions whose answers no one person will ever find alone, and thus end up doing enormous work on the shoulders of giants. It works roughly the same in activism.

comment by hyporational · 2014-10-16T01:39:06.793Z · LW(p) · GW(p)

I see suffering the whole day in healthcare but I'm actually pretty much numbed to it. Nothing really gets to me, and if it did it could be quite crippling. Sometimes I watch sad videos or read dramatizations of real events to force myself to care for a while, to keep me from forgetting why I show up at work. Reading certain types of writings by rationalists helps too.

You shouldn't get more than glimpses of the weight of the world, or rather you shouldn't let more than glimpses through the defences, if you want to be able to function.

"Will the procedure hurt?" asked the patient. "Not if you don't sting yourself by accident!" answered the doctor with the needle.

comment by Gunnar_Zarncke · 2014-10-07T21:03:24.799Z · LW(p) · GW(p)

I'm not sure what to make of it, but one could run the motivating example backwards:

this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.

He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can.

"He pictures himself helping the people and wading deep in all that sticky oil and imagines how long he'd endure that and quickly arrives at the conclusion that he doesn't care that much for the birds really. And would rather prefer to get away from that mess. His estimate how much it is worth for him to rescue 1000 birds is quite low."

What can we derive from this if we shut-up-and-calculate? If his value for rescuing 1000 birds is now $10, then 1 million birds still come out at $10K. But it could be zero now, if not negative (he'd feel he should get money for saving the birds). Does that mean, if we extrapolate, that he should strive to eradicate all birds? Surely not.

It appears to mean that our care-o-meter plus system-2-multiply gives meaningless answers.

Our empathy towards beings is in large part dependent on socialization and context. Taking it out of its ancestral environment is bound to cause problems I fear individuals can't solve. But maybe societies can.

Replies from: So8res
comment by So8res · 2014-10-10T06:27:04.948Z · LW(p) · GW(p)

That sounds like a failure of the thought experiment to me. When I run the bird thought experiment, it's implicitly assumed that there is no transportation cost in/out of the thought experiment, and the negative aesthetic cost from imagining myself in the mess is filtered out. The goal is to generate a thought experiment that helps you identify the "intrinsic" value of something small (not really what I mean, but I'm short on time right now; I hope you can see what I'm pointing at), and obviously mine aren't going to work for everyone.

(As a matter of fact, my actual "bird death" thought experiment is different than the one described above, and my actual value is not $3, and my actual cost per minute is nowhere near $1, but I digress.)

If this particular thought experiment grates for you, you may consider other thought experiments, like considering whether you would prefer your society to produce an extra bic lighter or an extra bird-cleaning on the margin, and so on.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-10-10T06:54:32.944Z · LW(p) · GW(p)

That sounds like a failure of the thought experiment to me.

You didn't give details on how or how not to set up the thought experiment. I took it to mean 'your spontaneous valuation when imagining the situation' followed by an objective 'multiplication'. Now, my reaction wasn't one of aversion, but I tried to think of possible reactions and what would follow from them.

The goal is to generate a thought experiment that helps you identify the "intrinsic" value of something small.

But the 'intrinsic' value appears to heavily depend on the setup of the thought experiment. And if humans value small things nonlinearly more than large/many things, one can hack the valuation by constraining the thought experiment to only small things.

Nothing wrong with mind hacks per se. I have read your productivity post. But I don't think they help in establishing 'intrinsic' value. For personal self-modification (motivation) they seem to work nicely.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-10-07T14:00:08.709Z · LW(p) · GW(p)

Wow this post is pretty much exactly what I've been thinking about lately.

Saving a person's life feels great.

Yup. Been there. Still finding a way to use that ICU-nursing high as motivation for something more generalized than "omg take all the overtime shifts."

Also, I think that my brain already runs on something like virtue ethics, but that the particular thing I think is virtuous changes based on my beliefs about the world, and this is probably a decent way to do things for reasons other than visceral caring. (I mean, I do viscerally care about being virtuous...)

comment by diegocaleiro · 2014-10-10T18:13:37.912Z · LW(p) · GW(p)

Cross commented from the EA forum

First of all. Thanks Nate. An engaging outlook on overcoming point and shoot morality.

You can stop trusting the internal feelings to guide your actions and switch over to manual control.

Moral Tribes, Joshua Greene's book, addresses the question of when to do this manual switch. Interested readers may want to check it out.

Some of us - where "us" here means people who are really trying - take your approach. They visualize the sinking ship, the hanging souls silently glaring at them in desperation, they shut up and multiply, and to the extent possible, they let go of the anchoring emotions that are sinking the ship.

They act.

This approach is invaluable, and I see it working for some of the heroes of our age: you, Geoff Anders, Bastien Stern, Brian Tomasik, Julian Savulescu. Yet I don't think it's the only way to help a lot - and we need all the approaches we can get - so I'll lay out the other one, currently a minority approach, best illustrated by Anders Sandberg.

Like those you address, some people really want to care; however, the emotional bias that is stopping them from doing so is not primarily scope insensitivity, but something akin to loss aversion, except that it manifests as a distaste for negative motivation and an overwhelming drive for positive motivation. When facing a choice between

  • Join our team of Transhumanists who will improve the human condition
  • Help us transform the world into a place as happy as possible
  • Help us prevent catastrophe, hurry up, people are suffering
  • Join our cause, we will decrease risks that humanity will be extinct

they will always pick one of the top two, because those are framed positively. The bottom two may sound more pressing, but they mention negative, undesirable, uncomfortable forces. They are staged in a frame where we feel overpowered by nature. Nature is a force trying to change our state into a worse state, and you are asked to join the gatekeepers who will contain the destructive invasion that is to come.

The top two, however, are not only more cheerful, they are set in a completely different frame: you are given a grandiose vision of a possible future, and told you can be part of the force that will sculpt it. What they tell you is: we have the tools for you; join us, and with our amazing equipment we will reshape the earth.

I am one of these people, Stephen Frey, João Fabiano, Anders Sandberg, being some other examples. David Pearce once attentively noticed this underlying characteristic, and jokingly attributed to this category the welcoming name of "Positive Utilitarian".

Some of us, who are driven by this cheerful positive idea, have found a way to continue our efforts on the right lane despite that strong inclination to go towards the riches instead of away from darkness.

We are driven by the awesomeness of it all.

Pretend for an instant that the problems of the world are shades, pitch-black shades. They are spread around everywhere. The world is mostly dark. You now find yourself in a world illuminated in exact proportion to the good things it has; all you see around you are faint glimpses of beauty and awesome here and there, candles of good intention, and the occasional lamps of concerted effort. What moves you is an exploratory urge. You want to see more, to feel more. Those dark areas are not helping you with that. Since they are problems, your job is to be inventive, to find solutions. You are told that once upon a time it was all dark, until your ancestors were able to ignite the first twigs into a bonfire. Sitting by the fire you hear wise sages' stories of the dark age that lies behind us; Hans Rosling, Robert Wright, Jared Diamond and Steve Pinker show how all the gadgets, symbols and technologies we created gave light to all we see now. By now we have lamps of many kinds and shapes, but you know more can be found. With diligence, smarts and help, you know we can beam lasers and create floodlights, we can solve things at scale, we can cause the earth to shine. But you are not stopping there, you are ambitious. You want to harness the sun.

It so happens that there's a million billion billion suns out there, so we, too, shut up and multiply.

Why do we look at the world this way? Why do we feel energized by this metaphor but not by the prevention one? I don't know. As long as both teams continue in this lifelong quest together, and as long as both shut up and multiply, it doesn't matter. At the end of the day, we act alike. I just want to make sure that we get as many as possible, as strong as possible, and set the controls for the heart of the sun.

comment by tjohnson314 · 2014-10-10T11:12:30.171Z · LW(p) · GW(p)

I'm sympathetic to the effective altruist movement, and when I do periodically donate, I try to do so as efficiently as possible. But I don't focus much effort on it. I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.

For example:

  • The best way for me to improve math and science education is to work on my own teaching ability.
  • The best way for me to improve the mental health of college students is to make time to support friends that struggle with depression and suicidal thoughts.
  • The best way for me to stop racism or sexism is to first learn to recognize and quash it in myself, and then to expose it when I encounter it around me.

Changing my own actions and attitudes is hard, but it's also the one area where I have the most control. And as I've worked on this for the past few years, I've managed to create a positive feedback loop by slowly increasing the size of my care-o-meter. Empathy is a useful habit that can be trained, just as much as rationality can be.

I realize that it's hard to get an accurate sense of the impact a donation can have for someone on the other side of the world. It's possible that I'm being led astray by my care-o-meter to focus on people near at hand. I do in principle care equally about people in other parts of the world, even if my care-o-meter hasn't figured that out yet. So if you'd like to prove to me that I can be more effective by focusing my efforts elsewhere, I'd be happy to listen. (I am a poor grad student, so donating large amounts of money isn't really feasible for me yet, although I do realize I still make far more than the world average.) For now, I'm doing the best that I can in the way that I know how.

To conclude, I wouldn't call myself an effective altruist, but I do count them as allies. And I wouldn't want to convert everyone to my perspective; as others have mentioned already, it's good to have a wide range of different approaches.

Replies from: Ixiel, Philip_W, Philip_W, Capla
comment by Ixiel · 2014-10-10T21:44:27.383Z · LW(p) · GW(p)

I'm sympathetic to the effective altruist movement, and when I do periodically donate, I try to do so as efficiently as possible.

I would love to see a splinter group, Efficient Altruism. I have no desire to give as much as I can afford, but I feel VERY strongly about giving as efficiently as I can to the causes I support. When I read - I think from EA itself - the estimated difference in efficiency between African aid organizations, it changed my whole perspective on charity.

comment by Philip_W · 2014-12-09T13:51:30.824Z · LW(p) · GW(p)

(separated from the other comment, because they're basically independent threads).

I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.

This sounds unlikely. You say you're improving the education and mental health of on-the-order-of 100 students. Deworm the World and SCI improve school attendance by 25%, meaning you would have the same effect, as a first guess and to first order at least, by donating on-the-order-of $500/yr. And that's just one of the side-effects of ~600 people not feeling ill all the time. So if you primarily care about helping people live better lives, $50/yr to SCI ought to equal your stated current efforts.

However, that doesn't count flow-through effects. EA is rare enough that you might actually get a large portion of the credit for convincing someone to donate to a more effective charity, or even become an effective altruist: expected marginal utility isn't conserved across multiple agents (if five agents can each press a button, and all of them must press their buttons to save one person's life, then each of them has the full choice of saving or failing to save someone, assuming they expect the others to press the button too, so each of them has the expected marginal utility of saving a life). Since it's probably more likely that you convince someone else to donate more effectively than that one of the dewormed people will be able to have a major impact because of their deworming, flow-through effects should be very strong for advocacy relative to direct donation.

To quantify: Americans give 1% of their incomes to poverty charities, so let's make that $0.5k/yr/student. Let's say that convincing one student to donate to SCI would get them to donate that much more effectively about 5 years sooner than otherwise (those willing would hopefully be roped in eventually regardless). Let's also say SCI is five times more effective than their current charities. That means you win $2k to SCI for every student you convince to alter their donation patterns.

You probably enjoy helping people directly (it makes you happy, which increases your productivity and credibility, and is also just nice), and helping them will earn you social credit, which makes it more likely you'll convince them, so you could mostly keep doing what you're doing and just add the advocacy bit in whatever way you see fit. Suppose you manage to convince 2.5% of each class; that means you get around $5k/year to SCI, or about 100 times more impact than what you're doing now, just by doing the same AND advocating that people donate more effectively. That's six thousand extra sick people - more than a third of them children and teens - you would be curing every year.
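As a minimal sketch of the arithmetic above - the class size, conversion rate, switch-earlier window, and effectiveness ratio are the rough assumptions named in this comment, not data:

```python
# Rough sketch of the advocacy estimate; every number is an assumption
# taken from the comment above, not a measurement.
donation_per_student = 500    # $/yr a student already gives to poverty charities (~1% of income)
years_earlier = 5             # years sooner they switch to an effective charity
effectiveness_ratio = 5       # assume SCI is 5x more effective than their current charities

# Switching $500/yr for 5 years to a charity 5x as effective is worth roughly
# (1 - 1/5) of that money going to SCI outright.
gain_per_convert = donation_per_student * years_earlier * (1 - 1 / effectiveness_ratio)
print(gain_per_convert)       # 2000.0 -> "$2k to SCI per student convinced"

class_size = 100              # on-the-order-of 100 students per year
conversion_rate = 0.025       # convince 2.5% of each class
gain_per_year = gain_per_convert * class_size * conversion_rate
print(gain_per_year)          # 5000.0 -> "~$5k/year to SCI"

direct_effort_equivalent = 50 # the comment's $50/yr figure for current direct efforts
print(gain_per_year / direct_effort_equivalent)  # ~100x impact
```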

Note: this is a rough first guess. Better numbers and the addition of ignored or forgotten factors may influence the results by more than one order of magnitude. If you decide to consider this advice, check the results thoroughly and look for things I missed. 80000hours has a few pages on advocacy, if you're interested.

Replies from: tjohnson314
comment by tjohnson314 · 2014-12-26T18:27:28.267Z · LW(p) · GW(p)

(Sorry, I didn't see this until now.)

I'll admit I don't really have data for this. But my intuitive guess is that students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them. At least for me, that's a large part of why I'm in the field that I chose.

It's possible that I'm being misled by the warm fuzzy feelings I get from helping someone face-to-face, which I don't get from sending money halfway across the world. But it seems like there's many things that matter in life that don't have a price tag.

Replies from: Philip_W
comment by Philip_W · 2015-01-02T16:19:50.044Z · LW(p) · GW(p)

I'll admit I don't really have data for this. But my intuitive guess is that ...

Have you made efforts to research it? Either by trawling papers or by doing experiments yourself?

students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them.

Your objection had already been accounted for: $500 to SCI = around 150 extra people attending school for a year. I estimated the proportion of students who will have a relationship with their teacher as good as the average you provide at around 1 in 150.

But it seems like there's many things that matter in life that don't have a price tag.

That sounds deep, but is obviously false: would you condemn yourself to a year of torture so that you get one unit of the thing that allegedly doesn't have a price tag (for example, a single minute of conversation with a student where you feel a real connection)? Would you accept a one-in-a-million chance of getting punched on the arm in order to get the same unit? If the answers to these questions are [no] and [yes] respectively, as I would expect them to be, those are outer limits on the price range. Getting to the true value is just a matter of convergence.

Perhaps more to the point, though, those people you would help halfway across the world are just as real, and their lives just as filled with "things that don't have a price tag" as people in your environment. For $3000, one family is not torn apart by a death from malaria. For $3, one child more attends grade school regularly for a year because they are no longer ill from parasitic stomach infections. These are not price tags, these are trades you can actually make. Make the trades, and you set a lower limit. Refuse them, and the maximum price tag you put on a child's relationship with their teacher is set, period.

It does seem very much like you're guided by your warm fuzzies.

Replies from: tjohnson314
comment by tjohnson314 · 2015-01-05T21:52:59.796Z · LW(p) · GW(p)

Have you made efforts to research it?

This is based on my own experience, and on watching my friends progress through school. I believe that the majority of successful people find their life path because someone inspired them. I don't know where I could even look to find hard numbers on whether that's true or not, but I'd like to be that person for as many people as I can.

That sounds deep, but is obviously false... It does seem very much like you're guided by your warm fuzzies.

My emotional brain is still struggling to accept that, and I don't know why. I'll see if I can coax a coherent reason from it later. But my rational brain says that you're right and I was wrong. Thanks.

comment by Philip_W · 2014-12-09T13:49:28.668Z · LW(p) · GW(p)

Empathy is a useful habit that can be trained, just as much as rationality can be.

Could you explain how? My empathy is pretty weak and could use some boosting.

Replies from: tjohnson314
comment by tjohnson314 · 2014-12-26T18:46:56.332Z · LW(p) · GW(p)

For me it works in two steps: 1) Notice something that someone would appreciate. 2) Do it for them.

As seems to often be the case with rationality techniques, the hard part is noticing. I'm a Christian, so I try to spend a few minutes praying for my friends each day. Besides the religious reasons, which may or may not matter to you, I believe it puts me in the right frame of mind to want to help others. A non-religious time of focused meditation might serve a similar purpose.

I've also worked on developing my listening skills. Friends frequently mention things that they like or dislike, and I make a special effort to remember them. I also occasionally write them down, although I try not to mention that too often. For most people, there's a stronger signaling effect if they think you just happened to remember what they liked.

Replies from: Philip_W
comment by Philip_W · 2015-01-02T16:35:27.666Z · LW(p) · GW(p)

You seem to be talking about what I would call sympathy, rather than empathy. As I would use it, sympathy is caring about how others feel, and empathy is the ability to (emotionally) sense how others feel. The former is in fine enough state - I am an EA, after all - it's the latter that needs work. Your step (1) could be done via empathy or pattern recognition or plain listening and remembering as you say. So I'm sorry, but this doesn't really help.

comment by Capla · 2014-10-21T01:35:36.821Z · LW(p) · GW(p)

Empathy is a useful habit that can be trained, just as much as rationality can be.

This is key.

comment by Weedlayer · 2014-10-09T12:40:33.495Z · LW(p) · GW(p)

It's also worth mentioning that cleaning birds after an oil spill isn't always even helpful. Some birds, like gulls and penguins, do pretty well. Others, like loons, tend to do poorly. Here are some articles concerning cleaning oiled birds.

http://www.npr.org/templates/story/story.php?storyId=127749940

http://news.discovery.com/animals/experts-kill-dont-clean-oiled-birds.htm

And I know that the oiled birds issue was only an example, but I just wanted to point out that this issue, much like the "Food and clothing aid to Africa" examples you often see, isn't necessarily a good idea even ignoring opportunity cost.

comment by mwengler · 2014-11-30T22:31:55.901Z · LW(p) · GW(p)

I wonder if, in some interesting way, the idea that the scope of what needs doing for other people is so massive as to preclude any rational response other than working full time on it is related to the insight that voting doesn't matter. In both cases, the math seems to preclude bothering to do something which would be easy, but would help in the aggregate.

My dog recently tore both of her ACLs, and required two operations and a total of about 10 weeks recovery. My vet suggested I had a choice as to whether to do the 2X $3100 operations on the knees. I realized that with the amount of money I have, $6200 just simply wasn't an important enough amount for me to consider killing my dog at the age of 7 because she couldn't walk. But I was also acutely aware of being goddamn glad that I had only two dogs I cared about, because I sure as hell wasn't interested in discovering the upper limit to how much I would spend before I would start killing off my dogs. Meanwhile, I can live with all the dogs in shelters that will be killed even though they can walk just fine, because they are not my dogs.

I don't want to care any more about the billions of poor people in the world than I already do. I am willing to "blame" their parents: those parents did know, or should have, what they were dooming their children to, approximately when they decided to have them. If I spend my resources to help these poor people, they will be that much healthier that they will proceed to generate that many poor people in the next generation tugging at the heart strings or mind strings of my children. What kind of a father would I be to dump that kind of problem in my kids' lap?

I don't consider it rational to let my moral sentiments run roughshod over my own self interest. I donate, essentially, when I can't help myself, when my sentiments are already involved. To me it seems irrational to spend one iota more effort or money on problems than my sentimental moral self already requires.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-30T23:40:37.708Z · LW(p) · GW(p)

"I don't consider it rational to let my moral sentiments run roughshod over my own self interest."

To be clear, do you consider the choice to repair your dog's knees an expression of what you're labelling "moral sentiments" here, or what you're labelling "self-interest"?

Replies from: mwengler
comment by mwengler · 2014-12-03T16:06:14.865Z · LW(p) · GW(p)

Spending $6200 to fix my 7 year old dog's knees was primarily moral sentiments at work. I could get a healthy 1 year old dog for a fraction of that price. My 7 year old dog will very likely die within the next 3 or 4 years; larger dogs don't tend to live that long. So I haven't saved myself from experiencing the loss of her death, I've just put it off. The dog keeps me from doing all sorts of other things I'd like to do: I have to come home to check on her and feed her and so on, which precludes just going out and doing social stuff after work when I want to.

It's important to keep in mind that we are not "homo economicus." We do not have a single utility function with a crank that can be turned to determine the optimum thing to do, and even if in some formal sense we did have such a thing, our reaction to it would not be a deep acceptance of its results.

What we do have is a mess and a mass of competing impulses. I want to do stuff after work. I want to "take care" of those in my charge. My urge to take care of those in my charge presumably arises in me because my humans before me who had less of that urge got competed out of the gene pool.

100,000 years ago, some wolves started hacking humans, and as part of that hack they got themselves to trigger the machinery that humans have for taking care of their babies. Since these wolves were also pretty good "kids," able to help with a variety of things, we hacked them back and made them even more to our liking by selectively killing the ones we didn't like and then selectively breeding the ones we did like. At this point, we love our babies more than our dogs, but our babies grow into teenagers. Our dogs, though, always stay baby-like in their hacked relationship with us.

My wife took my human children and left me a few years ago, but she left the dogs she had bought. I'm not going to abandon them, the hack is strong in me. Don't get me wrong, I love them. That doesn't mean I am happy about it, or at least not consistently happy about it.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-12-03T17:43:42.015Z · LW(p) · GW(p)

(nods)
Thanks for clarifying.

comment by A1987dM (army1987) · 2014-10-09T20:52:12.132Z · LW(p) · GW(p)

After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is lower bounded by two months of hard work and/or fifty thousand dollars.

Fifty thousand times the marginal utility of a dollar, which is probably much less than the utility difference between the status quo and having fifty thousand dollars less unless Daniel is filthy rich.
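A quick numeric illustration of this point, as a sketch assuming (purely for illustration) logarithmic utility of wealth and hypothetical wealth levels:

```python
import math

def u(wealth):
    # Toy concave utility of money; log is purely an illustrative assumption.
    return math.log(wealth)

def compare(wealth, loss=50_000):
    marginal_dollar = u(wealth) - u(wealth - 1)
    print(loss * marginal_dollar, u(wealth) - u(wealth - loss))

compare(60_000)       # ~0.83 vs ~1.79: losing $50k hurts far more than
                      # fifty thousand marginal dollars for a non-rich Daniel
compare(10_000_000)   # ~0.0050 vs ~0.0050: for a filthy-rich Daniel the two coincide
```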

Replies from: So8res
comment by So8res · 2014-10-10T06:59:17.547Z · LW(p) · GW(p)

Yeah it's actually a huge pain in the ass to try to value things given that people tend to be short on both time and money. (For example, an EA probably rates a dollar going towards de-oiling a bird as negative value due to the opportunity cost, even if they feel that de-oiling a bird has positive value in some "intrinsic" sense.)

I didn't really want to go into my thoughts on how you should try to evaluate "intrinsic" worth (or what that even means) in this post, both for reasons of time and complexity, but if you're looking for an easier way to do the evaluation yourself, consider queries such as "would I prefer that my society produce, on the margin, another bic lighter or another bird deoiling?". This analysis is biased in the opposite direction from "how much of my own money would I like to pay", and is definitely not a good metric alone, but it might point you in the right direction when it comes to finding various metrics and comparisons by which to probe your intrinsic sense of bird-worth.

comment by 27chaos · 2014-10-07T22:54:27.795Z · LW(p) · GW(p)

I don't have the internal capacity to feel large numbers as deeply as I should, but I do have the capacity to feel that prioritizing my use of resources is important, which amounts to a similar thing. I don't have an internal value assigned for one million birds or for ten thousand, but I do have a value that says maximization is worth pursuing.

Because of this, and because I'm basically an ethical egoist, I disagree with your view that effective altruism requires ignoring our care-o-meters. I think it only requires their training and refinement, not complete disregard. Saying that we should ignore our actual values and focus on "more rational" values we could counterfactually have is disquieting to me because it seems to involve an underlying nihilism of sorts. Values are orthogonal to rationality, I'm not sure why many people here understand that idea in some cases but ignore it in others. If we're going to get rid of values for not being sufficiently rational or consistent, we might as well delete them all.

Gunnar Zarncke makes a good point as well, one I think complements my argument. There's no standard with which to choose between helping all the birds and helping none, once you've thrown the care-o-meter away.

Replies from: AnthonyC
comment by AnthonyC · 2014-10-09T13:22:20.613Z · LW(p) · GW(p)

I understand what you mean by saying values and rationality are orthogonal. If I had a known, stable, consistent utility function you would be absolutely right.

But 1) my current (supposedly terminal) values are certainly not orthogonal to each other, and may be (in fact, probably are) mutually inconsistent some of the time. Also 2) There are situations where I may want to change, adopt, or delete some of my values in order to better achieve the ones I currently espouse (http://lesswrong.com/lw/jhs/dark_arts_of_rationality/).

Replies from: 27chaos
comment by 27chaos · 2014-10-09T20:46:13.389Z · LW(p) · GW(p)

I worry that such consistency isn't possible. If you have a preference for chocolate over vanilla given exposure to one set of persuasion techniques, and a preference for vanilla over chocolate given other persuasion techniques, it seems like you have no consistent preference. If all our values are sensitive to aspects of context such as this, then trying to enforce consistency could just delete everything. Alternatively, it could mean that CEV will ultimately worship Moloch rather than humans, valuing whatever leads to amassing as much power as possible. If inefficiency or irrationality is somehow important or assumed in human values, I want the values to stay and the rationality to go. Given all the weird results from the behavioral economics literature, and the poor optimization of the evolutionary processes from which our values emerged, such inconsistency seems probable.

comment by William_Quixote · 2014-10-07T14:36:00.401Z · LW(p) · GW(p)

I think this is a really good post and extremely clear. The idea of the broken care-o-meter is a very compelling metaphor. It might be worthwhile to try to put this somewhere with higher exposure, where people who have money and are not already familiar with the LW memeplex would see it.

Replies from: So8res
comment by So8res · 2014-10-07T16:02:20.488Z · LW(p) · GW(p)

I'm open to circulating it elsewhere. Any ideas? I've crossposted it on the EA forum, but right now that seems like lower exposure than LW.

Replies from: therufs, John_Maxwell_IV
comment by therufs · 2014-10-07T19:17:29.224Z · LW(p) · GW(p)

No ideas here, but maybe ping David, Jeff or Julia?

comment by John_Maxwell (John_Maxwell_IV) · 2014-10-09T23:52:23.694Z · LW(p) · GW(p)

Submitting things to reddit/metafilter/etc. can work surprisingly well.

Replies from: So8res
comment by So8res · 2014-10-10T07:12:42.661Z · LW(p) · GW(p)

I'm slightly averse to submitting my own content on reddit, but you (John_Maxwell_IV, to combat the bystander effect, unless you decline) are encouraged to do so.

My preference would be for the Minding Our Way version over the EA forum version over the LW version.

comment by [deleted] · 2014-10-07T11:29:48.011Z · LW(p) · GW(p)

Nice write-up. I'm one of those thoughtful creepy nerds who figured out about the scale thing years ago, and now just picks a fixed percentage of total income and donates it to fixed, utility-calculated causes once a year... and then ends up giving away bits of spending money for other things anyway, but that's warm-fuzzies.

So yeah. Roughly 10% (I actually divide between a few causes, trying to hit both Far Away problems where I can contribute a lot of utility but have little influence, and Nearby problems where I have more influence on specific outcomes) of income, around the end of the year or tax time, every year, in "JUST F-ING DO IT" mode.

At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, the experience and art and beauty that is possible in the future, depends upon the present.

This is the only thing I actually object to here. Any choice we make that influences the future at all could be said to reallocate probability between one set of future people and another set. There will only be one real future, though. While I vastly prefer for it to be a good one, I don't consider abortion to be murder, and so I don't feel any moral compulsion to maximize future people, or even to direct the future population towards a particular number. That would imply, to my view, that I'm already deciding the destinies of next year's people, let alone next aeon's, and that's already deeply immoral.

Replies from: TrE
comment by TrE · 2014-10-07T16:12:54.659Z · LW(p) · GW(p)

We can safely reason that the typical human, even in the future, will choose existence over non-existence. We can also infer which environments they would like better, and so we can maximise our efforts to leave behind an earth (solar system, universe) that's worth living in - neither an arid desert nor a universe tiled in smiley faces.

While I agree that, since future people will never be concrete entities to us - more like shadowy figures - we don't get to decide on their literary or musical tastes, I think we should still try to make them exist in an environment worth living in, and, if possible, get them to exist. In the worst case, they can still decide to exit this world. It's easier in our days than it's ever been!

Additionally, I personally value a universe filled with humans higher than a universe filled with ■.

Replies from: 27chaos
comment by 27chaos · 2014-10-08T01:47:23.037Z · LW(p) · GW(p)

My own moral intuitions say that there is an optimal number X of human beings to live amongst (perhaps around Dunbar's number, though maybe not if society or anonymity are important) and that we should try to balance between utilizing as much of the universe's energy as possible before heat death and maximizing these ideal groups of X size. I think a universe totally filled with humans would not be very good; it seems somewhat redundant to me, since many of those humans would be extremely similar to each other but use up precious energy. I also think that individuals might feel meaningless in such a large crowd, unable to make an impact or strive for eudaimonia when surrounded by others. We might avoid that outcome by modifying our values about originality or human purpose, but those are values of mine I strongly don't want to have changed.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-10-08T02:08:07.480Z · LW(p) · GW(p)

Bioengineering might lead to humans who are much less similar to each other.

Replies from: 27chaos, AnthonyC
comment by 27chaos · 2014-10-09T20:37:50.482Z · LW(p) · GW(p)

Yeah. The problem I see with that is that if humans grow too far apart, we will thwart each other's values or not value each other. Difficult potential balance to maintain, though that doesn't necessarily mean it should be rejected as an option.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-10-09T22:56:33.970Z · LW(p) · GW(p)

Bioengineering makes CEV a lot harder.

comment by AnthonyC · 2014-10-09T13:16:58.545Z · LW(p) · GW(p)

And any number of bioengineering advances, societal/cultural shifts, and transportation and wealth improvements could help increase our effective Dunbar's number.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-10-09T14:14:29.232Z · LW(p) · GW(p)

That's something I've wondered about, and also what you could accomplish by having an organization of people with unusually high Dunbar's numbers.

Replies from: Decius
comment by Decius · 2014-10-15T07:32:26.696Z · LW(p) · GW(p)

Or a breeding population selecting for higher Dunbar's numbers.

Or does that qualify as bioengineering?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-10-15T14:12:00.015Z · LW(p) · GW(p)

I suppose it should count as bioengineering for purposes of this discussion.

comment by Sniknib · 2020-12-28T20:05:46.239Z · LW(p) · GW(p)

Thank you for writing this. I was stuck on 3, and found the answer to a question I asked myself the other day.

comment by [deleted] · 2014-10-09T06:20:47.861Z · LW(p) · GW(p)

Many of us go through life understanding that we should care about people suffering far away from us, but failing to.

That is the thing that I never got. If I tell my brain to model a mind that cares, it comes up empty. I seem to literally be incapable of even imagining the thought process that would lead me to care for people I don't know.

If anybody knows how to fix that, please tell me.

Replies from: Lumifer, Weedlayer, MugaSofer, hyporational
comment by Lumifer · 2014-10-09T14:52:44.172Z · LW(p) · GW(p)

Why do you think it needs fixing?

Replies from: None
comment by [deleted] · 2014-10-09T16:18:57.789Z · LW(p) · GW(p)

I think this might be holding me back. People talk about "support" from friends and family which I don't seem to have, most likely because I don't return that sentiment.

Replies from: Lumifer
comment by Lumifer · 2014-10-09T16:24:03.167Z · LW(p) · GW(p)

Holding you back from what?

Also, you said (emphasis mine) "incapable of even imagining the thought process that would lead me to care for people I don't know" -- you do know your friends and family, right?

Replies from: None
comment by [deleted] · 2014-10-11T20:39:29.186Z · LW(p) · GW(p)

excellent question. I think I'm on the wrong track and something else entirely might be going on in my brain. Thank you.

comment by Weedlayer · 2014-10-09T08:49:05.550Z · LW(p) · GW(p)

Obviously your mileage may vary, but I find it helps to imagine a stranger as someone else's family/friend. If I think of how much I care about people close to me, and imagine that that stranger has people who care about them as much as I care about my brother, then I find it easier to do things to help that person.

I guess you could say I don't really care about them, but care about the feelings of caring other people have towards them.

If that doesn't work, this is how I originally thought of it. If a stranger passed by me on the street and collapsed, I would care about their well-being (I know this empirically). I know nothing about them; I only care about them due to proximity. It offends me rationally that my sense of caring is utterly dependent on something as stupid as proximity, so I simply create a rule that says "If I would care about this person if they were here, I have to act like I care if they are somewhere else". Thus, utilitarianism (or something like it).

It's worth noting that another, equally valid rule would be "If I wouldn't care about someone if they were far away, there's no reason to care about them when they happen to be right here". I don't like that rule as much, but it does resolve what I see as an inconsistency.

Replies from: None
comment by [deleted] · 2014-10-09T16:13:23.703Z · LW(p) · GW(p)

Thank you. That seems like a good way of putting it. I seem to have problems thinking of all 7 billion people as individuals. I will try to think about people I see outside as having a life of their own even if I don't know about it. Maybe that helps.

comment by MugaSofer · 2014-10-10T13:57:39.741Z · LW(p) · GW(p)

I think this is the OP's point - there is no (human) mind capable of caring, because human brains aren't capable of modelling numbers that large properly. If you can't contain a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear".

So - until you find a better way! - you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-10-13T16:21:48.000Z · LW(p) · GW(p)

you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

Why should I act this way?

Replies from: MugaSofer
comment by MugaSofer · 2014-10-18T16:02:07.775Z · LW(p) · GW(p)

To better approximate a perfectly-rational Bayesian reasoner (with your values.)

Which, presumably, would be able to model the universe correctly complete with large numbers.

That's the theory, anyway. Y'know, the same way you'd switch in a Monty Hall problem even if you don't understand it intuitively.

comment by hyporational · 2014-10-09T17:51:04.847Z · LW(p) · GW(p)

What makes you care about caring?

comment by Unnamed · 2014-10-08T19:06:30.852Z · LW(p) · GW(p)

Two possible responses that a person could have after recognizing that their care-o-meter is broken and deciding to pursue important causes anyways:

Option 1: Ignore their care-o-meter, treat its readings as nothing but noise, and rely on other tools instead.

Option 2: Don't naively trust their care-o-meter, and put effort into making it so that their care-o-meter will be engaged when it's appropriate, will be not-too-horribly calibrated, and will be useful as they pursue the projects that they've identified as important (despite its flaws).

Parts of this post seem to gesture towards option 2 (like the Daniel story, and section 8), while other parts seem to gesture towards option 1 (like the courage analogy, and section 5).

Replies from: So8res
comment by So8res · 2014-10-10T06:54:06.666Z · LW(p) · GW(p)

I definitely don't suggest ignoring the care-o-meter entirely. Emotions are the compass.

Rather, I advocate not trusting the care-o-meter on big numbers, because it's not calibrated for big numbers. Use it on small things where it is calibrated, and then multiply yourself if you need to deal with big problems.

comment by UriKatz · 2014-10-31T05:48:48.716Z · LW(p) · GW(p)

I think we need to consider another avenue by which our emotions are generated and affect our lives. An immediate, short-to-medium-term high is, in a way, the least valuable personal return we can expect from our actions. However, there is a more subtle yet long-lasting emotional effect, which is more strongly correlated with our belief system and our rationality. I refer to a feeling of purpose we can have on a daily basis, a feeling of maximizing personal potential, and even long-term happiness. This is created when we believe we are doing the right thing, when we know there is still more to be done, and we continue to make an effort. A good example of this is the difference between falling in love and being in love for a lifetime. Another example is raising children.

Every few months I sit in front of my computer and punch in a bunch of numbers, which result in a donation to GiveWell. The immediate emotional impact of this is about on par with eating a mediocre sandwich. However, every day I remind myself that that day's work contributes to my ability to make bigger and bigger donations. Also, every so often I am hit with the realization that I, insignificant little me, have saved people's lives, and can save more. That perhaps my existence on this planet will do more good than harm. The contribution of this to my overall emotional well-being cannot be overstated.

I think we can redefine caring along these lines. Then we will see that we do care, not only in action but also in feeling. Any emotion that actually matters is not a momentary peak or trough.

comment by snarles · 2014-10-16T13:58:48.655Z · LW(p) · GW(p)

Daniel grew up as a poor kid, and one day he was overjoyed to find $20 on the sidewalk. Daniel could have worked hard to become a trader on Wall Street. Yet he decides to become a teacher instead, because of his positive experiences tutoring a few kids while in high school. But as a high school teacher, he will only teach a thousand kids in his career, while as a trader, he would have been able to make millions of dollars. If he multiplied his positive experience with one kid by a thousand, it still probably wouldn't compare with the joy of finding $20 on the sidewalk times a million.

Replies from: army1987, Jiro
comment by A1987dM (army1987) · 2014-10-17T11:29:57.562Z · LW(p) · GW(p)

Nice try, but even if my utility for oiled birds was as nonlinear as most people's utility for money is, the fact that there are many more oiled birds than I'm considering saving means that what you need to compare is (say) U(54,700 oiled birds), U(54,699 oiled birds), and U(53,699 oiled birds) -- and it'd be a very weird utility function indeed if the difference between the first and the second is much larger than one-thousandth the difference between the second and the third. And even if U did have such kinks, the fact that you don't know exactly how many oiled birds are there would smooth them away when computing EU(one fewer oiled bird) etc.
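As a small sketch of why that near-linearity holds, here is an arbitrary concave (dis)utility over oiled birds - the square root is just an illustrative assumption, not anyone's actual utility function:

```python
import math

def U(oiled_birds):
    # Toy utility: disutility grows concavely with the number of oiled birds.
    return -math.sqrt(oiled_birds)

# Saving one bird vs. saving a thousand birds, starting from 54,700 oiled birds:
one_bird = U(54_699) - U(54_700)
thousand_birds = U(53_699) - U(54_700)
print(one_bird)                # ~0.00214
print(thousand_birds / 1000)   # ~0.00215: the per-bird value is essentially flat,
                               # so even a strongly nonlinear U behaves linearly here.
```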

(IIRC EY said something similar in the sequences, using starving children rather than oiled birds as the example, but I can't seem to find it right now.)

Unless you also care about who is saving the birds -- but you aren't considering saving them with your own hands, you're considering giving money to save them, and money is fungible, so it'd be weird to care about who is giving the money.

Replies from: Jiro
comment by Jiro · 2014-10-17T16:37:04.985Z · LW(p) · GW(p)

Nice try, but even if my utility for oiled birds was as nonlinear as most people's utility for money is, the fact that there are many more oiled birds than I'm considering saving means that what you need to compare is (say) U(54,700 oiled birds), U(54,699 oiled birds), and U(53,699 oiled birds)

Nonlinear in what?

Daniel's utility for dollars is nonlinear in the total number of dollars that he has, not in the total number of dollars in the world. Likewise, his utility for birds is nonlinear in the total number of birds that he has saved, not in the total number of birds that exist in the world.

(Actually, I'd expect it to have two components, one of which is nonlinear in the number of birds he has saved and another of which is nonlinear in the total number of birds in the world. However, the second factor would be negligibly small in most situations.)

Replies from: army1987
comment by A1987dM (army1987) · 2014-10-18T07:49:21.405Z · LW(p) · GW(p)

IOW he doesn't actually care about the birds, he cares about himself.

Replies from: Jiro
comment by Jiro · 2014-10-18T09:23:49.818Z · LW(p) · GW(p)

He has a utility function that is larger when more birds are saved. If this doesn't count as caring about the birds, your definition of "cares about the birds" is very arbitrary.

Replies from: army1987
comment by A1987dM (army1987) · 2014-10-19T12:45:19.106Z · LW(p) · GW(p)

He has a utility function that is larger when more birds are saved.

He has a utility function that is larger when he saves more birds; birds saved by other people don't count.

Replies from: Jiro
comment by Jiro · 2014-10-19T15:33:59.657Z · LW(p) · GW(p)

If it has two components, they do count, just not by much.

comment by Jiro · 2014-10-16T17:35:09.189Z · LW(p) · GW(p)

Because Daniel has been thinking of scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of dollars: the internal feeling of satisfaction with gaining money can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about making lots of money, he shuts up and multiplies the joy of finding $20 by a million....

Replies from: Lumifer
comment by Lumifer · 2014-10-16T17:55:15.608Z · LW(p) · GW(p)

he expects his brain to misreport how much he actually cares

Um, that's nonsense. His brain does not misreport how much he actually cares -- it's just that his brain thinks that it should care more. It's a conflict between "is" and "should", not a matter of misreporting "is".

he shuts up and multiplies the joy of finding $20 by a million....

After which he goes and robs a bank.

Replies from: Jiro
comment by Jiro · 2014-10-16T18:39:16.713Z · LW(p) · GW(p)

Um, that's nonsense.

You do realize that what I said is a restatement of one of the examples in the original article, except substituting "caring about money" for "caring about birds"? And snarles' post was a somewhat more indirect version of that as well? Being nonsense is the whole point.

Replies from: Lumifer
comment by Lumifer · 2014-10-16T18:45:49.080Z · LW(p) · GW(p)

You do realize that what I said is a restatement of one of the examples in the original article

Yes, I do, and I think it's nonsense there as well. The care-o-meter is not broken, it's just that your brain thinks that it should care more about all these numbers. It's like preferring not to have a fever and saying the thermometer is broken because it shows too high a temperature.

comment by DanielLC · 2014-10-15T21:20:59.872Z · LW(p) · GW(p)

I know the name is just a coincidence, but I'm going to pretend that you wrote this about me.

comment by PeterisP · 2014-10-15T16:01:23.065Z · LW(p) · GW(p)

An interesting followup to your example of an oiled bird deserving 3 minutes of care that came to mind:

Let's assume that there are 150 million suffering people right now, which is a completely wrong random number but a somewhat reasonable order-of-magnitude assumption. A quick calculation estimates that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.

According to even the best possible care-o-meter that I could have, all the problems in the world cannot be totally worth more than 15 million care-minutes - simply because there aren't any more of them to allocate. And in a fair allocation, the average suffering person 'deserves' 0.1 care-minutes of my time, assuming that I don't leave anything at all for the oiled birds. This is a very different meaning of 'deserve' than the one used in the post - but I'm afraid that this is the more meaningful one.
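The same estimate as a minimal sketch (the population figure, waking hours, and remaining-lifespan guess are all order-of-magnitude assumptions, as in the comment above):

```python
suffering_people = 150_000_000          # assumed order-of-magnitude figure
waking_hours_per_day = 16               # rough assumption
remaining_years = 45                    # rough assumption for remaining lifespan

care_minutes = remaining_years * 365 * waking_hours_per_day * 60
print(care_minutes)                     # ~15.8 million care-minutes in total
print(care_minutes / suffering_people)  # ~0.1 care-minutes per suffering person
```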

comment by LawrenceC (LawChan) · 2014-10-14T14:55:53.497Z · LW(p) · GW(p)

Upvoted for clarity and relevance. You touched on the exact reason why many people I know can't/won't become EAs; even if they genuinely want to help the world, the scope of the problem is just too massive for them to care about accurately. So they go back to donating to the causes that scream the loudest, and turning a blind eye to the rest of the problems.

I used to be like Alice, Bob, and Christine, and donated to whatever charitable cause would pop up. Then I had a couple of Daniel moments, and resolved that whenever I felt pressured to donate to a good cause, I'd note how much I was going to donate and then donate to one of Givewell's top charities.

comment by Mayann Ranger (mayann-ranger) · 2022-09-28T19:29:28.362Z · LW(p) · GW(p)

Thank you for this explanation. It helps me to understand a little bit more of why so many people I know simply feel overwhelmed and give up. Personally, as I am not in a position to donate money, I work to tackle one specific problem set that I think will help open things up, and leave the solutions to other problems to others.
ShiraDest

comment by Decius · 2014-10-15T07:27:02.035Z · LW(p) · GW(p)

If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope sensitivity, what observation causes you to believe that you do care about everyone equally?

Serious question; I traverse the reasoning the other way, and since I don't care much about the aggregate six billion people I don't know, I divide and say that I don't care more than one six-billionth as much about the typical person that I don't know.

People that I do know, I do care about - but I don't have to multiply to figure my total caring; I have to add.

Replies from: Wes_W, AmagicalFishy
comment by Wes_W · 2014-10-15T08:32:01.001Z · LW(p) · GW(p)

If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope sensitivity, what observation causes you to believe that you do care about everyone equally?

I can think of two categories of responses.

One is something like "I care by induction". Over the course of your life, you have ostensibly had multiple experiences of meeting new people, and ending up caring about them. You can reasonably predict that, if you meet more people, you will end up caring about them too. From there, it's not much of a leap to "I should just start caring about people before I meet them". After all, rational agents should not be able to predict changes in their own beliefs; you might as well update now.

The other is something like "The caring is much better calibrated than the not-caring". Let me use an analogy to physics. My everyday intuition says that clocks tick at the same rate for everybody, no matter how fast they move; my knowledge of relativity says clocks slow down significantly near c. The problem is that my intuition on the matter is baseless; I've never traveled at relativistic speeds. When my baseless intuition collides with rigorously-verified physics, I have to throw out my intuition.

I've also never had direct interaction with or made meaningful decisions about billions of people at a time, but I have lots of experience with individual people. "I don't care much about billions of people" is an almost totally unfounded wild guess, but "I care lots about individual people" has lots of solid evidence, so when they collide, the latter wins.

(Neither of these are ironclad, at least not as I've presented them, but hopefully I've managed to gesture in a useful direction.)

Replies from: Jiro, Decius
comment by Jiro · 2014-10-15T15:55:10.686Z · LW(p) · GW(p)

Your second category of response seems to say "my intuitions about considering a group of people, taken billions at a time, aren't reliable, but my intuitions about considering the same group of people, one at a time, are". You then conclude that you care because taking the billions of people one at a time implies that you care about them.

But it seems that I could apply the same argument a little differently--instead of applying it to how many people you consider at a time, apply it to the total size of the group. "my intuitions about how much I care about a group of billions are bad, even though my intuitions about how much I care about a small group are good." The second argument would, then, imply that it is wrong to use your intuitions about small groups to generalize to large groups--that is, the second argument refutes the first. Going from "I care about the people in my life" to "I would care about everyone if I met them" is as inappropriate as going from "I know what happens to clocks at slow speeds" to "I know what happens to clocks at near-light speeds".

Replies from: Decius
comment by Decius · 2014-10-16T04:44:34.403Z · LW(p) · GW(p)

I'll go a more direct route:

The next time you are in a queue with strangers, imagine the two people behind you (that you haven't met before and don't expect to meet again and didn't really interact with much at all, but they are /concrete/). Put them on one track in the trolley problem, and one of the people that you know and care about on the other track.

If you prefer to save two strangers to one tribesman, you are different enough from me that we will have trouble talking about the subject, and you will probably find me to be a morally horrible person in hypothetical situations.

comment by Decius · 2014-10-16T00:27:57.158Z · LW(p) · GW(p)

To address your first category: When I meet new people and interact with them, I do more than gain information- I perform transitive actions that move them out of the group "people I've never met" that I don't care about, and into the group of people that I do care about.

Addressing your second: I found that a very effective way to estimate my intuition would be to imagine a group of X people that I have never met (or specific strangers) on one minecart track, and a specific person that I know on the other. I care so little about small groups of strangers, compared to people that I know, that I find my intuition about billions is roughly proportional; the dominant factor in my caring about strangers is that some number of people who are strangers to me are important to people who are important to me, and therefore indirectly important to me.

comment by AmagicalFishy · 2014-11-23T22:39:05.856Z · LW(p) · GW(p)

I second this question: Maybe I'm misunderstanding something, but part of me craves a set of axioms to justify the initial assumptions. That is: Person A cares about a small number of people who are close to them. Why does this equate to Person A having to care about everyone who isn't?

Replies from: lalaithion
comment by lalaithion · 2014-11-23T23:32:52.772Z · LW(p) · GW(p)

For me, personally, I know that you could choose a person at random in the world, write a paragraph about them, and give it to me, and by doing that, I would care about them a lot more than before I had read that piece of paper, even though reading that paper hadn't changed anything about them. Similarly, becoming friends with someone doesn't usually change the person that much, but increases how much I care about them an awful lot.

Therefore, I look at all 7 billion people in the world, and even though I barely care about them, I know that it would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn't

Maybe a better way of putting this is that I know that all of the people in the world are potential carees of mine, so I should act as though I already care about these people, in deference to possible future-me.

Replies from: AmagicalFishy, Decius
comment by AmagicalFishy · 2014-11-24T05:30:36.803Z · LW(p) · GW(p)

For the most part, I follow—but there's something I'm missing. I think it lies somewhere in: "It would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn't."

Is the underlying "axiom" here that you wish to maximize the number of effects that come from the caring you give to people, because that's what an altruist does? Or that you wish to maximize your caring for people?

To contextualize the above question, here's a (nonsensical, but illustrative) parallel: I get cuts and scrapes when running through the woods. They make me feel alive; I like this momentary pain stimuli. It would be trivial for me to woods-run more and get more cuts and scrapes. Therefore I should just get cuts and scrapes.

I know it's silly, but let me explain: A person usually doesn't want to maximize their cuts and scrapes, even though cuts and scrapes might be appreciated at some point. Thus, the above scenario's conclusion seems silly. Similarly, I don't feel a necessity to maximize my caring—even though caring might be nice at some point. Caring about someone is a product of my knowing them, and I care about a person because I know them in a particular way (if I knew a person and thought they were scum, I would not care about them). The fact that I could know someone else, and thus hypothetically care about them, doesn't make me feel as if I should.

If, on the other hand, the axiom is true—then why bother considering your intuitive "care-o-meter" in the first place?

I think there's something fundamental I'm missing.

(Upon further thought, is there an agreed-upon intrinsic value to caring that my ignorance of some LW culture has lead me to miss? This would also explain wanting to maximize caring.)

(Upon further-further thought, is it something like the following internal dialogue? "I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.")

Replies from: Decius
comment by Decius · 2014-11-24T23:59:59.900Z · LW(p) · GW(p)

(Upon further-further thought, is it something like the following internal dialogue? "I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.")

I care about self-consistency, but being self-consistent is something that must happen naturally; I can't self-consistently say "This feeling is self-inconsistent, therefore I will change this feeling to be self-consistent"

Replies from: AmagicalFishy
comment by AmagicalFishy · 2014-11-25T01:12:50.832Z · LW(p) · GW(p)

... Oh.

Hm. In that case, I think I'm still missing something fundamental.

Replies from: Decius, lalaithion
comment by Decius · 2014-11-28T06:11:40.687Z · LW(p) · GW(p)

I care about self-consistency because an inconsistent self is very strong evidence that I'm doing something wrong.

It's not very likely that if I take the minimum steps to make the evidence of the error go away, I will make the error go away.

The general case of "find a self-inconsistency, make the minimum change to remove it" is not error-correcting.

comment by lalaithion · 2014-11-25T17:11:07.472Z · LW(p) · GW(p)

I actually think that your internal dialogue was a pretty accurate representation of what I was failing to say. And as for self consistency having to be natural, I agree, but if you're aware that you're being inconsistent, you can still alter your actions to try and correct for that fact.

comment by Decius · 2014-11-24T23:58:13.745Z · LW(p) · GW(p)

I look at a box of 100 bullets, and I know that it would be trivial for me to be in mortal danger from any one of them, but the box is perfectly safe.

It is trivial-ish for me to meet a trivial number of people and start to care about them, but it is certainly nontrivial to encounter a nontrivial number of people.

comment by dthunt · 2014-10-09T17:49:56.320Z · LW(p) · GW(p)

I would like to subscribe to your newsletter!

I've been frustrated recently by people not realizing that they are arguing that if you divide responsibility up until it's a very small quantity, then it just goes away.

comment by [deleted] · 2014-10-08T20:43:21.994Z · LW(p) · GW(p)

Attempting to process this post in light of being on my anti-anxiety medication is weird.

There are specific parts in your post where I thought 'If I was having these thoughts, it would probably be a sign I had not yet taken my pill today.' and I get the distinct feeling I would read this entirely differently when not on medication.

It's kind of like 'I notice I'm confused' except... In this case I know why I'm confused and I know that this particular kind of confusion is probably better than the alternative (Being a sleep deprived mess from constant worry) so I'm not going to pick at it. Which is not a feeling I usually get, which is why I said it was weird.

However, pretending to view this from the perspective of someone who can handle anxiety effectively, I would say this an excellent post and I upvoted it even though I can't really connect to it on a personal level.

comment by Oleksandr Yashunin (oleksandr-yashunin) · 2021-08-26T17:20:23.179Z · LW(p) · GW(p)

This post is amazing, So8res! (My team and I have stumbled upon it in search for the all-time greatest articles on improving oneself and making a difference. Just in case you’re interested, you can see our selection at One Daily Nugget. We’ve featured this article in today’s issue.)

Here’s one question that we discussed, would love to get your take: You recommend that one starts with something one cares about, quantifies it, multiplies, and then trusts the result more than one’s intuition.

I love this approach. But how can we be sure that the first element in this algorithm is sound? Scope insensitivity provides that we cannot trust our intuition about large numbers. But can we trust our intuition about an individual unit?

For example, our intuition tells us that saving a dog is more valuable than saving a rat. But is our intuition a reliable guide here? What’s your recommendation on how to calibrate our care-o-meter correctly for the individual unit that provides the foundation for your scope insensitivity correction algorithm?

Thank you very much!

comment by [deleted] · 2014-11-30T10:05:13.200Z · LW(p) · GW(p)

Sorry I was rude, I just know how it is, to stand in the rain and try to get someone do something painless for the greater good and have them turn away for whatever reason.

On another point, here's a case study of lesser proportions.

Suppose you generally want to fight social injustice, save Our Planet, uphold peace, defend women's rights, etc. (as many do when they are just beginning to decide what to do with themselves). A friend subscribes you to an NGO for nature conservation, and you think it might be a good place to start, since you don't have much money to donate, you are vaguely afraid of catching a disease from poor people, and it's safer to explain to your parents anyway. (I'm not saying it is morally right, I'm saying it is common.)

You, like Paris, are presented with a choice. You can give your all (money, career, way of living) to one of three goals, each one (considered by many other people) noble in its own right, and be quietly damned for not picking either of the others.

Your criteria of choice are Beauty, Harmony, and Kinship (again, this is just empirical - I've seen many people start with these). Note that each of them gives you an equally strong feeling of being in the right.

Beauty means you would protect flowers and birds and aesthetically pleasing things. It is easy to devise a way to target certain species if you know something about the threats to them. Let us say this is the species-oriented approach. Harmony means you would urge people to 'live green', educate the masses about the value of the Earth, and maybe rail against nuclear power plants and other clearly dangerous projects. This will be the people-oriented approach. Kinship means you would fight for abused animals (pets, victims of scientific experiments, circus animals, large mammals going extinct from poaching, etc.). This will be the problem-oriented approach (I know, lousy naming).

However, efficient nature conservation turns out to be quite different from your visions. Your priors turn out to be biases.

Because protecting single species (Beauty) almost always falls short of the objective, since the major cause of species extinction (at least in terrestrial ecosystems) is habitat destruction. Curiously, many beginners find it easier to invest effort in saving individual lives than in ensuring there is a place for the organisms to live, propagate, and disperse. A life (or often, one season of it) is something tangible; a possibility is not. (And it is statistically hard to fight against land-development companies and win more than a season's delay before the habitat in question is razed to the ground. Also, the danger to the activist is proportional to his impact. I should think it is so for human-oriented initiatives, too. It's one thing to raise funds for cancer treatment; it's another to investigate illegal trade in human organs.)

Because pursuing Harmony mostly gets you to discuss misconceptions about conservation (the deeper you dig, the wilder they get), and protesting against power plants rarely succeeds at all. Not to mention that this way doesn't begin to cover the more common (and tawdry) threats to biodiversity.

Because Kinship is not about nature, it's about virtuousness and kindness.

In the end, you either shrug and say, 'I've tried,' or you specialize in some branch of ecology. Very few people start with science, but those who do are more likely to continue the work. 'It is the good thing to do' is not a strong enough motivation for most. It's simply not efficient.

comment by lackofcheese · 2014-10-18T08:19:42.303Z · LW(p) · GW(p)

I think there are some good points to be made about the care-o-meter as a heuristic.

Basically, let's say that the utility associated with altruistic effort has a term something like this:
U = [relative amount of impact I can have on the problem] * [absolute significance of the problem]

To some extent, one's care-o-meter is a measurement of the latter term, i.e. the "scope" of the problem, and the issue of scope insensitivity demonstrates that it fails miserably in this regard. However, that isn't an entirely accurate criticism, because as a rough heuristic your care-o-meter isn't simply a measure of the second term; it also includes some aspects of the first. Indeed, if one views the care-o-meter as a "call to action", then it makes much more sense for it to be a heuristic estimate of U than of the problem's absolute significance.

For example, if your care-o-meter says you care more about your friends than about people far away, or that you don't care much more about large disasters than about smaller ones, then any combination of three things could be going on:
(1) You can't have as much relative impact on those problems.
(2) Those problems are simply less important.
(3) Your care-o-meter is simply wrong.

I don't agree at all with (2), and I can see a lot of merit in the suggestion of (3). However, I think that for most people in most of human history, (1) has been largely applicable. If you, personally, are only capable of helping people one at a time, then it doesn't really matter whether that person is a single person who has been hurt, or one out of a million suffering due to a major disaster. Also, you are in a unique position to help your friends more so than other people, and thus it makes plenty of sense to spend effort on your friends rather than on random strangers.

Of course, it is nonetheless true that this kind of care-o-meter miscalibration has always been an issue. At the very least, there have always been people who have had much more power than others, and thus have been able to make larger impacts on larger problems.

More importantly, in modern times (1) is far less true than it used to be for a great many people. It is genuinely possible for many people in the world to have a significant impact on what you refer to as distant invisible problems, and thus good care-o-meter calibration is essential.
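To make that decomposition concrete, here is a minimal toy sketch in Python. Every number and name in it is made up purely for illustration (my own assumptions, not anything from the post or this comment); the point is only how the comparison can flip once the leverage term stops being negligible:

```python
# Toy model of U = [relative impact I can have] * [absolute significance].
# All values below are invented for illustration only.

def altruistic_utility(relative_impact: float, significance: float) -> float:
    """Heuristic utility of an altruistic effort: impact times significance."""
    return relative_impact * significance

# Helping a nearby friend: large personal leverage, modest absolute stakes.
friend = altruistic_utility(relative_impact=0.5, significance=2.0)

# A distant large-scale problem through most of human history: enormous
# stakes, but essentially zero leverage for one person -- point (1) above.
distant_then = altruistic_utility(relative_impact=1e-9, significance=1e6)

# The same problem today, when cheap global giving provides real leverage,
# so the comparison flips.
distant_now = altruistic_utility(relative_impact=1e-4, significance=1e6)

print(friend, distant_then, distant_now)  # 1.0 0.001 100.0
```

Of course, the real difficulty is estimating either term; the sketch only shows that the heuristic is a product of both, not the scope term alone.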

comment by spatiality · 2014-10-08T15:58:56.445Z · LW(p) · GW(p)

Thank you for this write-up; I really like how its structure manages to present the evolution of an idea. Agreeing with more or less all of the content, I often find myself asking whether I - and seven billion others - could save the world with my (our) own hands. (I am beginning to see utilons even in my work as an artist, but that belongs in a wholly different post.) This is a question for people like me, who don't earn much and - without further serious reclusion, reinvention, and reorientation - aren't going to earn much, ever: Do I a) maximise and donate the small amounts I receive now, b) maximise my future income, minimising donations for now in favour of self-improvement, and donate some highly uncertain, possibly huge sum in the future, or c) use my resources to directly change something now? Let's not make this an overly complex discussion, so feel free to message me instead of commenting.

Concerning Mother Teresa and other saints, I think we all know somebody who was an especially vociferous denier of her sanctity. I think it helps if I model myself as an instinctively selfish creature, and then go on to use my selfish instincts to push myself in a good direction. (I did this - on a small scale - with my smoking problem and told myself: OK, so you want to smoke, hm? So go on and smoke - when you have won the next competition. So here's what I do whenever I feel the urge: Oh, I want to smoke; oh, I can't, so how do I optimise my chance of smoking? Oh, I should go and work on my project.) I think this technique - however dark-sided and dangerous it may be - can be used to propel myself towards even bigger goals.

comment by [deleted] · 2014-11-29T19:25:41.528Z · LW(p) · GW(p)

Dear Daniel of the OP, nice of you to wake up. If you cared a fig about a single bird (which you don't, or you'd have donated three dollars), you would have contacted the NGO raising funds for some piteous pictures, written an inflammatory blog post, and started a petition. Believe me, these actually help somewhat. They don't solve issues, but then this isn't an issue that can be solved - however many birds get rescued, there will be birds (and other creatures, but birds are flashy; they appeal to human images of beauty and freedom; they are symbols) that will invariably drown.

It's a question of damage control, and face it, Daniel, it is so for ALL of the issues that trouble you so (unless you rank the probability of humanity being wiped out by a hostile AI or aliens higher than that of it being decimated by biological agents like viruses and bacteria, which actually already exist). If you want to get a warm feeling, go to the seashore for a weekend and wash the birds yourself; it will hardly delay the resolution of world crises. But a petition is more effective. You have seen for yourself that collecting money in the streets is of little help. Why not take a better route towards the goal? Some people honestly are incapable of action after they have failed somehow, but you, Daniel, haven't even tried.

...I liked the post, but it reminded me of Marius Pontmercy being introduced to Les Amis de l'ABC. Only Marius did go to the barricade later.