comment by niplav ·
2021-04-09T21:31:30.104Z · LW(p) · GW(p)
The child-in-a-pond thought experiment is weird, because people use it in ways it clearly doesn't work for (especially in arguing for effective altruism).
For example, it observes that you would act altruistically in a nearby situation with the drowning child, and then assumes that you ought to care about people far away as much as about people near you. People usually don't argue against this second step, but they very much could. The thought experiment makes no justification for that extension of the circle of moral concern; it just assumes it.
Similarly, it says nothing about how effectively you ought to use your resources, only that you probably ought to be more altruistic in a stranger-encompassing way.
But not only does this thought experiment not argue for the things people usually use it for, it's also not good for arguing that you ought to be more altruistic!
Underlying it is a theme that plays a role in many thought experiments in ethics: they appeal to game-theoretic intuition for useful social strategies, but say nothing of what these strategies are useful for.
Here, if people catch you standing idly by while a child drowns in a pond, you're probably going to be excluded from many communities, or even punished. And this schema occurs very often! Unwilling organ donors, trolley problems, and violinists.
Bottom line: Don't use the drowning child argument to argue for effective altruism.
Replies from: habryka4
↑ comment by habryka (habryka4) ·
2021-04-09T23:29:23.335Z · LW(p) · GW(p)
I don't know, I think it's a pretty decent argument. I agree it sometimes gets overused, but I do think its assumptions — "you care about people far away as much as people close by", "there are lots of people far away you can help much more than people close by", and "here is a situation where you would help someone close by, so you might also want to help the people far away in the same way" — form a totally valid logical chain of inference that seems useful to have in discussions on ethics.
Like, you don't need to take it to an extreme, but it seems locally valid and totally fine to use, even if not all the assumptions that make it locally valid are always fully explicated.
Replies from: Dagon, niplav
↑ comment by Dagon ·
2021-04-10T02:52:41.424Z · LW(p) · GW(p)
On self-reflection, I just plain don't care about people far away as much as those near to me. Parts of me think I should, but other parts aren't swayed. The fact that a lot of the motivating stories for EA don't address this at all is one of the reasons I don't listen very closely to EA advice.
I am (somewhat) an altruist. And I strive to be effective at everything I undertake. But I'm not an EA, and I don't really understand those who are.
Replies from: habryka4
↑ comment by habryka (habryka4) ·
2021-04-10T04:33:15.700Z · LW(p) · GW(p)
Yep, that's fine. I am not a moral prescriptivist who tells you what you have to care about.
I do think that you are probably going to change your mind on this at some point in the next millennium, if we ever get to live that long, and I do have a bunch of arguments that feel relevant, but I don't think it's completely implausible that you really don't care.
I do think that not caring about people who are far away is pretty common, and building EA on that assumption seems fine. Not all clubs and institutions need to be justifiable to everyone.
↑ comment by niplav ·
2021-04-10T09:54:15.567Z · LW(p) · GW(p)
Right, my gripe with the argument is that these first two assumptions are almost always left unstated, and most of the time when people use the argument, they "trick" people into agreeing with assumption one.
(for the record, I think the first premise is true)