Fellow effective altruists and other people who care about making things better, especially those of you who mostly care about minimising suffering: how do you stay motivated in the face of the possibility of infinities, or even just the vast numbers of morally relevant beings outside our reach?
I get that it's pretty silly to be so distressed over issues there's nothing I can do about, but I can't help feeling discouraged when I think about the vast amount of suffering that probably exists - I mean, it doesn't even have to be infinite to feel like a bottomless pit that we're pretty much hopelessly trying to fill. I have a hard time feeling genuinely happy about good news, like progress in eradicating diseases or reducing world hunger, because intuitively it all feels like such an insignificant part of all the misery that's going on elsewhere (and in other Everett branches of course, if the MWI is correct).
I know this is a bit of a noob question and something everyone probably thinks about at some point, which is why I'm hoping to hear what kind of conclusions other people have reached.
Negative: a couple decided to go poly after some years in a stable monogamous relationship. It seemed to go well for a few months, but the guy apparently told a few white lies here and there, which then got completely out of control and eventually resulted in a disaster for pretty much everyone involved.
Neutral/negative: a couple was poly for maybe half a year or so, then decided it was "too much trouble" and returned to monogamy. I don't know them well enough to provide more details, but they have stayed together for a few years since then and are now having a child, so presumably nothing terrible happened.
I know plenty of other poly people as well, but don't know as much about what's going on in their individual relationships. The general feeling I get is that while a healthy poly relationship certainly isn't impossible, they are only rarely very stable and often seem to require significantly more attention and work to succeed even when they do (which of course is not a negative to everyone, and it can be worth it anyway if the freedom and additional partners bring a lot of value). Problems arising from insufficient honesty are pretty common, even among those who would generally seem to value trust and openness, so that's probably an important thing to watch out for.
Thanks! No need for a lengthy debate, I'm just very curious about how people decide where to donate, especially when the process leads to explicitly non-EA decisions. Your reasons are in fact pretty close to what I would have guessed, so I suppose similar intuitions are quite common and might explain part of why an idea as obvious as effective altruism took so long to develop.
But yeah, a subthread about this in the OT sounds like a good idea (unless I can find lots of old discussions on the subject).
I am not completely sold on effective altruism and might also donate to the Red Cross or so.
Interesting, why is this? Do you mean effective altruism as a concept, or the EA movement as it currently is?
Thanks! Ah, I'm probably just typical-minding like there's no tomorrow, but I find it inconceivable to place much value on the number of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you "want to keep living", you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, and so on (in addition, you will probably want a low probability for this future to include significant suffering). Likewise, if you say you "want to see humanity flourish indefinitely", you want a future that includes your biological or cultural peers and offspring colonizing space and all that, remembering and cherishing many of the values you once had (sans significant suffering).

To me it seems impossible to assign value to the number of MWI-copies of you, not least because there is no way you could even conceive of their number, or usually make meaningful ethical decisions where you weigh their numbers.* Instead, what matters overwhelmingly more is the probability of any given copy living a high-quality life.
just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive
Yes, this is obvious of course. What I meant was exactly this: from the point of view of a set of observers, eliminating that set of observers from a branch <=> rendering the branch irrelevant, pretty much.
which isn't so important.
To me it did feel obvious that this is what's important, and that the branches where you don't exist simply don't matter - there's no one there to observe anything after all, or to judge the lack of you to be a loss or morally bad (again, not applicable to individual humans).
If I learned today that I have a 1% chance to develop a maybe-terminal, certainly suffering-causing cancer tomorrow, and I could press a button to just eliminate the branches where that happens, I would not think I was committing a moral atrocity. I would not feel like I was killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of everyone else - no one would ever notice anything, vast numbers of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations. But this is certainly something I should learn to understand better before anyone gives me a world-destroying cancer-cure button.
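For concreteness, here's the toy arithmetic I have in mind (made-up numbers, and assuming branch weights simply renormalize among the survivors): before the button press, my future measure splits as

P(cancer) = 0.01, P(no cancer) = 0.99.

The button sends the survival probability in the cancer branches to 0, so conditional on anyone being left to observe anything,

P(cancer | observers remain) = (0.01 * 0) / (0.01 * 0 + 0.99 * 1) = 0,

while the 0.99 measure of no-cancer branches goes on exactly as it would have anyway.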
*Which is one main difference when comparing this to regular old population ethics, I suppose.
Assuming for a moment that Everett's interpretation is correct and that there will eventually be a way to very confidently deduce this (and that time, identity and consciousness work pretty much as described by Drescher, IIRC - there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):
Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be? Committing to give up if things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.
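As a toy version of that "increase the proportion" claim (made-up fraction b, and again assuming the surviving branches just renormalize): if a fraction b of branches turn out sufficiently bad and observers in them reliably self-destruct, then among the branches that still contain observers,

P(good branch | observers remain) = ((1 - b) * 1) / ((1 - b) * 1 + b * 0) = 1,

so e.g. b = 0.4 takes the good fraction from 60% of all branches to 100% of the observed ones.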
This is obviously not applicable to e.g. humanity as it is, where self-destruction on any level is inconvenient, if at all possible, and generally not a nice thing to do. But would it theoretically make sense for intelligences like this to develop, and maybe even have an overwhelming tendency to develop in the long term? What if this is one of the vast number of branches where everyone in the observable universe pretty much failed to have a good enough time and a bright enough future, and just offed themselves before interstellar travel etc., because a sufficiently advanced civilization sees it's just not a big deal in an Everett multiverse?
(There's probably a lot that I've missed here as I have no deep knowledge regarding the MWI, and my reading history so far only touches on this kind of stuff in general, but yay stupid questions thread.)
In general, vegetarians don't care as much about e.g. species flourishing as they do about the vast amounts of suffering that farmed animals are quite likely to experience. I see nothing strange in viewing animals as morally relevant and deeming their lives a net negative, thus hoping they wouldn't have to exist.
Eating only free-range or hunted meat is a pretty good option from the suffering-reduction point of view, although of course not entirely unproblematic. It is very often brought up by non-vegetarians whenever the topic of animal suffering comes up - anecdotally, I count four people I know whom I have heard use the argument when explaining or defending their meat eating. None of them actually eats mainly free-range or hunted meat. To me, it seems the whole point is unfortunately only ever used as a motte that people retreat to in order to avoid having to feel or look bad, before going back to eating whatever as soon as they can stop thinking about it. This might not mean these people don't really care on some level: I'd guess it is cognitively more expensive to analyze and keep tabs on which meat products cause only acceptable amounts of suffering - without succumbing to rationalization, constantly breaking the habit, and eventually forgetting the project - than it is to just rule meat out of your diet and stop thinking about it.
Another reason why free-range and hunted meat are not quite equivalent to veg(etari)anism is that they don't seem to scale as easily to feed large populations within a reasonable land area and at a reasonable product price. That said, I for one would welcome a society which mostly eats plant-based food, with the very occasional expensive hunted or ethically-farmed piece of meat or cheese - which indeed seems like what a non-factory-farming omnivore society could end up looking like. (Of course, for those of us embracing a more negative form of utilitarianism, wild-animal suffering would still be a problem, but that's beyond the scope of this discussion.)