Saving the world sucks
post by Defective Altruism (Elijah Bodden) · 2024-01-10T05:55:46.504Z · LW · GW · 29 comments
I don't want to save the world. I don't want to tile the universe with hedonium. I don't want to be cuckolded by someone else's pretty network-TV values. I don't want to do anything I don't want to do, and I think that's what (bad) EAs, Mother Teresa, and proselytizing Christians all get wrong. Doing things because they sound nice and pretty and someone else says they're morally good suuucks. Who decided that warm fuzzies, QALYs, or shrimp lives saved are even good axes to optimize? Surely not everyone arrives at that conclusion independently. Optimizing such universally acceptable, bland metrics makes me feel like one of those blobby, soulless corporate automata in bad tech advertisements.
I don't see why people obsess over the idea of universal ethics and doing the prosocial thing. There's no such thing as the Universal Best Thing, and professing the high virtue of maximizing happiness smacks of an over-RLHFed chatbot. Altruism might be a "virtue", in the sense that most people's evolved and social environments cause them to value it, but it doesn't have to be. The cosmos doesn't care what values you have. Which totally frees you from the weight of "moral imperatives" and social pressures to do the right thing.
There comes a time in most conscientious, top-of-distribution kids' lives when they decide to Save the World. This is very bad. Unless they really do get a deep, intrinsic satisfaction from maximizing expected global happiness, they'll be in for a world of pain later on. After years of spinning their wheels, not getting anywhere, they'll realize that they hate the whole principle they've built their life around. That, deep down, their truest passion doesn't (and doesn't have to) involve the number of people suffering from malaria, the number of sentient shrimp being factory farmed, or how many trillion people could be happy, in a way they aren't, 1000 years from now. I claim that scope insensitivity isn't a bug. That there are no bugs when it comes to values. That you should care about exactly what you want to care about. That if you want to team up and save the world from AI or poverty or mortality, you can, but you don't have to. You have the freedom to care about whatever you want, and you shouldn't feel social guilt for not liking the same values everyone else does. Their values are just as meaningful (or meaningless) as yours. Peer pressure is an evolved strategy to elicit collaboration in goofy mesa-optimizers like humans, not an indication of some true higher virtue.
Life is complex, and I really doubt that what you should care about can be boiled down to something as simple as quality-adjusted life-years. I doubt it can be boiled down at all. You should care about whatever you care about, and that probably won't fit any neat moral template an online forum hands you. It'll probably be complex, confused, and logically inconsistent, and I don't think that's a bad thing.
Why do I care about this so much? Because I got stuck in exactly this trap at the ripe old age of 12, and it fucked me up good. I decided I’d save the world, because a lot of very smart people on a very cool site said that I should. That it would make me feel good and be good. That it mattered. The result? Years of guilt, unproductivity, and apathy. Ending up a moral zombie that didn’t know how to care and couldn’t feel emotion. Wondering why enlightenment felt like hell. If some guy promised to send you to secular heaven if you just let him fuck your wife, you’d tell him to hit the road. But people jump straight into the arms of this moral cuckoldry. Choosing and caring about your values is a very deep part of human nature and identity, and you shouldn’t let someone else do it for you.
This advice probably sounds really obvious. But it wasn’t for me, so I hope it’ll help other people too. Don’t let someone else choose what you care about. Your values probably won’t look exactly like everyone else’s and they certainly shouldn’t feel like a moral imperative. Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway. As an optimizing agent in an incredibly nebulous and dark world, the best you can do is what you think is personally good. There are lots of equally valid goals to choose from. Infinitely many, in fact. For me, it’s curiosity and understanding of the universe. It directs my life not because I think it sounds pretty or prosocial, but because it’s tasty. It feels good to learn more and uncover the truth, and I’m a hell of a lot happier and more effective doing that than puttering around pretending to care about the exact count of humans experiencing bliss. There are lots of other values too. You can optimize anything that speaks to you - relationships, cool trains and fast cars, pure hedonistic pleasure, number of happy people in the world - and you shouldn’t feel bad that it’s not what your culty clique wants from you. This kind of “antisocial” freedom is pretty unfashionable, especially in parts of the alignment/EA community, but I think a lot more people think it than say it explicitly. There’s value in giving explicit permission to confused newcomers to not get trapped in moral chains, because it’s really easy to hurt yourself doing that.
Save the world if you want to, but please don’t if you don’t want to.
29 comments
comment by Rafael Harth (sil-ver) · 2024-01-10T15:26:38.843Z · LW(p) · GW(p)
I can't really argue against this post insofar as it's the description of your mental state, but it certainly doesn't apply to me. I became way happier after trying to save the world, and I very much decided to try to save the world because of ethical considerations rather than because that's what I happened to find fun. (And all this is still true today.)
comment by Vanessa Kosoy (vanessa-kosoy) · 2024-01-11T14:47:00.865Z · LW(p) · GW(p)
I agree with the OP that:
- Utilitarianism is not a good description of most people's values, possibly not even a good description of anyone's values.
- Effective altruism encourages people to pretend that they are intrinsically utilitarian, which is not healthy or truth-seeking.
- Intrinsic values are (to 1st approximation) immutable. It's healthy to understand your own values; it's bad to shame people for having "wrong" values.
I agree with critics of the OP that:
- Cooperation is rational: we should be trying to help each other over and above the (already significant) extent to which we intrinsically care about each other, because this is in our mutual interest.
- A healthy community rewards prosocial behavior and punishes sufficiently antisocial behavior (there should also be ample room for "neutral", though).
A point insufficiently appreciated by either side: The rationalist/EA community doesn't reward prosocial behavior enough. In particular, we need much more in the way of emotional support and mental health resources for community members. I speak from personal experience here: I am very grateful to this community for support in the career/professional sense. However, on the personal/emotional level, I never felt that the community cares about what I'm going through.
↑ comment by LawrenceC (LawChan) · 2024-01-12T20:32:21.855Z · LW(p) · GW(p)
I agree with most of the points you're making here.
The rationalist/EA community doesn't reward prosocial behavior enough.
I think there's a continued debate about whether these groups should behave more like a professional circle or like a social community. (In practice, each sphere is a bit of both.) I think that from the lens of EA/rats as a social group, we don't really provide enough emotional support and mental health resources. However, insofar as EA is intended to be a professional circle trying to do hard things, it makes sense why these resources might be deprioritized.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2024-01-13T13:36:56.913Z · LW(p) · GW(p)
There is tension between the stance that "EA is just a professional circle" and the (common) thesis that EA is a moral ideal. The latter carries the connotation of "things you will be rewarded for doing" (by others sharing the ideal). Some will likely claim that, in their philosophy, there is no such connotation; but then it is on them to emphasize this, since it runs contrary to most people's intuitive perception of morality. People who take up the ideology expecting the implied community aspect might understandably feel disappointed or even betrayed when they find it lacking, which might be what happened to the OP.
As I said, cooperation is rational. There are, roughly speaking, two mechanisms for achieving cooperation: the "acausal" way and the "causal" way. The acausal way means doing something because of the abstract reasoning that, if many others do the same, it will be to everyone's benefit, and because many others follow the same reasoning. This might work even without a community, in principle.
However, the more robust mechanism is causal: tit-for-tat. This requires that other people actually reward you for doing the thing. One way to reward people is with money, which EA does to some extent; however, it also encourages members to take pay cuts and/or make donations. Another way is with the things money cannot buy: respect, friendship, emotional support, and generally conveying the sense that you're a cherished member of the community. On this front, more could be done IMO.
Even if we accept that EA is nothing more than a professional circle, it is still lacking in the respects I pointed out. In many professional circles, you work in an office with peers, which leads naturally to a network of personal connections. On the other hand, AFAICT many EAs work independently/remotely (I am certainly one of those), which denies them the same benefits.
↑ comment by Richard_Kennaway · 2024-01-13T16:29:47.567Z · LW(p) · GW(p)
There is tension between the stance that "EA is just a professional circle" and the (common) thesis that EA is a moral ideal.
It is a professional circle founded on a moral ideal. The former to be Effective, the latter to be Altruistic.
The latter carries the connotation of "things you will be rewarded for doing" (by others sharing the ideal).
It is an old saying that virtue is its own reward.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2024-01-13T17:28:51.259Z · LW(p) · GW(p)
"Virtue is its own reward" is a nice thing to believe in when you feel respected, protected and loved. When you feel tired, lonely and afraid, and nobody cares at all, it's very hard to understand why you should be making big sacrifices for the sake of virtue. But, hey, people are different. Maybe, for you virtue is truly, unconditionally, its own reward, and a sufficient one at that. And maybe EA is a community professional circle only for people who are that stoic and selfless. But, if so, please put the warning in big letters on the lid.
↑ comment by Richard_Kennaway · 2024-01-14T08:53:09.341Z · LW(p) · GW(p)
But, if so, please put the warning in big letters on the lid.
I have 700 warnings in big letters here!
The EA movement is rather like a church. (I have in mind the Catholic and Orthodox churches, not the new-fangled outfits that developed after the Reformation.) The prominent philosophers, like Peter Singer, are the prophets. The people who found and run organisations like GiveWell are the clergy. There are the rank and file members, who are the monks toiling on the work of the church. There are lay preachers, such as Scott Alexander. There are ordinary folk who do no more than tithe to the approved charities or turn vegetarian. And caught up in all that there are a few who have argued themselves into an unsustainable religious mania.
comment by AnthonyC · 2024-01-10T14:49:47.514Z · LW(p) · GW(p)
I think I agree with a lot of the object-level content of the post (and I once fell into a years-long depression due to my own inability to live what I wanted my values to be), but I would also add that a lot of initial and ongoing work needs to be done, by a lot of people who don't need or necessarily want to be doing it, in order for us to exist in a world where we (and most people) can meaningfully have the kind of freedom you're talking about. You don't have to be the one doing it, but someone does, and we'd better hope they have the kind of ideals which involve valuing other people's well-being. Which, of course, we get mainly by encouraging the best, brightest, and most-likely-to-be-influential to have those kinds of values.
See John Adams:
I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture, navigation, commerce and agriculture in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and porcelain.
And then what happens in the 4th generation? Someone in each generation needs to be at the John Adams equivalent object level, and manage to become powerful enough to keep the virtuous cycle going.
comment by sapphire (deluks917) · 2024-01-13T14:11:52.253Z · LW(p) · GW(p)
I will say I tried extremely hard to be a good EA. It basically drove me insane. The community is extremely unsupportive. I basically decided I'm retired. I put in enough years in the misery mines and donated more than enough money (most of what I did ended in failure, but I cashed out 7 figs of crypto and donated a lot). I will be a nice, friendly, generous person by the extremely low standards of my actually existing society. But otherwise I will just do what I want. I cried enough tears and sacrificed enough of the windfalls I was legally entitled to keep.
↑ comment by Chris_Leong · 2024-01-14T03:32:37.538Z · LW(p) · GW(p)
Thank you for your service!
↑ comment by sapphire (deluks917) · 2024-01-14T09:41:41.786Z · LW(p) · GW(p)
:heart:
Means a lot tbh
↑ comment by Karthik Tadepalli · 2024-01-16T17:30:13.163Z · LW(p) · GW(p)
For what it's worth, your "small and vulnerable" post is what convinced me that people can really have an unbelievable amount of kindness and compassion in them, a belief that made me much more receptive to EA. Stay out of the misery mines pls!
comment by Chris_Leong · 2024-01-11T12:50:23.691Z · LW(p) · GW(p)
In Defense of Values
I don't mean to be harsh, but if everyone in this community followed your advice, then the world would likely end. And you can call that the rational outcome if you want, but if that's the outcome, what value is rationality?
I don't like pressuring people, so in my AI Safety Movement Building, I try to only encourage people to do things if it's in line with their values and, while there is some advice I can offer here, I mostly just leave it to people to figure out their own values.
But we need people to choose to be prosocial, and for that to happen a community needs a norm that prosociality is considered better than antisociality, and that prosocial people should be esteemed more than those who are antisocial. A community that loses sight of this is bound to fail. I humbly submit that we should decide NOT to destroy our community.
I can see why you might find it frustrating, and why you might want your choices to be considered the equal of those who are making significant sacrifices to save the world, but unfortunately, according to my values (and I suspect the values of most people here) it's absolutely vital that we do not do that.
For this reason, I down-voted this post. I expect this will be controversial, so I'll explain my reasoning:
While I'm generally against downvoting a post based on values, I feel it's valid in these circumstances. If this post was a piece of analytical philosophy making an argument that given certain assumptions we can conclude a particular thing about morality, then I think the post should be evaluated by how well the conclusions follow from the premises and how plausible the premises are, even if it were arguing for something that I hate.
Whilst this post does gesture at some of these arguments, it is best characterized as more an exhortation than a philosophical argument. It's not trying to make detailed and rigorous arguments, so I can't judge the post on this basis and indeed imposing this standard would be unfair to the author.
This forces me to find another basis on which to judge the post. Given this, I think it's fine to just upvote or downvote based on whether I think we should adopt or oppose this exhortation[1] and I believe this exhortation is negative, so I'm downvoting it. Telling people that they probably really, truly want to be selfish is bad. I'm not saying that we should bully people into being pro-social, but the OP is encouraging people who would be pro-social by default not to be.
I'd encourage other people to consider the post on the same basis and upvote or downvote accordingly. I'll admit that I'm not neutral here: I'm worried that people might disagree with this post, but feel it'd be wrong for them to downvote it, and so not do so. I'm here to say that it's not wrong; but also, if you think that these are the values we should adopt, then you should feel free to upvote it instead.
One point I want to emphasize: you don't need to be able to identify a specific wrong argument in order to downvote a post. The problem with requiring that is that a post could then be mostly immune to downvoting by vaguely gesturing at some dubious assumptions instead of explicitly stating them where they might be subject to criticism. And I claim that if you pay attention, you'll notice that something in the implicit framing of this post is off, even if you can't quite put it into words.
[1] If this post was a comment with separate upvote/downvote and agree/disagree votes, then I'd probably just disagree-vote instead. However, it isn't, so I've got to work with what I've got.
↑ comment by Causal Chain (causal-chain) · 2024-01-14T01:06:19.355Z · LW(p) · GW(p)
I interpret upvotes/downvotes as:
- Do I want other people to read this post
- Do I want to encourage the author and others to write more posts like this.
And I favour this post for both of those reasons.
I agree that this post doesn't make a philosophical argument for its position, but I don't require that of every post. I value it as an observation of how the EA movement has affected this particular person, and as criticism.
A couple of strongly Anti-EA friends of mine became so due to a similar moral burnout, so it's particularly apparent to me how little emphasis is put on mental health.
↑ comment by Chris_Leong · 2024-01-14T01:54:40.421Z · LW(p) · GW(p)
I agree that this post doesn't make a philosophical argument for its position, but I don't require that of every post. I value it as an observation of how the EA movement has affected this particular person, and as criticism.
Just to make my position really clear: I never said this post needed to make a philosophical argument for its position, rather that if a post wasn't a philosophical argument we shouldn't judge it by the standards we apply to a philosophical argument.
Then I tried to figure out an alternative standard by which to judge this post.
comment by Ape in the coat · 2024-01-11T14:16:58.890Z · LW(p) · GW(p)
It is always painful to see burnt-out people who didn't have the wisdom to attempt to save the world with less effort, and who then overcorrect in the opposite direction. I see it as a failure to apply consequentialism to naive utilitarian reasoning. Yes, if you just dedicate 100% of your efforts towards saving the world, it would be better than dedicating only 50% of your efforts, all things being equal. But things are unlikely to be equal. It's better to leave yourself some slack so that you can keep saving the world after 10 years, instead of deciding that it was all a mistake and then agitating against saving the world, saying that it's only for some special people.
I think the correct balance isn't "Don't save the world, unless you want to do it for non-worldsaving reasons". I think it's more like "Save the world not more than 20% beyond what you are comfortable with." If you are in a situation where you feel constant guilt and stress because you are not yet a god, then you need to decrease the amount of effort you put towards attempts at world-saving. If you have never even tried to save the world, then taking the 10% EA pledge seems like a sound idea.
Exploring your limits is okay. Constantly pushing yourself beyond them, without any feeling of satisfaction, is neither sustainable nor productive. Saving the world mustn't suck. Just keep it at a "mildly inconvenient" level at worst.
comment by Joe Collman (Joe_Collman) · 2024-01-12T03:37:33.262Z · LW(p) · GW(p)
Some relevant thoughts are in Nate's Replacing Guilt sequence [? · GW].
I can understand the sentiments expressed here - particularly in terms of dealing with these things at the age of 12.
However, I'd draw a distinction between:
1. Making the world better according to values that aren't your own, from a sense of obligation.
2. Doing something to prevent the literal end of the world.
And I'd note that (2) does not rest on (1), on altruism (effective or otherwise), or on any particularly narrow moral view. Wanting some kind of non-paperclip-like world to continue existing isn't a niche value.
Nietzsche or Ayn Rand would be among the last people to be guilted into saving the world, but may well do it anyway. This is not because they cared deeply about shrimp! (not to my knowledge, that is)
But of course there's a lot to be said for understanding your values, and following a path you endorse on your own terms.
comment by quetzal_rainbow · 2024-01-11T17:35:42.730Z · LW(p) · GW(p)
Nobody linked that, so I probably should: https://www.lesswrong.com/posts/cujpciCqNbawBihhQ/self-integrity-and-the-drowning-child [LW · GW]
comment by mako yass (MakoYass) · 2024-01-12T03:14:06.608Z · LW(p) · GW(p)
I also have very little intrinsic desire to maximize aggregate utility, but I do seem to have an intrinsic desire to fight for the community that does. It's tricky, but this is how it has always been, everywhere. You have to organize, but the organization and the individual will want different things, because the organization is a superagent trying to pursue a coherent agenda despite being composed of multiple individuals. Some people pretend to themselves that they have the aggregate utility function, and they get promoted because they can be trusted to directly pursue the interests of the organization; but most of them are lying about their utility function in order to pursue promotion, and some of them don't even consciously recognize that about themselves, such as your former self, who no one could have helped.
comment by [deleted] · 2024-01-18T03:23:54.262Z · LW(p) · GW(p)
Real people will actually die. One can wash one's hands of this, and there's nothing I can do to stop that, but real people will actually die if we don't try to help others. It's not a game.
comment by ProgramCrafter (programcrafter) · 2024-01-11T14:40:37.137Z · LW(p) · GW(p)
the best you can do is what you think is personally good
Only insofar as you're an ideal optimizing agent with consistent values and full knowledge; otherwise, actions based on your own thoughts may end up worse than actions based on social heuristics.
That there are no bugs when it comes to values. That you should care about exactly what you want to care about. That if you want to team up and save the world from AI or poverty or mortality, you can, but you don’t have to.
Locally invalid. Values can be terminal (what you care about) or instrumental, and for most people saving the world is actually instrumental.
There’s value in giving explicit permission to confused newcomers to not get trapped in moral chains, because it’s really easy to hurt yourself doing that.
I think that's true, since memes can be harmful, but there is also value in reminding people that if more people worked to improve the world, it would be better on average, and often a simpler way is to do that yourself instead of pushing that meme+responsibility onto others.
Save the world if you want to, but please don’t if you don’t want to.
I'd continue that with "but please don't destroy the world whichever option you choose, since that will interfere with my terminal goals and I'll care about your non-existence".
comment by Karthik Tadepalli · 2024-01-16T17:27:55.929Z · LW(p) · GW(p)
I've seen a lot of EAs who are earnest. I think they are in for hurt down the line. I am not earnest in that way. I am not committed to tight philosophical justifications of my actions or values. I don't follow arguments to the end of the line. But one day I heard Will MacAskill describe the drowning child thought experiment, thought "yeah, that makes sense to me", and added that to my list of thoughts. When I realized I was on the path to an economics PhD (for my own passions), I figured it was worth looking up this EA stuff and seeing what it had to say. I figured there would be lots of useful things I could do. I think that was the right intuition. I have found myself in a good position where I only need to make minor changes to my path to increase my impact dramatically.
Saving the world only sucks when you sacrifice everything else for it. Saving the world in your free time is great fun.
comment by yanni kyriacos (yanni) · 2024-01-16T02:49:09.977Z · LW(p) · GW(p)
Probably a good time to leave this here: https://www.clearerthinking.org/tools/the-intrinsic-values-test
comment by yanni kyriacos (yanni) · 2024-01-16T02:32:34.976Z · LW(p) · GW(p)
Thank you for this post. It says to me that community builders need to be very careful in how they treat young people. For better and worse, they are more impressionable.
comment by StartAtTheEnd · 2024-01-10T16:37:30.502Z · LW(p) · GW(p)
I can see where you're coming from with the dislike of social pressure. Life isn't life without agency (stated as opinion).
I want to "save the world" to the extent that I can transform it into something that I like more than what currently exists. It's like cleaning my house, I merely dislike the look of garbage.
My egoistic desires just happen to align somewhat with that of other people, since I'm a human as well. (Only somewhat. I'm not so naive that I think suffering is inherently bad)
There are two things that I feel are important here, though:
A: Other people do what they think is right, but 99% of people are idiots, so they are actually making things worse.
B: If fighting against the natural flow of things is a waste of time, as the Daoists seem to say, then so be it. But I'm starting to see some bad disasters on the horizon, and I'm not even talking about AI risks or environmental change. I'm not sure if humanity deserves to survive, but I think it would be a shame to end things so early.
↑ comment by Jiro · 2024-01-10T22:13:17.599Z · LW(p) · GW(p)
I want to “save the world” to the extent that I can transform it into something that I like more than what currently exists.
The context seems to be saving the world from runaway AI, which can't be nontrivially described that way.
↑ comment by StartAtTheEnd · 2024-01-11T13:59:37.760Z · LW(p) · GW(p)
Correct, I should have had that in mind when I wrote my comment.
The alternative to runaway AI is an AI whose values are acceptable to us. I believe that this is a design problem. The perfect system should be something like what's described in The Fun Theory Sequence [LW · GW]. Posts like When "yang" goes wrong [LW · GW] describe two extremes, anarchy and tyranny, and I think it's fair to assume that the perfect world must be a balance between these two extremes.
But this topic is not trivial at all, I will give you that. I also don't think we agree very much about what the ideal future or AI looks like. I think some naive "Minimize suffering" optimization target is the most popular, but if you ask me that's merely due to an old misunderstanding of Buddhism.
You could also say that any AI will become "Runaway", and that we merely get to influence the direction. I'm pushing in my direction of choice, even if the force is tiny.
But I will stop here, as I have a lot of opinions on the subject, which probably don't fit the consensus very well.
comment by Mitchell_Porter · 2024-01-11T11:50:44.764Z · LW(p) · GW(p)
It's interesting to read this post as if it's SBF, writing from jail...