Would You Work Harder In The Least Convenient Possible World?
post by Firinn · 2023-09-22T05:17:05.148Z · LW · GW · 98 comments
Part one of what will hopefully become the aspirant sequence.
Content note: Possibly a difficult read for some people. You are encouraged to just stop reading the post if you are the kind of person who isn’t going to find it useful. Somewhat intended to be read alongside various more-reassuring posts, some of which it links to, as a counterpoint in dialogue with them. Pushes in a direction along a spectrum, and whether this is good for you will depend on where you currently are on that spectrum. Many thanks to Keller [LW · GW] and Ozy [LW · GW] for insightful and helpful feedback; all remaining errors are my own.
Alice is a rationalist and Effective Altruist who is extremely motivated to work hard and devote her life to positive impact. She switched away from her dream game-dev career to do higher-impact work instead, she spends her weekends volunteering (editing papers), she only eats the most ethical foods, she never tells lies and she gives 50% of her income away. She even works on AI because she abstractly believes it’s the most important cause, even though it doesn’t really emotionally connect with her the way that global health does. (Or maybe she works on animal rights for principled reasons even though she emotionally dislikes animals, or she works on global health even though she finds AI more fascinating; you can pick whichever version feels more challenging to you.)
Bob is interested in Effective Altruism, but Alice honestly makes him a little nervous. He feels he has some sort of moral obligation to make the world better, but he likes to hope that he’s fulfilled that obligation by giving 10% of his income as a well-paid software dev, because he doesn’t really want to have to give up his Netflix-watching weekends. Thinking about AI makes him feel scared and overwhelmed, so he mostly donates to AMF even though he’s vaguely aware that AI might be more important. (Or maybe he donates to AI because he finds it fascinating, even though he rationally thinks global health might have more positive impact or more evidence behind it - or he gives to animal rights because animals are cute. Up to you.)
Alice: You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
Bob: Wow, Alice. It’s none of your business what I do with my own money; that’s rude.
Alice: I think the negative impact of my rudeness is probably smaller than the potential positive impact of getting you to act in line with the values you claim to have.
Bob: That doesn’t even seem true. If everyone is rude like you, then the Effective Altruism movement will get a bad reputation, and fewer people will be willing to join. What if I get so upset by your rudeness that I decide not to donate at all?
Alice: That kind of seems like a you problem, not a me problem.
Bob: You’re the one who is being rude.
Alice: I mean, you claim to actually seriously agree with the whole Drowning Child thing. If you would avoid doing any good at all, purely because someone was rude to you, then I think you were probably lying about being convinced of Effective Altruism in the first place, and if you’re lying then it’s my business. [LW · GW]
Bob: I’m not lying; I’m just arguing why you shouldn’t say those things in the abstract, to arbitrary people, who could respond badly. Sure, maybe they shouldn’t respond badly, but you can’t force everyone to be rational.
Alice: But I’m not going out and saying this to some abstract arbitrary person. Why shouldn’t you, personally, work harder and donate more?
Bob: I’m protecting my mental health by ensuring that I only commit an amount of money and time which is sustainable for me.
Alice: So you believe that good will actually be maximised by donating exactly the amount of money that will give you warm fuzzies, and no more, and volunteering exactly the amount of time that makes you happy, and no more?
Bob: Absolutely. If I tried to donate more time or money, I’d burn out. Then I’d do even less good. Under this view, I’m actually obligated not to donate any more than I do! [EA · GW]
Alice: You’re morally obligated to take the actions that happen to make you maximally happy? Wow, that seems like a really convenient coincidence [? · GW] for you, and that seems like a great reason to really challenge that belief [LW · GW]. Isn’t it possible that you could be slightly inconvenienced without significantly increasing your risk of burning out, or that you could do a significant amount more good while only increasing your burn-out risk by an acceptably small amount?
Bob: Who says I’m maximally happy? I’d probably be happier if I gave 0% to charity and bought a faster car, but I’m giving 10%! Nobody is perfect, and 10% is good enough [LW · GW]. Surely you should go and criticise some of the people who are giving 0%?
Alice: I criticise them plenty, and that doesn’t mean that I can’t also criticise you; that seems like a deflection. Nobody’s perfect, but some people are coming closer than others. [? · GW] I can’t really define whether you’re maximally happy, but I assume you would feel some guilt about donating 0%, or you’d miss out on some warm fuzzies, or you’d miss out on the various social benefits of being part of the community.
Bob: No, I donate 10% because I want to help others and I genuinely care about positive impact, and ethical obligations, and utilitarian considerations. I just set a lower standard. [LW · GW]
Alice: Regardless, I don’t think any of this really addresses my criticism. Donating 10% is perfectly consistent with being a total egoist who just happens to enjoy the warm fuzzies of donating some money to charity. But humans aren’t reflectively consistent, and I think if you were an actual utilitarian, you would probably believe that the ethical amount to give is higher than the amount you inherently personally want to give.
Bob: Sure, if there was a button which magically made me more ethical, and caused me to want to donate 30%, then I’d probably press it because I believe that’s the right thing to do. But the magical button doesn’t exist. I currently want to donate 10%, and I can’t make myself want to donate 30% any more than I can change my natural talents.
Alice: So your claim is that it’s okay to be lazy, or selfish, or hypocritical, because you can’t make yourself be any less of those things?
Bob: No, you’re just being rude again. I’m not lazy about doing my fair share of the dishes. I just think that, when it comes to allocating resources to altruism, you’ll burn out if you push yourself to do more good than you’re naturally inclined to.
Alice: I think if this were your true objection - your crux - then you would have probably put a lot of work in to understand burnout [LW · GW]. Some of the hardest-working people have done that work - and never burned out. Instead, you seem to treat it like a magical worst possible outcome, which provides a universal excuse [? · GW] to never do anything that you don’t want to do. How good a model do you have of what causes burnout? (I notice that many people think vacations treat burnout, which is probably a sign they haven’t looked at the research [EA · GW].) Surely there’s not a black-and-white system where working slightly too hard will instantly disable you forever; maybe there’s a third option [LW · GW] where you do more but you also take some anti-burnout precaution. If I really believed I couldn’t do more without risking burnout, and that was the most important factor preventing me from fulfilling my deeply held ethical beliefs, I think I would have a complex model of what sorts of risk factors create what sort of probability of burnout, and whether there are different kinds of burnout or different severity levels, and what I could do to guard against it.
Bob: Well, maybe that’s true. I definitely don’t want to work any harder than I currently do, so I guess I’d be motivated to believe that I’ll burn out if I do, and that could bias my thinking. But it’s still dangerous and rude to go around spouting this kind of rhetoric, because some people might have a lot of scrupulosity, and they could be really harmed by being told they’re bad people unless they work harder.
Alice: Seems like a fake justification [? · GW]. I’m sure some people should reverse any advice they hear, but I’m currently talking to you and I don’t think you have scrupulosity issues.
Bob: Even assuming I don’t have scrupulosity issues, if I overworked myself, I’d be setting a bad example to people who do have scrupulosity issues. I’d be contributing to bad social norms.
Alice: Weird, you don’t seem to think that I’m contributing to bad social norms by existing. Actually I think I’m a good role model for everyone else.
Bob: You’re really arrogant.
Alice: This conversation isn’t about my flaws, and also, I don’t think humility is always a virtue. [? · GW] For instance, you’re humble about how much you can realistically achieve, but since you haven’t really tested the question, I think it’s a vice [? · GW]. I actually think my mental health is pretty good, and the work that I do contributes to my positive mental health; I have a sense of purpose, a sense of camaraderie with other people in the community, I don’t really deal with any guilt because I genuinely think I’m doing the most I can do, and I like it when people look up to me.
Bob: Okay, but I can’t become you. I can only act in accordance with whatever values I really have. [LW · GW] I wouldn’t feel really good all the time if I worked hard like you. I’d just be miserable and burn out. I can’t change fundamental facts about my motivational system.
Alice: What if we lived in the least convenient possible world [LW · GW]? What if the techniques I use to avoid burnout - like meditating, surrounding myself with people who work similarly hard so that my brain feels it’s normal, eating a really healthy diet, coworking or getting support on tasks that I find aversive, practising lots of instrumental rationality techniques, frequently reminding myself that I’m living consistently with my values, avoiding guilt-based motivation, exercising regularly, seeing a therapist proactively to work on my emotional resilience, and all that - would actually completely work for you, and you’d be able to work super hard without burning out at all, and you’d be perfectly capable of changing yourself if you tried?
Bob: Just because they’d work for me, doesn’t mean they’d work for others. This is a potentially harmful sort of thing to talk about, because some fraction of people will hear this advice and overwork themselves and end up with mental health crises, and some people will think you’re a jerk and leave the movement, and some people will be unable to change themselves and will feel really guilty.
Alice: How sure are you that this isn’t also true about the opposite advice? Maybe some people work on a forks model rather than a spoons model, so they actually need to do tasks in order to improve their mental health, but they hear advice telling them to take breaks to avoid burnout - so they sit around being miserable, gaming and scrolling social media, wondering when resting is going to start improving their burnout problems, not realising that they aren’t burned out at all and they’d actually feel better if they worked harder and did rejuvenating tasks [LW · GW] and got into a success spiral [LW · GW]. Maybe some people are put off from the movement because they don’t think we’re hardcore enough, so they go off to do totally ineffective things like being a monk and taking a vow of silence because that feels more hardcore or real. Maybe the belief that you can’t change fundamental facts about yourself is harmful to some people with mental illnesses who feel like they’ll never be able to become happy or productive. In the least convenient possible world, where the advice to rest more is equally harmful to the advice to work harder, and most people should totally view themselves as less fundamentally unchangeable, and the movement would have better PR if we were sterner… would you work harder then?
Bob: I just kind of don’t really want to work harder.
Alice: I think we’ve arrived at the core of the problem, yes.
Bob: I don’t know what the point of this conversation was. You haven’t persuaded me to do anything differently, I don’t think you can persuade me to do anything that I don’t want to do, and you’ve kind of just made me feel bad.
Alice: Maybe I’d like you to stop claiming to be a utilitarian, when you’re totally not - you’re just an egoist who happens to have certain tuistic preferences. I might respect you more if you had the integrity to be honest about it. [LW · GW] Maybe I think you’re wrong, and there’s some way to persuade you to be better, and I just haven’t found it yet. (Growth mindset!) Maybe I want an epistemic community that helps me with my reasoning, and calls me out when I’m engaging in bias or motivated stopping, which means I want the kinds of things I’m saying here to be normal and okay to say - otherwise people won’t say them to me. Maybe I just notice that when people make type-1 errors in the working-too-hard-and-burning-out direction they usually get the reassurance they need from the community, and when people make errors in the type-2 not-working-hard-enough direction they don’t really get the callouts they need because it’s considered rude, and I’m just pushing in the direction of editing that social norm. Maybe I’d like you to be honest about this because I’d like to surround myself with a community of people who share my values, so I’d like to be able to filter out people like you - no offence, we can still be friends, it’s just that I feel like I’d find it easier to be motivated and consistent if my brain wasn’t constantly looking at you and reminding me that I totally could have a cushy life like yours if I just stopped living my values.
Bob: Wait, are you claiming that I’m harming you, just by existing in your vague vicinity and not doing the maximum amount of good?
Alice: No, not really, maybe I’m just claiming that we have competing access needs. I mean, I don’t really know what the correct solution is. Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai [? · GW] - there’s a reason this is on LessWrong and not the EA forum. Maybe I’m in the minority and my needs aren’t realistically going to be met, in which case I will shrug and carry on trying to do the best that I can. Or maybe thinking about the potential positive impact on me is just the push you need to be better yourself. Maybe I don’t think you’re harming me, exactly, I just think you’re being rude [LW · GW] - and maybe that makes it okay for me to be a little rude, too.
Bob: I want to tap out of this conversation now.
Comments sorted by top scores.
comment by Dagon · 2023-09-23T16:23:26.098Z · LW(p) · GW(p)
[note: I don't consider myself a Utilitarian and sometimes apply No True Scotsman to argue that no human can be, but that's mostly trolling and not my intent here. I'm not an EA in any but the most big-tent form (I try to be effective in things I do, and I am somewhat altruistic in many of my preferences).]
I think Alice is confused about how status and group participation work. Which is fine, we all are - it's insanely complicated. But she's not even aware how confused she is, and she's committing a huge typical mind fallacy in telling Bob that he can't use her preferred label "Utilitarian".
I think she's also VERY confused about sizes and structures of organization. Neither "the Effective Altruist movement" nor the "rationalist community" is a coherent structure in the sense she's talking about. Different sites, group homes, companies, and other specific groups CAN make decisions on who is invited and what behaviors are encouraged or discouraged. If she'd said "Bob, I won't hire you for my lab working on X because you don't seem to be serious about Y", there would be ZERO controversy. That is useful and clear communication. When she says "I don't think you should call yourself Utilitarian", she's just showing herself as insecure and controlling.
Honestly, the most effective people (note: distinct from "hardest working") in sane organizations do have the most respect and influence. But that's not a binary, and it's not what everyone is capable of or seeks. MOST actual humans are members of multiple groups, and have many terms in their imputed utility function. How much of one's effort to give to any given part of life falls on a pretty wide continuum.
I did a lot of interviewing and interview training for a former large employer, and an important rule (handed down through an oral tradition because it can't really be written down and made legible) was "don't hire jerks". I'd rather work with Bob than Alice, and I'm sad that Alice probably won't understand why.
↑ comment by Firinn · 2023-09-24T06:28:32.029Z · LW(p) · GW(p)
Hmm, does your response change if they're housemates or something like that?
I agree there'd be no controversy about Alice deciding not to hire Bob because he doesn't meet her standards, and I think there'd be little controversy over some org deciding to hire Bob over Alice because he's more likeable. But, if it makes the post work better for you, you can totally pretend that instead of talking about membership in "the rationalist community", they're talking about "membership in the Greater Springfield Rationalist Book Club that meets on Tuesdays in Alice and Bob's group house". I think Alice kicking Bob out of that would be much more contentious and controversial!
↑ comment by Dagon · 2023-09-24T15:35:13.450Z · LW(p) · GW(p)
Part of my response is "this is very context-dependent", and that is overwhelmingly true for a group house or book club. Alice can, of course, leave either one if she feels Bob is ruining her experience. She may or may not convince others to kick Bob out if he doesn't shape up, depending on the style of group and charter for formal ownership of the house.
She'd be far better off, in either case, being specific about what she wants Bob to do differently, rather than just saying "work harder".
comment by Ape in the coat · 2023-09-25T07:06:50.922Z · LW(p) · GW(p)
I think the fact that the world where:
I can work extremely hard, doing things I don't particularly like, without burnout, eat only healthy food without binge eating spirals, honestly enjoy exercising, have only meaningful rest without exhausting my willpower and generally be fully intellectually and emotionally consistent, completely subjugating my urges to my values...
is called the least convenient possible world says something interesting about this whole discourse.
Honestly, the world where I'm already a god sounds extremely convenient. And pretending that we are there, demanding that we have to be there, claiming that we could've been there already if only I just tried harder doesn't sound helpful at all. Yes, it's important to try to get there. One step at a time. Check occasionally whether it's possible to go faster, while being nice and careful towards yourself. But as soon as you find yourself actually having a voice in your head being mean to you because you are not as good as you wish to be, it seems that you've failed the nice and careful part.
↑ comment by Firinn · 2023-09-26T00:39:01.465Z · LW(p) · GW(p)
If I could work extremely hard doing things I don't like, without any burnout, eat only healthy food without binge eating spirals, honestly enjoy exercising, have only meaningful rest without exhausting my willpower and generally be fully intellectually and emotionally consistent, completely subjugating my urges to my values... but ONLY by being really mean and cruel and careless to myself...
Man, that would suck! That would be a really inconvenient world! That would be a world where I'm forced to choose either "I don't want to be mean to myself, even if I could save lots of people's lives by doing that, so I'm just going to deliberately leave all those people to die" or "I'm going to be mean to myself because I think it's ethically obligatory", and I really don't want to make that choice!
I much prefer a world where a choice like "I'm going to be nice and careful to myself because actually that's the best way to be more productive, and being mean isn't sustainable" is an option on the table. Way more convenient. I really hope it's the one we live in.
↑ comment by Ape in the coat · 2023-09-26T07:50:46.039Z · LW(p) · GW(p)
I mean, if you have successfully subjugated your urges to your values, then you actually enjoy your new lifestyle, and thus you are not mean to yourself anymore and it's very convenient...
But, yeah, we can spin the inconvenience framework however we (don't) like. That's because reality doesn't actually run on inconvenience, and this kind of speculation is rarely helpful. Saying that we believe X because it's convenient is easy, because one can always find a framework according to which believing X is convenient, and always demand attempts to find new clever solutions around all the objective reasons why X seems to be true. Let's go one step higher:
Carol: Hey, Alice, I've noticed that you spend a couple of hours a day meditating instead of taking on extra work and thus earning more money and donating it to charity. Don't you think that you are being hypocritical and not consistent with your values?
Alice: Actually, meditating is what helps me keep up my lifestyle at all. I do it specifically in order to be more productive.
Carol: Oh, how very convenient that the only way for you to be somewhat productive is to spend a couple of hours a day doing nothing and not, say, self-flagellation. Have you actually tried to find a clever solution around this problem, or did you just stop as soon as you figured out a nice way, instead of an actually efficient one?
The thing is, perceiving Alice (or Carol) as speaking the hard truths and Bob as a lazy motivated reasoner is wrong. Both of them are motivated reasoners! Both of them are rationalizing for their own convenience and both of them capture something true about reality. And both of them are probably voices in your head. Sometimes you need to side more with Alice and sometimes with Bob. Finding the right balance is the difficult thing. But if you always find yourself in the position of Bob, defending yourself against Alice - then something seems to be not working as it should.
↑ comment by Firinn · 2023-09-26T20:07:03.392Z · LW(p) · GW(p)
Well, yes. The correct response to noticing "it's really convenient to believe X, so I might be biased towards X" isn't to immediately believe not-X. It's to be extra careful to use evidence and good reasoning to figure out whether you believe X or not-X.
comment by Richard_Ngo (ricraz) · 2023-09-27T19:32:31.538Z · LW(p) · GW(p)
I'm noticing it's hard to engage with this post because... well, if I observed this in a real conversation, my main hypothesis would be that Alice has a bunch of internal conflict and guilt that she's taking out on Bob, and the conversation is not really about Bob at all. (In particular, the line "That kind of seems like a you problem, not a me problem" seems like a strong indicator of this.)
So maybe I'll just register that both Alice and Bob seem confused in a bunch of ways, and if the point of the post is "here are two different ways you can be confused" then I guess that makes sense, but if the point of the post is "okay, so why is Alice wrong?" then... well, Alice herself doesn't even seem to really know what her position is, since it's constantly shifting throughout the post, so it's hard to answer that (although Holden's "maximization is perilous [EA · GW]" post is a good start).
Relatedly: I don't think it's an accident that the first request Alice makes of Bob (donate that money rather than getting takeout tonight) is far more optimized for signalling ingroup status than for actually doing good.
comment by Hastings (hastings-greer) · 2023-09-23T22:47:05.615Z · LW(p) · GW(p)
Alice: Our utility functions differ.
Bob: I also observe this.
Alice: I want you to change to match me: conditional on your utility function being the same as mine, my expected utility would be larger.
Bob: Yes, that follows from me being a utility maximizer.
Bob: I won't change my utility function: conditional on my utility function becoming the same as yours, my expected utility as measured by my current utility function would be lower.
Alice: Yes, that follows from you being a utility maximizer.
↑ comment by Firinn · 2023-09-23T22:51:31.811Z · LW(p) · GW(p)
If Bob isn't reflectively consistent, their utility functions could currently be the same in some sense, right? (They might agree on what Bob's utility function should be - Bob would happily press a button that makes him want to donate 30%, he just doesn't currently want to do that and doesn't think he has access to such a button.)
↑ comment by Hastings (hastings-greer) · 2023-09-23T22:56:37.006Z · LW(p) · GW(p)
Certainly! Most likely, neither of them is reflectively consistent: "I feel like I’d find it easier to be motivated and consistent if my brain wasn’t constantly looking at you and reminding me that I totally could have a cushy life like yours if I just stopped living my values." hints at this.
comment by Unnamed · 2023-09-22T22:02:51.024Z · LW(p) · GW(p)
Multiple related problems with Alice's behavior (if we treat this as a real conversation):
- Interfering with Bob's boundaries/autonomy, not respecting the basic background framework where he gets to choose what he does with his life/money/etc.
- Jumping to conclusions about Bob, e.g., insisting that the good he's been doing is just for "warm fuzzies", or that Bob is lying
- Repeatedly shifting her motive for being in the conversation / her claim about the purpose of the conversation (e.g., from trying to help Bob act on his values, to "if you’re lying then it’s my business", to what sorts of people should be accepted in the rationalist community)
- Cutting off conversational threads once Bob starts engaging with them to jump to new threads, in ways that are disorienting and let her stay on the attack, and don't leave Bob space to engage with the things that have already come up
These aren't merely impolite, they're bad things to do, especially when combined and repeated in rapid succession. It seems like an assault on Bob's ability to orient & think for himself about himself.
↑ comment by Firinn · 2023-09-23T01:14:35.388Z · LW(p) · GW(p)
I don't think anyone would dispute that Alice is being extremely rude! Indeed she is deliberately written that way (though I think people aren't reading it quite the way I wrote it because I intended them to be housemates or close friends, so Alice would legitimately know some amount about Bob's goals and values.)
I think a real conversation involving a real Bob would definitely involve lots more thoughtful pauses that gave him time to think. Luckily it's not a real conversation, just a blog post trying to stay within a reasonable word limit. :(
Alice is not my voice; this is supposed to inspire questions, not convince people of a point. For instance: is there a way to achieve what Alice wants to achieve, while being polite and not an asshole? Do you think the needs she expresses can be met without hurting Bob?
comment by Vanessa Kosoy (vanessa-kosoy) · 2023-09-24T09:38:16.932Z · LW(p) · GW(p)
Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai [? · GW]...
This is a disturbing claim, although I realize that the author's opinions don't coincide with those of the "Alice" character. Personally, I'm not a utilitarian, nor do I want to be a utilitarian or think that I "should" be a utilitarian[1]. I do consider myself a person who is empathetic, honest and cooperative[2]. I hope this doesn't disqualify me from the rationalist community?
In general, I'm in favor of promoting societal norms which incentivize making the world better: such norms are obviously in everyone's interest. In this sense, I'm very sympathetic to effective altruism. However, these norms should still regard altruism as supererogatory: i.e., it should be rewarded and encouraged, but its lack should not be severely punished. The alternative is much too vulnerable to abuse.
[1] IMO utilitarianism is not even logically coherent, due to paradoxes with infinite ethics and Pascal's mugging.
[2] In the sense of trying to act according to superrationality [? · GW].
↑ comment by Ben Amitay (unicode-70) · 2023-10-01T18:49:10.123Z · LW(p) · GW(p)
I seem to be the only one who read the post that way, so probably I read my own opinions into it, but my main takeaway was pretty much that people with your (and my) values are often shamed into pretending to have other values and inventing excuses for how their values are consistent with their actions, while it would be more honest and productive if we took a more pragmatic approach to cooperating around our altruistic goals.
comment by reconstellate · 2023-09-23T16:05:05.534Z · LW(p) · GW(p)
This is a very good post and nearly all the replies here are illustrating the exact issue that Bob has, which is an inability to engage in the dialectic between these two perspectives without indignation as a defense against guilt.
Most people, including myself, are more Bob than Alice, but I've had a much easier time integrating my inner Alice and engaging with Alices I meet because I rarely, if ever, feel guilt about anything. Strong guilt increases the anticipated costs of positive self-change, and makes people strengthen defense mechanisms that boil down to "I don't owe anyone anything!" to avoid confronting that cost. Ironically this creates people who think they're not predisposed towards guilt, but absolutely are.
Don't get me wrong, Bobs often have pretty understandable reasons to be the way they are. A lot of Bobs got out of religious groups that were really aggressive with the guilting. But understandable reasons to be in an undesirable state does not increase the desirability of that state!
Having met a number of Alices, I think they need to invest more thought in the consequences of the manner in which they try to get other people to improve. I understand their frustration, but the aggressiveness is really counterproductive and just makes Bobs even worse. Bobs unfortunately need to be treated with kid gloves to get them to improve without feeling in danger of self-guilt-torture.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-23T16:15:39.067Z · LW(p) · GW(p)
It seems like you see only two possibilities: either (a) agreeing with Alice, or (b) secretly agreeing with Alice and feeling guilty about it. Do you not see any possibility of disagreeing with Alice? Thinking that she’s just wrong?
Strong guilt increases the anticipated costs of positive self-change, and makes people strengthen defense mechanisms that boil down to “I don’t owe anyone anything!” to avoid confronting that cost.
Do you see no possibility of someone thinking that the change in question is actually negative, not positive? Sincerely believing that one doesn’t owe anyone anything (or, at least, that one doesn’t owe the sorts of things that the Alices of the world claim that we owe), without guilt?
↑ comment by Firinn · 2023-09-23T16:28:27.461Z · LW(p) · GW(p)
I think if someone wasn't indignant about Alice's ideas, but did just disagree with Alice and think she was wrong, we might see lots of comments that look something like: "Hmm, I think there's actually an 80% probability that I can't be any more ethical than I currently am, even if I did try to self-improve or self-modify. I ran a test where I tried contributing 5% more of my time while simultaneously starting therapy and increasing the amount of social support that I felt okay asking for, and in my journal I noted an increase in my sleep needs, which I thought was probably a symptom of burnout. When I tried contributing 10% more, the problem got a lot worse. So it's possible that there's some unknown intervention that would let me do this (that's about ~15% of my 20% uncertainty), but since the ones I've tried haven't worked, I've decided to limit my excess contributions to no more than 5% above my comfortable level."
I think these are good habits for rationalists: using evidence, building models, remembering that 0 and 1 aren't probabilities, testing our beliefs against the territory, etc.
Obviously I can't force you to do any of that. But I'd like to have a better model about this, so if I saw comments that offered me useful evidence that I could update on, then I'd be excited about the possibility of changing my mind and improving my world-model.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-23T16:37:22.285Z · LW(p) · GW(p)
I think if someone wasn’t indignant about Alice’s ideas, but did just disagree with Alice and think she was wrong, we might see lots of comments that look something like: …
The disagreement isn’t with Alice’s ideas, it’s with Alice’s claims to have any right to impose her judgment on people who aren’t interested in hearing it. What you describe here is instead an acceptance of Alice’s premises. I’m pointing out that it’s possible to disagree with those premises entirely.
I agree that “using evidence, building models, remembering that 0 and 1 aren’t probabilities, testing our beliefs against the territory, etc.” are good habits. But they’re habits that it’s good to deploy of your own volition. If someone is trying to pressure you into doing these things—especially someone who, like Alice, quite transparently does not have your best interests in mind, and is acting in the service of ulterior motives, and who (again, like Alice) is deceptively clothing these motives in a guise of trying to help you conform to your own stated values—then the first thing you should do is tell them to fuck off (employing as much or as little tact in this as you deem fit), and only then should you consider whether and what techniques of epistemic rationality to apply to the situation.
It is a foolish, limited, and ultimately doomed sort of rationality, that ignores interpersonal conflicts when figuring out what the world is like, and what to do about it.
↑ comment by Jiro · 2023-09-27T16:50:06.798Z · LW(p) · GW(p)
The point of the post is to be about ideas. Alice is only there as a framework for presenting the post's ideas.
If Alice is expressing the ideas rudely, that's just a deficiency in how the post presents them. Saying "I'd ignore Alice because she's rude" is missing the point; it's as if the post had Alice be an angel and you replied "I'd ignore Alice because angels don't exist".
The proper reaction is "the post is flawed in that it attributes the ideas to a rude character, but in order to engage with the thesis of the post I should ignore this flaw and address the ideas anyway".
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-27T17:22:03.929Z · LW(p) · GW(p)
In this case, the ideas seem to be linked quite closely with the behavior of the ‘Alice’ character, so attempting to reply to a hypothetical alternate version of the post where the ideas are (somehow) the same but Alice is very polite… seems strange and unproductive. (For one thing, if Alice were polite, the whole conversation wouldn’t happen.)
↑ comment by Jiro · 2023-09-27T18:43:59.550Z · LW(p) · GW(p)
I think you lack imagination if you think that Alice can't express those ideas without being rude. For instance, "Alice" and "Bob" could be a metaphor for conflicting impulses and motives inside your own head. Trying to decide between Alice-type ideas and Bob-type ideas doesn't mean that you're being rude to yourself.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-27T18:48:57.519Z · LW(p) · GW(p)
I think you lack imagination if you think that Alice can’t express those ideas without being rude.
Yep, could be. Show me a rewritten version of this dialogue which supports your suggestion, and we’ll talk. I think it would be different in instructive ways (not just incidental ones).
Thinking to yourself about the ideas expressed in the post by Alice would not mean you are being rude to yourself.
Well, perhaps “being rude to yourself” is an odd way of putting it, but something like this is precisely why I wouldn’t think these things to myself. I have no particular interest in conjuring a mental Insanity Wolf!
↑ comment by Elizabeth (pktechgirl) · 2023-09-23T18:08:45.444Z · LW(p) · GW(p)
I believe that people who agreed with Alice and had worked to increase their capacity would be more indignant, and that's reason enough to never use this approach even if the goal is good. People hate having their work dismissed.
↑ comment by Firinn · 2023-09-23T22:48:32.683Z · LW(p) · GW(p)
Huh, interesting! I definitely count myself as agreeing with Alice in some regards - like, I think I should work harder than I currently do, and I think it's bad that I don't, and I've definitely done some amount to increase my capacity, and I'm really interested in finding more ways to increase my capacity. But I don't feel super indignant about being told that I should donate more or work harder - though I might feel pretty indignant if Alice is being mean about it! I'd describe my emotions as being closer to anxiety, and a very urgent sense of curiosity, and a desire for help and support.
(Planned posts later in the sequence cover things like what I want Alice to do differently, so I won't write up the whole thing in a comment.)
↑ comment by Elizabeth (pktechgirl) · 2023-09-23T23:17:06.101Z · LW(p) · GW(p)
I can picture ways people could bring up capacity-improvement-for-the-greater-good that I'd be really excited about. It's something I care about and most people aren't interested in. It's the way Alice (in this story, and by default in the real world) brings it up that I think is counterproductive.
comment by MSRayne · 2023-09-23T12:32:27.881Z · LW(p) · GW(p)
If I were Bob I'd have told her to fuck off long ago and stopped letting some random person berate me for being lazy just like my parents always have. This is basically guilt-tripping, not a beneficial way of approaching any kind of motivation, and it is absolutely guaranteed to produce pushback. But then, I'm probably not your target audience, am I?
Btw just to be clear, I think Said Achmiz explained my reaction better than I, who habitually post short reddit-tier responses, can. My specific issue is that Alice seems to be acting as if it's any of her business what Bob does. It is not. Absolutely nobody likes being told they're not being ethical enough. It's why everyone hates vegans. As someone who doesn't like experiencing such judgmental demands, I would have the kneejerk emotional reaction to want to become less of an EA just to spite her. (I would not of course act on this reaction, but I would start finding EA things to be in an ugh field because they remind me of the distress caused by this interaction.)
comment by Lucie Philippon (lucie-philippon) · 2024-12-08T20:58:03.575Z · LW(p) · GW(p)
(I only discovered this post in 2024, so I'm less sure it will stand the test of time for me)
This post is up there with The God of Humanity, and the God of the Robot Utilitarians [LW · GW] as the posts that contributed the most to making me confront the conflict between wanting to live a good life and wanting to make the future go well.
I read this post while struggling half burnt out on a policy job, having lost touch with the fire that drove me to AI safety in the first place, and this imaginary dialogue brought back the fire I had initially found while reading HPMOR. I knew then that I had no choice but to move forward and continue fighting as hard as I could. Realizing that probably contributed ~25% of my productivity over the past two months.
I support the content note at the start. My fear-based motivation has interacted badly with this urge to make the future go well, and led me into a cycle of burnout and demotivation. I wish there were a post that would help me make sense of how to stop shooting myself in the foot when I care so much.
I'd love a follow-up dialogue where instead of replying "I just kind of don’t really want to work harder.", Bob instead replied:
Bob: Part of my soul does want to follow your call, to work hard. I tried to do so in the past and badly burnt out. I'm afraid that if I take that as a goal again, I'll predictably end up burnt out and end up doing less than right now, so I've been protecting myself by not trying too hard. I now know that I won't ever be satisfied just doing my 10%, but I don't know how to proceed. What would you do in my place?
comment by Said Achmiz (SaidAchmiz) · 2023-09-22T19:05:15.735Z · LW(p) · GW(p)
Alice: I think the negative impact of my rudeness is probably smaller than the potential positive impact of getting you to act in line with the values you claim to have.
It seems to me that Bob has a moral obligation to respond in such a way as to ensure that Alice’s claim here is false, i.e. the correct response here is “lol fuck you” (and escalating from there if Alice persists). Alice’s behavior here ought not be incentivized; on the contrary, it should be severely punished. Bob is exhibiting a failure of moral rectitude, or else a failure of will, by not applying said punishment.
↑ comment by Nathaniel Monson (nathaniel-monson) · 2023-09-22T20:14:21.805Z · LW(p) · GW(p)
Lots of your comments on various posts seem rude to me--should I be attempting to severely punish you?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-22T20:55:10.022Z · LW(p) · GW(p)
The behavior I was referring to, specifically, is not rudeness (or else I’d have quoted Alice’s first comment, not her second one), but rather Alice taking as given the assumption that she has some sort of claim on Bob’s reasons for his actions—that Bob has some obligation to explain himself, to justify his actions and his reasons, to Alice. It is that assumption which must be firmly and implacably rejected at once.
Bob should make clear to Alice that he owes her no explanations and no justifications. By indulging Alice, Bob is giving her power over himself that he has no reason at all to surrender. Such concessions are invariably exploited by those who wish to make use of others as tools to advance their own agenda.
Bob’s first response was correct. But—out of weakness, lack of conviction, or some other flaw—he didn’t follow up. Instead, he succumbed to the pressure to acknowledge Alice’s claim to be owed a justification for his actions, and thus gave Alice entirely undeserved power. That was a mistake—and what’s more, it’s a mistake that, by incentivizing Alice’s behavior, has anti-social consequences, which degrade the moral fabric of Bob’s community and society.
↑ comment by Nathaniel Monson (nathaniel-monson) · 2023-09-22T21:19:41.014Z · LW(p) · GW(p)
To me, it sounds like A is a member of a community which A wants to have certain standards and B is claiming membership in that community while not meeting those. In that circumstance, I think a discussion between various members of the community about obligations to be part of that community and the community's goals and beliefs and how these things relate is very very good. Do you
A) disagree with that framing of the situation in the dialogue
B) disagree that in the situation I described a discussion is virtuous, verging on necessary
C) other?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-22T21:56:27.856Z · LW(p) · GW(p)
Indeed, I disagree with that characterization of the situation in the dialogue.
For one thing, there’s no indication that Bob is claiming to be a member of anything. He’s “interested in Effective Altruism”, and he “want[s] to help others and … genuinely care[s] about positive impact, and ethical obligations, and utilitarian considerations”, and he also (according to Alice!) “claim[s] to really care about improving the world”, and (also according to Alice!) “claim[s] to be a utilitarian”. But membership in some community? I see no such claim on Bob’s part.
But also, and perhaps more importantly: suppose for a moment that “Effective Altruism” is, indeed, properly understood as a “community”, membership in which it is reasonable to gatekeep in the sort of way you describe.[1]
It might, then, make sense for Alice to have a discussion with Carol, Dave, etc.—all of whom are members-in-good-standing of the Effective Altruism community, and who share Alice’s values, as well as her unyielding commitment thereto—concerning the question of whether Bob is to be acknowledged as “one of us”, whether he’s to be extended whatever courtesies and privileges are reserved for good Effective Altruists, and so on.
However, the norm that Bob, himself, is answerable to Alice—that he owes Alice a justification for his actions, that Alice has the right to interrogate Bob concerning whether he’s living up to his stated values, etc.—that is a deeply corrosive norm. It ought not be tolerated.
Note that this is different from, say, engaging a willing Bob in a discussion about what his behavior should be (or about any other topic whatsoever)! This is a key aspect of the situation: Bob has expressed that he considers his behavior none of Alice’s business, but Alice asserts the standing to interrogate Bob anyway, on the reasoning that perhaps she might convince him after all. It’s that which makes Bob’s failure to stand up for his total lack of obligation to answer to Alice for his actions deplorable.
I think that this is not a trivial assumption, and in fact carries with it some strange, and perhaps undesirable, consequences—but not being any kind of Effective Altruist myself, perhaps that part is none of my business. In any case, we can make the aforesaid assumption, for argument’s sake. ↩︎
↑ comment by Firinn · 2023-09-23T05:39:44.962Z · LW(p) · GW(p)
Word of God, as the creator of both Alice and Bob: Bob really does claim to be an EA, want to belong to EA communities, say he's a utilitarian, claim to be a rationalist, call himself a member of the rationalist community, etc. Alice isn't lying or wrong about any of that. (You can get all "death of the author" and analyse the text as though Bob isn't a rationalist/EA if you really want, but I think that would make for a less productive discussion with other commenters.)
Speaking for myself personally, I'd definitely prefer that people came and said "hey we need you to improve or we'll kick you out" to my face, rather than going behind my back and starting a whisper campaign to kick me out of a group. So if I were Bob, I definitely wouldn't want Alice to just go talk to Carol and Dave without talking to me first!
But more importantly, I think there's a part of the dialogue you're not engaging with. Alice claims to need or want certain things; she wants to surround herself with similarly-ethical people who normalise and affirm her lifestyle so that it's easier for her to keep up, she wants people to call her out if she's engaging in biased or motivated reasoning about how many resources she can devote to altruism or how hard she can work, she wants Bob to be honest with her, etc. In your view, is it ever acceptable for her to criticise Bob? Is there any way for her to get what she wants which is, in your eyes, morally acceptable? If it's never morally acceptable to tell people they're wrong about beliefs like "I can't work harder than this", how do you make sure those beliefs track truth?
Those questions aren't rhetorical; the dialogue isn't supposed to have a clear hero/villain dynamic. If you have a really awesome technique for calibrating beliefs about how much you can contribute which doesn't require any input from anyone else, then that sounds super useful and I'd like to hear about it!
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-23T16:11:14.352Z · LW(p) · GW(p)
Word of God, as the creator of both Alice and Bob: …
Fair enough, but this is new information, not included in the post. So, all responses prior to you posting this explanatory comment can’t have taken it into account. (Perhaps you might make an addendum to the post, with this clarification? It significantly changes the context of the conversation!)
However, there is then the problem that if we assume what you’ve just added to be true, then the depicted conversation is rather odd. Why isn’t Alice focusing on these claims of Bob’s? After all, they’re the real problem! Alice should be saying:
“You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!”
And so on. But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems weird, in this context.
Now, Bob might very well respond with something like:
“Just who appointed you the gatekeeper of these identities, eh, Alice? Please display for me your ‘Official Enforcer of Who Gets To Call Themselves a Rationalist / EA / Utilitarian’ badge!”
And at that point, Alice would do well to dismiss talking to Bob as a lost cause, and convene at once the meeting of true EAs / rationalists / etc., to discuss the question of public shunning.
Speaking for myself personally, I’d definitely prefer that people came and said “hey we need you to improve or we’ll kick you out” to my face, rather than going behind my back and starting a whisper campaign to kick me out of a group. So if I were Bob, I definitely wouldn’t want Alice to just go talk to Carol and Dave without talking to me first!
That’s as may be, but Bob makes clear right at the start of the conversation (and then again several times afterwards) that he’s not really interested in being lectured like this. He just lacks the spine to enforce his boundaries. And Alice takes advantage. But the “whisper campaign” concern is misplaced.
Of course, as I say above, Alice doesn’t exactly make it clear that this whole thing is really about claiming group membership that you don’t properly have or deserve. Alice frames the whole thing as… well, various things, like policing Bob’s morality for his own good, her own “needs”, etc. She seems confused, so it’s only natural that Bob would also not get a clear idea of what the real point of the conversation is. If Alice were to approach the matter as I describe above, there would be no problem.
But more importantly, I think there’s a part of the dialogue you’re not engaging with. Alice claims to need or want certain things …
I’m not engaging with it because it seems totally irrelevant, and not any of Bob’s concern. Bob’s response to these complaints should be:
“Why are you telling me about any of this? Are you asking for my help, as a favor? Are you proposing a trade, where I help you achieve these goals of yours, and you offer me something I want in return? Or what? Otherwise it seems like you’re just telling me a list of things you want, then proceeding to try to force me to act in such a way that you get those things that you want. What do I get out of any of this? Why shouldn’t I tell you to go take a hike?”
In your view, is it ever acceptable for her to criticise Bob?
Sure. There’s lots of contexts in which it’s acceptable to criticize someone. This is really much too broad a question to usefully address.
Is there any way for her to get what she wants which is, in your eyes, morally acceptable?
It sounds like Alice wants to build a community of people, with certain characteristics. This is fine and well. She should focus on that goal (see above), and not distract herself with irrelevancies, like policing the morality of random people who aren’t interested in her project.
If it’s never morally acceptable to tell people they’re wrong about beliefs like “I can’t work harder than this”, how do you make sure those beliefs track truth?
Why should it be any of your business whether other people’s beliefs about whether they can work harder track truth?
You can’t force people to care about the things that you care about. You can, and should, work together with other people who care about those same things, to achieve your mutual goals. That’s what Alice (and all who are like her) should be focusing on: finding like-minded people, forming groups and communities of such, maintaining said groups and communities, and working within them to achieve their goals. Bobs should be excluded if they’re interfering, left alone otherwise.
↑ comment by Vaniver · 2023-09-23T21:12:35.287Z · LW(p) · GW(p)
But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems weird, in this context.
I agree with your sense that they should be directly arguing about "what are the standards implied by 'calling yourself a rationalist' or 'saying you're interested in EA'?". I think that they are closer to having that argument than not having it, tho.
I think the difficulty is that the conversation they're having is happening at multiple levels, dealing with both premises and implications, and it's generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural, if Alice and Bob have context on each other, but read more strangely without that context).
Looking at the first statement by Alice:
You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
In my eyes, this is pretty close to your proposed starter for Alice:
You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!
The main difference is that Alice's version seems like it's trying to balance "enforcing the boundary" and "helping Bob end up on the inside". She's not (initially) asking Bob to become a copy of her; she's proposing a specific concrete action tied to one of Bob's stated values, suggesting a way that he could make his self-assessments more honest.
Now, the next step in the conversation (after Bob rejected Alice's bid to both suggest courses of action and evaluate how well he conforms to community standards) could have been for Alice to say "well, I'd rather you not lie about being one of us." (And, indeed, it looks to me like Alice says as much in her 4th comment.)
The remaining discussion is mostly about whether or not Alice's interpretation of the community standards is right. Given that many of the standards are downstream of empirical facts (like which working styles are most productive instead of demonstrating the most loyalty or w/e), it makes sense that Alice couldn't just say "you're not working hard enough" and instead needs to justify her belief that the standard is where she thinks it is. (And, indeed, if Bob in fact cannot work harder then Alice doesn't want to push him past his limits--she just doesn't yet believe that his limits are where he claims they are.)
That is, I think there's a trilemma: either Bob says he's not an EA/rationalist/etc., Bob behaves in an EA/rationalist/etc. way, or Bob defends his standards to Alice / whatever other gatekeeper (or establishes that they are not qualified to be a gatekeeper). I think Bob's strategy is mostly denying the standards / denying Alice's right to gatekeep, but this feels to me like the sort of thing that they should in fact be able to argue about, instead of Bob being right-by-default. Like, Bob's first point is "Alice, you're being rude" and Alice's response is "being this sort of rude is EA"!
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-23T22:30:45.014Z · LW(p) · GW(p)
Looking at the first statement by Alice:
You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
In my eyes, this is pretty close to your proposed starter for Alice:
Hm, I don’t think those are very close. After all, suppose we imagine me in Bob’s place, having this conversation with the same fictional Alice. I could respond thus:
“Yes, I really care about improving the world. But why should that imply donating more, or using my time differently? I am acting in a way that my principles dictate. You claim that ‘really caring about the world’ implies that I should act as you want me to act, but I just don’t agree with you about that.”
Now, one imagines that Alice wouldn’t start such a conversation with me in the first place, as I am not, nor claim to be, an “Effective Altruist”, or any such thing.[1] But here again we come to the same result: that the point of contention between Bob and Alice is Bob’s self-assignment to certain distinctly identified groups or communities, not his claim to hold some general or particular values.
The remaining discussion is mostly about whether or not Alice’s interpretation of the community standards is right.
Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment [LW(p) · GW(p)]) about Alice’s needs and wants and so forth.
I think the difficulty is that the conversation they’re having is happening at multiple levels, dealing with both premises and implications, and it’s generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural, if Alice and Bob have context on each other, but read more strangely without that context).
Sure, maybe, but that mostly just points to the importance of being clear on what a discussion is about. Note that Alice flitting from topic to topic, neither striving for clarity nor allowing herself to be pressed on any point, is also quite realistic, and is characteristic of untrustworthy debaters.
Like, Bob’s first point is “Alice, you’re being rude” and Alice’s response is “being this sort of rude is EA”!
If this is true, then so much the worse for EA!
When I condemn Alice’s behavior, that condemnation does not contain an “EA exemption”, like “this behavior is bad, but if you slap the ‘EA’ label on it, then it’s not bad after all”. On the contrary, if the label is accurate, then my condemnation extends to EA itself.
Although I could certainly claim to be an effective altruist (note the lowercase), and such a claim would be true, as far as it goes. I don’t actually do this because it’s needlessly confusing, and nothing really hinges on such a claim. ↩︎
↑ comment by Vaniver · 2023-09-24T01:15:16.964Z · LW(p) · GW(p)
I could respond thus:
Right, and then you and Alice could get into the details. I think this is roughly what Alice is trying to do with Bob ("here's what I believe and why I believe it") and Bob is trying to make the conversation not happen because it is about Bob.
And so there's an interesting underlying disagreement, there! Bob believes in a peace treaty where people don't point out each other's flaws, and Alice believes in a high-performing-team culture where people point out each other's flaws so that they can be fixed. To the extent that the resolution is just "yeah, I prefer the peace treaty to the mutual flaw inspection", the conversation doesn't have to be very long.
But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more 'peace treaty' style. I think that's the same sort of conversation that's happening here.
Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment [LW(p) · GW(p)]) about Alice’s needs and wants and so forth.
Sure--in my read, Alice's needs and wants and so forth are, in part, the generators of the 'community standards'. (If Alice was better off with lots of low-performers around to feel superior to, instead of with lots of high-performers around to feel comparable to, then one imagines Alice would instead prefer 'big-tent EA' membership definitions.)
On the contrary, if the label is accurate, then my condemnation extends to EA itself.
I think this part of EA makes it 'sharp' which is pretty ambivalent.
If I'm reading you correctly, the main thing that's going on here to condemn about Alice is that she's doing some mixture of:
- Setting herself as the judge of Bob without his consent or some external source of legitimacy
- Being insufficiently clear about her complaints and judgments
I broadly agree with 2 (because basically anything can always be clearer) tho I think this is, like, a realistic level of clarity. I think 1 is unclear because it's one of the points of disagreement--does Bob saying that he's "interested in EA" or "really cares about improving the world" give Alice license to provide him with unsolicited (and, indeed, anti-solicited!) criticism?
[Noting that Alice would be quick to point out Bob's interest in not having to change himself would also put a finger on the scales here.]
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-24T01:54:45.130Z · LW(p) · GW(p)
Right, and then you and Alice could get into the details.
But that’s just the thing—I wouldn’t be interested in getting into the details. My hypothetical response was meant to ward Alice off, not to engage with her. The subtext (which could be made into text, if need be—i.e., if Alice persists) is “I’m not an EA and won’t become an EA, so please take your sales pitch elsewhere”. The expected result is that Alice loses interest and goes off to find a likely-looking Bob.
I think this is roughly what Alice is trying to do with Bob (“here’s what I believe and why I believe it”) and Bob is trying to make the conversation not happen because it is about Bob.
The conversation as written doesn’t seem to me to support this reading. Alice steadfastly resists Bob’s attempts to turn the topic around to what she believes, her actions, etc., and instead relentlessly focuses on Bob’s beliefs, his alleged hypocrisy, etc.
But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more ‘peace treaty’ style. I think that’s the same sort of conversation that’s happening here.
Well, for one thing, I’ll note that I’m not much of a fan of this “mutual flaw inspection”, either. The proper alternative, in my view, isn’t any sort of “peace treaty”, but rather a “person-interface” approach [LW(p) · GW(p)].
More importantly, though, any sort of “mutual flaw inspection” has got to be opted into. Otherwise you’re just accosting random people to berate them about their flaws. That’s not praiseworthy behavior.
On the contrary, if the label is accurate, then my condemnation extends to EA itself.
I think this part of EA makes it ‘sharp’ which is pretty ambivalent.
Sorry, I don’t think I get the meaning here. Could you rephrase?
If I’m reading you correctly, the main thing that’s going on here to condemn about Alice is that she’s doing some mixture of:
- Setting herself as the judge of Bob without his consent or some external source of legitimacy
- Being insufficiently clear about her complaints and judgments
Yes, basically this.
does Bob saying that he’s “interested in EA” or “really cares about improving the world” give Alice license to provide him with unsolicited (and, indeed, anti-solicited!) criticism?
Let me emphasize again [LW(p) · GW(p)] what the problem is:
The behavior I was referring to, specifically, is not rudeness (or else I’d have quoted Alice’s first comment, not her second one), but rather Alice taking as given the assumption that she has some sort of claim on Bob’s reasons for his actions—that Bob has some obligation to explain himself, to justify his actions and his reasons, to Alice. It is that assumption which must be firmly and implacably rejected at once.
Criticism, per se, is not the central issue (although anti-solicited criticism is almost always rude, if nothing else).
↑ comment by Vaniver · 2023-09-24T22:01:02.316Z · LW(p) · GW(p)
Sorry, I don’t think I get the meaning here. Could you rephrase?
I think EA is a mixture of 'giving people new options' (we found a cool new intervention!) and 'removing previously held options'; it involves cutting to the heart of things, and also cutting things out of your life. The core beliefs do not involve much in the way of softness or malleability to individual autonomy. (I think people have since developed a bunch of padding so that they can live with it more easily.)
Like, EA is about deprioritizing 'ineffective' approaches in favor of 'effective' approaches. This is both rough (for the ineffective approaches and people excited about them) and also the mechanism of action by which EA does any good (in the same way that capitalism does well in part by having companies go out of business when they're less good at deploying capital than others).
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-24T23:15:54.636Z · LW(p) · GW(p)
Hmm, I see. Well, I agree with your first paragraph but not with your second. That is, I do not think that selection of approaches is the core, to say nothing of the entirety, of what EA is. This is a major part of my problem with EA as a movement and an ideology.
However, that is perhaps a digression we can avoid. More relevant is that none of this seems to me to require, or even to motivate, “being this sort of rude”. It’s all very well to “remove previously held options” and otherwise be “rough” to the beliefs and values of people who come to EA looking for guidance and answers, but to impose these things on people who manifestly aren’t interested is… not justifiable behavior, it seems to me.
(And, again, this is quite distinct from the question of accepting or rejecting someone from some group or what have you, or letting their false claims to have some praiseworthy quality stand unchallenged, etc.)
↑ comment by Richard_Ngo (ricraz) · 2023-09-27T19:23:33.602Z · LW(p) · GW(p)
Just noting here that I broadly agree with Said's position throughout this comment thread.
↑ comment by green_leaf · 2023-09-23T23:30:45.521Z · LW(p) · GW(p)
What do I get out of any of this?
If Bob asked this question, it would show he's misunderstanding the point of Alice's critique - unless I'm missing something, she claims he should, morally speaking, act differently.
Responding "What do I get out of any of this?" to that kind of critique is either a misunderstanding, or a rejection of morality ("I don't care if I should be, morally speaking, doing something else, because I prefer to maximize my own utility.").
Edit: Or also, possibly, a rejection of Alice ("You are so annoying that I'll pretend this conversation is about something else to make you go away.").
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-24T00:00:42.612Z · LW(p) · GW(p)
Please reread my comment more carefully. That part (Bob’s “what do I get out of any of this” response) was specifically about Alice’s commentary on her personal wants/needs, i.e. the specifically non-moral aspect of Alice’s array of criticisms.
↑ comment by Jiro · 2023-09-28T15:35:07.656Z · LW(p) · GW(p)
The behavior I was referring to, specifically, is not rudeness (or else I’d have quoted Alice’s first comment, not her second one), but rather Alice taking as given the assumption that she has some sort of claim on Bob’s reasons for his actions
How does that apply if Alice and Bob are a metaphor for trying to decide between Alice-type and Bob-type things inside your head? Surely you have a claim on your own reasons for your actions.
↑ comment by Richard_Kennaway · 2023-09-23T17:10:07.425Z · LW(p) · GW(p)
I hear Alice saying, “Very well. We shall resume in an hour.”
comment by Elizabeth (pktechgirl) · 2023-09-23T23:11:58.918Z · LW(p) · GW(p)
Hot take: Bob should be bullying Alice to do less so she doesn't burn out.
↑ comment by Vaniver · 2023-09-24T01:18:32.963Z · LW(p) · GW(p)
hmm I think Alice wants to wrestle in that puddle of mud?
Like, these two sections are basically how Alice would respond to Bob saying "hey Alice, you're going to burn out":
I think if this was your true objection - your crux - then you would have probably put in a lot of work to understand burnout [LW · GW]. Some of the hardest-working people have done that work - and never burned out. Instead, you seem to treat it like a magical worst possible outcome, which provides a universal excuse [? · GW] to never do anything that you don’t want to do. How good a model do you have of what causes burnout? (I notice that many people think vacations treat burnout, which is probably a sign they haven’t looked at the research [EA · GW].) Surely there’s not a black-and-white system where working slightly too hard will instantly disable you forever; maybe there’s a third option [LW · GW] where you do more but you also take some anti-burnout precaution. If I really believed I couldn’t do more without risking burnout, and that was the most important factor preventing me from fulfilling my deeply held ethical beliefs, I think I would have a complex model of what sorts of risk factors create what sort of probability of burnout, and whether there’s different kinds of burnout or different severity levels, and what I could do to guard against it.
What if the techniques I use to avoid burnout - like meditating, surrounding myself with people who work similarly hard so that my brain feels it’s normal, eating a really healthy diet, coworking or getting support on tasks that I’m aversive about, practising lots of instrumental rationality techniques, frequently reminding myself that I’m living consistently with my values, avoiding guilt-based motivation, exercising regularly, seeing a therapist proactively to work on my emotional resilience, and all that - would actually completely work for you, and you’d be able to work super hard without burning out at all, and you’d be perfectly capable of changing yourself if you tried?
↑ comment by Elizabeth (pktechgirl) · 2023-09-24T05:14:14.985Z · LW(p) · GW(p)
It's hard because Alice is a fictional character in a stylized dialogue that the author says is intended to be a bad implementation. But in the real world, if someone talked like Alice did (about herself and towards Bob) I'd place good money on burnout.
Probably Bob isn't actually the right person to raise this issue with Alice, because she doesn't respect him enough. But I don't think it's worse than what she's doing to him.
↑ comment by Firinn · 2023-09-24T06:09:26.249Z · LW(p) · GW(p)
(I wrote way too much in this comment while waiting for my lentils to finish simmering; I apologise!)
I don't think it's necessarily intended to be bad or excessively stylized, but it's intended to be rude for sure. I didn't want to write a preachy thing!
Three kinda main reasons that I made Alice suck, deliberately:
Firstly, later in my sequence I want to talk about ways that Alice could achieve her goals better.
Secondly, I kind of want to be able to sit with the awkward dissonant feeling of, "huh, Alice is rude and mean and making me feel bad and maybe she shouldn't say those things, and ALSO, Alice being an infinitely flawed person would still not actually be a good justification for me to save fewer lives than I think I can save if I try (or otherwise fail according to my own values and my own ethics), and hm, holding those two ideas in juxtaposition feels uncomfy for me, let's poke that!"
I feel like a lot of truthseeking mindsets involve getting comfy with that sorta "huh, this juxtaposition is super uncomfy and I'm going to sit with it anyway" kinda mental state.
Thirdly, I have a voice in my head that gets WAY meaner than Alice! I totally sometimes have thoughts like, "Wow, I'm such a worthless hypocrite for preaching EA things online even though I don't have as much impact as I could if I tried harder, I'm totally just lying to myself about thinking I'm burned out because I'm lazy, I should go flog myself in penance!*"
*mild hyperbole for humour
I can respond by thinking something like, "Go away, stupid voice in my head, you're rude and mean and I don't want to listen to you." I could also respond by deliberately seeking out lots of reassuring blog posts that say "burnout is super bad and you're morally obligated to be happy!" and try to pretend that I'm definitely not engaging in any confirmation bias, no, definitely not, I definitely feel reassured by all of these definitely-true posts about the thing I really wanted to believe anyway. But maybe there's a way better thing where I can think, "Nope, I made really good models and rigorously tested this, so I'm actually for real confident that I can't be more ethical than I currently am, even after I looked into the dark and really asked myself the question and prepared myself to discover information that I might not like, and so I don't have to listen to this mean voice because I know that it's wrong."
But as long as I haven't ACTUALLY looked into the dark, or so long as I've been biased and incomplete in my investigations, I'll always have the little doubt in the back of my mind that says, "well, maybe Alice is right" - so I'll never be able to get rid of the mean Alice voice. There's about a thousand different rationality posts I could link here; generally LessWrong is a good place to acquire a gut feeling for "huh, just professing a belief sure does feel different to actually believing something because you checked".
I think that circles us back to your hot take: if Bob makes really good models about his capabilities and has really sought truth about all this, then maybe he'll BOTH be better able to refute Alice's criticisms AND even be able to persuade Alice to act more sustainably. But maybe he actually has to really do that work, and maybe that work isn't possible to do properly unless he's really truthseeking, and maybe really truthseeking would require him to also be okay with learning "I can and should do more" if that were to turn out to be true. Knowing that he was capable of concluding "I can and should do more" (if that were true) might be a prerequisite to being able to convince Alice that he legitimately reached the conclusion "I can't or shouldn't do more".
And if he actually does the truthseeking, then maybe he should bully Alice to do less! The interesting question for me then is: can I get myself to be curious about whether that's true, like really actually curious, like the kind of curious where I want to believe the truth no matter what the truth turns out to be, because the truth matters?
↑ comment by Elizabeth (pktechgirl) · 2023-09-25T06:13:53.545Z · LW(p) · GW(p)
I agree this set of questions is really important, and shouldn't be avoided just because it's uncomfortable. And I really appreciate your investment in truthseeking even when it's hard.
But Alice doesn't seem particularly truthseeking to me here, and the voice in your head sounds worse. Alice sounds like she has made up her mind and is attempting to browbeat people into agreeing with her. Nor does Alice seem curious about why her approach causes such indignance, which makes me further doubt this is about pursuit of knowledge for her.
One reason people react badly to these tactics: rejecting assholes out of hand when they try to extract value from you is an important defense mechanism. If you force people to remove that defense, you make them vulnerable to all kinds of malware (and you can't say "only remove it for good things", because the decision needs to be made before you know whether the idea is good or not - that's the point). If Alice is going to push this hard about responsibility to the world, she needs to put more thought into her techniques.
Maybe this will be covered in a later post but I have to respond to what's in front of me now.
↑ comment by Firinn · 2023-09-25T16:38:31.142Z · LW(p) · GW(p)
yep, fair! Do you think the point would come across better if Alice was nice? (I wasn't sure I could make Alice nice without an extra few thousand words, but maybe someone more skilful could.)
I think a lot of us have voices in our heads that are meaner than Alice, so if you think Alice is going to cause burnout, I think we need a response that is better than Bob's (and better than "I'm just going to reject all assholes out of hand", because I can't use that on myself!)
↑ comment by Elizabeth (pktechgirl) · 2023-09-25T18:57:44.306Z · LW(p) · GW(p)
I think being nicer would make truthseeking easier but isn't truthseeking in and of itself.
I also think it's a mistake to assume your inner Alice would shut up if only you came up with a good enough argument. The loudest alarm is probably false [LW · GW]. Truthseeking might be useful in convincing other parts of your brain to stop giving Alice so much weight, but I would include "is Alice updating in response to facts?" as part of that investigation.
comment by Elizabeth (pktechgirl) · 2023-09-23T17:48:18.853Z · LW(p) · GW(p)
I think this post raises important points and handles them reasonably well. I am of course celebrating that fact mostly by pointing out disagreements with it.
I wish Alice drew a sharper distinction between Bob being honest about his beliefs, Bob bringing his actions in line with his stated beliefs, and Bob doing what Alice wants. I think pushing people to be honest is prosocial by default (within limits). Pushing people to do what you want is antisocial by default, with occasional exceptions.
And Alice's methods can be bad, even if the goal is good. If I could push a button and have a community only of people on a long-term growth trajectory, I would. But policing this does more harm than good, because it's hard for the police to monitor. Growth doesn't always look like what other people expect, and people need breaks. Demanding everyone present legible growth on a predictable cycle impedes growth (and pushes people to be dishonest).
My personal take here is that you should be ready to work unsustainably and miserably when the circumstances call for it, but the circumstances very rarely call for it, and those circumstances always include being very time-limited. "I'll just take the misery" is a plan with an inherent shelf life. But the capacity to tank the misery when you need to, or to do more work with less misery, is a moral virtue [LW(p) · GW(p)] and should be recognized as such. I imagine some of Alice's frustration is that she feels like even if Bob gets less social credit than her, the gap should be bigger to reflect her larger contribution, and people use personal capacity as a reason to shrink the gap. And I think that's a valid complaint, especially if Alice worked to create that sustainable capacity in herself where it didn't exist before.
comment by cata · 2023-09-22T20:53:16.904Z · LW(p) · GW(p)
I am mostly like Bob (although I don't make up stuff about burnout), but I think calling myself a utilitarian is totally reasonable. By my understanding, utilitarianism is an answer to the question "what is moral behavior." It doesn't imply that I want to always decide to do the most moral behavior.
I think the existence of Bob is obviously good. Bob is in, like, the 90th percentile of human moral behavior, and if other people improved their behavior, Bob is also the kind of person who would reciprocally improve his own. If Alice wants to go around personally nagging everyone to be more altruistic, then that's her prerogative, and if it really works, I am even for it. But firstly, I don't see any reason to single out Bob, and secondly, I doubt it works very well.
↑ comment by Firinn · 2023-09-23T06:03:45.833Z · LW(p) · GW(p)
You doubt that it would work very well if Alice nags everyone to be more altruistic. I'm curious how confident you are that this doesn't work and whether you'd propose any better techniques that might work better?
For myself, I notice that being nagged to be more altruistic is unpleasant and uncomfortable. So I might be biased to conclude that it doesn't work, because I'm motivated to believe it doesn't work so that I can conveniently conclude that nobody should nag me; so I want to be very careful and explicit in how I reason and consider evidence here. (If it does work, that doesn't mean it's good; you could think it works but the harms outweigh the benefits. But you'd have to be willing to say "this works but I'm still not okay with it" rather than "conveniently, the unpleasant thing is ineffective anyway, so we don't have to do it!")
(PS. yes, I too am very glad that people like Bob exist, and I think it's good they exist!)
comment by Nathaniel Monson (nathaniel-monson) · 2023-09-22T06:18:57.399Z · LW(p) · GW(p)
I am genuinely confused why this is on lesswrong instead of EA. What do you think the distribution of giving money is like on each place, and what do you think the distribution of responses to drowning child is like on each?
↑ comment by Firinn · 2023-09-22T06:50:37.088Z · LW(p) · GW(p)
Hmm, I think I could be persuaded into putting it on the EA Forum, but I'm mildly against it:
- It is literally about rationality, in the sense that it's about the cognitive biases and false justifications and motivated reasoning that cause people to conclude that they don't want to be any more ethical than they currently are; you can apply the point to other ethical systems if you want, like, Bob could just as easily be a religious person justifying why he can't be bothered to do any pilgrimages this year while Alice is a hotshot missionary or something. I would hope that lots of people on LW want to work harder on saving the world, even if they don't agree with the Drowning Child thing; there are many reasons to work harder on x-risk reduction.
- It's the sort of spicy that makes me worried that EAs will consider it bad PR, whereas rationalists are fine with spicy takes because we already have those in spades. I think people can effectively link to it no matter where it is, so posting it in more places isn't necessarily beneficial?
- I don't agree with everything Alice says but I do think it's very plausible that EA should be a big tent that welcomes everyone - including people who just want to give 10% and not do anything else - whereas my personal view is that the rationality community should probably be more elitist; we're supposed to be a self-improve-so-hard-that-you-end-up-saving-the-world group, damnit, not a book club for insight porn.
Also it's going to be part of a sequence (conditional on me successfully finishing the other posts), and I feel like the sequence overall belongs more on LW.
I genuinely don't really know how the response to the Drowning Child differs between LW and EA! I guess I would probably say more people on the EA Forum probably donate money to charity for Drowning-Child-related reasons, but more people on LW are probably interested in philosophy qua philosophy and probably more people on LW switched careers to directly work on things like AI safety. I don't suppose there's survey/census data that we could look up?
comment by Donald Hobson (donald-hobson) · 2023-09-23T16:55:26.549Z · LW(p) · GW(p)
I think Bob's answer should probably be:
Look, I care somewhat about improving the world as a whole. But I also care about myself as well.
And I would recommend you don't go out of your way to antagonize and reject allies with a utility function similar enough to yours that mutual cooperation is easy.
The number of people who are a genuine Alice is rather low.
Also, bear in mind that the human brain has a built-in "don't follow that logic off a cliff" circuit. This is the circuit that ensures crazy suicide cults are so rare despite the huge number of people who "believe" in heaven. (Evolution: for whatever reason, people who always took beliefs 100% seriously didn't survive as well.) It's a circuit for not taking beliefs too seriously, and it might be something like a negotiation between the common-sense heuristics part of the brain and the abstract reasoning part. That's the sort of hackish compromise evolution would make, given a common-sense reasoning part that is reliable but not too smart, and an abstract reasoning part that is smart but not reliable.
comment by Elizabeth (pktechgirl) · 2024-10-21T16:38:42.788Z · LW(p) · GW(p)
I came back to this post a year later because I really wanted to grapple with the idea I should be willing to sacrifice more for the cause. Alas, even in a receptive mood I don't think this post does a very good job of advocating for this position. I don't believe this fictional person weighed the evidence and came to a conclusion she is advocating for as best she can: she's clearly suffering from distorted thoughts and applying post-hoc justifications. She's clearly confused about what convenient means (having to slow down to take care of yourself is very inconvenient), and I think this is significant and not just a poor choice of words. So I wrote my own version of the position.
Let's say Bob is right that the costs exceed the benefits of working harder or suffering. Does that need to be true forever? Could Bob invest in changing himself so that he could better live up to his values? Does he have an ~obligation[1] to do that?
We generally hold that people who can swim have obligations to save drowning children in lakes[2], but there's no obligation for non-swimmers to make an attempt that will inevitably drown them. Does that mean they're off the hook, or does it mean their moral failure happened when they chose not to learn how to swim?
One difficulty with this is that there are more potential emergencies than we could possibly plan for. If someone skipped the advance swim lesson where you learn to rescue panicked drowning people because they were learning wilderness first aid, I don't think that's a moral failure.
This posits a sort of moral obligation to maximally extend your capacity to help others or take care of yourself in a sustainable way. I still think obligation is not quite the right word for this, but to the extent it applies, it applies to long term strategic decisions and not in-the-moment misery.
comment by Seth Herd · 2023-09-23T13:43:38.420Z · LW(p) · GW(p)
No human being is a full utilitarian. Expecting them or yourself to be will bring disappointment or guilt.
But helping others can bring great joy and satisfaction.
The answer is obviously yes to Alice's question.
We should work harder in the most convenient world. The premise basically states that Bob would be happier AND do more good. He's an idiot for saying no, except to get bossy, controlling Alice off his back and stop her gaslighting him into doing what she wants.
But is this that world? Probably not the least convenient/easiest. Where is it on the spectrum? What will lead to Bob's happiest life? That is the right question for Bob to ask, and it's not trivial to answer.
comment by Richard_Kennaway · 2023-09-22T19:24:53.437Z · LW(p) · GW(p)
Alice and Bob sound to me very like the two options of my variant 8 [LW · GW] of the red-pill-blue-pill conundrum. We can imagine Alice working as she describes for the whole of a long life, because we can imagine anything. A real Alice, I'd be interested to see in 10 years. I think there are few, very few, who can live like that. If Bob could, he'd be doing it already.
↑ comment by Firinn · 2023-09-22T22:00:43.757Z · LW(p) · GW(p)
Alice is, indeed, a fictional character - but clearly some people exist who are extremely ethical. There's people who go around donating 50%, giving kidneys to strangers, volunteering to get diseases in human challenge trials, working on important things rather than their dream career, thinking about altruism in the shower, etc.
Where do you think is the optimal realistic point on the spectrum between Alice and Bob?
Do you think it's definitely true that Bob would be doing it already if he could? Or do you think there exist some people who could but don't want to, or who have mistaken beliefs where they think they couldn't but could if they tried, or who currently can't but could if they got stronger social support from the community?
↑ comment by Richard_Kennaway · 2023-09-23T06:41:18.914Z · LW(p) · GW(p)
That will vary with the person. All these things are imaginable, but that is no limitation. Bob is presented as someone who talks the EA talk, but has no heart for walking the walk. If he lets Alice badger him into greatly upping his efforts I would not expect it to go well.
↑ comment by Firinn · 2023-09-23T07:34:23.549Z · LW(p) · GW(p)
What specifically would you expect to not go well? What bad things will happen if Bob greatly ups his efforts? Why will they happen?
Are there things we could do to mitigate those bad things? How could we lower the probability of the bad things happening? If you don't think any risk reduction or mitigation is possible at all, how certain are you about that?
Can we test this?
Do you think it's worthwhile to have really precise, careful, detailed models of this aspect of the world?
↑ comment by Richard_Kennaway · 2023-09-23T08:25:18.159Z · LW(p) · GW(p)
I would expect Bob, as you have described him, to never reach Alice's level of commitment and performance, and after some period of time, with or without some trauma along the way, to drop out of EA. But these are imaginary creatures, and we can make up any story we like about them. There is no question of making predictions. If Alice — or you — want to convert people like Bob, you and she will have to observe the results obtained and steer accordingly.
really precise, careful, detailed
Four intensifiers in a row!!!! Is it worthwhile to have, simply, models? Expectations about how things will go? Yes, as long as you track how well they're fitting.
comment by Michael Carey (michael-carey) · 2023-09-22T16:43:11.934Z · LW(p) · GW(p)
If, in World A, the majority were Alices ... not doing the job they loved (imagine a teacher who thinks education is important, but emotionally dislikes students), unreciprocally giving away some arbitrary % of their earnings, etc...
Is that actually better than World B? A world where the majority are Bobs, successful at their chosen craft, giving away some amount of their earnings but keeping a majority they are comfortable with.
I'm surprised Bob didn't make the obvious rebuttals:
- Alice, why aren't you giving away 51% of your earnings? What methodology are you applying that says 50% is the best amount to give? Can't any arbitrary person X, giving 51% and working harder than you, make the same arguments to you that you are making to me?
- I've calculated that investing my earnings (either back into the economy via purchasing goods, or via stocks) will lead to the most good, since I'm actually incentivized to prioritize the growth of my investments (and to commit to donating it all after my death), whereas I cannot ensure the same for any organization I donate to.
Where Alice's argument is very strong, I would say, is in arguing that being Generative is a virtue, closely tied to Generosity.
The implication (which Bob argues could be harmful/counterproductive) is that if being Generative is a Virtue (or a precursor to the virtue of Generosity), then Sloth is a Sin, or at least a prerequisite/enabler for the Sin of anti-Generosity - basically Greed.
Of course, telling someone that they are being too slothful may or may not be counterproductive - which is effectively Bob and Alice's discussion.
Perhaps Bob is arguing from the "do no harm" principle that doctors operate under, and so he would avoid harming people by calling them slothful. Essentially, he wouldn't pull the lever in this application of the trolley problem.
Whereas Alice is arguing from the principle that inaction is also an action, and so she views the non-action of not pulling the lever as worse than explicitly pulling the lever.
Or she believes the balance of net harm favors risking harm by calling Bob slothful.
There is a compromise position: establishing some kind of sufficient condition for determining how someone would respond to the points made above. Funnily enough, people are having that very discussion about where to even post this article.
So, props for a job well done.
comment by Martin Randall (martin-randall) · 2023-09-22T13:20:00.622Z · LW(p) · GW(p)
I think this line of argument works okay until this point.
Alice: ... In the least convenient possible world, where the advice to rest more is equally harmful to the advice to work harder, and most people should totally view themselves as less fundamentally unchangeable, and the movement would have better PR if we were sterner…
Okay. Let's call the initial world Earth-1, with Alice-1 talking to Bob-1. Let's call the least convenient possible world Earth-2. Earth-2 contains Alice-2 and Bob-2. They aren't having this exact conversation, because that's not coherent. But they exist within the hypothetical world, having different hypothetical conversations.
Alice: ... would you work harder then?
Bob: I just kind of don’t really want to work harder.
Alice isn't very clear about what she means by "you" here, and Bob isn't thinking through the Earth-2 hypothetical completely.
Sure, if we isekai'd Bob-1 into Earth-2, he wouldn't immediately want to start working harder. His emotions and beliefs are formed in Earth-1 where (Bob claims) advice to work harder is more harmful, people are less fundamentally changeable, and the movement would have worse PR if it was sterner. Bob-1 will inevitably take some time to update based on being in Earth-2, traversing the multiverse isn't part of our ancestral environment. This doesn't prove that Bob-1 isn't utilitarian, it proves that Bob-1 is computationally bounded.
However, Alice is not specifying an isekai hypothetical, so the question she is asking is whether Bob-2 would work harder in Earth-2. Now, Bob-2 would be getting more RLHF to work harder, and less RLHF to rest more. RLHF would be more effective on Bob-2 because he is fundamentally more changeable. Also, working harder would increase the status of Bob-2's tribe, and I assume that Earth-2 humans still want to be in high status groups because they sure do in Earth-1.
I think Bob-2 would work harder in Earth-2, and would want to work harder. In other words:
Ma'am, when the universe changes, I change. What do you do?
↑ comment by Firinn · 2023-09-22T15:58:44.472Z · LW(p) · GW(p)
Hmm, this isn't really what I'm trying to get across when I use the phrase "least convenient possible world". I'm not talking about being isekaid into an actually different world; I'm just talking about operating under uncertainty, and isolating cruxes. Alice is suggesting that Bob - this universe's Bob - really might be harmed more by rest-more advice than by work-harder advice, really might find it easier to change himself than he predicts, etc. He doesn't know for certain what's true (ie "which universe he's in") until he tries.
Let's use an easier example:
Jane doesn't really want to go to Joe's karaoke party on Sunday. Joe asks why. Jane says she doesn't want to go because she's got a lot of household chores to get done, and she doesn't like karaoke anyway. Joe really wants her to go, so he could ask: "If I get all your chores done for you, and I change the plan from karaoke to bowling, then will you come?"
You could phrase that as, "In the most convenient possible world, would you come then?" but Joe isn't positing that there's an alternate-universe bowling party that alternate-universe Jane might attend (but this universe's Jane doesn't want to attend because in this universe it's a karaoke party). He's just checking to see whether Jane's given her REAL objections. She might say, "Okay, yeah, so long as it's not karaoke then I'll happily attend." Or she might say, "No, I still don't really want to go." In the latter case, Joe has discovered that the REAL reason Jane doesn't want to go is something else - maybe she just doesn't like him and she said the thing about chores just to be polite, or maybe she doesn't want to admit that she's staying home to watch the latest episode of her favourite guilty-pleasure sitcom, or something.
If "but what about the PR?" is Bob's real genuine crux, he'll say, "Yeah, if the PR issues were reversed then I'd commit harder for sure!" If, on the other hand, it's just an excuse, then nothing Alice says will convince Bob to work harder - even if she did take the time to knock down all his arguments (which in this dialogue she does not).
↑ comment by Martin Randall (martin-randall) · 2023-09-22T20:07:04.044Z · LW(p) · GW(p)
I confess that was not my reading of the text. I've been reading quite a few thought experiments recently, so I'm primed to interpret "possible worlds" that way. In my defense, the text links to Yudkowsky's Self-Integrity and the Drowning Child [LW · GW], which uses the "Least Convenient Possible World" to indicate a counterfactual / thought experiment / hypothetical worlds. Regardless, I missed Alice's point.
Since Alice was trying to ask about cruxes and uncertainty, here's an altered dialog that I think is clearer:
Alice: Okay, so your objections are (1) hard work might harm you (2) you can't change (3) social norms and (4) public relations. Is that it, or do you have other reasons?
Bob: Yes. I just kind of don’t really want to work harder.
Alice: I think we’ve arrived at the core of the problem.
Bob: We're going full-contact psychoanalysis [LW · GW], then. Are you sure you want to go there? Maybe you are a workaholic because you have an unresolved need to impress your parents, or because it gives you moral license to be rude and arrogant, or because you never fully grew out of your childhood faith.
Alice: Unlike you, Bob, I see a therapist.
Bob: You mentioned. So we have two hypotheses. Maybe I don't want to work harder and therefore have rationalized reasons not to. Maybe I have reasons not to work harder and therefore I don't want to. I suppose I could see a therapist and get evidence to distinguish these cases. Then what? If I learn that, really, I just don't want to work harder then you haven’t persuaded me to do anything differently, you’ve kind of just made me feel bad.
Alice: Maybe I’d like you to stop claiming to be a utilitarian, when you’re totally not - you’re just an egoist who happens to have certain tuistic preferences. I might respect you more if you had the integrity to be honest about it. Maybe I think you’re wrong, and there’s some way to persuade you to be better, and I just haven’t found it yet. [...]
↑ comment by Firinn · 2023-09-22T21:42:04.926Z · LW(p) · GW(p)
This seems like you understood my intent; I'm glad we communicated! Though I think Bob seeing a therapist is totally an action that Alice would support, if he thinks that's the best test of her ideas - and importantly if he thinks the test genuinely could go either way.
comment by Elizabeth (pktechgirl) · 2024-12-13T18:52:20.684Z · LW(p) · GW(p)
I stand by what I said here [LW(p) · GW(p)]: this post asks an important question but badly mangles the discussion. I don't believe this fictional person weighed the evidence and came to a conclusion she is advocating for as best she can: she's clearly suffering from distorted thoughts and applying post-hoc justifications.
comment by Chris_Leong · 2023-09-27T08:29:47.140Z · LW(p) · GW(p)
“Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai - there’s a reason this is on LessWrong and not the EA forum”
As the EA community has become less intense, sometimes I’ve wondered whether there would be value in someone starting an LW or EA adjacent group that’s on the more intense part of the spectrum.
I definitely see risks associated with this (people pushing themselves too hard, fanaticism) and I probably wouldn’t want to be part of it myself, but I imagine that it could be a good fit for some people.
↑ comment by Richard_Kennaway · 2023-09-27T09:50:49.114Z · LW(p) · GW(p)
Motto: "Maximising utility isn't everything, it's the only thing!"
↑ comment by Richard_Kennaway · 2023-09-29T12:36:07.767Z · LW(p) · GW(p)
Sounds like evaporative cooling [LW · GW] in reverse (although actually more in keeping with the literal meaning): the fieriest radicals boiling off to leave the more tepid behind.
comment by Richard_Kennaway · 2023-09-23T17:15:55.690Z · LW(p) · GW(p)
I have to wonder if you are posting this here in order to play Alice to our Bobs, distanced by writing it as a parable.
comment by Caerulea-Lawrence (humm1lity) · 2023-09-22T18:35:17.360Z · LW(p) · GW(p)
Hello Firinn,
I can relate to this post, even though I was never part of the EA movement. When I was younger, I did join a climate organization, and also had an account on kiva.org. And I would say there was a lot of guilt and confusion around my actions at that point, whilst I was simultaneously trying to do a lot of 'better than'-actions.
Your post is very extensive, and as such I find myself engaged by just reading one of the external links and the post itself. Therefore, my comment isn't really a response to the whole post, but sees the post through one entry-point I thought might be valuable. I hope it is still useful for the theme you had in mind.
I focused on Child in the Pond and have used that as my pivot point - as well as reading the whole interaction between 'Alice' and 'Bob'.
Imagining being in the pond situation does fill me with emotions that would steer me towards taking action to alleviate the immediate suffering. But there are two points where I believe the Child in the Pond text conflates things, and which might also be relevant for the interactions between 'Alice' and 'Bob'.
The two are:
1. 'Why' you save the kid from drowning, but why you can't "save" lives.
2. And relatedly, why focusing on money as a metric for saving lives, can fuel the same situations the text implies we should avoid.
1. 'Why' you save the kid from drowning, but why you can't "save" lives.
To take the conflation first.
Why do you save the kid from drowning? Many might be motivated by 'compassion' - to want to alleviate the perceived suffering of the child. You perceive the situation, interpret it, feel an emotion, and you choose to act on it. Acting on this emotion, seems like the best course of action - the situation might be complex, there are a lot of things you don't know, but to do this would be 'the moral thing to do'.
But there is quite the leap between having the physical abilities, and being in a situation where you can save a child from drowning, and what the Child in the Pond text talks about, namely 'saving lives'. It says:
In other words, an arbitrary link is made between a situation in which you can 'act' to save a child from drowning, to a situation in which you can 'pay' to save a child.
But if it were the same, you could, If you so desired, save the toddler in the pond by simply pulling out your 'magical card of FixEveryProblem', swipe it, and the problem would be solved. If you really wanted to help more, you might even get the option of getting the toddler a 'good, caring parent', one/two/three very good friends and the premium helping package where they have healthy, fulfilling and enriching lives for themselves and everyone they come into contact with.
But you can't. You can't pay to save the toddler. You have to be there, see the situation, understand it, be willing to act and decide to act. An action that might naturally be followed up by you caring for the child and bringing it to its caretakers (what happened there btw..?), whilst dealing with the reactions the toddler has to the situation, be it anything from loud screaming, crying, to gut-wrenching misery and getting water on your face and clothes, or maybe even puked on. Do you still do it? Yes, I hope you would.
Yes, it is a conflation. It is also made a lot worse by the use of the word 'save', and the implicit 'guarantee' it attaches to your money - that it 'saves' lives.
If your only goal was to 'save' lives, the most rational choice I can see would be to try to minimize the number of people getting born - as every person 'born' is only guaranteed to 'lose' their life. You might pay for the vaccines, but they get lost in transport, or destroyed by an earthquake. Losing your life, on the other hand, is guaranteed. Remove religious elements like Jesus, and you have the perfect antidote for human suffering: Antinatalism.
- You can't 'save' lives, you can only 'prolong' life.
- You can't pay to prolong life; there are certain acts/resources that prolong a life, alleviate various kinds of suffering and even increase well-being.
This might seem like a small problem by itself, but it creates a lot of stumbling-blocks when communicating effectively, because donating money isn't an action that saves children from certain illnesses by itself.
2. And relatedly, why focusing on money as a metric for saving lives, can fuel the same situations the text implies we should avoid.
As I pointed out above, there is a conflation between the drowning child and donating money. It compares apples and oranges: it treats two different things as if they were the same.
Now, in the same text, there is this story of the child Wang Yue dying in the streets, despite numerous people seeing her. What does this have to do with money? Well, if you start to argue that 'money' saves lives, then going to work on time, and leaving 'saving the person' to someone working in a charity, or to those paid by society to take care of her (Parents?), might arguably be the correct choice of action.
To a charity, the expression 'Money saves lives' is true in the sense that you create a product with ingredients like a distinct causal connection between an action and a result - give us money >> fewer children die of malaria.
If you save the child Wang Yue, you might not have a product team, a PR team, a photographer or a film-crew at the ready to create a product that people can buy. In other words, you aren't guaranteed to make money. And since money saves lives, losing money might start looking akin to losing lives.
And it seems like both Alice and Bob have unwittingly bought into the concept of Money=lives.
What actually prolongs lives isn't money, but resources, genetics and luck. You need resources like time, effort, skill, innovation, dedication, will, focus, materials, care, understanding and cooperation, to name a few. Money doesn't create these resources; it is used to direct them.
System lens:
Alice: You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
This was my experience with climate organizations and kiva.org as well: this conflation is very rampant. Ironically, people would on the one hand say that 'capitalism' is wrong, whilst on the other say that donating is good. Which is odd.
If Capitalism is inherently unsustainable, how does monetizing more of human life, values and needs, create more sustainability?
Human resources like care, time, understanding, empathy, love, cooperation, friendships, intelligence, wisdom, skill etc. are interconnected with each other, and don't grow due to money. Money might give you access to certain resources - but the resources aren't there due to money.
A different kind of communication:
Reading the argument between Bob and Alice reminds me of discussions that go in circles. In a way, the issue they are arguing might be a totally different one, but finding out 'what is going on' needs a different approach.
I remember hearing about this process where people with different political views were to have a talk, but instead of the usual 'debate format', they were to explain how they came to believe what they believed in. It led to much higher levels of mutual understanding and respect, but I haven't seen this replicated in the national debates.
Conclusion:
One thing is the conflation I spotted, which ties into a lot of conflicts that seem like they are conflicts on one level - but are really about something else. But knowing that the issues are more 'fundamental' might not feel that reassuring, which is why I presented the point about a kind of communication that might bring more understanding and respect, whilst still exploring disagreements or different points of view.
My hope is for more understanding in general, and to see various skills people have applied in ways that increase felt meaning for the people participating in disagreement - as well as anyone listening to it.
Kindly,
Caerulea-Lawrence
↑ comment by Jay Bailey · 2023-09-23T02:29:22.262Z · LW(p) · GW(p)
I don't really understand how your central point applies here. The idea of "money saves lives" is not supposed to be a general rule of society, but rather a local point about Alice and Bob - namely, donating ~5k will save a life. That doesn't need to be true under all circumstances; there just needs to be some repeatable action that Alice and Bob can take (e.g., donating to the AMF) that costs them 5k and reliably results in a life being saved. (Your point about prolonging life is true, but since the people dying of malaria are generally under 5, the amount of QALYs produced is pretty close to an entire human lifetime.)
It doesn't really matter, for the rest of the argument, how this causal relationship works. It could be that donating 5k causes more bednets to be distributed, it could be that donating 5k allows for effective lobbying to improve economic growth to the value of one life, or it could be that the money is burnt in a sacrificial pyre to the God of Charitable Sacrifices, who then descends from the heavens and miraculously cures a child dying of malaria. From the point of view of Alice and Bob, the mechanism isn't important if you're talking on the level of individual donations.
In other words, Alice and Bob are talking on the margins here, and on the margin, 5k spent equals one life saved, at least for now.
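(A back-of-the-envelope illustration of that marginal claim, where the life-years figure is my own rough assumption rather than anything stated in the thread: if a child saved from malaria gains roughly 60 additional life-years, then

$$\frac{\$5{,}000 \text{ per life}}{60 \text{ QALYs per life}} \approx \$83 \text{ per QALY},$$

before any discounting or disability weighting.)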
↑ comment by Caerulea-Lawrence (humm1lity) · 2023-09-23T10:08:46.191Z · LW(p) · GW(p)
Hello Jay Bailey,
Thanks for your reply. Yes, I seem to have overcomplicated the point made in this post by adding the system lens to this situation. It isn't irrelevant, it is simply beside the point for Alice and Bob.
The goal I am focusing on is a 'system overhaul' not a concrete example like this.
I was also reminded of how detrimental Alice's confrontational tone and haughtiness, and Bob's lack of clarity and self-understanding, are for learning, change and understanding - how they create a loop where the interaction itself doesn't seem to bring either of them any closer to being in tune with their values and beliefs. It seems to further widen the gulf between their respective positions, instead of capitalizing on their differences to improve the facets of their values-to-actions efficiency that the other seems capable of helping them with.
But I didn't focus much on this point in my comment.
Kindly, Caerulea-Lawrence
↑ comment by Firinn · 2023-09-22T21:22:31.996Z · LW(p) · GW(p)
I'm sorry I don't have time to respond to all of this, but I think you might enjoy Money: The Unit Of Caring: https://www.lesswrong.com/posts/ZpDnRCeef2CLEFeKM/money-the-unit-of-caring [LW · GW]
(Sorry, not sure how to make neat-looking links on mobile.)
Replies from: humm1lity
↑ comment by Caerulea-Lawrence (humm1lity) · 2023-09-23T09:42:39.432Z · LW(p) · GW(p)
Hello Firinn,
Thanks for the linked post, it was right on the money.
I see that I regard the market economy as a problem in itself, but I hadn't really thought about money from a less idealistic point of view.
It is really hard to come to terms with the argument he makes when the system money operates under is so flawed.
But maybe that is a more general point. In the case of Alice and Bob, they might not see, or have the ability to change, the system itself, and under those circumstances I have missed the point.
Again, thanks for the post.
Kindly, Caerulea-Lawrence
comment by SurfingOrca · 2023-09-25T01:39:12.057Z · LW(p) · GW(p)
I feel like the crux of this discussion is how much we should adjust our behavior to be "less utilitarian", to preserve our utilitarian values.
The expected utility a person creates could be measured as (utility created by the behavior) x (odds that they will actually follow through on that behavior), where the odds of follow-through decrease as the behavior modifications become more drastic, but the utility created, if followed through on, increases.
People already implicitly take this into account when evaluating the optimal amount of radicality in activism. If PETA advocates for everyone to completely renounce animal consumption, conduct violent attacks on factory farms, and aggressively confront non-vegans, that (theoretically) would reduce animal suffering by an extremely large amount, but in practice almost nobody would follow through. On the other hand, if PETA mistakenly centers its activism on calling for people to skip a single chicken dinner - a completely realistic goal that many millions of people would presumably carry out - it would also be missing out on a lot of expected utility.
Alice is arguing that Bob could maximize expected utility by shifting his behavior to a part of the curve that involves more behavior change (and therefore more utility) but a lower probability of follow-through. Bob is arguing that he's already at the optimal point of the curve.
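To make the shape of that tradeoff concrete, here is a minimal toy sketch. The numbers and functional forms are entirely made up for illustration; nothing here is measured or claimed about real charities or people.

```python
# Toy model of the tradeoff described above: expected utility =
# (utility if followed through) * (probability of follow-through),
# where more demanding asks create more utility if carried out,
# but are less likely to actually be carried out.
# All numbers and functional forms are assumptions for illustration only.

def utility_if_followed(ask):
    """Utility created if the behavior change is actually carried out.
    Assumed (arbitrarily) to grow linearly with how demanding the ask is (0..1)."""
    return 100 * ask

def follow_through_probability(ask):
    """Probability the person actually follows through.
    Assumed (arbitrarily) to fall off steeply as the ask gets more demanding."""
    return (1 - ask) ** 3

def expected_utility(ask):
    return utility_if_followed(ask) * follow_through_probability(ask)

if __name__ == "__main__":
    # Scan ask levels from "skip one chicken dinner" (near 0)
    # to "renounce everything" (near 1) and find the peak.
    asks = [i / 100 for i in range(101)]
    best = max(asks, key=expected_utility)
    print(f"Expected utility peaks at an ask of {best:.2f} "
          f"(EU = {expected_utility(best):.1f})")
```

Under these arbitrary assumptions the peak sits at an intermediate level of demandingness (an ask of about 0.25), neither the most trivial nor the most radical option; Alice and Bob are, in effect, disagreeing about where that peak actually lies.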
comment by jmh · 2023-09-23T13:12:02.099Z · LW(p) · GW(p)
Alice strikes me as the poster child for the old saying about good intentions and roads to hell. Ultimately, I think she ends up causing much more harm via the toxic and negative experience those around her have than any good she can do herself.
Replies from: Firinn, Seth Herd
↑ comment by Firinn · 2023-09-24T04:03:23.975Z · LW(p) · GW(p)
If you mean this literally, it's a pretty extraordinary claim! Like, if Alice is really doing important AI Safety work and/or donating large amounts of money, she's plausibly saving multiple lives every year. Is the impact of being rude worse than killing multiple people per year?
(Note, I'm not saying in this comment that Alice should talk the way she does, or that Alice's conversation techniques are effective or socially acceptable. I'm saying it's extraordinary to claim that the toxic experience of Alice's friends is equivalently bad to "any good she can do herself". I think basically no amount of rudeness is equivalently bad to how good it is to save a life and/or help avert the apocalypse, but if you think it's morally equivalent then I'd be really curious for your reasoning.)
Replies from: jmh
↑ comment by jmh · 2023-09-30T16:25:14.867Z · LW(p) · GW(p)
If the "you other people need to work harder because I do and this is import" attitude starts pushing many people away in a setting that likely lives and dies from team/group efforts Alice will have to be an exceptional talent to make up for the collective loss. It might be well intended but can (and all too frequently does, hence the old saying) produce unintended consequences that prove to be counter productive.
Even if you're saying it nicely, if the message is basically that you're not being good enough, it becomes a bit alienating. One can definitely lead by example and try to create an environment where people want to do more, but we should respect the level of contribution each person is willing to make -- and certainly so if we're not in a role where we get to define what the minimum acceptable contributions are.
comment by Arwen (alzbeta-kubitova) · 2023-09-23T10:23:40.714Z · LW(p) · GW(p)
So I basically know Alice is right, yet I mostly act like a Bob. I'm probably neither a true rationalist (I am acting on emotions instead of the truth) nor a strong effective altruist. I donate money because it makes me feel good, volunteer mostly for the fuzzies and engage with my local EA group because it's a strong community with amazing and brilliant people.
Yeah, deep down I'm a selfish human. I don't think I'll change that about myself. But EA has still enabled me to have a large positive impact through effective giving, and that's a net positive.
comment by Bojangles9272 · 2023-09-22T08:18:02.557Z · LW(p) · GW(p)
Strong upvote for this post! While I'd caution against linking this sequence to the Effective Altruism forum and movement in general - because I don't think placing explicit and extremely strong moral *obligations* on action makes for a healthy, self-confident or outward-looking mass movement - I would definitely encourage Firinn to write more LessWrong posts in this vein.
The LessWrong community should be very enthusiastic about more articulate narratives and discussions of exemplary actions aimed at saving the entire world! Posts digging into the notion of habitually taking such extraordinary actions are personally some of my favorite content on the site, and are very inspiring. Props to OP for articulating a sense of staunch dedication very well, and also for using lots of handy hyperlinks.
comment by Dom Polsinelli (dom-polsinelli) · 2024-12-18T22:06:45.456Z · LW(p) · GW(p)
I have a vague sense that these two people live in my brain and are constantly arguing, and that the argument is fundamentally unproductive and actively harmful, whichever of them should be dominant, if either.
comment by Cheese Mann (cheese-mann) · 2024-12-16T00:26:07.991Z · LW(p) · GW(p)
I like this post a lot.
Personally I am an even less altruistic Bob, who is still slightly altruistic. I would guess that 1% or less of my effort and resources go to altruistic causes.
But I accept most of the logical arguments and most of the facts / conclusions.
The very reason I don't step up to at least become Bob is the Alices of the world.
If I identify too much with the good (the pure?), the good people then have in-group status, which they can use as a tool to extract more value from me.
I think it's one of those things you can't logic people into doing, because there's always "well at the end of the day I'm going to die in 50 years, I am ultimately a meaningless monkey on a dirt rock, why not just get high, have fun and ignore all this dumb shit"
The idea of using goodness (really ideological purity) as leverage feels inherently wrong to me.
Maybe that's just a convenient thought-terminating cliché. But maybe I don't really care, and that's the whole point.
comment by Geoffrey Wood (geoffrey-wood) · 2023-09-24T20:53:27.506Z · LW(p) · GW(p)
I couldn't read this straight. Alice is being an absolute asshole to Bob. This is incredibly off-putting.
I think you could have communicated better if you had tried to make Alice remotely human.
I think I get what you are trying to do with this, but I only got it after reading the comments.
comment by Lycaos King (lycaos-king) · 2023-09-23T14:23:35.191Z · LW(p) · GW(p)
The correct moral choice is for both people to lower their EA contributions to 0%.