Personal relationships with goodness
post by KatjaGrace · 2018-05-14T18:50:01.310Z
Many people seem to find themselves in a situation something like this:
- Good actions seem better than bad actions. Better actions seem better than worse actions.
- There seem to be many very good things to do—for instance, reducing global catastrophic risks, or saving children from malaria.
- Nonetheless, they continually do things that seem vastly less good, at least some of the time. For instance, just now I went and listened to a choir singing. You might also admire kittens, or play video games, or curl up in a ball, or watch a movie, or try to figure out whether the actress in the movie was the same one that you saw in a different movie. I’ll call this ‘indulgence’, though it is not quite the right category.
On the face of it, this is worrying. Why do you do the less good things? Is it because you prefer badness to goodness? Are you evil?
It would be nice to have some kind of a story about this. Especially if you are just going to keep on occasionally admiring kittens or whatever for years on end. I think people settle on different stories. These don’t have obviously different consequences, but I think they do have subtly different ones. Here are some stories I’m familiar with:
I’m not good: “My behavior is not directly related to goodness, and nor should it be” “It would be good to do X, but I am not that good” “Doing good things rather than bad things is generally supererogatory”
I think this one is popular. I find it hard to stomach, because if I am not good that seems like a serious problem. Plus, if goodness isn’t the guide to my actions, it seems like I’m going to need some sort of concept like schmoodness to determine which things I should do. Plus, I just care about being good for some idiosyncratic reason. But this story seems actually dangerous, because not treating goodness as a guide to one’s actions seems like it might affect one’s actions pretty negatively, beyond excusing a bit of kitten admiring or choir attendance.
In its favor, this story can help with ‘leaving a line of retreat’: maybe you can better think about what is good, honestly, if you aren’t going to be immediately compelled to do it. It also has the appealing benefit of not looking dishonest, hypocritical, or self-aggrandizing.
Goodness is hard: “I want to be good, but I fail due to weakness of will or some other mysterious force”
This one probably only matches one’s experience while actively trying to never indulge in anything, which seems rare as a long-term strategy.
Indulgence is good: “I am good, but it is not psychologically sustainable to exist without admiring kittens. It really helps with productivity.” “I am good, and it is somehow important for me to admire kittens. I don’t know why, and it doesn’t sound that plausible, but I don’t expect anything good to happen if I investigate or challenge it”
This is nice, because you get to be good, and continue to pursue good things, and not feel endlessly bad about the indulgence.
It has the downside that it sounds a bit like an absurd rationalization—’of course I care about solving the most important problems, for instance, figuring out where the cutest kittens are on the internet’. Also, supposing that fruitless entertainments are indeed good, they are presumably only good in moderation, and so it is hard for observers to tell if you are doing too much, which will lead them to suspect that you are doing too much. Also, you probably can’t tell yourself if you are doing too much, and supposing that there is any kind of pressure to observe more kittens under the banner of ‘the best thing a person can do’, you might risk that happening.
I’m partly good; indulgence is part of compromise: “I am good, but I am a small part of my brain, and there are all these other pesky parts that are bad, and I’m reasonably compromising with them” “I have many parts, and at least one of them is good, and at least one of them wants to admire kittens.”
This has the upside of being arguably relatively accurate, and many of the downsides of the first story, but to a lesser degree.
Among these, there seems to be a basic conflict between being able to feel virtuous, and being able to feel honest and straightforward. Which I guess is what you get if you keep on doing apparently non-virtuous things. But given that stopping doing those things doesn’t seem to be a real option, I feel like it should be possible to have something close to both.
I am interested to hear about any other such accounts people might have heard of.
Comments
comment by Qiaochu_Yuan · 2018-05-14T21:48:13.435Z
I think this whole discussion so far hides dangerous amounts of confusion around the concept of "good," and any serious progress will involve unpacking this confusion in much more detail. Here are some other stories I think it's important to have in the mix when thinking about this.
Goodness is about signaling: You know this one. In the ancestral environment people wanted to signal that they would make useful allies, which involves having properties like standing up for your friends, keeping your promises, etc. Perhaps they even wanted to signal that they would be good leaders of the tribe, which involves having properties like looking out for the well-being of the tribe. Also, humans are bad at lying. All this adds up to a strong incentive to signal both to yourself and to others that you care about doing things that are "good" = things that would make you a desirable ally or leader, or whatever.
Goodness is about coordinating decisions about who to back in social conflicts: This is the side-taking hypothesis of morality. Read the link for more details. This is maybe the most horrifying idea I've come across in the last year.
Goodness is an eldritch horror / egregore: Some crazy societal / cultural process indoctrinated you with this concept of "good" for reasons that have basically nothing to do with what you want. Cf. people who have been indoctrinated with communism or a religion, or fictional people living in a dystopia. There is just this distributed entity running on a bunch of humans propagating itself through virulent memes, and who knows what it's optimizing for, but probably not what I want.
My story is some kind of complicated mix of these; many parts of it are nonverbal and verbalizing them would require some effort on my part. But if I had to try verbalizing, it might go something like this:
"Many people, including me, seem to have some concept of what it means for a person or action to be 'good.' It seems like a complicated concept and I notice I'm confused. When I try to label a person as 'good' or 'bad,' including myself, it feels like I am basically always making some kind of mistake, maybe a type error. I have some kind of desire to be able to label myself 'good,' which seems to come from some sense that if I am 'good' then I 'deserve' (another complicated concept I notice confusion around) to be happy, or 'deserve' other people's love, or something like that.
This concept I have of 'goodness' came from somewhere, and I'm not sure I trust whatever process it came from. I have some sense that my desire to use it is protecting something, but whatever that is I'd rather work with it directly.
What seems a lot less complicated than 'goodness' or 'badness' is thinking about what I want. I want a lot of things, many of which involve other people. I have some sense of what it means for people to be able to trust each other and cooperate in a way that makes both of them better off, and a lot of what I want revolves around this; I want to be a trustworthy person who can cooperate with other people in ways that make both of us better off, so I can get other things I want. I also want to continue existing so I can get all the other things I want. I in fact don't want to do a lot of the actions that I might naively want to label as 'bad' because they would make me less trustworthy and I don't want that.
I have the sense that I'm made out of a bunch of parts that want different things, and those parts are still in the process of learning how to trust and cooperate with each other so they can all get more of what they want."
One thing I've been playing with in the last few months is learning to stop being subject to the concept of goodness. It's been very freeing; I feel a lot more capable of thinking through the actual consequences of my actions (including decision-theoretic consequences) and deciding if I want those consequences or not, as opposed to feeling shackled by a bunch of deontological constraints that were put into place by processes I don't trust.
↑ comment by Raemon · 2018-05-14T23:20:20.776Z
I highly agree with the overall premise of "you should take a step back from the frame this post is premised around", and agree that each of the frames listed here is an important piece of a puzzle.
I do still feel like there's a frame missing here, which is to take the deontological underpinnings Katja is pointing at at face value, and take them seriously. I think it's a mistake to only look at those without the other frames you mention, but just as much of a mistake to not acknowledge them as an important element. I think your "here's my attempt at verbalizing" does end up including a bit of this, but feels incomplete.
My summary of my-own-version of this is something like...
...
"It's okay to want people to be better off, happy, fulfilled, independent of anything that has to do with cooperation. It's okay to actually just think 'yeah, there are suffering people in the world, or existential dangers on the horizon, and this is bad, and I have the power to help, and... it's not quite right not call that an obligation, but it's also not quite right to call that a whim or personal preference. It's okay to think this is not just 'a thing I want', but something that I think is... deeply good in some way. Not objectively good, but important in some way.
Because I'm a monkey running on weird hardware, my intuitions about this will not always be consistent, and figuring out how to make them consistent is important, but just because they're inconsistent doesn't mean that they're meaningless or suspect."
I think theunitofcaring.tumblr.com is the place that most consistently embodies the spirit I'm pointing at. There's also a Rob Bensinger FB comment somewhere I can't find that argues "Effective Altruism is an oblitunity", which is maybe the single most succinct explanation of it (and slightly more accurate-feeling than Nate's altruistic motivations post).
↑ comment by Qiaochu_Yuan · 2018-05-15T00:09:31.169Z
So, yes, in addition to my own story I have more thoughts about what kind of story I want for people in general, roughly along these lines:
And he said – no, absolutely, stay in your career right now. In fact, his philosophy was that you should do exactly what you feel like all the time, and not worry about altruism at all, because eventually you’ll work through your own problems, and figure yourself out, and then you’ll just naturally become an effective altruist.
Or not, and that would also be fine.
I have strong intuitions about a thing which I'll roughly label "not skipping developmental stages." I think there is something like a developmental stage at which thinking about altruism is natural and won't slowly corrupt your soul, and I worry about something like people not knowing what stage they're at, not being at this stage, and trying to pretend to themselves and others that they are. The problem is roughly, I think most people are trying to do EA at Kegan 3, which is subject to tons of Goodharting / signaling issues, and it seems like a bad idea to me to seriously try to do EA until Kegan 4 or 5.
↑ comment by catherio · 2018-05-19T04:54:57.464Z
I hadn't read that link on the side-taking hypothesis of morality before, but I note that if you find that argument interesting, you would like Gillian Hadfield's book "Rules for a Flat World". She talks about law (not "what courts and congress do" but broadly "the enterprise of subjecting human conduct to rules") and emphasizes that law is similar to norms/morality, except in addition there is a canonical place that "the rules" get posted and also a canonical way to obtain a final arbitration about questions of "did person X break the rule?". She emphasizes that these properties enable third-party enforcement of rules with much less assumption of personal risk (because otherwise, if there's no final arbitration about whether a rule got broken, someone might punish *me* for punishing the rule-breaker). While other primates have altruism and even norms, they do not appear to have third-party enforcement. Anyway, consider this a book recommendation.
I'm a little perplexed about what you find horrifying about the side-taking hypothesis. In my view, the whole point of everything is basically to assemble the largest possible coalition of as many beings as we can possibly coordinate, using the best possible coordination mechanisms we collectively have access to, so that as many as possible of us can play this game and have a good time playing it for as long as we can. Of course we need to protect that coalition and defend it from its enemies, because there will always be enemies. But hopefully we can make there be fewer of them so that more of us can play.
If that's the whole point of everything, then a system in which we can constantly make coordinated decisions about which side is "the big coalition of all of us" and keep the number of enemies to a minimum seems like *fantastic* technology and I want us all to be using it.
As a side note, I saw recently somewhere in the blogosphere a discussion about whether the development of human intelligence was fueled by advantages in creating laws (versus "breaking laws" or "some other reason"), but I don't recall where that was and I would appreciate a reference if someone has one. The basic idea was that laws and morality both require a kind of abstract thinking - logical quantifiers like "for all people with property X" and "Y is allowed only if Z" - which, lo and behold, Homo sapiens seems to have evolved for some reason, and that reason might've been to reason abstractly about social rules. (Indeed, people are much better at the Wason card-flipping task when policing a social rule than when deducing abstract properties.)
↑ comment by Qiaochu_Yuan · 2018-05-21T02:27:00.653Z
I'm a little perplexed about what you find horrifying about the side-taking hypothesis.
I think there was a part of me that was still in some sense a moral realist and the side-taking hypothesis broke it.
↑ comment by lionhearted (Sebastian Marshall) · 2018-05-21T20:55:49.938Z
Wow. Huge respect for noticing that and then just saying it outright. That's... hard to do. Or at least, rare.
Also the side-taking morality link is extremely thought-provoking; it led to one of those "wow how come I never thought of this before..." moments -- thanks.
↑ comment by Hazard · 2018-05-19T01:20:18.917Z
I notice that it feels easier for me to ask, "What is good?" than to ask, "What do I want?"
This is something that I don't super endorse and am trying to investigate more. It's also possible that I do have easy access to what I want, but my "Is this good?" sensor shoots down using that for decision making really fast.
comment by sarahconstantin · 2018-05-15T00:43:29.738Z
Here's my trajectory:
1.) Worry a lot about "I'm not good"
2.) Improve in some dimensions, also refactor my moral priorities so that I no longer believe some of my 'bad traits' are really bad
3.) Still worry a lot about "I'm not good" where "good" refers to some eldritch horror that I no longer literally endorse
4.) Learn the mental motion of going "fuck it", where I just rest my brain and self-soothe. Do that until I deeply do not give a fuck whether I'm good or not.
5.) Notice a mild but consistent desire to do things that are, not "good", but "constructive" -- i.e. contribute to the construction of a nice thing that takes time and effort to complete.
6.) Notice that the people around me mostly like it when I do "constructive" things, and call them "good."
↑ comment by sarahconstantin · 2018-05-15T00:57:10.654Z
In this context, thinking about whether you are "good" is not "constructive."
Thinking about whether you're doing something "constructive" is, by contrast, extremely constructive.
↑ comment by lionhearted (Sebastian Marshall) · 2018-05-21T20:58:17.440Z
Well said. At the risk of asking for elaboration on an obvious point, do you have any examples of when this has paid off for you? Or, perhaps write a top-level post? On the one hand it's very easy to get one's mind around what you wrote... but I'd speculate there might have been some non-obvious takeaways?
It's a fascinating point. It'd be cool to read more about your perspective on it.
comment by ESRogs · 2018-05-14T20:03:26.989Z
I like the last option (goodness + compromise), but want to add two notes:
1. It really seems like someone's got to be doing some indulging (perhaps in moderation). Otherwise you're only saving children from malaria so that they can save other children from malaria, and it all adds up to a Disneyland with no children. (Or, I guess there are children, but they're all working rather than enjoying the park.)
2. After watching Schindler's List, I resolved not to be like him in the last scene, where he laments that he could have done just a little bit more. I want to get within an order of magnitude of the most good I could do. By analogy with computer science algorithms, I want it to have the right complexity class. If there are quick wins, or cheap buys, I want to take them. Worrying about every last ounce of goodness doesn't seem like how I want to be.
comment by ozymandias · 2018-05-15T04:07:20.724Z
I am not permitted to engage in morality for exactly the same reason an alcoholic is not permitted to have a drink-- I can never stop at just one.
By coincidence, when I try as solemnly as possible to figure out what I genuinely want to do, one of the things I want to do is to be St. Basil the Great. Of course, I want to do very many unrealistic and mutually contradictory things: in addition to being St. Basil the Great, I also want to go to Disneyland about once a month, cook three delicious meals for myself every day from scratch, read my son every board book in existence, and date every pretty person I come across. But for me my desire to go to Disneyland interfering with my desire to be St. Basil the Great is not actually any different from my desire to go to Disneyland interfering with my desire to cook all my meals from scratch. So I try to fulfill as many of my desires as best I can.
Also, I keep doing things I don't want to do instead of things I want to do.
When I adopted this policy I was concerned that not wanting to be good would mean I would end up doing some things I would feel upset about doing, but it turns out that the whole reason I feel upset about those things is that I don't upon reflection want to do them (although I might have impulses to do them at the time). Otherwise, it would be like "if you stop caring about being good then you might have gay sex!" Yes, and in fact that is a selling point of not caring about being good.
comment by Dagon · 2018-05-14T22:35:43.476Z
I bite some bullets.
1) I am not a good person in any absolute sense. I perhaps do more overall good than many other humans, and I signal caring pretty well so many of my fellow humans put me in the "good guy" category, but I choose to waste resources on my own selfish experience pretty routinely. Suck it, future.
2) I don't value people equally. I do in fact care more about close friends and family than strangers, and still less about more distant strangers. My "optimization of goodness" weights my own experiences and my perception of people I interact with MUCH higher (like billions of times) than undifferentiated strangers across the world. (Note that I do advocate for equality in terms of institutional design; I just don't think it applies at the personal level.)
↑ comment by lionhearted (Sebastian Marshall) · 2018-05-21T21:08:32.643Z
You know how people write 'lol' kind of casually on the internet?
I actually, literally, audibly laughed out loud at "Suck it, future." Thank you. I'm still chuckling, actually, that's a riot.
comment by Raemon · 2018-05-14T22:53:36.325Z
So, I think I can say things within the frame given here (I guess they'd mostly be similar to ESRogs' – the main difference being that I'd object harder to "indulgence" as an appropriate word for what's going on. The whole point, in my worldview, is for people to be able to have good experiences. "Indulgence" conjures to my mind a "you're taking a break from the thing you're supposed to be doing." The experiencing-of-good-things is an important part of The Good.)
But my inner Zvi is screaming at the entire framing of this [fake edit: so was my inner Qiaochu, although for different reasons, and he's already spoken for himself].
I will note that goodness is a fairly confusing concept, and is one of the places where I think it's good to catch up on some sequences if you haven't already.
Eliezer's Mere Goodness collection has a lot of theoretical background. (Much of this is oriented more towards "how do you make sure an AI is 'good'" as opposed to "what does 'good' mean as a human?" A lot of it is probably stuff you already know, and I'm not sure which is which.)
Nate Soares's Replacing Guilt series is more directly tailored as a response to the sort of question in the OP. I think a good post to start with, to see if the series is a good fit for you, is "Should" considered harmful.
(fyi, I kept running into Effective Altruist types who still seemed to have an unhealthy relationship with guilt, so I registered the domain doingguiltbetter.com to redirect to Nate's series)
comment by [deleted] · 2018-05-14T19:33:12.849Z
I don't think I have too much to add, but I thought it was nice to see these different takes fleshed out.
My own views are a mix of your last two. It goes a little like:
"At best, humans are operating off leaky abstractions when it comes to 'doing good'. It's not realistic to expect yourself to do The Goodest Thing (TM) all the time, even when it seems like it's doable. Just like how you can't always swim to the other side of the shore, even if you can see it from across the waves. There's a lot of other factors involved, and doing things like taking breaks are instrumental in the long run to avoid burnout."
comment by RedMan · 2018-05-15T06:47:13.681Z
Relevant and still funny: https://en.m.wikipedia.org/wiki/The_Goode_Family
Every study I'm aware of investigating the topic shows that 'doing good' mentally buys people rationalizations for bad behavior. So I would say the difficulty of doing good increases as more good is done, and that increase is so steep that most people end up pretty close to 'even' between doing good and doing not-good. Here's a recent popular press article referencing academic research: http://m.nautil.us/blog/why-doing-good-makes-it-easier-to-be-bad
Charity, particularly self-directed charity performed as penance, is a downright disgusting practice when viewed in this light.
I try to have good habits like recycling, so I am 'good without thinking about it'.
comment by Evan_Gaensbauer · 2018-05-14T19:47:52.988Z
I found it helped to expand my definition of virtue. I learned that I am the best and perhaps only person willing to take care of myself in a way that ensures my long-term capability to do good. This is different from being the only person willing to take care of myself in the way I want, or the way I think is best. This allows me to defer to my own sense of what's best for me over others' advice; but because I'm only sovereign over myself, and not necessarily the best decision-maker in all instances, I find it easier than before to defer to others' advice when it makes more sense than anything I can think of to improve my own lot in life. This attitude expands beyond the self concentrically. For example, the people best-suited to ensuring the long-term well-being of a community are the people living in that community; in a country, the people of that country; and during a period of time, the people living during that period of time.
Realizing other people aren't best-suited to care for you, even if they want to (perhaps more than you do), makes it imperative to learn how to take care of oneself. In this case, selfish indulgence and virtue are inverted.
For instance, just now I went and listened to a choir singing. You might also admire kittens, or play video games, or curl up in a ball, or watch a movie, or try to figure out whether the actress in the movie was the same one that you saw in a different movie. I’ll call this ‘indulgence’, though it is not quite the right category.
Admiring kittens, playing video games, or watching movies in moderation switches from "I should be doing good, but I'm going to indulge myself" to "I'm running out of spoons to do good today, and while I could run on willpower, and my peers would be eager to see me keep burning the candle at both ends, they don't internalize that I need to recharge my batteries to avoid burnout." "Indulgence" in moderation becomes reframed as responsible self-care.