Commonsense Good, Creative Good
post by jefftk (jkaufman) · 2023-09-27T19:50:07.486Z · LW · GW · 11 comments
Let's say you're vegan and you go to a vegan restaurant. The food is quite bad, and you'd normally leave a bad review, but now you're worried: what if your bad review leads people to go to non-vegan restaurants instead? Should you refrain from leaving a review? Or leave a false review, for the animals?
On the other hand, there are a lot of potential consequences of leaving a review beyond "it makes people less likely to eat at this particular restaurant, and they might eat at a non-vegan restaurant instead". For example, three plausible effects of artificially inflated reviews could be:
Non-vegans looking for high-quality food go to the restaurant, get vegan food, think "even highly rated vegan food is terrible", don't become vegan.
Actually good vegan restaurants have trouble distinguishing themselves, because "helpful" vegans rate everywhere five stars regardless of quality, and so the normal forces that push up the quality of food don't work as well. Now the food tastes bad and fewer people are willing to sustain the sacrifice of being vegan.
People notice this and think "if vegans are lying to us about how good the food is, are they also lying to us about the health impacts?" Overall trust in vegans (and utilitarians) decreases.
Despite thinking that it is the outcomes of actions that determine whether they are a good idea, I don't think this kind of reasoning about everyday things is actually helpful. It's too easy to tie yourself in logical knots, making a decision that seems counterintuitive-but-correct, except if you spent longer thinking about it, or discussed it with others, you would have decided the other way.
We are human beings making hundreds of decisions a day, with limited ability to know the impacts of our actions, and a worryingly strong capacity for self-serving reasoning. A full unbiased weighing of the possibilities is, sure, the correct choice if you relax these constraints, but in our daily lives that's not an option we have.
Luckily, humans have lived for centuries under these constraints, and we've developed ideas of what is "good" that turn out to be a solid guide to typical situations. Moral systems around the world don't agree on everything, but on questions of how to live your daily life they're surprisingly close: patience, respect, humility, moderation, kindness, honesty. I'm thankful we have all this learning on what makes for harmonious societies distilled into our cultures to support us in our interactions.
On the other hand, I do think there is a very important place for this kind of reasoning: sometimes our normal ideas of "good" are seriously lacking. For example, they don't give us much guidance once scale is involved: a donation that helps a hundred people and one that equivalently helps a thousand people are both "good" from a commonsense perspective, even though I think it's pretty clearly ten times better to go with the second. Similarly, if you're trying to decide between working as a teacher in a poor school, a therapist in a jail, a manager at a food pantry, or a firefighter in a disadvantaged community, common sense just says they're all "good" and leaves you there.
How do we reconcile this conflict, where carefully getting into the consequences of decisions can take a lot of time and risk strange errors, while never evaluating the outcomes of decisions risks having a much smaller positive effect on the world? I'd propose normally going for "commonsense good" and then in the most important cases going for "creative good".
The idea is, normally just do straightforwardly good things. Be cooperative, friendly, and considerate. Embrace the standard virtues. Don't stress about the global impacts or second-order altruistic effects of minor decisions. But also identify the very small fraction of your decisions which are likely to have the largest effects and put a lot of creative energy into doing the best you can. Questions like, what cause areas are most important, or what should I do with my time and/or money? On those decisions, make a serious effort to figure out what will have the best effects: read what other people have to say, talk to people who've made similar decisions, form your own views, consider writing up your conclusions, and stay open to evidence that you could be doing better.
For example, perhaps after a lot of thinking you decide that animals matter a lot more than most people seem to think they do, especially less-fuzzy ones like shrimp, and that improving their situation is one of the most urgent ways to make the world better. You might decide to start donating to support animal organizations, or even switch careers to work on it full time. You might decide to stop eating animal products as a way to show how important this is and build the world you want to see. But in your day-to-day life I'd recommend doing normal good things: maintain a good work-life balance even though your work matters a lot; give your friends honest health advice, not what will get them to minimize animal consumption; review restaurants as a consumer, not an animal advocate.
11 comments
Comments sorted by top scores.
comment by AnnaSalamon · 2023-10-02T15:14:06.354Z · LW(p) · GW(p)
The idea is, normally just do straightforwardly good things. Be cooperative, friendly, and considerate. Embrace the standard virtues. Don't stress about the global impacts or second-order altruistic effects of minor decisions. But also identify the very small fraction of your decisions which are likely to have the largest effects and put a lot of creative energy into doing the best you can.
I agree with this, but would add that IMO, after you work out the consequentialist analysis of the small set of decisions that are worth intensive thought/effort/research, it is quite worthwhile to additionally work out something like a folk ethical account of why your result is correct, or of how the action you're endorsing coheres with deep virtues/deontology/tropes/etc.
There are two big upsides to this process:
- As you work this out, you get some extra checks on your reasoning -- maybe folk ethics sees something you're missing here; and
- At least as importantly: a good folk ethical account will let individuals and groups cohere around the proposed action, in a simple, conscious, wanting-the-good-thing-together way, without needing to dissociate from what they're doing (whereas accounts like "it's worth dishonesty in this one particular case" will be harder to act on wholeheartedly, even when basically correct). And this will work a lot better.
IMO, this is similar to: in math, we use heuristics and intuitions and informal reasoning a lot, to guess how to do things -- and we use detailed, not-condensed-by-heuristics algebra or mathematical proof steps sometimes also, to work out how a thing goes that we don't yet find intuitive or obvious. But after writing a math proof the sloggy way, it's good to go back over it, look for "why it worked," "what was the true essence of the proof, that made it tick," and see if there is now a way to "see it at a glance," to locate ways of seeing that will make future such situations more obvious, and that can live in one's system 1 and aesthetics as well as in one's sloggy explicit reasoning.
Or, again, in coding: usually we can use standard data structures and patterns. Sometimes we have to hand-invent something new. But after coming up with the something new, it's good, often, to condense it into readily parsable/remember-able/re-useable stuff, instead of leaving it as hand-rolled spaghetti code.
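(A toy before/after, purely an illustration of the point above -- the running-total example and the `tally` helper are made up:)

```python
from collections import defaultdict

# Ad-hoc version: the idea lives only as inline "spaghetti," re-invented each time.
totals = {}
for sale in [("apples", 3), ("pears", 2), ("apples", 4)]:
    if sale[0] not in totals:
        totals[sale[0]] = 0
    totals[sale[0]] += sale[1]

# Condensed version: the same idea, named and reusable, parsable at a glance.
def tally(pairs):
    """Sum values by key -- the hand-invented pattern extracted into a small helper."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

totals = tally([("apples", 3), ("pears", 2), ("apples", 4)])
```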
Or, in physics and many other domains: new results are sometimes counterintuitive, but it is advisable to then work out intuitions [LW · GW] whereby reality may be more intuitive in the future.
I don't have my concepts well worked out here yet, which is why I'm being so long-winded and full of analogies. But I'm pretty sure that folk ethics, where we have it worked out, has a bunch of advantages over consequentialist reasoning that're kind of like those above.
- As the OP notes, folk ethics can be applied to hundreds of decisions per day, without much thought per each;
- As the OP notes, folk ethics have been tested across huge numbers of past actions by huge numbers of people. New attempts at folk ethical reasoning can't have this advantage fully. But: I think when things are formulated simply enough, or enough in the language of folk ethics, we can back-apply them a lot more on a lot of known history and personally experienced anecdotes and so on (since they are quick to apply, as in the above bullet point), and can get at least some of the "we still like this heuristic after considering it in a lot of different contexts with known outcomes" advantage.
- As OP implies, folk ethics is more robust to a lot of the normal human bias temptations ("x must be right, because I'd find it more convenient right this minute") compared to case-by-case reasoning;
- It is easier for us humans to work hard on something, in a stable fashion, when we can see in our hearts that it is good, and can see how it relates to everything else we care about. Folk ethics helps with this. Maybe folk ethics, and notions of virtue and so on, kind of are takes on how a given thing can fit together with all the little decisions and all the competing pulls as to what's good? E.g. the OP lists as examples of commonsense goods "patience, respect, humility, moderation, kindness, honesty" -- and all of these are pretty usable guides to how to be while I care about something, and to how to relate that caring to all my other cares and goals.
- I suspect there's something particularly good here with groups. We humans often want to be part of groups that can work toward a good goal across a long period of time, while maintaining integrity, and this is often hard because groups tend to degenerate with time into serving individuals' local power, becoming moral fads, or other things that aren't as good as the intended purpose. Ethics, held in common by the group's common sense, is a lot of how this is ever avoided, I think; and this is more feasible if the group is trying to serve a thing whose folk ethics (or "commonsense good") has been worked out, vs something that hasn't.
For a concrete example:
AI safety obviously matters. The folk ethics of "don't let everyone get killed if you can help it" are solid, so that part's fine. But in terms of tactics: I really think we need to work out a "commonsense good" or "folk ethics" type account of things like:
- Is it okay to try to get lots of power, by being first to AI and trying to make use of that power to prevent worse AI outcomes? (My take: maybe somehow, but I haven't seen the folk ethics worked out, and a good working out would give a lot of checks here, I think.)
- Is it okay to try to suppress risky research, e.g. via frowning at people and telling them that only bad people do AI research, so as to try to delay tech that might kill everyone? (My take: probably, on my guess -- but a good folk ethics would bring structure and intuitions somehow, like, it would work out how this is different from other kinds of "discourage people from talking and figuring things out," it would have perceivable virtues or something for noticing the differences, which would help people then track the differences on the group commonsense level in ways that help the group's commonsense not erode its general belief in the goodness of people sharing information and doing things).
↑ comment by Tristan Williams (tristan-williams) · 2023-10-06T10:36:35.135Z · LW(p) · GW(p)
Just posted a comment in part in response to you (but not enough to post it as a response) and would love to have your thoughts!
comment by tlevin (trevor) · 2023-09-29T00:44:13.591Z · LW(p) · GW(p)
Just want to plug Josh Greene's great book Moral Tribes here (disclosure: he's my former boss). Moral Tribes basically makes the same argument in different/more words: we evolved moral instincts that usually serve us pretty well, and the tricky part is realizing when we're in a situation that requires us to pull out the heavy-duty philosophical machinery.
comment by Adam Kaufman (Eccentricity) · 2023-09-28T05:58:36.833Z · LW(p) · GW(p)
Yeah, in general, we are pretty compute limited and should stick to good heuristics for most kinds of problems. I do think that most people rely too much on heuristics, so for the average person the useful lesson is "actually stop and think about things once in a while", but I can see how the opposite problem may sometimes arise in this community.
comment by Tristan Williams (tristan-williams) · 2023-10-06T10:35:19.483Z · LW(p) · GW(p)
[Forum Repost] Didn't catch this until just now, but happy to see the idea expanded a bit more! I'll have to sit down and think on it longer, but I did have some immediate thoughts.
I guess at its core I'm unsure what exactly a proper balance of thinking about folk ethics[1] (or commonsense good) and reasoned ethics[2] (or creative good) is, when exactly you should engage in each. You highlight the content, that reasoned ethics should be brought in for the big decisions, those with longevity generally. And Anna [LW(p) · GW(p)] starts to map this out a bit further, saying reasoned ethics involves an analysis of "the small set of decisions that are worth intensive thought/effort/research." But even if the decision set is small, if it's just these really big topics, the time spent implementing major decisions like these is likely long and full of many day-to-day tradeoffs and choices. Sure, eating vegan is now a system one task for me, but part of what solidified veganism for me was bringing in my discomfort from reasoned ethics into my day to day for a while, for months even. The folk ethics there (for me) was entirely in the opposite direction, and I honestly don't think I would have made the switch if I didn't bring reasoned ethics into my everyday decisions.
I guess for that reason I'm kind of on guard, looking for other ways my commonsense intuitions about what I should do might be flawed. And sure, when you set it up like "folk ethics is just sticking to basic principles of benevolence, of patience, honesty and kindness" few will argue adherence to this is flawed. But it's rarely these principles and instead the application of them where the disagreement comes in. My family and I don't disagree that kindness is an important value, we disagree on what practicing kindness in the world looks like.
In light of this, I think I'd propose the converse of Anna's comments: stick to folk ethics for most of the day-to-day stuff, but with some frequency[3] bring the reasoned ethics into your life, into the day to day, to see if how you are living is in accord with your philosophical commitments. This could look like literally going through a day wearing the reasoned ethics hat, or it could even look like taking stock of what has happened over a period of time and reflecting on whether those daily decisions are in accord. Maybe this community is different, but I agree with Eccentricity [LW(p) · GW(p)] that I generally see way too little of this in the world, and really wish people engaged in it more.
- ^
I'll use folk ethics in place of commonsense good hereafter because I find the term compelling
- ^
I'll use reasoned ethics in place of creative good because I think this set (folk ethics and reasoned ethics) feels more intuitive. Sorry for changing the language, it just made it easier for me to articulate myself here.
- ^
Really unsure what's best here so I'm leaving it intentionally vague. If I had to suggest something, at least an annual review and time of reflection is warranted (I like to do this at the end of the calendar year but I think you could do it w/e) and at most I think checking in each week (running through a day at the end of the week really thinking if the decisions and actions you are taking make sense) might be good.
↑ comment by AnnaSalamon · 2023-10-06T23:58:16.331Z · LW(p) · GW(p)
Thanks for this response; I find it helpful.
Reading it over, I want to distinguish between:
- a) Relatively thoughtless application of heuristics; (system-1-integrated + fast)
- b) Taking time to reflect and notice how things seem to you once you've had more space for reflection, for taking in other people's experiences, for noticing what still seems to matter once you've fallen out of the day-to-day urgencies, and for tuning into the "still, quiet voice" of conscience; (system-1-integrated + slow, after a pause)
- c) Ethical reasoning (system-2-heavy, medium-paced or slow).
The brief version of my position is that (b) is awesome, while (c) is good when it assists (b) but is damaging when it is acted on in a way that disempowers rather than empowers (b).
--
The long-winded version (which may be entirely in agreement with your (Tristan's) comment, but which goes into detail because I want to understand this stuff):
I agree with you and Eccentricity that most people, including me and IMO most LWers and EAers, could benefit from doing more (b) than we tend to do.
I also agree with you that (c) can assist in doing (b). For example, it can be good for a person to ask themselves "how does this action, which I'm inclined to take, differ from the actions I condemned in others?", "what is likely to happen if I do this?", and "do my concepts and actions fit the world I'm in, or is there a tiny note of discord [LW · GW]?"
At the same time, I don't want to just say "c is great! do more c!" because I share with the OP a concern that EA-ers, LW-ers, and people in general who attempt explicit ethical reasoning sometimes end up using these to talk themselves into doing dumb, harmful things, with the OP's example of "leave inaccurate reviews at vegan restaurants to try to save animals" giving a good example of the flavor of these errors, and with historical communism giving a good example of their potential magnitude.
My take as to the difference between virtuous use of explicit ethical reasoning, and vicious/damaging use of explicit ethical reasoning, is that virtuous use of such reasoning is aimed at cultivating and empowering a person's [prudence, phronesis, common sense, or whatever you want to call a central faculty of judgment that draws on and integrates everything the person discerns and cares about], whereas vicious/damaging uses of ethical reasoning involve taking some piece of the total set of things we care about, stabilizing it into an identity and/or a social movement ("I am a hedonistic utilitarian", "we are (communists/social justice/QAnon/EA)"), and having this artificially stabilized fragment of the total set of things one cares about, act directly in the world without being filtered through one's total discernment ("Action A is the X thing to do, and I am an X, so I will take action A").
(Prudence was classically considered not only a virtue, but the "queen of the virtues" -- as Wikipedia puts it "Prudence points out which course of action is to be taken in any concrete circumstances... Without prudence, bravery becomes foolhardiness, mercy sinks into weakness, free self-expression and kindness into censure, humility into degradation and arrogance, selflessness into corruption, and temperance into fanaticism." Folk ethics, or commonsense ethics, has at its heart the cultivation of a total faculty of discernment, plus the education of this faculty to include courage/kindness/humility/whatever other virtues.)
My current guess as to how to develop prudence is basically to take an interest in things, care, notice tiny notes of discord, notice what actions have historically had what effects, notice when one is oneself "hijacked" into acting on something other than one's best judgment and how to avoid this, etc. I think this is part of what you have in mind about bringing ethical reasoning into daily life, so as to see how kindness applies in specific cases rather than merely claiming it'd be good to apply somehow?
Absent identity-based or social-movement-based artificial stabilization, people can and do make mistakes, including e.g. leaving inaccurate reviews in an attempt to help animals. But I think those mistakes are more likely to be part of a fairly rapid process of developing prudence (which seems pretty worth it to me), and are less likely to be frozen in and acted on for years.
(My understanding isn't great here; more dialog would be great.)
comment by dr_s · 2023-09-28T10:30:13.161Z · LW(p) · GW(p)
Luckily, humans have lived for centuries under these constraints, and we've developed ideas of what is "good" that turn out to be a solid guide to typical situations. Moral systems around the world don't agree on everything, but on questions of how to live your daily life they're surprisingly close: patience, respect, humility, moderation, kindness, honesty. I'm thankful we have all this learning on what makes for harmonious societies distilled into our cultures to support us in our interactions.
In a sense, deontology is nothing if not well-tested consequentialism. Just rules selected for (sometimes even by quite literal evolutionary processes) because they work in terms of making societies thrive, turned into broad heuristics. That still doesn't mean they're always right, especially in such an out-of-distribution world as the modern one we've created, but yeah, for trivial stuff they work.
One thing to note though is that the review question you pose actually can be seen as a conflict between two very basic common sense principles. One is to be helpful and friendly to someone you've directly interacted with and who maybe was helpful and friendly to you, even though perhaps not a great cook. The other is to be truthful to perfect strangers, even if it means slightly hurting that person. Different moralities and cultures actually tend to produce different answers to these questions. There's an expression that was coined to describe a certain flaw widespread in some parts of Italian culture, "amoral familism" - the tendency to prefer simply helping your own, your tribe (family, friends, acquaintances) over any other value. That's overly skewed towards one end and often actively harmful for a healthy social fabric. At the other end though, even being always strictly rule-bound and never moved by sympathy can make for a fundamentally cruel demeanor. So in many ways there's still some balancing to do, probably based on weighing a sense of immediate consequences at least. Though of course trying to go too deep down the rabbit hole of multiple orders of effects leads to infinite recursion and paralysis.
comment by rpglover64 (alex-rozenshteyn) · 2023-09-28T14:00:07.564Z · LW(p) · GW(p)
This makes me think that a useful meta-principle for application of moral principles in the absence of omniscience is "robustness to auxiliary information." Phrased another way, if the variance of the outcomes of your choices is high according to a moral principle, in all but the most extreme cases, either find more information or pick a different moral principle.
↑ comment by Coding2077 (sven-schoene) · 2023-09-29T03:45:16.326Z · LW(p) · GW(p)
This sounds intuitively interesting to me.
Can you maybe give an example or two (or one example and one counterexample) to help illustrate how a moral principle displaying "robustness to auxiliary information" operates in practice, versus one that does not? Specifically, I'm interested in understanding how the variance in outcomes might manifest with the addition of new information.
↑ comment by rpglover64 (alex-rozenshteyn) · 2023-09-30T18:08:00.001Z · LW(p) · GW(p)
Let's consider the trolley problem. One consequentialist solution is "whichever choice leads to the best utility over the lifetime of the universe", which is intractable. This meta-principle rules it out as follows: if, for example, you learned that one of the 5 was on the brink of starting a nuclear war and the lone one was on the brink of curing aging, that would say stay, but if the two identities were flipped, it would say switch, and generally, there are too many unobservables to consider. By contrast, a simple utilitarian approach of "always switch" is allowed by the principle, as are approaches that take into account demographics or personal importance.
The principle also suggests that killing a random person on the street is bad, even if the person turns out to be plotting a mass murder, and conversely, a doctor saving said person's life is good.
Two additional cases where the principle may be useful and doesn't completely correspond to common sense:
- I once read an article by a former vegan arguing against veganism and vegetarianism; one example was the fact that the act of harvesting grain involves many painful deaths of field mice, and that's not particularly better than killing one cow. Applying the principle, this suggests that suffering or indirect death cannot straightforwardly be the basis for these dietary choices, and that consent is on shaky ground.
- If you're thinking about building a tool (like the LW infrastructure) that could be either hugely positive (because it leads to aligned AI) or hugely negative (because it leads to unaligned AI by increasing AI discussions), and there isn't really a way to know which, you are morally free to build it or not; any steps you take to increase the likelihood of a positive outcome are good, but you are not required to stop building the tool due to a huge unknowable risk. Of course, if there's compelling reason to believe that the tool is net-negative, that reduces the variance and suggests that you shouldn't build it (e.g. most AI capabilities advancements).
Framed a different way, the principle is, "Don't tie yourself up in knots overthinking." It's slightly reminiscent of quiescence search in that it's solving a similar "horizon effect" problem, but it's doing so by discarding evaluation heuristics that are not locally stable.
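(For readers who don't know the chess-engine reference, a minimal Python sketch of quiescence search -- `evaluate`, `noisy_moves`, and `play` are hypothetical helpers supplied by the caller, not any real library's API. The analogy is that the static heuristic is only trusted at positions that are locally quiet; anything unstable gets searched further instead of evaluated on the spot.)

```python
def quiescence(position, alpha, beta, evaluate, noisy_moves, play):
    """Negamax-style quiescence search: only trust the evaluation at 'quiet' positions."""
    stand_pat = evaluate(position)       # heuristic score, reliable only when quiet
    if stand_pat >= beta:
        return beta                      # already good enough to cause a cutoff
    alpha = max(alpha, stand_pat)

    for move in noisy_moves(position):   # captures/checks: the locally unstable part
        score = -quiescence(play(position, move), -beta, -alpha,
                            evaluate, noisy_moves, play)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha                         # the verdict rests only on quiet evaluations
```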
↑ comment by Coding2077 (sven-schoene) · 2023-09-30T20:09:58.038Z · LW(p) · GW(p)
Thanks for the explanation. This makes a lot of sense to me now. I'm glad I asked!
While I agree that there is value in "don't tie yourself up in knots overthinking", my intuition tells me that there is a lot of value in just knowing about / considering that there is more information about a situation to be had, which might, in theory, influence my decision about that situation in important ways. It changes how I engage with all kinds of situations beforehand, and also after the fact. So considering the motivations and backstories of the people in the trolley problem does have value, even if in that particular moment I do not have the time to gather more information and a decision needs to be made quickly.
I don't think that this point needs to be made for people on this forum. It's more aimed at people who are looking for rules / strategies / heuristics to robotically and mindlessly apply to their lives (and enforce those rules on others).