Choosing the Zero Point

post by orthonormal · 2020-04-06T23:44:02.083Z · LW · GW · 24 comments

Summary: You can decide what state of affairs counts as neutral, and what counts as positive or negative. Bad things happen if humans do that in our natural way. It's more motivating and less stressful if, when we learn something new, we update the neutral point to [what we think the world really is like now].

A few years back, I read an essay by Rob Bensinger about vegetarianism/veganism, and it convinced me to at least eat much less meat. This post is not about that topic. It's about the way that essay differed, psychologically, from many others I've seen on the same topic, and the general importance of that difference.

Rob's essay referred to the same arguments I'd previously seen, but while other essays concluded with the implication "you're doing great evil by eating meat, and you need to realize what a monster you've been and immediately stop", Rob emphasized the following:

Frame animal welfare activism as an astonishingly promising, efficient, and uncrowded opportunity to do good. Scale back moral condemnation and guilt. LessWrong types can be powerful allies, but the way to get them on board is to give them opportunities to feel like munchkins with rare secret insights, not like latecomers to a not-particularly-fun party who have to play catch-up to avoid getting yelled at. It’s fine to frame helping animals as challenging, but the challenge should be to excel and do something astonishing, not to meet a bare standard for decency.

That shouldn't have had different effects on me than other essays, but damned if it didn't.


Consider a utilitarian Ursula with a utility function U. U is defined over all possible ways the world could be, and for each of those ways it gives you a number. Ursula's goal is to maximize the expected value of U.

Now consider the utility function V, where V always equals U + 1. If a utilitarian Vader with utility function V is facing the same choice (in another universe) as Ursula, then because that +1 applies to every option equally, the right choice for Vader is the same as the right choice for Ursula. The constant difference between U and V doesn't matter for any decision whatsoever!

We represent this by saying that a utility function is only defined up to positive affine transformations. (That means you can also multiply U by any positive number and it still won't change a utilitarian's choices.)
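
To see this in miniature, here is a toy Python sketch (the outcomes, probabilities, and numbers are all invented for illustration): whether we use U, U + 1, or any positive affine transform of U, the maximizing choice comes out the same.

```python
def expected_value(utility, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * utility(outcome) for p, outcome in lottery)

# Invented toy outcomes and utilities.
outcomes = {"status quo": 0.0, "small win": 2.0, "big win": 5.0}

def U(outcome):
    return outcomes[outcome]      # Ursula's utility

def V(outcome):
    return U(outcome) + 1         # Vader's utility: V = U + 1

def W(outcome):
    return 3 * U(outcome) + 1     # an arbitrary positive affine transform of U

options = {
    "safe":  [(1.0, "small win")],
    "risky": [(0.6, "big win"), (0.4, "status quo")],
}

for utility in (U, V, W):
    best = max(options, key=lambda a: expected_value(utility, options[a]))
    assert best == "risky"  # the shift/scale applies to every option equally
```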

But humans aren't perfect utilitarians, in many interesting ways. One of these is that our brains have a natural notion of outcomes that are good and outcomes that are bad, and the neutral zero point is more or less "the world I interact with every day".

So if we're suddenly told about a nearby bottomless pit of suffering, what happens?

Our brains tend to hear, "Instead of the zero point where we thought we were, this claim means that we're really WAY DOWN IN THE NEGATIVE ZONE".

A few common reactions to this:

The thing about Rob's post is that it suggested an alternative. Instead of keeping the previous zero point and defining yourself as now being very far below it, you can reset yourself to take the new way-the-world-is as the zero point.

Again, this doesn't change any future choice a utilitarian you would make! But it does buy human you peace of mind. What is true is already so: the world was like this even when you didn't believe it.

The psychological benefits of this transformation:

A few last notes:

Now go forth, and make the world better than the new zero!

24 comments

Comments sorted by top scores.

comment by Isnasene · 2020-04-07T15:56:45.828Z · LW(p) · GW(p)

As an animal-welfare lacto-vegetarian who's seen a fair number of arguments along these lines, I can say they don't really do it for me. In my experience, it's not really possible to separate human peace of mind from the actions you take (the former reflects an ethical framework, the latter reflects strategies, and together they form an aesthetic feedback loop [LW · GW]). To be explicit:

  • I don't think my moral zero-point was ever up for grabs. Moreover, it wasn't "the world I interact with every day." It was driven by an internal sense of what makes existing okay and what doesn't, extrapolated over the universe. Raising or lowering my zero-point is therefore internally connected with my heuristic for whether more beings should exist or not, and in this sense the zero-point was only a proxy for my psychological anguish pointing at this concept. If I artificially inflate or deflate my zero-point while maintaining awareness that this has no effect on whether or not the average being existing is good or bad, it won't actually change how I feel psychologically.
  • A vast amount of my anguish around having a very low zero-point was social angst. A low zero-point (especially when due to animal welfare) not only meant that the world was bad; it meant that barely anyone cared (and in my immediate bubble, literally no one cares). This stuff occurred to me when I was very young and can result in what I now know to be institutional betrayal trauma. Had I been an ordinary kiddo who didn't make real-time psychological corrections when my brain started acting funny, this would've happened to me.
    • Also, while I get what you're saying, having a different value of something psychologically linked to a normative claim about "when it is good to exist" or "the bare standard of human decency" will gaslight people traumatized by mismatches between those claims and people's actual actions. If you keep this zero-point alteration tool solely for the psychological benefits, it's not a big deal. But if you talk to people about ethics and think your moral statements might be reflective of a modified zero-point, then it can be an issue. In light of this, I'd recommend prefacing your ethical statements with something like "if I seem insufficiently horrified, it is only because I am deliberately modifying my definition of the bare standard of human decency/zero-point for reasons of mental well-being". Otherwise, you'll mess a whole bunch of people up.
  • You've pointed out that changing your zero-point gives you a number of psychological benefits. However, I think most of these psychological benefits come from the fact that people are more satisficing than utilitarian, and this makes zero-point shifts also act as nonlinear transformations of your utility function (see the sketch after this list). If you're accustomed to being internally satisfied by the world having utility over threshold X and you change your zero-point for the world without changing that threshold, you'll predictably have more acceptance, relief, and hope, but this is because you've performed a de facto nonlinear transformation of your utility function. Sometimes this, conditioned on being an irrational human, is a good thing to do to be more effective. Sometimes it makes you vulnerable to unbounded amounts of moral hazard. If you're arguing in favor of zero-point moving, you need to address the concerns implied by the latter possibility.
  • For evidence that these claims generalize beyond me, just look at your quote from Rob. He's talking about a "bare standard of human decency", but note that this standard is actually a set of strategies! As you pointed out, strategies are invariant if you change your utility function's zero point, so the bare standard of human decency should be invariant too! As a non-utilitarian, this means you have four options with respect to your zero-point, and each of them has its own drawbacks:
    • Not changing your zero-point and biting the bullet psychologically.
    • Changing your zero-point but decoupling it from your sense of the "bare standard of human decency", which is held constant. This eliminates the psychological benefits.
    • Changing your zero-point and allowing your "bare standard of human decency" to drift. This modifies your utility function.
    • Changing your zero-point and allowing your "bare standard of decency" to drift, but decoupling your "bare standard of decency" from the actions you actually take. This will either eliminate the psychological benefits or break your sense of ethics.
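
(To make the satisficing point above concrete, here is a minimal Python sketch; the option names, numbers, and threshold are invented for illustration. A zero-point shift leaves a maximizer's choice alone but flips a fixed-threshold satisficer's verdict, which is the de facto nonlinear change described above.)

```python
def maximizer_choice(options, utility):
    """A maximizer picks whichever option has the highest utility."""
    return max(options, key=utility)

def satisficer_content(world_utility, threshold=0.0):
    """A satisficer is content iff the world clears a fixed threshold."""
    return world_utility >= threshold

U = {"do nothing": 1.0, "donate": 3.0}            # invented numbers
shift = -10.0                                      # learning about the pit lowers every outcome
U_shifted = {k: v + shift for k, v in U.items()}

# The maximizer's choice is invariant under the shift...
assert maximizer_choice(U, U.get) == maximizer_choice(U_shifted, U_shifted.get)

# ...but the satisficer's contentment flips: with a fixed threshold, shifting
# the zero-point is a real behavioral change, not a harmless relabeling.
assert satisficer_content(U["donate"]) and not satisficer_content(U_shifted["donate"])
```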
Replies from: orthonormal, orthonormal, orthonormal
comment by orthonormal · 2020-04-07T17:47:45.910Z · LW(p) · GW(p)

(Splitting replies on different parts into different subthreads.)

The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)

For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their meat/egg consumption by two-thirds. The "insufficiently horrified" framing makes it sound like neither of the two people in the latter case really count, while at least one person in the former does count.

Do you agree (without getting into which outcome is easier for activism to achieve) that the latter outcome is preferable to the former? And separately, does it aesthetically feel better or worse?

Replies from: Isnasene
comment by Isnasene · 2020-04-07T23:39:41.655Z · LW(p) · GW(p)
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)

I share your problem with purity ethics... I almost agree with this? Frankly, I have some issue with using the claim "a utilitarian with a different zero-point/bare-standard of decency has the same utility function, so feel free to move yours!" and juxtaposing it with something kind of like the claim "it's alright to not be very utilitarian!" The claims kind of invalidate each other. Don't get me wrong, there's definitely some sort of ethical Pareto frontier where you balance the strength of each claim individually, but unless that's qualified, I'm not thrilled.

For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their meat/egg consumption by two-thirds. The "insufficiently horrified" framing makes it sound like neither of the two people in the latter case really count, while at least one person in the former does count.

There are two things going on here: the actual action of meat consumption and the internal characterization of horror. Actions that involve consuming less meat might point to short-term ethical improvements, but people who are horrified by consuming meat point to much longer-term ethical improvements. If I had a choice between two people who cut meat by two-thirds and the same people doing the same thing while also being kinda horrified by what they're doing, I'd choose the latter.

Do you agree (without getting into which outcome is easier for activism to achieve) that the latter outcome is preferable to the former? And separately, does it aesthetically feel better or worse?

For similar reasons, I'd prefer one vegan over two people who'd cut meat by 2/3. Being vegan points to a level of experienced horror and that points to them being a long-term ethical ally. Cutting meat by 2/3 points towards people who are kinda uncomfortable with animal suffering (but more likely health concerns tbh) but who probably aren't going to take any significantly helpful actions about it.

And in reverse, I'd prefer one meat-eater on the margin who does it out of physical necessity but is horrified by it to a vegan who does it because that's how they grew up. The long-term implication of the horror is sometimes better than the direct consequence of the action.

Replies from: orthonormal
comment by orthonormal · 2020-04-08T00:48:29.669Z · LW(p) · GW(p)

Thank you for confirming. I wanted to be sure I wasn't putting words in your mouth.

I think I just have a very different model than you of what most people tend to do when they're constantly horrified by their own actions.

I'm sorry about the animal welfare relevance of this analogy, but it's the best one I have:

The difference between positive reinforcement and punishment is staggering; you can train a circus animal to do complex tricks using either method, but only under the positive reinforcement method will the animal voluntarily engage further with the trainer. Train an animal with punishment and it will tend to avoid further training, and will escape the circus if at all possible.

This is why I think your psychology is unusual. I expect a typical person filled with horror about a behavior to change that behavior for a while (do the trained trick), but eventually find a way to not think about it (avoid the trainer) or change their beliefs in order to not find it horrible any longer (escape the circus). I can believe that your personal history makes the horror an extremely motivating force for you. I just don't think that's the default way for people to respond to those sorts of experiences and feelings.

It's also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives a positive reinforcement to helpful actions, rather than punishing oneself for any departure from helpful actions. And I expect that to help most people go farther.

Replies from: Isnasene, Isnasene
comment by Isnasene · 2020-04-08T23:24:36.373Z · LW(p) · GW(p)

Huh... I think the crux of our differences here is that I don't view my ethical intuition as a trainer which employs negative/positive reinforcement to condition my behavior; I just view it as me. And I care a good bit about staying me. The idea that people would choose to modify their ethical framework to reduce emotional unpleasantness, rather than a) performing a trick like donating, which isn't really that unpleasant in itself, or b) directly resolving the emotional pain in a way that doesn't modify the ethical framework or ultimate actions, really perturbs me.

Can you confirm that the above interpretation is appropriate? I think it's less clearly true than just "positive reinforcement vs punishment" (which I agree with) and I want to be careful interpreting it in this way. If I do, it will significantly update my world-model/strategy.

Replies from: orthonormal
comment by orthonormal · 2020-04-09T00:00:50.137Z · LW(p) · GW(p)

I think the self is not especially unified in practice for most people: the elephant and the rider, as it were. (Even the elephant can have something like subagents.) That's not quite true, but it's more true than the idea of a human as a unitary agent.

I'm mostly selfish and partly altruistic, and the altruistic part is working hard to make sure that its negotiated portion of the attention/energy/resource budget doesn't go to waste. Part of that is strategizing about how to make the other parts come along for the ride more willingly.

Reframing things to myself, in ways that don't change the truth value but do change the emphasis, is very useful. Other parts of me don't necessarily speak logic, but they do speak metaphor.

I agree that you and I experience the world very differently, and I assert that my experience is the more common one, even among rationalists.

Replies from: Isnasene
comment by Isnasene · 2020-04-09T17:53:33.913Z · LW(p) · GW(p)

Thanks for confirming. For what it's worth, I can envision your experience being a somewhat frequent one (and I think it's probably more common among rationalists than among the average Jo). It's somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world and give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions simply because I point out that they can. There's no specific ethical sub-agent and specific selfish sub-agent, just a whole vaguely selfish person with accurate framing and a willingness to allocate resources when it's easy.

Maybe these people have not internalized the implications of a low zero-point world in the same way we have but it generally pushes me away from a sub-agent framing with respect to the average person.

I'll also agree with your implication that my experience is relatively uncommon. I do far more internal double cruxes than the norm and it's definitely led to some unusual psychology -- I'm planning on doing a post on it one of these days.

comment by Isnasene · 2020-04-08T23:29:43.044Z · LW(p) · GW(p)
It's also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives a positive reinforcement to helpful actions, rather than punishing oneself for any departure from helpful actions.

I just want to point out that, while two utility functions that differ only in zero point produce the same outcomes, a single utility function with a dynamically moving zero-point does not. If I just pushed the world into the positive yesterday, why do I have to do it again today? The human brain is more clever than that and, to successfully get away with it, you'd have to be using some really nonstandard utilitarianism.

Replies from: orthonormal
comment by orthonormal · 2020-04-08T23:55:01.964Z · LW(p) · GW(p)

Of course you shouldn't plan to reset the zero point after actions! That's very different.

I use this sparingly, for observing big new facts that I didn't cause to be true. That doesn't change the relative expected utilities of various actions, so long as my expected change in utility from future observations is zero [LW · GW].
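
One way to cash out that condition formally (the notation here is invented for illustration, not taken from the post): suppose that on observing a big new fact $o$, which you did not cause, you reset your zero by subtracting $c(o) = \mathbb{E}[U \mid o]$, your updated estimate of how the world now stands. Then for any action $a$ still available,

$$\arg\max_a \, \mathbb{E}\left[\, U(w) - c(o) \mid a, o \,\right] \;=\; \arg\max_a \, \mathbb{E}\left[\, U(w) \mid a, o \,\right],$$

since $c(o)$ does not depend on $a$. And by conservation of expected evidence, $\mathbb{E}_o[c(o)] = \mathbb{E}[U]$: before you observe, the expected reset is zero relative to your current estimate.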

comment by orthonormal · 2020-04-07T17:41:23.948Z · LW(p) · GW(p)

(Splitting replies on different parts into different subthreads.)

Correct me if I'm wrong, but I hear you say that your sense of horror is load-bearing, that you would take worse actions if you did not feel a constant anguish over the suffering that is happening.

That could be true for you, but it seems counter to the way most people work. Constant anguish tends not to motivate; instead it leads to psychological collapse, or to frantic measures when patience would achieve more, or to protected beliefs that resist challenge in any small part.

Replies from: Isnasene
comment by Isnasene · 2020-04-07T23:12:06.172Z · LW(p) · GW(p)
Correct me if I'm wrong, but I hear you say that your sense of horror is load-bearing, that you would take worse actions if you did not feel a constant anguish over the suffering that is happening.

Load-bearing horror != constant anguish. There are ways to hold an intuitively low zero-point assessment of the world that don't lead to constant anguish. Other than that, I agree with you: constant anguish is bad. The extent of my ethics-related anguish is probably more along the lines of 2-3 hour blocks of periodic frustration that happen every couple weeks.

That could be true for you, but it seems counter to the way most people work. Constant anguish tends not to motivate; instead it leads to psychological collapse, or to frantic measures when patience would achieve more, or to protected beliefs that resist challenge in any small part.

Yeah, this is my experience with constant anguish as well (though the root cause of that was more school-related than anything else). I agree with your characterization and, as a mildly self-interested person, I also don't really think it's reasonable to demand that people be in constant anguish at all, regardless of the utilitarian consequences.

To play Devil's Advocate though, I (and many others) are not in the class of people whose psychological wellbeing or decision-making skills actually contribute much to ethical improvement; we're in the class of people who donate money. Unless the anguish of someone in this class is strong enough to impede wealth accumulation toward donating (which it basically can't once you have enough money that your stock market returns compete with your income), there's not really a reason to limit it.

comment by orthonormal · 2020-04-07T17:33:25.344Z · LW(p) · GW(p)

(Splitting replies on different parts into different subthreads.)

One part of this helped me recognize an important emendation: if many bad things are continuing to happen, then a zero point of "how things are right now" will still lead you inexorably into the negatives. I was intuitively thinking of "the expected trajectory of the world if I were instead a random person from my reference class" as my reference point, but I didn't crystallize that and put it in my post. Thank you, I'll add that in.

Replies from: Isnasene
comment by Isnasene · 2020-04-08T23:34:04.924Z · LW(p) · GW(p)
I was intuitively thinking of "the expected trajectory of the world if I were instead a random person from my reference class"

If you move your zero-point to reflect the world's trajectory with a random person from your reference class in your place, it creates incentives to view the average person in your reference class as less altruistic than they truly are, and to unconsciously normalize bad behavior in that class.

Replies from: orthonormal
comment by orthonormal · 2020-04-09T00:05:25.343Z · LW(p) · GW(p)

That's a good point. On the other hand, many people make their reference class the most impressive one they belong to rather than the least impressive one. (At least I did, when I was in academia; I may have been excellent in mathematics within many sets of people, but among the reference class "math faculty at a good institution" I was struggling to feel okay.) Impostor syndrome makes this doubly bad, if the people in one's reference class who are struggling don't make that fact visible.

There are two opposite pieces of advice here, and I don't know how to tell people which is true for them; if anything, I think they might gravitate to the wrong piece of advice, since they're already biased in that direction.

Replies from: Isnasene
comment by Isnasene · 2020-04-09T17:32:12.370Z · LW(p) · GW(p)
That's a good point. On the other hand, many people make their reference class the most impressive one they belong to rather than the least impressive one. (At least I did, when I was in academia; I may have been excellent in mathematics within many sets of people, but among the reference class "math faculty at a good institution" I was struggling to feel okay.)

Ah, understandable. I felt a similar way back when I was doing materials engineering -- and I admit I put a lot of work into figuring out how to connect my research with doing good before I moved on from that. I think that when you're working on something you're passionate about, you're much more likely to try to connect it to making a big positive impact and to convince yourself that your coworkers are making a big positive impact.

That being said, I think it's important to distinguish impressiveness from ethical significance and to recognize that impressiveness itself is a personally-selected free variable. If I described myself as a very skilled computational researcher (more impressive), I'd feel very good about my ethical performance relative to my reference class. But if I described myself as a financially blessed rationalist (less impressive), I'd feel rather bad.

There are two opposite pieces of advice here, and I don't know how to tell people which is true for them; if anything, I think they might gravitate to the wrong piece of advice, since they're already biased in that direction.

In any case, I agree with you at the object level with respect to academia. Because academic research is often a passion project, and we prefer our passions to be ethically significant, and academic culture is particularly conducive to impostor syndrome, overestimating the ethical contributions of our corresponding academic reference class is pretty likely. Now that I'm earning to give (EtG) in finance, the environmental consequences are different.

Actually, how about this: instead of benchmarking against a world where you're a random member of your reference class, you just benchmark against the world where you don't exist at all? It might be more lax than benchmarking against a member of your reference class in cases where your reference class is doing good things, but it also protects you from unnecessary ethical anguish caused by social distortions like impostor syndrome. Also, since we really want to believe that our existences are valuable anyway, it probably won't incentivize any psychological shenanigans we aren't already incentivized to do.

comment by Kenny · 2020-04-07T01:58:09.558Z · LW(p) · GW(p)

This is powerful. It's such simple (and blatant) 'framing' ... and yet it really does make me feel better about considering whether the world is (much) worse than I now believe.

Thanks!

comment by orthonormal · 2021-12-14T20:00:27.689Z · LW(p) · GW(p)

You can see my other reviews from this and past years, and check that I don't [LW(p) · GW(p)] generally [LW(p) · GW(p)] say [LW(p) · GW(p)] this sort of thing:

This was the best post I've written in years. I think it distilled an idea that's perennially sorely needed in the EA community, and presented it well. I fully endorse it word-for-word today.

The only edit I'd consider making is to have the "Denial" reaction explicitly say "that pit over there doesn't really exist".

(Yeah, I know, not an especially informative review - just that the upvote to my past self is an exceptionally strong one.)

comment by philip_b (crabman) · 2020-04-08T14:22:58.806Z · LW(p) · GW(p)

I suggest not only shifting the zero point, but also scaling utilities when you update on information about what's achievable and what's not. For example, suppose you thought that saving 1-10 people in poor countries was the best you could do with your life, and you felt like every life saved was +1 utility. But then you learned about longtermism and figured out that if you try, then in expectation you can save 1kk lives in the far future. In such a situation it doesn't make sense to continue caring about saving an individual life as much as you cared before this insight: your system-1 feeling for how good things can be won't be able to do its epistemological job then. It's better to scale the utility of saving lives down, so that +1kk lives is +10 utility, and +1 life is +1/100000 utility. This is related to Caring less [LW · GW].
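
(A quick arithmetic check of that rescaling in Python, reading 1kk as one million; the variable names are invented for illustration.)

```python
import math

lives_at_top = 1_000_000     # "1kk" lives: the new best-achievable outcome
target_top_utility = 10.0    # what the best achievable outcome should feel like

new_scale = target_top_utility / lives_at_top      # utility per life saved
assert math.isclose(new_scale, 1 / 100_000)        # +1/100000 per life, as stated
assert math.isclose(new_scale * lives_at_top, target_top_utility)
```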

However, this advice has a very serious downside: it will make it very difficult to communicate with "normies". If a person thinks saving a life is +1 utility and tells you that there's this opportunity to go and do it, and if you're like "meh, +1/100000 utility", they will see your reaction and think you're weird or heartless or something.

comment by TurnTrout · 2021-12-14T06:29:09.076Z · LW(p) · GW(p)

I really liked this post in 2020, and I really like this post now. I wish I had actually carved this groove into my habits of thought. I'm working on doing that now.

One complaint: I find the bolded "This post is not about that topic." to be distracting. I recommend unbolding, and perhaps removing the part from "This post" through "that difference."

Replies from: orthonormal
comment by orthonormal · 2021-12-14T19:39:19.361Z · LW(p) · GW(p)

Thank you!

Re: your second paragraph, I was (and am) of the opinion that, given the first sentence, readers were in danger of being sucked down into their thoughts on the object-level topic before they would even reach the meta-level point. So I gave a hard disclaimer then and there.

Your mileage varied, of course, but I model more people as having been saved by the warning lights than blinded by them.

comment by orthonormal · 2020-04-07T05:10:02.763Z · LW(p) · GW(p)

Changed the title because I realized it didn't match the terminology of the post. (I changed the post from a previous draft.)

comment by Donald Hobson (donald-hobson) · 2020-04-09T16:41:39.092Z · LW(p) · GW(p)
When it comes to personal virtue, the true neutral point for yourself shouldn't be "doing everything right", because you're consigning yourself to living in negative-land. A better neutral point is "a random person in my reference class". How are you doing relative to a typical [insert job title or credential or hobby here], in your effects on that community? Are you showing more discipline than the typical commenter on your Internet forum? That's a good starting point, and you can go a long way up from there.

If you take this literally, it will push you away from good reference classes. Don't join the Nazis just because it's really easy to do more good than the average Nazi. Maybe choose a reference class that you can't change your membership of, like beings that started off biochemically human. But I'm not sure you should sit on your solid gold yacht, proudly boasting that the amount you give to charity is slightly above the global median. And if you're paralysed in a freak accident, constantly bemoaning that you can do almost no good doesn't seem sensible either. Reference classes are fiddly and prone to "reference class tennis" (people batting different reference classes back and forth). Set the zero to optimize mental health.

Replies from: orthonormal
comment by orthonormal · 2020-04-09T19:21:04.964Z · LW(p) · GW(p)

To respond to the Godwin example, if your reference class is "Germans in the 1930s", I assert that there are far more altruistically effective actions one can take than "be a sincere reformist Nazi", to a much greater extent than "become entirely vegan" is a more altruistic option than "reduce meat/egg consumption by 2/3".

I agree that choosing the right reference class is difficult and subjective. The alternative of "imagine if you never existed" is interesting, but has the problem of the valley of bad rationality: people realize "I've already caused a carbon footprint and animal suffering" long before they realize "the amount of work it takes to offset more than I've caused is actually not that much". That leaves them feeling like they're deep in the Negative Zone for too long, with the risks I've mentioned.

comment by andrew sauer (andrew-sauer) · 2023-02-23T09:50:57.237Z · LW(p) · GW(p)

So if we're suddenly told about a nearby bottomless pit of suffering, what happens?

Ideally, the part of me that is still properly human and has lost its sanity a long time ago has a feverish laugh at the absurdity of the situation. Then the part of me that can actually function in a world like this gets to calculating and plotting just as always.