Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do.

post by Chi Nguyen · 2024-02-23T06:10:05.881Z · LW · GW · 18 comments

Comments sorted by top scores.

comment by Ann (ann-brown) · 2024-02-23T16:52:49.867Z · LW(p) · GW(p)

Reminded me of "All happy families are alike; each unhappy family is unhappy in its own way."

I'm unsure it's true that "roughly everyone thinks suffering is bad". In the simplified/truism form, maybe, but if you look at, for example, Christian theology, there's proposed utility to suffering in the ultimate effect it has on you; i.e., the most desirable states of yourself cannot be reached without suffering along the path.

comment by Signer · 2024-02-25T08:29:03.852Z · LW(p) · GW(p)

I think it’s extremely rare to have an asymmetric distribution towards thinking the best happiness is better in expectation.

In a survey from SSC, I counted ~10% of answers that preferred a <50% probability of heaven (otherwise hell) over the certainty of oblivion. 10% is not "extremely rare".

Replies from: Chi Nguyen
comment by Chi Nguyen · 2024-02-26T03:27:45.856Z · LW(p) · GW(p)

Whoa, I didn't know about this survey, pretty cool! Interesting results overall.

It's notable that 6% of people also report they'd prefer absolute certainty of hell over not existing, which seems totally insane from the point of view of my preferences. The 11% who prefer a trillion miserable sentient beings over a million happy sentient beings also seem wild to me. (Those two questions are also more strongly correlated with each other than the other questions are.)

comment by Dagon · 2024-02-23T06:33:18.106Z · LW(p) · GW(p)

Point 2 lets me out. I suspect complex positives hold a larger mass of value than simple positives or negatives do, and I think a lot of complex negative value is missing from the universe.

Replies from: Tapatakt, Slapstick
comment by Tapatakt · 2024-02-23T18:27:47.644Z · LW(p) · GW(p)

I think I'm at least close to agreeing, but even if it's like this now, that doesn't mean a complex-positive-value-optimizer can produce more value mass than a simple-negative-value-optimizer.

comment by Slapstick · 2024-02-23T17:26:10.814Z · LW(p) · GW(p)

Why do you think/suspect that?

Replies from: Dagon
comment by Dagon · 2024-02-23T21:56:57.859Z · LW(p) · GW(p)

Mostly intuition, and introspection on the complex positives I've experienced (joy, satisfaction, optimism) being far more durable than the simple positives and negatives. Even the "complex" negatives of depression and worry tend not to make overall life negative in value.

Replies from: Slapstick
comment by Slapstick · 2024-02-23T22:31:38.187Z · LW(p) · GW(p)

Thanks for answering. I would personally expect this intuition and introspection to be sensitive to contingent factors like the range of experiences you've had; would you agree?

Personally my view leans more in the other direction, although it's possible I'm missing something by misunderstanding the complexity variable.

If my life experience leads me to the view that 'suffering is worse than wellbeing is good', and your life experiences lead you towards the opposite view, should those two data points be given equal weight? I personally would give more weight to accounts of the badness of suffering, because I see a fundamental asymmetry there, but would you say that's a product of bias from my particular set of experiences?

If I were to be offered 300 years of overwhelmingly positive complex life in exchange for another ten years of severe anhedonic depression, I would not accept that offer. It wouldn't even be a difficult choice.

Assuming you would accept that offer for yourself, would you accept that offer on behalf of someone else?

Replies from: Dagon
comment by Dagon · 2024-02-23T23:04:45.446Z · LW(p) · GW(p)

I mean, at its root, value is personal and incomparable.  There's no basis for claiming any given valuation applies outside the evaluator's belief/preference set.  As embedded agents, our views are contingent on our experiences, and there is no single truth to this question.  That said, my beliefs resonate much more strongly with me than your description, so if you insist on having a single unified view, I'm going to weight mine higher.

That said, there is some (weak) evidence that those who claim suffering is more bad than joy/hope is good are confused about their weightings, as applied to themselves.  The rate of suicide is really quite low.  You ARE being offered the choice between an unknown length of continued experiences, and cessation of such.  

Replies from: Slapstick, Shiroe
comment by Slapstick · 2024-02-23T23:48:24.592Z · LW(p) · GW(p)

The rate of suicide is really quite low. You ARE being offered the choice between an unknown length of continued experiences, and cessation of such.

I think the expected value of the rest of my life is positive (I am currently pretty happy), especially considering impacts external to my own consciousness. If that stops being the case, I have the option.

There are also strong evolutionary reasons to expect suicide rates to not properly reflect the balance of qualia.

As embedded agents, our views are contingent on our experiences, and there is no single truth to this question.

It's hard to know exactly what this is implying. Sure, it's based on personal experience that's difficult to extrapolate and aggregate, etc. But I think it's a very important question, potentially the most important question, and worth some serious consideration.

People are constantly making decisions based on their marginal valuations of suffering and wellbeing, and the respective depths and heights of each end of the spectrum. These decisions can/do have massive ramifications.

So I can try to understand your view better: would you choose to spend one year in the worst possible hell if it meant you got to spend the next year in the greatest possible heaven?

Given my understanding of your expressed views, you would accept this offer. If I'm wrong about that, knowing that would help with my understanding of the topic. If you think it's an incoherent question, that would also improve my understanding.

Feel free to disengage, I just find limited opportunities to discuss this. If anyone else has anything to contribute I'd be happy to hear it.

Replies from: Dagon
comment by Dagon · 2024-02-24T00:34:09.467Z · LW(p) · GW(p)

There are also strong evolutionary reasons to expect suicide rates to not properly reflect the balance of qualia.

Sure, much as there are strong cultural/signaling reasons to expect people to overestimate pain and underestimate pleasure values.  I mean, none of this is in the territory; it's all filtered through brains, in different and unmeasurable ways.

Sure, it's based on personal experience that's difficult to extrapolate and aggregate, etc.

Not difficult.  Impossible and meaningless to extrapolate or aggregate.  I suspect this is the crux of my disagreement with most utilitarian-like frameworks.

Replies from: Slapstick
comment by Slapstick · 2024-02-24T06:05:28.449Z · LW(p) · GW(p)

Would you spend a year in the worst possible hell in exchange for a year in the greatest possible heaven?

Replies from: Dagon
comment by Dagon · 2024-02-24T06:47:49.058Z · LW(p) · GW(p)

I think so. I can't really extrapolate to such extremes, but it sounds preferable to two years of undistinguished existence.

I’m more confident that I’d spend a year as a bottom-5% happy human in order to get a year in the top-5%. I think, but it’s difficult to really predict, that I’d prefer the variance over two years at the median.

None of these are actual choices, of course. So I’m skeptical of using these guesses for anything important.

Replies from: Slapstick
comment by Slapstick · 2024-02-24T21:39:41.616Z · LW(p) · GW(p)

Interesting. It is an abstract hypothetical, but I do think it's useful, and it reveals something about how far apart we are in our intuitions/priors.

I wouldn't choose to live a year in the worst possible hell in exchange for 1000 years in the greatest possible heaven. I don't think I would even take the deal in exchange for an infinite amount of time in the greatest possible heaven.

I would conclude that the experience of certain kinds of suffering reveals something significant about the nature of consciousness that can't be easily inferred, if it can be inferred at all.

I’m more confident that I’d spend a year as a bottom-5% happy human in order to get a year in the top-5%

I would guess that the difference between being .001st-percentile happy and 5th-percentile happy is larger than the difference between the 5th percentile and the 100th. So in that sense it's difficult for me to consider that question.

None of these are actual choices, of course. So I’m skeptical of using these guesses for anything important

I think even if they're abstract, semi-coherent questions, they're very revealing, and I think they're very relevant to the prioritization of s-risks, the allocation of resources, and issues such as animal welfare.

It makes it easier for me to understand how otherwise reasonable-seeming people can display a kind of indifference to the state of animal agriculture. If someone isn't aware of the extent of possible suffering, I can see why they might not view the issue with the same urgency.

Replies from: Dagon
comment by Dagon · 2024-02-25T03:43:58.668Z · LW(p) · GW(p)

it reveals something about how far apart we are in our intuitions/priors.

Indeed!  And it says something about EITHER the unreliability of intuitions beyond run-of-the-mill situations, or about the insane variance in utility functions across people (and likely time).  Or both.  Really makes for an uncertain basis for any sort of reasoning or decision-making.

I would guess that the difference between being .001st-percentile happy and 5th-percentile happy is larger than the difference between the 5th percentile and the 100th.

Wait, what?  My guess is exactly the opposite - something like a logistic curve (X being the valence of experience, Y being the valuation), so there's a huge difference toward the middle or when changing sign, but only minor changes in value toward the tails. 

Once again, intuitions are a sketchy thing.   In fact, I should acknowledge that I'm well beyond intuition here - I just don't HAVE intuitions at this level of abstraction.  This is my attempt to reconcile my very sparse and untrustworthy intuition samples with some intellectual preferences for regularity.  My intuitions are compatible with my belief in declining marginal value, but don't really specify the rest of the shape.  It could easily be closer to a pure logarithm - X axis from 0 (absolute worst possible experience) to infinity (progressively better experiences with no upper limit), with simple declining marginal value.
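
In symbols, a rough sketch of the two candidate shapes (with x the valence of experience and v(x) its valuation; k, c, and x0 are unspecified positive constants, and the exact forms are illustrative, not anything I'd defend):

    v(x) = 2 / (1 + exp(-k·x)) - 1        (logistic, rescaled to (-1, 1): steepest near the sign change at x = 0, nearly flat in the tails)
    v(x) = c · log(x / x0),  x in (0, ∞)  (pure logarithm: x = 0 is the absolute worst possible experience, x0 a neutral one, with declining marginal value throughout)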

Replies from: Slapstick
comment by Slapstick · 2024-02-25T05:36:47.525Z · LW(p) · GW(p)

And it says something about EITHER the unreliability of intuitions beyond run-of-the-mill situations, or about the insane variance in utility functions across people (and likely time)

I don't think it's really all that complicated; I suspect that you haven't experienced negative valence of a sufficient extent to update you towards understanding how bad suffering can get.

It would be like never having smelled anything worse than a fart while trying to gauge the mass of value of positive smells against the mass of value of negative smells. If you were trying to estimate what it would be like in a small room full of decaying dead animals and ammonia, or how long you'd willingly stay in that room, your intuitions would completely fail you.

but only minor changes in value toward the tails.

I have experienced qualia that were just slightly net negative, where non-existence felt preferable, all else equal. Then I've experienced states of qualia that were immensely worse than that. The distance between those two states is certainly far greater than the distance between neutral and extreme pleasure/fulfillment/euphoria, etc. Suffering can just keep getting worse and worse, far beyond the point at which all you can desire is to cease existing.

Replies from: Dagon
comment by Dagon · 2024-02-26T16:08:53.321Z · LW(p) · GW(p)

Yeah, I think I'm bowing out at this point.  I don't disagree that my suffering has been pretty minor in the scheme of things, but that's kind of my whole point: everyone's range of experiences is unique and incommunicable.  Or at least mine is.  

comment by Shiroe · 2024-02-24T08:24:38.757Z · LW(p) · GW(p)

It's not easy to see the argument for treating your values as incomparable with the values of other people, while seeing your future self's values as identical to your own. Unless you've adopted some idea of a personal soul.